
2000 National Survey on Drug Use & Health

Appendix A: Description of the Survey

A.1. Sample Design

The 2000 NHSDA sample design was part of a coordinated five-year sample design that provides estimates for all 50 states plus the District of Columbia for the years 1999 through 2003. The coordinated design provides for a 50 percent overlap in first-stage units (area segments) between successive years.

For the five-year 50-state design, eight states were designated as large sample states (California, Florida, Illinois, Michigan, New York, Ohio, Pennsylvania and Texas) with samples large enough to support direct state estimates. Sample sizes in these states ranged from 3,478 to 5,022. For the remaining 42 states and the District of Columbia, smaller, but adequate, samples were selected to support state estimates using small area estimation (SAE) techniques. Sample sizes in these states ranged from 828 to 1,200.

States were first stratified into a total of 900 Field Interviewer (FI) regions (48 regions in each large sample state and 12 regions in each small sample state). These regions were contiguous geographic areas designed to yield the same number of interviews on average. Within FI regions, adjacent Census blocks were combined to form the first stage sampling units, called area segments. A total of 96 segments per FI region were selected with probability proportional to population size in order to support the five-year sample and any supplemental studies SAMHSA may choose to field. Eight sample segments per FI region were fielded during the 2000 survey year.
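
The selection of segments with probability proportional to population size can be illustrated with a systematic PPS sampling sketch. This is a simplified illustration under assumed inputs (a list of segment population sizes), not the actual NHSDA selection program:

```python
import random

def pps_systematic_sample(sizes, n):
    """Select n units with probability proportional to size (PPS),
    using systematic sampling with a random start (illustrative sketch)."""
    total = sum(sizes)
    interval = total / n
    start = random.uniform(0, interval)
    points = [start + i * interval for i in range(n)]
    # Walk the cumulative size scale; a unit is selected once for each
    # selection point that falls within its portion of the scale.
    chosen, cum, j = [], 0.0, 0
    for idx, size in enumerate(sizes):
        cum += size
        while j < n and points[j] <= cum:
            chosen.append(idx)
            j += 1
    return chosen
```

Because the sampling interval exceeds any single unit's size when sizes are comparable, the n selected units are distinct, and each unit's selection probability is n times its share of the total size.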

These sampled segments were allocated equally into four separate samples, one for each three month period during the year, so that the survey is essentially continuous in the field. In each of these area segments a listing of all addresses was made, from which a sample of 215,860 addresses was selected. Of these, 182,576 were determined to be eligible sample units. In these sample units (which can be either households or units within group quarters), sample persons were randomly selected using an automated screening procedure programmed in a hand-held computer carried by the interviewers. The number of sample units completing the screening was 169,769. Youth (aged 12 to 17 years) and young adults (aged 18 to 25 years) were oversampled at this stage. Because of the large sample size associated with this sample, there was no need to oversample race/ethnicity groups, as was done on NHSDAs prior to 1999. A total of 91,961 persons were selected nationwide. Consistent with previous NHSDAs, the final respondent sample of 71,764 persons was representative of the U.S. general population (since 1991, the civilian noninstitutional population) ages 12 and older. In addition, state samples were representative of their respective state populations. More detailed information on the disposition of the national screening and interview sample can be found in Appendix B. Also, additional tables showing sample sizes and estimated population counts for various demographic and geographic subgroups are presented in Appendix E.

The survey covers residents of households (living in houses/townhouses, apartments, condominiums, etc.), noninstitutional group quarters (e.g., shelters, rooming/boarding houses, college dormitories, migratory workers' camps, halfway houses, etc.), and civilians living on military bases. While the survey covers these types of units (they are given a nonzero probability of selection), sample sizes of most specific groups are too small to provide separate estimates. Persons excluded from the survey include homeless people who do not use shelters, active military personnel, and residents of institutional group quarters, such as correctional facilities, nursing homes, mental institutions, and hospitals.

Unlike the 1999 NHSDA, which also included a supplemental sample using the paper and pencil interviewing (PAPI) mode for the purposes of measuring trends with estimates comparable to 1998 and prior years, the 2000 NHSDA was fielded entirely using computer-assisted interviewing (CAI).

A.2. Data Collection Methodology

The data collection method used in the NHSDA involves in-person interviews with sample persons, incorporating procedures likely to increase respondents' cooperation and willingness to report honestly about their illicit drug use behavior. Confidentiality is stressed in all written and verbal communications with potential respondents, respondents' names are not collected with the data, and computer-assisted interviewing (CAI), including audio computer-assisted self-interviewing (ACASI), is used to provide a private and confidential setting for completing the interview.

Introductory letters are sent to sampled addresses, followed by an interviewer visit. A five-minute screening procedure conducted using a hand-held computer involves listing all household members along with their basic demographic data. The computer uses the demographic data in a preprogrammed selection algorithm to select 0-2 sample person(s), depending on the composition of the household. This selection process is designed to provide the necessary sample sizes for the specified population age groupings.

Interviewers attempt to conduct the NHSDA interview immediately with each selected person in the household. The interviewer asks the selected respondent to identify a private area in the home, away from other household members, in which to conduct the interview. The interview averages about an hour and includes a combination of CAPI (computer-assisted personal interviewing) and ACASI. The interview begins in CAPI mode, with the Field Interviewer (FI) reading the questions from the computer screen and entering the respondent's replies into the computer. The interview then transitions to the ACASI mode for the sensitive questions. In this mode, the respondent can read the questions silently on the computer screen and/or listen to the questions read through headphones, and enters the responses directly into the computer. At the conclusion of the ACASI section, the interview returns to the CAPI mode, with the interviewer completing the questionnaire.

No personal identifying information is captured in the CAI record for the respondent. At the end of the day when an interviewer has completed one or more interviews, he/she transmits the data to Research Triangle Institute (RTI) via home telephone lines.

A.3. Data Processing (CAI)

Interviewers initiate nightly data transmissions of interview data and call records on days when they work. Computers at RTI direct the information to a raw data file that consists of one record for each completed interview. Although much editing and consistency checking is done by the CAI program during the interview, additional, more complex edits and consistency checks were completed at RTI. Resolution of most inconsistencies and missing data was done using machine editing routines that were developed specifically for the CAI instrument. Cases were retained only if the respondent provided data on lifetime use of cigarettes and at least 9 other substances.

Statistical Imputation

For some key variables that still have missing values after the application of editing, statistical imputation is used to replace missing data with appropriate response codes.

Considerable changes in the imputation procedures that have been used in past NHSDAs were introduced beginning with the 1999 CAI sample. Three types of statistical imputation procedures are used: a standard unweighted sequential hot-deck imputation, a univariate combination of weighted regression imputation and a random nearest neighbor hot-deck imputation (which could be viewed as a univariate predictive mean neighborhood method), and a combination of weighted regression and a random nearest neighbor hot-deck imputation using a neighborhood where imputation is accomplished on several response variables at once (which could be viewed as a multivariate predictive mean neighborhood method). Since the primary demographic variables (e.g., age, gender, race/ethnicity, employment, education) are imputed first, few variables are available for model-based imputation. Moreover, most demographic variables have a very low level of missingness. Hence, unweighted sequential hot deck is used to impute missing values for demographic variables. The demographic variables can then be used as covariates in models for drug use measures. These models also include other drug use variables as covariates. For example, the model for cocaine use includes cigarette, alcohol, and marijuana use as covariates. The univariate predictive mean neighborhood method is used as an intermediate imputation procedure for recency of use, 12-month frequency of use, 30-day frequency of use, and 30-day binge drinking frequency for all drugs where these variables occur. The final imputed values for these variables are determined using multivariate predictive mean neighborhoods. The final imputed values for age at first use for all drugs and age at first daily cigarette use are determined using univariate predictive mean neighborhoods.
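
As an illustration of the simplest of these three procedures, an unweighted sequential hot-deck can be sketched as follows. The function name and data layout are hypothetical; this is not the NHSDA production code:

```python
def sequential_hot_deck(sort_keys, values):
    """Unweighted sequential hot-deck imputation (illustrative sketch).

    Records are sorted by covariates closely related to the item of
    interest; a missing value (None) is then replaced by the nearest
    responding value that precedes it in the sorted sequence."""
    # Sort record indices by the related covariates.
    order = sorted(range(len(values)), key=lambda i: sort_keys[i])
    imputed = list(values)
    last_donor = None
    for i in order:
        if imputed[i] is None:
            # Carry forward the nearest preceding respondent's value;
            # records missing before any donor appears remain None.
            imputed[i] = last_donor
        else:
            last_donor = imputed[i]
    return imputed
```

In practice the sort keys are chosen so that adjacent records are "similar" on characteristics predictive of the item being imputed.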

Hot-deck imputation involves the replacement of a missing value with a valid code taken from another respondent who is "similar" and has complete data. Responding and nonresponding units are sorted together by a variable or collection of variables closely related to the variable of interest, Y. For sequential hot-deck imputation, a missing value of Y is replaced by the nearest responding value preceding it in the sequence. With random nearest neighbor hot-deck imputation, the missing value of Y is replaced by a responding value from a donor randomly selected from a set of potential donors close to the unit with the missing value according to some distance metric. Predictive mean neighborhood imputation involves determining a predicted mean using a model, such as a linear or logistic regression depending on the response variable, where the models incorporate the design weights. In the univariate case, the neighborhood of potential donors is determined by calculating the relative distance between the predicted mean for an item nonrespondent and the predicted mean for each potential donor, and choosing those within a small preset value (this is the "distance metric"). The pool of donors is further restricted to satisfy logical constraints whenever necessary (e.g., the age at first crack use must not be younger than the age at first cocaine use). Whenever possible, more than one response variable was considered at a time. In that (multivariate) case, the Mahalanobis distance across a vector of several response variables' predicted means is calculated between a given item nonrespondent and each candidate donor. The k smallest Mahalanobis distances (e.g., k = 30) determine the neighborhood of candidate donors, and the nonrespondent's missing values in this vector are replaced by those of a randomly selected donor.
A respondent may be missing only some of the responses within this vector of response variables; in that case, only the missing values were replaced, and donors were restricted to be logically consistent with the response variables that were not missing.
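
The multivariate donor-selection step (drawing a donor at random from the k candidates nearest in Mahalanobis distance on the predicted means) can be sketched as below. The function name, the use of the donors' predicted-mean covariance as the metric, and the data layout are simplifying assumptions for illustration:

```python
import numpy as np

def pm_neighborhood_donor(pred_nonresp, pred_donors, donor_values, k=30, rng=None):
    """Multivariate predictive mean neighborhood donor draw (a sketch).

    pred_nonresp: predicted-mean vector for one item nonrespondent.
    pred_donors:  (n_donors, p) matrix of candidate donors' predicted means.
    donor_values: observed response vectors (or values) for those donors.
    Draws a donor at random from the k candidates whose predicted-mean
    vectors are closest in Mahalanobis distance."""
    rng = rng or np.random.default_rng()
    diffs = pred_donors - pred_nonresp
    # Covariance of the donors' predicted means defines the metric.
    cov = np.cov(pred_donors, rowvar=False)
    cov_inv = np.linalg.inv(cov)
    # Squared Mahalanobis distance for every candidate donor.
    d2 = np.einsum('ij,jk,ik->i', diffs, cov_inv, diffs)
    neighborhood = np.argsort(d2)[:k]
    donor = rng.choice(neighborhood)
    return donor_values[donor]
```

In the NHSDA procedure the neighborhood would additionally be screened for logical consistency with the nonrespondent's observed values before the random draw.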

Although statistical imputation could not proceed separately within each state due to insufficient pools of donors, information about the state of residence of each respondent is incorporated in the modeling and hot deck steps. For most drugs, respondents were separated into three state usage categories for each drug depending on the response variable of interest; respondents from states with high usage of a given drug were placed in one category, respondents from medium usage states into another, and the remainder into a third category. This categorical "state rank" variable was used as one set of covariates in the imputation models. In addition, eligible donors for each item non-respondent were restricted to be of the same state usage category (the same "state rank") as the item non-respondent.

During the processing of the 2000 NHSDA data, an error was detected in the computer programs that assigned imputed values for drug use variables that had missing information in the 1999 NHSDA data file. These variables are used in making estimates of substance use incidence and prevalence. In preparing this report, the 1999 data were adjusted to correct for the error. For most substance use measures, the impact of the revision is small. Estimates of lifetime use of substances were not affected at all. Estimates of past year and past month use were all revised, but the updated numbers in many cases are nearly identical to the old ones. The effects of the error are noticeable for only four substances (alcohol, marijuana, inhalants, and heroin), in addition to the composite measures "any illicit drug use" and "any illicit drug other than marijuana." For these substances, all of the revised estimates are lower than the previous ones. For inhalants, the revised estimates are considerably lower, especially among youth. See Appendix B for more detailed information on how the error occurred, how it was corrected, and its impact on prevalence estimates.

Development of Analysis Weights

The general approach to developing and calibrating analysis weights involved developing design-based weights, dk, as the inverse of the selection probabilities of the households and persons. Adjustment factors, ak, were then applied to the design-based weights to adjust for nonresponse, to control for extreme weights when necessary, and to poststratify to known population control totals. In view of the importance of state-level estimates with the new 50-state design, it was necessary to control for a much larger number of known population totals. Several other modifications to the general weight adjustment strategy that had been used in past NHSDAs were also implemented for the first time beginning with the 1999 CAI sample.

Weight adjustments were based on a generalization of Deville and Särndal's (1992) logit model. This generalized exponential model (GEM) (Folsom and Singh, 2000) incorporates unit-specific bounds (lk, uk), k ∈ s, for the adjustment factor ak(λ) as follows:

    ak(λ) = [lk(uk - ck) + uk(ck - lk)exp(Ak xk'λ)] / [(uk - ck) + (ck - lk)exp(Ak xk'λ)],

where ck are prespecified centering constants such that lk < ck < uk, and Ak = (uk - lk)/[(uk - ck)(ck - lk)]. The variables lk, ck, and uk are user-specified bounds, and λ is the column vector of p model parameters corresponding to the p covariates xk. The λ-parameters are estimated by solving

    Σ(k ∈ s) dk ak(λ) xk = Tx,

where Tx denotes control totals, which could be either nonrandom, as is generally the case with poststratification, or random, as is generally the case for nonresponse adjustment.

The final weights wk = dk ak(λ) minimize the distance function Δ(w,d) defined as

    Δ(w,d) = Σ(k ∈ s) dk [(ak - lk) log{(ak - lk)/(ck - lk)} + (uk - ak) log{(uk - ak)/(uk - ck)}] / Ak.
This general approach was used at several stages of the weight adjustment process including: (1) adjustment of household weights for nonresponse at the screener level, (2) poststratification of household weights to meet population controls for various demographic groups by state, (3) adjustment of household weights for extremes, (4) poststratification of selected person weights, (5) adjustment of person weights for nonresponse at the questionnaire level, (6) poststratification of person weights, and (7) adjustment of person weights for extremes.
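
A minimal sketch of the GEM adjustment factor for a single unit, assuming scalar bounds lk < ck < uk and a covariate vector xk, may clarify its behavior. This is illustrative only, not the production calibration code; note that at λ = 0 the factor equals the centering constant, and as λ'x grows large (or small) the factor approaches the upper (or lower) bound:

```python
import math

def gem_adjustment(lam, x, l, c, u):
    """Generalized exponential model (GEM) adjustment factor a_k(lambda)
    for one unit with covariates x and bounds l < c < u (a sketch)."""
    A = (u - l) / ((u - c) * (c - l))
    # exp(A * x' lambda)
    e = math.exp(A * sum(li * xi for li, xi in zip(lam, x)))
    return (l * (u - c) + u * (c - l) * e) / ((u - c) + (c - l) * e)
```

Because the factor is bounded between l and u by construction, the calibrated weights cannot drift to extreme values, which is what makes the same machinery usable for winsorization.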

Every effort was made to include as many relevant state-specific covariates (typically defined by demographic domains within states) as possible in the multivariate models used to calibrate the weights (the nonresponse adjustment and poststratification steps). Because further subdivision of state samples by demographic covariates often produced small cell sample sizes, it was not possible to retain all state-specific covariates and still estimate the necessary model parameters with reasonable precision. Therefore, a hierarchical structure was used in grouping states, with covariates defined at the national level, at the census division level within the nation, at the state-group level within census division, and, whenever possible, at the state level. In every case, the controls for total population within state and for the five age groups within state were maintained. Census control totals by age and race were required for the civilian noninstitutionalized population of each state. Published Census projections (U.S. Bureau of the Census, 2000) reflected the total residential population (which includes military and institutionalized persons). The 1990 census 5-percent public use microdata file (U.S. Bureau of the Census, 1992) was used to distribute the state residential population into civilian and noncivilian components, and then the method of raking-ratio adjustment was used to obtain domain-level counts that respect both the state-level residential population counts and the national-level civilian and noncivilian counts for each domain. This was done for the midpoint of each NHSDA data collection period (i.e., quarter) so that counts aggregated over the quarters correspond to the annual counts.
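
Raking-ratio adjustment, also known as iterative proportional fitting, can be sketched for a two-way table. This simplified illustration alternately scales rows and columns until both sets of margins match their targets (the function name and fixed iteration count are assumptions for the sketch):

```python
def rake(table, row_targets, col_targets, iters=50):
    """Raking-ratio adjustment (iterative proportional fitting, a sketch).

    Scales the cells of a two-way table of counts so that row sums match
    row_targets and column sums match col_targets; the two target totals
    must agree for the iteration to converge."""
    t = [row[:] for row in table]
    for _ in range(iters):
        for i, target in enumerate(row_targets):       # match row margins
            s = sum(t[i])
            t[i] = [v * target / s for v in t[i]]
        for j, target in enumerate(col_targets):       # match column margins
            s = sum(row[j] for row in t)
            for i in range(len(t)):
                t[i][j] *= target / s
    return t
```

In the NHSDA application, one margin corresponds to state-level residential population counts and the other to national-level civilian and noncivilian counts by domain.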

Several other enhancements to the weighting procedures were also implemented starting in 1999. The control of extreme weights through winsorization was incorporated into the calibration processes. Winsorization truncates extreme values at prespecified levels and distributes the trimmed portions of the weights to the nontruncated cases; this process was carried out using the GEM model discussed above. A step was added to poststratify the household-level weights to obtain census-consistent estimates based on the household rosters from all screened households; these roster-based estimates then provided the control totals needed to calibrate the respondent pair weights for subsequent planned analyses. An additional step poststratified the selected-person sample to conform with the adjusted roster estimates. The final step in poststratification related the respondent person sample to external census data (defined within state whenever possible, as discussed above).
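
The truncate-and-redistribute idea behind the winsorization step can be sketched in a single pass. This is a simplified stand-alone illustration (the NHSDA carried the adjustment out within the GEM calibration itself, and the cap and redistribution rule here are assumptions):

```python
def winsorize_weights(weights, cap):
    """Truncate weights above `cap` and spread the trimmed excess
    proportionally over the nontruncated weights, preserving the total
    (single-pass sketch; assumes the trimmed excess is small relative
    to the total so redistributed weights stay below the cap)."""
    excess = sum(w - cap for w in weights if w > cap)
    keep_total = sum(w for w in weights if w <= cap)
    return [cap if w > cap else w * (1 + excess / keep_total)
            for w in weights]
```

Preserving the weight total matters because the weights must continue to satisfy the population control totals after trimming.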


This page was last updated on June 03, 2008.