
Computer Assisted Interviewing for SAMHSA's National Household Survey on Drug Abuse

5. Summary of the Design and Conduct of the 1997 Field Experiment

The design of the 1997 field experiment was based on the results of the 1996 feasibility experiment, laboratory testing, power calculations, and discussions as to the operational feasibility of various designs for the field experiment. The experiment compared alternative versions of the computer-assisted interviewing (CAI) instrument in a factorial design. These alternatives were compared to each other and to the results from the paper-and-pencil interviewing (PAPI) instrument and its self-administered answer sheets. To reduce the overall cost of the field experiment, we conducted the experiment in the fourth quarter of 1997 and used the Quarter 4 1997 NHSDA survey results as a comparison group.

In this chapter, we review the overall design of the CAI experimental treatments, sampling design, development of the electronic screener, screening and interview response rates, and analysis weights.

5.1 Experimental Design for Computer-Assisted Interviewing

We considered experimentally examining a number of factors, including the use of alternative skip patterns (i.e., contingent questioning structures); respondent-completed consistency and range checks; variations in the procedures for administering the 12-month frequency questions; audio computer-assisted self-interviewing (ACASI) versus computer-assisted personal interviewing (CAPI) administration of the mental health questions; question wordings that were or were not tailored to the CAI environment; extra questions for nonusers of particular substances; and the effects of alternative voices on respondent preferences and responses. Other factors considered were differences in responses under a "mark all that apply" questioning strategy versus the current "multiple question" strategy, asking income questions using ACASI, translating the alternative CAI instruments into Spanish, and specifically designing multiple questions on use to permit response variance modeling. Based on the cost and complexity of conducting such a study and the need to increase the power of the experimental comparisons, the number of experimental factors was reduced to three.

5.1.1 Structure of Alternative CAI Instruments

Several types of changes were made in the CAI instruments for all respondents participating in the 1997 field experiment:

  1. The mental health items were administered via ACASI in all versions of the CAI questionnaire. In the 1997 NHSDA, these questions were administered by the interviewers using a PAPI interview. These items were moved to the ACASI section because other research has shown that self-administration of these items is likely to be easy and to improve response. Also, it was felt that it was not necessary to examine this change experimentally because these questions are not part of the core interview.

  2. Question and response category wordings were tailored for CAI across all versions of the CAI questionnaire. A workable CAI version requires that the wording be tailored to the CAI environment; not doing so would create an awkward CAI interview.

In addition, the following decisions were made:

  1. No additional questions would be added for nonusers. We considered including additional questions for nonusers in the core sections to prevent respondents from giving false negative answers to initial questions on use of a particular substance, either (a) to avoid having to answer additional questions or (b) because they felt that answering a number of questions about a substance compromised their privacy. The latter could occur because the respondent reasoned as follows: "If the interviewer or an observer notices that I am taking a long time to answer questions, the interviewer or observer will conclude that I have used an illegal substance." We decided not to include additional questions for nonusers because (a) the CAI environment presents questions one by one, making it hard for respondents to know in advance that they will have to answer additional questions if they respond "yes" to the overall use question; (b) there was already considerable variability across answer sheets in the number of questions that follow a "yes" response; and (c) one of the contingent questioning structures to be tested required nonusers to answer several questions for each substance before they were routed to the next substance.

  2. There was no experimental treatment for the 12-month frequency of use question. An alternative way of asking the 12-month frequency question was needed for the ACASI administration. An unfolding strategy was used in the fall 1996 feasibility experiment. Based on laboratory testing, we decided to have all respondents report 12-month frequency using the two-stage process in which they first indicated the metric that would be easiest for them (days per year, days per month, or days per week) and then reported the number of days for that period (see the sketch following this list).

  3. There was no Spanish translation. No Spanish translation was used because only about 150 Spanish-speaking respondents were likely to be included, and we felt that it was not cost effective to translate eight different versions for so few respondents.

  4. The income questions were retained in the interviewer-administered portion of the CAI instrument across all versions. Income is a sensitive issue and could possibly benefit from being asked in ACASI; however, because the sample person is not always the person who knows the income and because of the complexity of the question, we decided to leave these questions as interviewer-administered.
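
To make the two-stage frequency report (item 2 above) concrete, here is a minimal Python sketch of how the two answers could be combined into an annual figure; the function name, metric labels, and 365-day cap are illustrative assumptions, not the actual CAI logic.

    # Hypothetical sketch: combine the two-stage 12-month frequency answers
    # into days used per year. Names and the 365-day cap are assumptions.
    def annualize(metric: str, days: int) -> int:
        if metric == "days per year":
            return days                  # stage 2 answer is already annual
        if metric == "days per month":
            return min(days * 12, 365)   # cap at the number of days in a year
        if metric == "days per week":
            return min(days * 52, 365)
        raise ValueError(f"unknown metric: {metric}")

    # Stage 1: the respondent picks "days per week" as the easiest metric;
    # stage 2: the respondent reports 3 days, i.e., about 156 days per year.
    print(annualize("days per week", 3))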

Experimental features. We chose to examine questioning strategies that had the potential for reducing respondent burden and improving accuracy, using a 2x2x2 factorial design. For each of three experimental factors, a random half of the sample was assigned to each of two levels. These factors are described in the following paragraphs.

Factor 1: Structure of the contingent questioning in the CAI interview. Under a contingent questioning strategy, respondents are routed past detailed questions if, in earlier questions, they indicate that they have not used the substance. The 1996 feasibility experiment indicated that contingent questioning was likely to yield prevalence estimates the same as or higher than those obtained using answer sheets and the "answer every question" strategy. Thus, two versions were tested in 1997: use of a single gate question and use of multiple gate questions. In the single gate question version, respondents were first asked if they had ever used a substance and were immediately routed to the next section if they had not. Respondents who reported some use were routed through the appropriate follow-up questions. Under the multiple gate question version, every respondent answered three gate questions for each substance: use in the past 30 days, use in the past 12 months, and lifetime use. Only those respondents who answered "no" to all three questions were routed to the next section; those answering "yes" to any of the three gate questions were routed to the appropriate follow-up questions.

Rationale: We examined whether having the respondent answer three gate questions rather than one reduced threats to complete reporting. The single gate question was tested in the fall of 1996 and resulted in a significantly shorter interview and increased reporting. However, allowing respondents only one opportunity to report use might decrease prevalence estimates because (a) respondents may mistakenly answer "no" to the sole gate question, (b) they can reduce the number of questions they must answer by denying actual use in response to a single question, or (c) they may feel their privacy is compromised by taking a long time to answer questions.
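
The routing difference between the two versions can be sketched as follows; this is an illustrative Python fragment, and the function names and question wordings are stand-ins rather than the actual CAI instrument logic.

    # Illustrative routing for the two contingent questioning structures.
    # ask() stands in for presenting one question; it returns True for "yes."
    def single_gate(ask) -> bool:
        # One lifetime gate; "no" routes past the entire section.
        return ask("Have you ever used this substance?")

    def multiple_gates(ask) -> bool:
        # Three gates, all asked of every respondent; any "yes" routes to
        # the follow-up questions, three "no" answers route onward.
        answers = [
            ask("Have you used this substance in the past 30 days?"),
            ask("Have you used this substance in the past 12 months?"),
            ask("Have you ever used this substance?"),
        ]
        return any(answers)

    def administer_section(ask, use_multiple_gates: bool, followups) -> None:
        gate = multiple_gates if use_multiple_gates else single_gate
        if gate(ask):
            for question in followups:   # appropriate follow-up questions
                ask(question)

    # Demo with scripted answers: a lifetime-only user under multiple gates.
    scripted = iter([False, False, True])
    print(multiple_gates(lambda q: next(scripted)))   # True -> follow-ups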

Factor 2: Data quality checks within the ACASI interview. We examined the potential for improving data quality by having respondents resolve inconsistent and questionable data during the interview. For a random half of the respondents, the ACASI program included additional questions that followed up on inconsistent answers and questionable reports, such as a suspiciously low age at first use for a substance. Consistency checks were included for the following inconsistent reports (illustrated in the sketch after this list):

  1. The 30-day frequency of use was greater than the 12-month frequency of use for cigarettes, alcohol, marijuana, cocaine, crack, heroin, hallucinogens, or inhalants.

  2. A response of zero days used in the past 30 days was reported by persons previously reporting some use within the past 30 days for cigarettes, alcohol, marijuana, cocaine, crack, heroin, hallucinogens, or inhalants.

  3. Age at first use was suspiciously low for cigarettes, alcohol, marijuana, cocaine, crack, heroin, hallucinogens, or inhalants.

  4. Age at first use was greater than or equal to current age for cigarettes, alcohol, marijuana, cocaine, crack, heroin, hallucinogens, or inhalants.

  5. The 12-month frequency of being very high or drunk was greater than the 12-month frequency of use for alcohol.

  6. The number of days consumed five or more drinks on the same occasion was greater than the 30-day frequency of use for alcohol.

  7. The last use of LSD was more recent than the last use of any hallucinogen.

  8. The last use of phencyclidine (PCP) was more recent than the last use of any hallucinogen.
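
The sketch below illustrates how checks of this kind can be expressed as predicates over a respondent's answers, using checks 1, 4, and 6 as examples; the record layout and field names are hypothetical, and in the actual instrument a tripped check triggered a follow-up question rather than an error.

    # Hedged sketch of consistency-check predicates (checks 1, 4, and 6).
    SUBSTANCES = ["cigarettes", "alcohol", "marijuana", "cocaine",
                  "crack", "heroin", "hallucinogens", "inhalants"]

    def tripped_checks(record: dict) -> list[str]:
        # Return descriptions of the checks that warrant a follow-up question.
        tripped = []
        for s in SUBSTANCES:
            use = record.get(s)
            if use is None:              # nonuser: nothing to check
                continue
            # Check 1: 30-day frequency cannot exceed 12-month frequency.
            if use["days_past_30"] > use["days_past_12mo"]:
                tripped.append(f"{s}: 30-day use exceeds 12-month use")
            # Check 4: age at first use at or above current age is queried.
            if use["age_first_use"] >= record["age"]:
                tripped.append(f"{s}: age at first use >= current age")
        # Check 6: days with 5+ drinks cannot exceed 30-day alcohol use.
        alcohol = record.get("alcohol")
        if alcohol and alcohol["days_5plus"] > alcohol["days_past_30"]:
            tripped.append("alcohol: 5+ drink days exceed 30-day use")
        return tripped

    # A record that trips check 1 for marijuana:
    print(tripped_checks({
        "age": 30,
        "marijuana": {"days_past_30": 10, "days_past_12mo": 5,
                      "age_first_use": 18},
    }))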

Rationale: Clearly, it is preferable to have respondents correct any inconsistencies in their data rather than having an analyst determine how to edit the data after the fact. In addition, although considerable effort must be expended to program these data quality checks, they have the potential to reduce post-survey processing by reducing the number of edits. However, we were uncertain as to whether respondents would be able or willing to provide this type of information, and we speculated that it could increase the number of breakoffs, the overall length of the interview, or both.

Factor 3: Number of chances to report 30-day and 12-month use. This factor was included at two levels: a single opportunity to report use and multiple opportunities to report use. Under the single opportunity to report use, regardless of the version of contingent questioning used, respondents were only asked once to indicate use during the past 30 days or during the past 12 months.9 With the multiple opportunities, respondents who indicated at least lifetime use of a substance were routed through the additional follow-up questions even though they had not indicated use in the particular time period. For example, respondents who reported that their last use was more than 30 days ago were asked to report the number of days they had used a substance in the past 30 days in spite of this report. Similarly, respondents who reported that their most recent use was more than 12 months ago but within the past 3 years were routed to the question on frequency of use in the past 12 months. In addition, respondents who reported no cocaine use were asked about crack in spite of their denial of using any form of cocaine.

Rationale: With the paper answer sheets, respondents had many "opportunities" to indicate use beyond the basic lifetime, 12-month, and 30-day questions because there was no contingent questioning to route them around questions that did not apply to them. When answering these other questions, respondents sometimes were inconsistent and indicated that they might in fact be users of the substance. A significant number of respondents were classified as more frequent users on the basis of editing rules. With only one opportunity for respondents to report 12-month or 30-day use, we thought that we might see a decline in prevalence rates. By adding a second question for these items, we could determine how the prevalence rates were likely to be affected by eliminating multiple questions on use during the 30-day and 12-month reporting periods.

While deciding whether it was necessary to include this factor, we examined the results from the 1996 NHSDA to determine how often respondents gave inconsistent answers on use within the core answer sheets. In the 1996 NHSDA, only 84% of the edited past month alcohol users indicated that they had used within the past month when responding to the recency question. The corresponding percentages for marijuana and cocaine were 77% and 58.7%, respectively.

In addition to the experimental factors described above, we included respondent and interviewer debriefing questions. The respondent debriefing questions gathered information on respondents' computer knowledge, attitudes and preferences, and perceptions of privacy and confidentiality. The interviewer debriefing questions consisted of a short set of questions for the interviewer on his or her impression of the interview. These focused on questions raised by the respondent, problems encountered, possible reasons for consistency checks being tripped, appraisal of the respondent's interest in and understanding of the interview, and so on.

5.1.2 Assignment to Treatments

The sample was designed to yield a total of 2,256 respondents. Exhibit 5.1.1 presents the expected number of respondents in each treatment combination.

The goal of the design was to yield 1,128 respondents for each of the major factors (main effects) in the experiment. In addition, because of the pressing need to understand substance abuse among youths, the sample was designed so that half of the respondents were expected to be 12 to 17 year olds. The CAI application included a case management system (CMS) that randomly assigned each person who agreed to be interviewed to one of the eight versions of the questionnaire.
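
The report does not document the CMS's internal algorithm; the following Python sketch shows one plausible way to assign a consenting respondent to one of the eight versions of the 2x2x2 design, with version numbering as in Exhibit 5.1.1.

    import random
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Treatment:
        multiple_gates: bool           # Factor 1: contingent questioning
        consistency_checks: bool       # Factor 2: data quality checks
        multiple_use_questions: bool   # Factor 3: chances to report use

    # Versions 1-4 use the single gate; versions 5-8 use multiple gates.
    VERSIONS = {i + 1: Treatment(bool(i & 4), bool(i & 2), bool(i & 1))
                for i in range(8)}

    def assign_version(rng: random.Random) -> int:
        # Each consenting respondent gets one of the 8 versions, equal odds.
        return rng.randint(1, 8)

    rng = random.Random()    # a real CMS would manage the seed per case
    version = assign_version(rng)
    print(version, VERSIONS[version])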

5.1.3 Comparison Group

To reduce costs, we decided to use the 1997 NHSDA Quarter 4 respondents as the control group. This comparison group was restricted to those 1997 NHSDA respondents who were in the same primary sampling units (PSUs) that contained the 1997 field experiment sample. There were both positive and negative aspects of doing so. On the positive side, it provided a large sample size for the comparisons from data that were already being collected. A large sample size had the potential for both increasing the power of the comparisons and reducing the overall cost of the field experiment. Having interviewers conduct both the CAI and the PAPI versions would have required training the field staff to use two data collection methods; by selecting a subsample of 1997 NHSDA Quarter 4 respondents as the comparison group, we avoided the costs of this extra training. On the negative side, we felt that comparisons of overall response rates at the sample person level would be confounded by the fact that two different interviewing teams would be collecting the data. We would be limited in our ability to disentangle any observed differences in response rates between the Quarter 4 NHSDA and the 1997 field experiment and to determine whether these were due to the CAI interviewing, interviewer experience, or interviewing teams. On balance, it was felt that it was more important to have a large sample size for comparing the alternative questioning strategies than to focus on the response rate. The survey staff agreed that if CAI were adopted for the survey, they would be able to find procedures to achieve equivalent response rates.

As it turned out, we achieved lower response rates in the 1997 field experiment than in the comparison group, and we cannot rule out the use of electronic instruments as a contributing cause.

To parallel the debriefing questions included in the field experiment, we selected a subsample of 1997 Quarter 4 respondents and administered an ACASI respondent debriefing questionnaire to them. This subsample was designed to yield 750 respondents for Quarter 4.

Exhibit 5.1.1 Desired Distribution of 1997 Field Experiment Sample

ACASI treatment versions (2x2x2 factorial design):

Treatment   Contingent Questioning   Consistency   Multiple Use   Desired
Version     Structure                Checks        Questions      Respondents
    1       Single gate              Absent        Absent             282
    2       Single gate              Absent        Present            282
    3       Single gate              Present       Absent             282
    4       Single gate              Present       Present            282
    5       Multiple gates           Absent        Absent             282
    6       Multiple gates           Absent        Present            282
    7       Multiple gates           Present       Absent             282
    8       Multiple gates           Present       Present            282

Within each treatment version, the desired distribution of the 282 respondents was 141 aged 12 to 17 and 141 aged 18 or older; and, by race/ethnicity, 71 Hispanic, 71 non-Hispanic black, and 141 non-Hispanic respondents of all other races.

Source: National Household Survey on Drug Abuse: Development of Computer-Assisted Interviewing Procedures; 1997 Field Experiment.

5.2 Sample Design

In the following paragraphs, we describe the sampling design for the 1997 field experiment and the sampling design for the 1997 NHSDA debriefing subsample. A summary of the sampling plans for the 1997 Quarter 4 NHSDA, the field experiment, and the Quarter 4 debriefing sample is given in Exhibit 5.2.1.

5.2.1 1997 Field Experiment Sample Design

The sample for the field experiment was confined to 99 purposively selected geographic PSUs that had been previously selected for the national 1997 NHSDA.10 Hence, the respondent universe for this component of the field experiment is defined as the civilian, noninstitutionalized population aged 12 years old or older residing in the 99 geographic PSUs. Consistent with the national NHSDA, the respondent universe included civilians residing on military installations and residents of noninstitutional group quarters (e.g., college dormitories, homeless shelters, and rooming houses). The 99 PSUs were selected to be a representative subset of the types of geographic areas that comprise the national NHSDA sample frame and consisted of the 43 PSUs that were selected with certainty for the 1997 national survey and 56 of the 72 noncertainty PSUs. The remaining 16 noncertainty PSUs were excluded to reduce field costs.

The 115 PSUs in the national sample consisted of single counties or groups of adjacent counties. There were 43 certainty PSUs, which were so designated because of their high concentration of the Hispanic population, and 72 noncertainty PSUs that were randomly selected with probability proportionate to a composite size measure with minimum replacement. The composite size measure facilitated oversampling of blacks and Hispanics while simultaneously equalizing the interviewers' workloads across PSUs. Controlled ordering of the noncertainty PSUs during the selection process produced a deep implicit stratification based on geography, metropolitan statistical area (MSA) size, and percentage minority. This implicit stratification helped ensure that the sample of PSUs was as representative of the Nation as possible.
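
As a rough illustration, selection with probability proportionate to size with minimum replacement can be implemented as systematic PPS sampling from an ordered frame; ordering the frame by geography, MSA size, and percentage minority before selection yields the implicit stratification described above. The code below is a sketch under that interpretation, with invented composite size values.

    import random

    def pps_minimum_replacement(sizes, n, rng):
        # Systematic PPS: n equally spaced hit points on the cumulative
        # size scale; a unit larger than the sampling interval can be hit
        # more than once (hence "minimum replacement").
        interval = sum(sizes) / n
        start = rng.uniform(0, interval)
        points = [start + k * interval for k in range(n)]
        hits, cum, i = [], 0.0, 0
        for unit, size in enumerate(sizes):
            cum += size
            while i < n and points[i] < cum:
                hits.append(unit)
                i += 1
        return hits

    # Six units with composite size measures; select three.
    print(pps_minimum_replacement([10, 3, 7, 25, 5, 12], 3, random.Random(1997)))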

At the second stage of sampling for the 1997 national NHSDA, each of the 115 sample PSUs was partitioned into noncompact clusters of dwelling units, called segments, that were constructed to yield an average of at least 18 responding individuals. A sample "dwelling unit" in the NHSDA refers to either a housing unit or a group quarters listing unit, such as a dormitory room or a shelter bed. The segments were constructed using 1990 Decennial Census data supplemented with revised population counts obtained from several outside sources. Again, segments within PSUs were selected with probability proportionate to a composite size measure with minimum replacement. Altogether, for the national 1997 NHSDA, 1,940 sample segments were selected from the 115 PSUs. After the segment sample was identified, staff visited each area to count and list all existing dwelling units contained within the segment's geographic boundaries. This list of dwelling units (within the 99 PSUs selected for the field experiment) became the sampling frame for the third stage of selection.

It was possible to use the segments that were listed for the 1997 NHSDA because of two features of the national design. First, the segments in the national 1997 NHSDA contained enough dwelling units for at least two national NHSDAs. This was done to reduce the costs of counting and listing geographic segments because it permitted counting and listing to occur just every other year for the NHSDA series of studies. Because the 1997 NHSDA segments had not been used for a prior NHSDA, some of the listed dwelling units had not been used and were available for use in the 1997 field experiment. Second, the 1997 NHSDA sample segments were assigned to a particular quarter of data collection. Because segments were randomly assigned to the four quarter panels of the national NHSDA, any set of quarters is also a national sample. Thus, for the field experiment, which was to be conducted in Quarter 4, we selected 282 segments from the sample segments that had been used in Quarters 1 through 3 of the 1997 NHSDA. Confining the sample to these segments ensured that the field experiment's data collection effort did not interfere with the national Quarter 4 NHSDA. Thus, for the 1997 field experiment sample, any dwelling unit that was counted and listed within these 282 segments and not selected for the 1997 national NHSDA was eligible for selection. More than enough dwelling units were available because of the practice of counting and listing a number sufficient to conduct two rounds of the national NHSDA.

Sample persons were selected using procedures similar to those used for the national NHSDA. After dwelling units were selected within each segment, an interviewer visited each selected dwelling unit and attempted to collect demographic information on all survey-eligible people residing there. This information was used to determine which racial/ethnic and age groups were represented in the dwelling unit, which in turn was used to classify the dwelling unit into one of 96 household types. Sample selection at the dwelling unit level was completed electronically: the sampling staff loaded the selection tables for the 96 household types into Apple Newton handheld computers, and a table lookup procedure that exactly mimicked the 1997 NHSDA paper procedure determined the number of people selected in the dwelling unit, either 0, 1, or 2.
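
The 96 selection tables themselves are not reproduced in this report, but the table-lookup pattern can be sketched as follows; the household types and probabilities below are invented for illustration only.

    import random

    # Hypothetical fragment of a lookup table: household type ->
    # (cumulative probability, number of persons to select). The real
    # tables were keyed to 96 types built from the roster's age and
    # race/ethnicity composition.
    SELECTION_TABLE = {
        "one_adult_no_youth": [(0.70, 0), (1.00, 1)],
        "adults_with_youth":  [(0.10, 0), (0.55, 1), (1.00, 2)],
    }

    def persons_to_select(household_type: str, rng: random.Random) -> int:
        # Map a uniform draw onto the cumulative cutoffs for this type.
        draw = rng.random()
        for cutoff, n_selected in SELECTION_TABLE[household_type]:
            if draw < cutoff:
                return n_selected
        return 0   # unreachable while the last cutoff is 1.00

    print(persons_to_select("adults_with_youth", random.Random(42)))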

After individuals were selected for the field experiment and after they agreed to participate, the computer randomly assigned the respondents to a treatment combination.

Subsampling segments. Initially, some 16,000 dwelling units were selected for the 1997 field experiment. At the end of October 1997, we determined that it was unlikely that the field staff could screen all of these dwelling units. Thus, we eliminated 40 segments that had not yet been released to the field interviewers (FIs). This action resulted in a sample of 14,327 dwelling units for the 1997 field experiment, which, at that time, we thought would still yield more than 2,296 interviews. This projection was based on the following expected yields for the sample: (a) eligibility rate: 80.6%, (b) dwelling unit response rate: 92.9%, (c) selected persons per dwelling unit: 30.5%, and (d) interview response rate: 80.0%.
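
Multiplying through the expected yields reproduces a projection above this target (a quick check, reading rate (c) as persons selected per screened, eligible dwelling unit):

    dwelling_units = 14_327
    projected = (dwelling_units
                 * 0.806     # (a) eligibility rate
                 * 0.929     # (b) dwelling unit response rate
                 * 0.305     # (c) selected persons per screened dwelling unit
                 * 0.80)     # (d) interview response rate
    print(round(projected))  # about 2,618 interviews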

5.2.2 1997 NHSDA Debriefing Subsample Design

The respondents in the 1997 field experiment received a debriefing questionnaire that asked them about the CAI interviewing experience. To have comparable information from those who had responded using the paper-and-pencil interviewing (PAPI) version of the 1997 NHSDA, a subsample of the 1997 Quarter 4 national NHSDA sample was selected. The debriefing subsample was confined to 66 PSUs and included the 43 PSUs selected with certainty for the 1997 national survey, as well as 23 PSUs that were randomly selected from the 72 noncertainty PSUs selected for the national 1997 survey. Within these PSUs, sampling rates were set to yield a total of 750 individuals. Again, the number of PSUs was limited to minimize interviewer training and data collection costs. Because it was a random sample of the 1997 NHSDA, the target population was the civilian, noninstitutionalized population aged 12 years old or older within the United States. Within these 66 PSUs, 150 segments were selected, with probability proportionate to a composite size, for the debriefing subsample.

At the time we selected this subsample, we anticipated that it would yield 1,068 eligible sample persons and 750 respondents. The actual yield was smaller than expected, and only 713 eligibles were selected.

As noted in Section 5.1.3, the part of this NHSDA Quarter 4 sample that was in the 99 PSUs selected for the field experiment became the comparison group for the field experiment.

Exhibit 5.2.1 Summary of 1997 NHSDA Field Experiment Sample Design: Expected Sample Yields

                                     National Quarter 4          Field Experiment Sample     1997 Q4 Subsample
                                     NHSDA* Sample               (Selection of 2,256         (Subselecting 750 Interviews
                                                                 Interviews)                 from National Q4 for Debriefing)
Sample Stage                        Cert.   Noncert.   Total    Cert.   Noncert.   Total    Cert.   Noncert.   Total

First Stage - Select PSUs (PSUs are counties or groups of counties; the field experiment sample and the Q4 subsample are subsamples of the national sample.)
Total PSUs                             43        72      115       43        56       99       43        23       66

Second Stage - Select Segments (Field experiment segments were selected from the Quarter 1-3 national sample segments; the 1997 Q4 subsample segments came from the Quarter 4 national sample.)
Total Segments                        269       216      485      153       129      282       82        69      150

Third Stage - Select Dwellings (The field experiment sample was selected from dwellings not previously selected for the national NHSDA.)
Total Dwelling Units                9,802     8,096   17,898    9,071     7,109   16,179    3,016     2,363    5,379
Estimated Eligibility Rate (%)      84.00     84.00    84.00    84.00     84.00    84.00    84.00     84.00    84.00
Estimated Response Rate (%)         94.00     94.00    94.00    94.00     94.00    94.00    94.00     94.00    94.00
Total Completed Screenings          7,740     6,393   14,132    7,162     5,613   12,775    2,381     1,866    4,247

Fourth/Fifth Stage - Select People
Total People Selected               3,486     2,926    6,412    1,731     1,483    3,214      575       493    1,068
Estimated Selection Error Rate (%)  90.00     90.00    90.00    90.00     90.00    90.00    90.00     90.00    90.00
Estimated Response Rate (%)         78.00     78.00    78.00    78.00     78.00    78.00    78.00     78.00    78.00
Total Completed Interviews          2,447     2,054    4,501    1,215     1,041    2,256      404       346      750

*The part of this NHSDA Quarter 4 sample that was in the 99 PSUs selected for the field experiment became the comparison group for the field experiment.

Sources: National Household Survey on Drug Abuse: Development of Computer-Assisted Interviewing Procedures; 1997 Field Experiment. 1997 National Household Survey on Drug Abuse: Quarter 4.

5.3 Electronic Screener

An electronic screener was also used in the 1997 field experiment. In early 1997, we reviewed hardware options and selected the Apple Newton 2000 handheld computer for the application. We used FormLogic from Wright Strategies, Inc., San Diego, California, to develop the screening program. This package offered an MS Windows development platform suitable for the screener design requirements. Furthermore, Wright Strategies was willing to work with RTI programming staff to ensure timely and robust software support throughout the entire development process.

Development of the Newton NHSDA screening application began in early 1997. The development was an iterative process with extensive usability testing conducted after each cycle. In May 1997, we conducted a small feasibility test in Durham, North Carolina, to examine the application in realistic field conditions. Following this feasibility test, additional development was undertaken to add a number of case management features to the Newton application. Subsequent testing in August 1997 indicated that some of these additional case management features were unstable in the field interviewing environment, and they were dropped from the application. However, the cumulative results of our testing convinced us that the basic Newton screener application was ready to be tested. We developed a contact work sheet for handling the dropped case management activities.

Each interviewer received a Newton that contained the case ID numbers and addresses for all dwelling units in his or her field assignment. When visiting a dwelling unit, the interviewer accessed the address by tapping the specific line containing it. The screening application guided the interviewer through a series of questions to determine, and record in the Newton, the number of persons aged 12 or older in the household along with their age, gender, marital status, race/ethnicity, and military status.11 Using this information, the screening application consulted sample selection tables, indicated to the interviewer whether none, one, or two residents had been selected for the interview, and displayed the characteristics of the selected persons. Because no names were collected, the sample persons were identified by their age, gender, marital status, and race/ethnicity.

5.4 Screening and Interview Response Rates

The screening and interview response rates in the 1997 field experiment were lower than those achieved in the main study. Exhibit 5.4.1 displays the screening rates for the 1997 field experiment and for two comparison groups: 1) the NHSDA Quarter 4 sample that was in the same PSUs as the field experiment and 2) Quarters 1 through 3 of the NHSDA for the same sample segments as were used for the field experiment. If we consider group quarters, which were not included in the Newton application, as ineligible for the 1997 field experiment, the overall screening response rate was 86%, about 7 percentage points lower than that achieved in similar areas in the national NHSDA. About 2.5 percentage points of this shortfall were due to the failure to obtain access to restricted housing; 3.5 percentage points were due to increased refusals. It is unlikely that the electronic screener contributed to the failure to obtain access to restricted housing. However, we are not able, from this study alone, to verify that using the Newton did not contribute to the increased screening refusals.

In Exhibit 5.4.2, we present the results of attempting to interview the selected persons. To obtain a consistent picture of the coverage of the population for the 1997 field experiment and the main NHSDA, we counted people who had completed the main NHSDA in Spanish as nonrespondents because a Spanish interview was not available for the field experiment. Overall, only about 62.7% of the selected persons in the 1997 field experiment were interviewed, whereas the equivalent percentage for the NHSDA main study was 75.6%, a difference of 12.9 percentage points. Again, it is not possible to determine whether this difference is due to using electronic instruments.

Exhibit 5.4.3 presents the distribution of the 1997 field experiment's respondents by treatment. The overall sample size was 1,982; 56% of the respondents were youths aged 12 to 17, a higher proportion than we had planned for, owing to youths responding at a higher rate than adults.

We compared the demographics of the comparison group to those of the field experiment; the results are shown in Exhibits 5.4.4 through 5.4.6. Except for the planned-for larger number of youths in the field experiment, there are few differences. Thus, the comparisons are likely to be valid.

Exhibit 5.4.1 Screening Response Rates: Comparison of 1997 Field Experiment to Selected 1997 NHSDA Experience

                                          Percent of Dwelling Units               Number of Dwelling Units
                                        1997 Field  NHSDA Q4    Qtrs 1-3       1997 Field  NHSDA Q4    Qtrs 1-3
                                        Experiment  Comparison  Field Exp.     Experiment  Comparison  Field Exp.
                                                    Group       Segments                   Group       Segments

Ineligible Dwelling Unit                   16.28       15.30       16.75          2,333       8,782       1,415
  Vacant                                   11.80       11.32       11.35          1,690       6,500         959
  Not a Primary Residence                   1.63        1.79        2.66            234       1,028         225
  Not a Dwelling Unit                       2.30        2.07        2.62            329       1,187         221
  Other                                     0.56        0.12        0.12             80          67          10

Eligible Dwelling Unit, Not Screened
for Eligible Persons                       14.60        6.37        5.35          1,751       3,099         376
  No One at Home                            2.08        2.09        1.98            250       1,014         139
  Refusal                                   5.84        2.46        2.34            701       1,196         163
  Denied Access                             2.91        0.59        0.26            349         287          18
  Newton Screener Problem                   0.57        0.00        0.00             63           0           0
  Other Nonresponse (Group Quarters)1       1.39        N/A         N/A             167         N/A         N/A
  Other Nonresponse                         1.80        1.24        0.80            216         602          56

Eligible Dwelling Units, Screened for
Eligible Persons                           85.40       93.63       94.65         10,243      45,529       6,658

Total Lines Selected                      100.00      100.00      100.00         14,327      57,410       8,449

Note: Percentages for the ineligible dwelling unit categories are based on total lines selected; percentages for the eligible dwelling unit categories are based on eligible dwelling units only.

1The Newton application used for the 1997 field experiment did not handle group quarters.

Sources: National Household Survey on Drug Abuse: Development of Computer-Assisted Interviewing Procedures; 1997 Field Experiment. 1997 National Household Survey on Drug Abuse: Quarter 4.

Exhibit 5.4.2 Distribution of Final Response Codes for Selected Persons

                               1997 Field Experiment          NHSDA Q4 Comparison Group      Qtrs 1-3 NHSDA Field
                                                                                             Experiment Segments
Selected Persons              Sample   % of      % of Non-    Sample   % of      % of Non-   Sample   % of      % of Non-
                              Size     Selected  complete     Size     Selected  complete    Size     Selected  complete

Total                         3,163    100.00%                4,110    100.00%               2,891    100.00%
Respondents                   1,982     62.66%                3,105     75.55%               2,186     75.61%
Nonrespondents                1,181     37.34%   100.00%      1,005     24.45%   100.00%       705     24.39%   100.00%
  No One Home, R Unavailable    180      5.69%    15.24%        293      7.13%    29.15%       179      6.19%    25.39%
  Physically/Mentally
  Incompetent                    57      1.80%     4.83%         37      0.90%     3.68%        22      0.76%     3.12%
  Language Barrier              188      5.94%    15.92%        268      6.52%    26.67%       215      7.44%    30.50%
  Refusal                       625     19.76%    52.92%        358      8.71%    35.62%       256      8.86%    36.31%
  Other                         131      4.14%    11.09%         49      1.19%     4.88%        33      1.14%     4.68%

Sources: National Household Survey on Drug Abuse: Development of Computer-Assisted Interviewing Procedures; 1997 Field Experiment. 1997 National Household Survey on Drug Abuse.

Exhibit 5.4.3 Distribution of 1997 Field Experiment Respondents

Treatment versions 1-8 are defined as in Exhibit 5.1.1 (versions 1-4: single gate questions; versions 5-8: multiple gate questions; consistency checks and multiple use questions absent or present as shown there).

Respondent                                  Treatment Version
Characteristics                 1     2     3     4     5     6     7     8    Total

Total                         208   314   285   264   245   240   219   207   1,982

Age Group
  12-17                       118   179   157   148   142   142   118   113   1,117
  18+                          90   135   128   116   103    98   101    94     865

Gender
  Males                       112   139   138   123   119   110    98    88     927
  Females                      96   175   147   141   126   130   121   119   1,055

Race/Ethnicity
  Hispanic                     45    73    66    62    63    61    49    51     470
  Non-Hisp., Black             55    76    79    70    58    67    63    63     531
  Non-Hisp., All Other Races  108   165   140   132   124   112   107    93     981

Education1
  < High School                26    28    20    32    25    27    22    19     199
  High School                  35    52    49    41    31    34    38    41     321
  > High School                29    55    59    43    47    37    41    34     345

1Education includes only individuals aged 18 or older.

Sources: National Household Survey on Drug Abuse: Development of Computer-Assisted Interviewing Procedures; 1997 Field Experiment. 1997 National Household Survey on Drug Abuse: Quarter 4.

Exhibit 5.4.4 Demographic Distributions of Field Experiment and 1997 NHSDA Comparison Group

                                 1997 Field Experiment        Comparison Group:
                                 (CAPI/ACASI)                 1997 Quarter 4 (PAPI/SAQ)
Respondent Characteristics       Number of     Percentage     Number of     Percentage
                                 Respondents                  Respondents

Total                              1,982        100.00%         3,105        100.00%

Age Group
  12-17                            1,117         56.36%           979         31.53%
  18+                                865         43.64%         2,126         68.47%

Gender
  Males                              927         46.77%         1,265         40.74%
  Females                          1,055         53.23%         1,840         59.26%

Race/Ethnicity
  Hispanic                           470         23.71%           572         18.42%
  Non-Hisp., Black                   531         26.79%         1,023         32.95%
  Non-Hisp., All Other Races         981         49.50%         1,510         48.63%

Education1
  < High School                      199         23.01%           406         19.10%
  High School                        321         37.11%           790         37.16%
  > High School                      345         39.88%           930         43.74%

1Education includes only individuals aged 18 or older.

Sources: National Household Survey on Drug Abuse: Development of Computer-Assisted Interviewing Procedures; 1997 Field Experiment. 1997 National Household Survey on Drug Abuse: Quarter 4.

Exhibit 5.4.5 Distribution of Field Experiment and Comparison Group Respondents, by Work Status

                        1997 Field Experiment        Comparison Group:
                        (CAPI/ACASI)                 1997 Quarter 4 (PAPI/SAQ)
Work Status             Number of     Percentage     Number of     Percentage
                        Respondents                  Respondents

Total                     1,979        100.00%         3,091        100.00%

Employed                    755         38.15%         1,552         50.21%
  Full-Time                 497         25.11%         1,161         37.56%
  Part-Time                 228         11.52%           368         11.91%
  Extended Leave             30          1.52%            23          0.74%

Not Employed              1,224         61.85%         1,539         49.79%
  Looking                    56          2.83%           113          3.66%
  Not Looking                25          1.26%            24          0.78%
  Student                   979         49.47%           958         30.99%
  Disabled                   19          0.96%            70          2.26%
  Other1                    145          7.33%           374         12.10%

1Other includes full-time homemaker, retired, and not specified.

Sources: National Household Survey on Drug Abuse: Development of Computer-Assisted Interviewing Procedures; 1997 Field Experiment. 1997 National Household Survey on Drug Abuse: Quarter 4.

Exhibit 5.4.6 Distribution of 1997 Field Experiment and Comparison Group Respondents, by Income

                        1997 Field Experiment        Comparison Group:
                        (CAPI/ACASI)                 1997 Quarter 4 (PAPI/SAQ)
Income ($US)            Number of     Percentage     Number of     Percentage
                        Respondents                  Respondents

Total                     1,957        100.00%         3,080        100.00%
< $20,000                 1,497         76.49%         2,220         72.08%
$20,000 to $50,000          307         15.69%           617         20.03%
> $50,000                   153          7.82%           243          7.89%

Sources: National Household Survey on Drug Abuse: Development of Computer-Assisted Interviewing Procedures; 1997 Field Experiment. 1997 National Household Survey on Drug Abuse: Quarter 4.

5.5 Analysis Weights

The analysis weights for the 1997 field experiment sample were constructed as follows. Control totals were created by summing the fully adjusted national sample weights for the 99 PSUs that contained the field experiment sample; all four quarters of data were used to create the control totals. An initial weight was calculated as the ratio of the total sum of weights to the sum of weights for the respondents. These initial weights were then adjusted to the control totals within age, race/ethnicity, and gender subgroups using a raking procedure at both the household and person levels.
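
As an illustration of the raking step, the sketch below implements iterative proportional fitting of weights to marginal control totals. It is a minimal Python/NumPy fragment under simplified assumptions: two raking dimensions rather than the full age, race/ethnicity, and gender margins, and a single level rather than separate household- and person-level adjustments.

    import numpy as np

    def rake(weights, categories, control_totals, iterations=50, tol=1e-8):
        # Adjust weights until weighted category counts match every set of
        # control totals (iterative proportional fitting).
        w = np.asarray(weights, dtype=float).copy()
        for _ in range(iterations):
            max_shift = 0.0
            for cats, targets in zip(categories, control_totals):
                current = np.bincount(cats, weights=w, minlength=len(targets))
                factors = np.ones_like(targets, dtype=float)
                nonzero = current > 0
                factors[nonzero] = targets[nonzero] / current[nonzero]
                w *= factors[cats]
                max_shift = max(max_shift, np.abs(factors - 1.0).max())
            if max_shift < tol:          # all margins matched
                break
        return w

    # Toy example: 6 respondents, raked to age and gender margins.
    age = np.array([0, 0, 0, 1, 1, 1])   # 0 = 12-17, 1 = 18+
    sex = np.array([0, 1, 0, 1, 0, 1])   # 0 = male,  1 = female
    w = rake(np.ones(6), [age, sex],
             [np.array([10.0, 20.0]), np.array([12.0, 18.0])])
    print(np.bincount(age, weights=w), np.bincount(sex, weights=w))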

9 Because the structure of the questionnaire required respondents to first indicate the time period of their most recent use and then to indicate the number of days used in that period, there are some implicit multiple use questions in every interview, and these were analyzed as well.

10 The PSUs and segments (segments are the geographic second stage sampling units in the NHSDA) selected for the 1997 NHSDA California/Arizona supplemental sample were not considered during the design of the 1997 field experiment.

11 People who are on active military duty are not eligible for the NHSDA.
