
Computer Assisted Interviewing for SAMHSA's National Household Survey on Drug Abuse

8. Development and Testing of an Electronic Screener: 1997 Field Experiment and Subsequent Testing

In this chapter, we describe the development and testing of the electronic screener. The development and testing process included

  1. development and small-scale testing of the basic sample selection application,

  2. addition of selected case management activities to the application and testing in the 1997 field experiment,

  3. revision of the application to include additional case management activities and field testing in August 1998, and

  4. final development for the 1999 NHSDA.

8.1 Need for Electronic Screening: Research Issues

To achieve the desired precision for important subpopulations defined by age and race/ethnicity, a large number of households must be screened to identify enough eligible respondents in these subpopulations. Field interviewers (FIs) visit each sample dwelling unit and determine how many of the residents are eligible for the survey and their basic demographic characteristics. Based on this roster of residents, none, one, or two people are selected for the survey. Prior to 1999, the FIs consulted a complex selection table to determine who, if anybody, was selected for the survey. The electronic screener was developed to achieve the following:

  1. Eliminate interviewer errors in the selection process, including accidental errors and intentional tampering with the roster by the FI to achieve certain selections.

  2. Develop the capability to include more variables in the respondent selection algorithms.

  3. Reduce data editing, data entry, and shipping costs.

  4. Provide more detailed and timely information on field activities.
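
To make the rostering and selection step described above concrete, the sketch below shows, in Python, how an application might select zero, one, or two respondents from a household roster. The age-group selection probabilities, the age-group cut points, and the data layout are hypothetical placeholders for illustration only; they are not the actual NHSDA selection parameters or algorithm.

    # Illustrative sketch only: the selection probabilities and age groups below
    # are hypothetical placeholders, not the actual NHSDA selection parameters.
    import random

    SELECTION_PROB = {"12-17": 0.9, "18-25": 0.7, "26-34": 0.4, "35+": 0.2}

    def age_group(age):
        if age <= 17:
            return "12-17"
        if age <= 25:
            return "18-25"
        if age <= 34:
            return "26-34"
        return "35+"

    def select_respondents(roster, max_selected=2, rng=random):
        """Select zero, one, or two eligible residents (age 12 or older) from the
        roster, mimicking the table-based selection the electronic screener replaced."""
        eligible = [person for person in roster if person["age"] >= 12]
        selected = [p for p in eligible if rng.random() < SELECTION_PROB[age_group(p["age"])]]
        if len(selected) > max_selected:
            selected = rng.sample(selected, max_selected)
        return selected

    # Example roster: a three-person household with one eligible youth
    roster = [{"line": 1, "age": 44}, {"line": 2, "age": 40}, {"line": 3, "age": 15}]
    print(select_respondents(roster))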

8.2 Development and Testing of the Screener for the 1997 Field Experiment

8.2.1 Developing the Application

We first needed to choose the hardware for the application. Several hardware options were explored to determine which would be the most feasible and cost-effective; among these were the Slate-Top, a laptop, and the Newton. After a careful review, the Newton 2000 handheld computer was selected as the best choice. The Slate-Top was considered for both the screening and the computer-assisted interviewing (CAI) but was found to be unable to handle the computer-assisted personal interviewing (CAPI), and its cost was excessive for hardware that was to perform only screenings. In addition, because of its size and weight, we concluded that using a laptop was not feasible.

FormLogic from Wright Strategies, Inc., San Diego, California, was the software selected to develop the screening program. This software package offered a Microsoft Windows development platform, suitable for the screener design requirements. Furthermore, Wright Strategies was willing to work with Research Triangle Institute (RTI) programming staff to ensure timely and robust software support throughout the entire development process.

Development of the Newton NHSDA screening application began in early 1997. The development was an iterative process with extensive usability testing conducted after each cycle of development. These initial development stages focused on programming the basic rostering and sample selection activities that are done after a member of a dwelling unit has been contacted. This was the key function that had to be successfully completed for the electronic screener to be a viable alternative to the paper-based procedures. However, a number of other case management activities are integrated with the screening procedures during the NHSDA, including determination of vacancy status, record of contacts and attempted contacts and their outcomes, and so on. This additional functionality was incrementally added to the application.

In May 1997, we conducted a small field feasibility test in Durham, North Carolina, to test the Newton application in realistic field conditions. Following this feasibility test, additional development was completed, and the Newton application for the 1997 field experiment was finalized in September 1997.

The cumulative results of our testing convinced us that the basic rostering and sample selection functions were ready to be tested in the 1997 field experiment. However, near the start of the field experiment, the full application was discovered to still contain too many bugs for production use. Therefore, the application was scaled back to just the critical functions of rostering households and selecting respondents. We transferred the other tasks to a paper-and-pencil contact worksheet and to the case management system (CMS) that was on the Toshiba laptop. Also, we found that the Newton handwriting feature was not always reliable and user friendly; therefore, we made the option of entering numeric or alphabetic responses via a keypad or keyboard available to the FIs.

8.2.2 Training Interviewers on Use of Electronic Screener

During three training sessions, a total of 171 interviewers were trained in the use of the Newton for the 1997 field experiment. From September 22 through 28, 1997, 144 interviewers were trained in Los Angeles and at Research Triangle Park (RTP), North Carolina. Two additional sessions were held at RTI from November 6 to 11 and from November 30 through December 3, 1997, with 23 and 4 additional interviewers, respectively.

In the first training session, five training rooms at each site were each staffed by a lead trainer, two assistant trainers, and a computer support technician. The number of interviewers in a training room ranged from 12 to 17. The follow-up training sessions at RTI were completed in one training room with all interviewers present. Initial orientation to the Newton focused on introducing the hardware and covered general information on how to turn the Newton on and off, care for it, change its batteries, calibrate the pen, use the pen to select responses, use the backlighting feature, and connect it to the Toshiba laptop for nightly transmissions. A basic introduction to the NHSDA project and locating sample dwelling units was then given, after which the FIs returned to the Newton training to perform group exercises of increasing difficulty and to participate in paired mock trials.

Both trainers and interviewers were asked to evaluate the training session. Interviewers provided their input via a standardized questionnaire completed at the end of training. Input from the 30 staff who worked as trainers was solicited in an electronic mail message after all trainers had left the training site; trainer responses were open-ended text. The comments from both groups are summarized below.

In general, trainers felt the portion of the training session devoted to the Newton went well; however, they noted two specific problem areas. First, they noted that training on the Newton was difficult because the Newton screen could not be projected for the entire class to see in the same way that a laptop screen can be projected: The Newton has no screen "output" functionality. During the course of preparing for the training, several possible ways for training interviewers to use the Newton were discussed, including using no visuals during the Newton portion of training, preparing overhead transparencies of the required screens, displaying the screens using PowerPoint, and using small television cameras to display the screens. We chose to display the screens using PowerPoint because we believed this allowed for maximum flexibility at a reasonable cost. However, this decision tied the trainers to the specific scenarios prepared in advance of training. Trainers had more difficulty leading their interviewers through impromptu exercises because the interviewers could not look at a display to know that they were in the right place.

Second, nearly half the trainers thought that the screening exercises were too complicated. In our attempt to cover all possible field situations that an interviewer might encounter, trainers felt the screening exercises became too difficult for the interviewers to handle while they were also trying to become familiar with the Newton. Most trainers suggested that the first few screening exercises be straightforward (e.g., no addition of family members after the rostering is complete, no deleting of household members who do not really live at the housing unit). Trainers felt that this would give the interviewers a chance to learn the basic process before introducing the more complicated components. In addition, trainers noted that some interviewers new to field data collection began to worry that all household screenings would be as complicated as our practice scenarios. Trainers had to reassure these interviewers that this would not be the case and that most screening respondents would provide the roster information easily and completely. The trainers recommended that once a set of straightforward screenings had been completed, more complicated scenarios could be introduced to cover all the "what if" situations. With more time at training to cover the Newton, this progression of screening exercises should work well.

As part of the training evaluation, interviewers were asked to rate the amount of time spent on learning how to use the Newton and on how to transmit data between the Newton and the Toshiba. A majority of the interviewers felt the amount of time spent learning to use the Newton was about right. A small number of interviewers (7%) at the RTP training session felt that too much time was spent on the Newton, whereas only 1% of the interviewers trained in Los Angeles felt this way. Perhaps more of the RTP interviewers had previous interviewing experience and thus were able to pick up the Newton skills more easily. Similar results were also seen on the amount of time spent learning how to transmit data between the Newton and the Toshiba. Again, the majority of interviewers felt that the amount of time spent on this activity was about right. However, fewer interviewers felt that too much time had been spent on this activity.

Interviewers were also given the opportunity to record open-ended comments regarding the homestudy materials. A number of interviewers noted that the portion of the manual that covered the Newton was especially useful. The screen images and step-by-step instructions were cited as some of the more useful parts of the manual. Several interviewers suggested that even more information of this type should be included in the future. Interviewers also noted that a "cheat sheet" of Newton instructions would be a useful addition to the manual, perhaps as a detachable or laminated card that could be kept in the Newton carrying case or in the Toshiba bag. One interviewer noted that it would have been useful to receive the Newton prior to training so that she could work with it as she reviewed the manual.

Many interviewers commented that the Newton should have been introduced earlier in the training session. Several interviewers noted that the first few days of training were not particularly intensive and the last few days were exceptionally challenging because all the hardware was being covered. They felt the Newton (and the Toshiba) should be introduced on the first day of training and all procedures related to the hardware should be covered before such topics as locating the area segment, introducing the study, or refusal conversion are discussed.

8.2.3 Implementation Results: Interviewer and Respondent Reactions

To obtain information on the implementation, we conducted two debriefing teleconferences with a sample of interviewers: one during the third week of the field experiment and one following the end of the 1997 field experiment. The purpose of these calls was to hear from interviewers about their initial experiences using the Newton screening application. A total of 16 interviewers took part in the debriefings, with no interviewer taking part in both sessions. Each call was moderated by an RTI staff member. In general, interviewers had favorable things to say about the Newton and the electronic screening application. More specific comments related to the different facets of working with the Newton are summarized below.

Respondent reactions. Some interviewers felt that the Newton gave them a more professional appearance on the dwelling unit's doorstep, which resulted in respondents being more likely to provide the screening information. Interviewers also noted that they felt more professional because they did not have to shuffle a bunch of papers at the doorstep while the respondent waited. Interviewers also commented that some of their respondents were interested in how the Newton worked and liked to watch as the interviewer entered the screening information. Finally, interviewers noted that because the screening moves so quickly on the Newton, respondents did not become annoyed by the amount of time needed to complete the screening.

Battery usage. Interviewers seemed to fall into two groups with regard to battery usage: those who had used several sets of batteries and those who were still working with their first set. The primary reason for the difference appeared to be the amount of time the Newton was set in the "backlight" mode. The backlighting mode, which causes the screen to appear brighter, drains the batteries more quickly; interviewers who used this mode often therefore used more battery power.

Screen readability. All interviewers reported that the Newton screen is very difficult to read in bright sunlight. This problem was reported both by interviewers who regularly used the backlighting mode and by those who did not. To be able to read the screen, interviewers either had to move to a more shaded spot or try to cover the screen with their hand. Interviewers reported that this is annoying but does not seem to result in lost screenings (i.e., respondents who are unwilling to move to another location either inside or outside the dwelling unit to complete the screening). Interviewers also noted that some of the text on the Newton was too small to be easily seen. They requested that all text be printed in the same point size as the text on the introduction screen, which used a slightly larger point size.

Newton "pens." All interviewers reported that they still had the original Newton "pen" in their possession. Most interviewers were using the original pen, but a couple of interviewers reported a preference for the replacement pens that were provided at training. Several reasons were given for this preference. The replacement pen is easier to hold, the replacement pen fits easily inside the Newton carrying case, and the original pen is too expensive and they are afraid of losing it.

Carrying cases. Most interviewers seemed to like the leather carrying cases. They felt the case protected the Newton well. However, one interviewer had experienced a problem with the zipper on the side of the carrying case causing the flashcard (i.e., the device that stores the interviewer's assignment) to pop out of her Newton. Because the Newton was in the case, the interviewer could not tell that the flashcard had come out until she was ready to screen a case. In this case, simply pushing the flashcard back in solved the problem. However, the interviewer noted that it would be wise to inform all interviewers of this possibility and to instruct staff to keep the side zipper partially unzipped to minimize the chance of the flashcard becoming dislodged. All interviewers were informed of this problem and advised of the corrective action to take.

Size and shape. Interviewers reported the Newton was easy to hold and use. Several interviewers said they rested the Newton on their clipboard to complete the screenings. Others said they simply held the Newton in their hand. The weight of the Newton did not appear to be a problem.

Screening program. Interviewers had a number of comments regarding changes that could be made to the screening program. Several noted that it would be nice to have all information on finalized screenings resident on the Newton so that the list of pending screening cases would be accurate.18 Interviewers also noted that there should be some way to go back into a case after the screening is completed so that corrections could be made if necessary. In general, interviewers would like to have additional functionality on the Newton, specifically, the ability to see the status of all cases and the ability to use some of the Newton's other functions to schedule appointments, organize work time, and so on. Several interviewers reported that it was difficult to use the "page down" function to scroll through their cases; they noted that it is very easy to select a case accidentally rather than scrolling down. They also noted that when new segments are added to their assignment, they may need to page down several screens to get to the case they want to work. Each time the Newton was turned off and back on, the cursor moved back to the top of the list and the interviewer had to scroll down again; interviewers noted that this was annoying and time consuming. Interviewers also felt that time was wasted by having the full text of each screening question appear every time. They noted that after completing a few screenings they knew how to ask these questions and did not need to see the text of each question every time.

Training issues. Interviewers were asked whether there was additional information they wish had been provided at training. One interviewer mentioned that there was too much emphasis at training on doing everything correctly and not making any mistakes. She felt that interviewers should be encouraged to make mistakes during training to see what will happen so that when they are out in the field they will know what to do. Another interviewer said that additional time should have been spent discussing how to correctly screen vacant housing units. Also, some interviewers felt the screening exercises were too complex; these interviewers felt that there should have been more "straightforward" practice exercises before moving into the complex situations that are not likely to happen as often in the field.

Finally, interviewers were asked to report the thing they liked best about the Newton and the thing they liked least. The items they mentioned are listed below:

Like Best
1. It makes the screening go faster.
2. The selections are made for you.
3. It isn't heavy/It is portable.
4. It is easy to work with.
5. It looks professional.
Like Least
1. Can't go back in to a finalized case.
2. Bad glare when working in bright sun.
3. Can't remove some cases from pending status.
4. Questions are too repetitive.
5. Print on some screens is too small.

Field interviewer debriefing questionnaire. At the end of the 1997 field experiment, all interviewers were asked to complete a questionnaire concerning the survey as a whole. A total of 142 questionnaires were completed, and their results closely matched the comments given in the debriefing teleconferences.

Overall, approximately 89% of all reporting interviewers stated that some difficulty with the Newton did occur (62.9% reported actually having problems at the doorstep of the dwelling unit), but only 17% reported difficulty on a regular basis. There were 208 mentions of the causes of difficulty with the Newton (see Exhibit 8.2.1).

In regard to battery usage and lighting problems, 97% reported using their backlighting, which would account for the interviewers' concern about the decrease in battery charge life. "Worrying about changing batteries" was the third least liked aspect of the Newton as reported in the end-of-study survey questionnaire.

On another note, 45.4% of interviewers usually carried their Toshiba when screening a household. Of these, 31.3% reported a positive effect on the screening and 1.6% reported a negative effect. This positive effect corresponded with the interviewers' view that the Toshiba gave an added look of professionalism and helped them gain cooperation.

Screener debriefing questions. As part of the 1997 field experiment's CMS, interviewers were prompted to answer a short series of questions for each screening they finalized. These questions were brought up automatically by the system when the interviewer transferred data from the Newton to the laptop computer. Two of these questions related specifically to the Newton screening application. Interviewers were first asked whether they thought that the presence of the Newton influenced the respondent's decision to complete screening. If the interviewer indicated that it did, a second question asked whether the Newton influenced the respondent in a positive or negative way. Results from these two questions are presented in Exhibit 8.2.2.

The interviewers did not believe that the Newton had an effect on the screening outcome in nearly two thirds of the cases. However, for those cases where the Newton did seem to influence the respondent, the effect was nearly always positive (94.1% vs. 5.7%). This was seen as an important finding in that we did not want to implement a new screening procedure that would force interviewers to counter additional resistance.

8.2.4 Implementation Results: Hardware, Software, and User Problems

We monitored the field situation closely. Technical support staff entered each call received from the field into a problem report log. This log allowed us to determine the nature and number of problems being reported by our interviewing staff. Problems entered into the log after the first 5 weeks of the field experiment were reviewed, and those that relate to the Newton screener are summarized below.

A total of 177 problems were reported that were classified as related to the use of the Newton. These 177 problems were spread over 94 interviewers (approximately 55% of the original 171 interviewing staff). The largest number of problems reported by a single interviewer was seven, with most interviewers appearing on the log only once or twice. It is worth noting that 11 of the 94 interviewers who reported Newton problems (including the FI with seven calls to RTI's technical support) left the field experiment, either because they quit or because they were asked to resign.

At a broad level, the problems experienced by the interviewers can be coded into three categories:

  1. interviewer errors,

  2. hardware issues, and

  3. errors due to interaction of interviewer and hardware.

Interviewer mistakes made up the largest proportion of the total number of problems. A total of 74 problems (42%) were attributable solely to the interviewer. "Hand-holding" was the most common solution to these interviewer mistakes. Nearly half of the interviewer mistakes required the technical support person to review a procedure that had been taught during training and/or was discussed in the interviewers' manual. Most of the remaining interviewer mistakes occurred when the interviewer either screened a household on the wrong line number, finalized the roster and implemented the selection process before fully reviewing the roster for accuracy, or finalized a household as ineligible by mistake (e.g., by incorrectly indicating that no one at the housing unit would live there for most of the quarter). Based on the frequency of these types of mistakes, in subsequent applications we developed procedures that allowed interviewers to "reactivate" lines or move roster records from one line to another with the permission of their supervisor.

An additional 49 problems were attributable to hardware or software problems. In some cases, the Newton suffered a complete failure and the unit had to be returned to RTI and a replacement sent to the interviewer (n=5), or hardware (such as a new flashcard or connector cord) had to be sent (n=5). However, the most common problem in this category (n=24) occurred when the Newton simply "died" and could not be revived by the interviewer using the procedures taught during training. This happened most often when the Newton was without power (no batteries or AC charge) for an extended period of time. In these situations, technical support staff guided the interviewers through a "Hard Reset" of the Newton using the reset button on the back of the unit. Additionally, there were 10 reported problems involving a lost username and password, which affected the linking between the Newton and the Toshiba. These problems were fixed by the technical support staff by walking the FIs through a process to reset the information. Also, there were five other incidents of linking problems; however, there was insufficient evidence to determine whether these were user errors or hardware errors. In each instance, the problem was resolved by establishing a new link.

The last grouping of problems (54 in all) called into RTI's technical support staff were identified as errors that occurred as a result of the interaction between the FI and the hardware. A third of these problems were caused by the flashcard popping out of its slot as a result of the zipper on the carrying case pressing on the release button or the flashcard not being properly inserted. The flashcard is the device that stores the interviewer's assignment; the screening application itself is stored internally in the Newton. The decision was made to store the assignment on the flashcard so that if a Newton broke, the interviewer would be able to easily pop the flashcard out and load it into a replacement Newton. Because the carrying cases were delivered only 2 days before training began, we did not realize this would be a problem until data collection was actually under way. In addition, the cases were leather, which caused the Newton to overheat, and interviewers had to remove the Newton from the case to transmit the information. Different cases were purchased for the 1999 NHSDA. These new cases were not leather and did not cause the Newton to overheat, had velcro fasteners, and allowed the interviewers to transmit data without removing the case.

Some of the remaining Newton-related problems (n=18) occurred as a direct result of the flashcard popping out. These resulted from FIs not correctly following the messages the Newton displayed after the flashcard popped out and was replaced. As a result, either the FormLogic icon was missing or all cases were "missing." Also, during training, interviewers were instructed to exit the screening program after every completed screening. However, in several cases (n=13) FIs did not do this, which then resulted in memory problems. This problem was resolved by exiting the program.

8.2.5 Implementation Results: Response Rates and Data Quality

Two important considerations in measuring the success of the Newton screening application were the effect of the Newton on screening response rates and on data quality. In this section, comparisons are made between the 1997 field experiment and data collected from those same segments during Quarters 1, 2, and 3, and from data in corresponding primary sampling units (PSUs) for Quarter 4 of the 1997 NHSDA.

Data on eligibility and response rates are presented in Exhibit 8.2.3. In general, screening response rates for the NHSDA are quite high. We had not hypothesized that the Newton would have any significant positive impact on screening response rates; however, neither did we wish to see a significant decrease in response rates for electronic screening as compared with paper screening. The data indicate a lower screening response rate for the 1997 field experiment than for the other two groups. However, because the interviewers used in the field experiment lacked NHSDA experience, we were unable to attribute this difference to the use of the Newton. This inability to distinguish between staff experience and the new procedures was a consequence of our decision to use the ongoing NHSDA as the comparison group. Although this did provide a much larger sample for looking at differences in reporting, it prevented us from assessing the impact of the screening instrument on the screening response rate.
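
The unweighted rates in Exhibit 8.2.3 follow directly from the dwelling unit (DU) counts: the eligibility rate is the number of eligible DUs divided by the number of selected DUs, and the screening response rate is the number of completed DUs divided by the number of eligible DUs. The short worked sketch below uses the 1997 field experiment column; the function name is ours.

    def screening_rates(selected_dus, eligible_dus, completed_dus):
        """Unweighted eligibility and screening response rates, as in Exhibit 8.2.3."""
        return {
            "eligibility_rate": eligible_dus / selected_dus,
            "response_rate": completed_dus / eligible_dus,
        }

    # 1997 field experiment column of Exhibit 8.2.3
    rates = screening_rates(selected_dus=14327, eligible_dus=11994, completed_dus=10243)
    print(f"{rates['eligibility_rate']:.1%}  {rates['response_rate']:.1%}")  # 83.7%  85.4%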

Because interviewers were no longer required to complete the actual selection process, we did expect that electronic screening would result in improved data quality for the screening data. However, as can be seen from the following results, this anticipated increase in quality was not realized because we had not included in the application the range and edit checks that are the cause of the increased data quality observed in computer-assisted applications. Exhibit 8.2.4 provides information on the quality of the electronic screener data as compared to the paper form. The percentage of non-blank responses to the demographic data items appears comparable across the three samples. Inconsistencies in the screener data were extremely low for both modes of screening. However, the 1997 field experiment data showed a higher rate of gender codes that were inconsistent with the relationship to the householder. There were 83 inconsistencies for the field experiment as compared with 26 for Quarters 1 to 3 and 42 for Quarter 4. In addition, all three samples had occurrences of husbands and wives who were coded as "never married." In subsequent applications, additional programming logic was added so that such inconsistencies no longer occur. For example, if "daughter" is selected as the relationship, then "female" will automatically be filled for the gender.
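
The inconsistency counts reported in Exhibit 8.2.4 amount to simple cross-checks of the roster fields. The sketch below tallies two of them (gender codes that conflict with the relationship code, and spouses coded as never married); the field names and relationship codes are assumptions for illustration, not the screener's internal data layout.

    # Hedged sketch: field names and relationship codes are assumed for illustration.
    MALE_ONLY = {"husband", "father", "brother", "son"}
    FEMALE_ONLY = {"wife", "mother", "sister", "daughter"}

    def count_inconsistencies(rostered_persons):
        counts = {"gender_relationship": 0, "never_married_spouse": 0}
        for p in rostered_persons:
            rel, gender, marital = p["relationship"], p["gender"], p["marital_status"]
            if (rel in MALE_ONLY and gender == "female") or (rel in FEMALE_ONLY and gender == "male"):
                counts["gender_relationship"] += 1
            if rel in {"husband", "wife"} and marital == "never married":
                counts["never_married_spouse"] += 1
        return counts

    print(count_inconsistencies([
        {"relationship": "son", "gender": "female", "marital_status": "never married"},
        {"relationship": "wife", "gender": "female", "marital_status": "never married"},
    ]))  # {'gender_relationship': 1, 'never_married_spouse': 1}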

Although also extremely rare, an additional type of inconsistency was documented for the electronic screener with regard to the ages of the household members. There were 14 cases in which the age of a household member appeared to be extremely unlikely (although perhaps not impossible) given the relationship code. For example, the householder might be listed as a 32-year-old male, and a son would then be listed with an age of 37. In these cases, it is impossible to know which, if either, of the roster items is incorrect. Several of these age inconsistencies were between a male head of household and a son; however, the son was often classified as female, which led to the conclusion that FIs were inadvertently tapping "son" instead of "wife" because "wife" appears directly above "son" on the relationship screen. This may also explain the high number of female sons/brothers seen in Exhibit 8.2.4.

One additional inconsistency was noted in which the age of a household member was a number significantly larger than 100; in this case, a son was listed as 120 years old. In cases of this type, it is possible that when the interviewer was writing in the age, he or she also made an additional mark that the Newton identified as a 1; the correct age for the household member was probably 20, or perhaps 12. However, this is impossible to determine after the fact. More restrictive age ranges were programmed into the Newton to eliminate some of these more egregious errors.
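
The age problems described above are the kind that range and relationship checks can flag at the doorstep. The sketch below is a minimal illustration; the 12-to-125 range and the 15-year parent/child gap are assumed thresholds, not the values actually programmed into the Newton.

    # Assumed thresholds for illustration; not the actual Newton edit checks.
    def age_warnings(householder_age, roster):
        warnings = []
        for person in roster:
            if not 12 <= person["age"] <= 125:
                warnings.append(f"line {person['line']}: age {person['age']} is out of range")
            if person["relationship"] in {"son", "daughter"} and person["age"] > householder_age - 15:
                warnings.append(f"line {person['line']}: child is implausibly old for a "
                                f"{householder_age}-year-old householder")
        return warnings

    # The 32-year-old householder with a 37-year-old son described above
    print(age_warnings(32, [{"line": 2, "age": 37, "relationship": "son"}]))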

Finally, there was one case where the relationship appeared to be incorrect. In this situation, the household composition suggested that the interviewer had confused the relationship codes for the wife and the son's fiancee, thus switching their respective relationships. For example, the head of household was listed as 69, his live-in partner/fiancee was listed as 68, his son was listed as 22, and his wife was listed as 20. Most likely, the 20-year-old "wife" was actually the son's fiancee, making her the householder's future daughter-in-law. However, the original rostering may be correct. Remedies that deal specifically with this type of confusion will be developed.

Exhibit 8.2.1 Causes of Newton Difficulties

Causes of Newton Difficulty       Reported Times
                                  Number    Percent
Direct Sunlight                      113     54.33%
Other Lighting Problems               36     17.31%
Print Size                            38     18.27%
Other Non-Lighting Problems           21     10.09%

Source: National Household Survey on Drug Abuse: Development of Computer-Assisted Interviewing Procedures; 1997 Field Experiment.

Exhibit 8.2.2 Results from Screening Debriefing Questions Related to the Newton

In your opinion, did the presence of the Newton influence the respondent's decision whether or not to complete screening?

                  Percent    Number
Yes                  29.2     2,780
No                   68.5     6,512
Don't know            2.3       218
Total               100.0     9,510

Did it influence the respondent in a positive or negative way? That is, did they seem more or less likely to complete screening because of the Newton?

                  Percent    Number
Positive way         94.1     2,617
Negative way          5.7       159
Don't know            0.1         4
Total               100.0     2,780

Source: National Household Survey on Drug Abuse: Development of Computer-Assisted Interviewing Procedures; 1997 Field Experiment.

Exhibit 8.2.3 Analysis of Eligibility Rates and Response Rates (Unweighted)

Dwelling Unit Statistics    NHSDA Quarters 1-3    NHSDA Quarter 4    1997 Field
                            Segments              PSUs               Experiment
# of Selected DUs                  8,449               13,012            14,327
# of Eligible DUs                  7,034               11,021            11,994
# of Completed DUs                 6,658               10,301            10,243
Eligibility Rate                   83.3%                84.7%             83.7%
Response Rate                      94.7%                93.5%             85.4%

Source: National Household Survey on Drug Abuse: Development of Computer-Assisted Interviewing Procedures; 1997 Field Experiment. 1997 National Household Survey on Drug Abuse: Quarter 4.

Exhibit 8.2.4 Preliminary Analysis of the Quality of the Electronic Screener Data as Compared to the Paper Screener

Screener Data                                      NHSDA Quarters 1-3    NHSDA Quarter 4    1997 Field
                                                   Segments              PSUs               Experiment
Total Rostered Persons                                    13,978               21,237            21,800
% of Rostered Persons with a Non-Blank Response
    Relation to Head of Household                          99.69                99.56             97.22
    Age                                                   100.00               100.00            100.00
    Hispanicity                                            99.11                98.89             99.29
    Race/ethnicity                                         99.18                98.73             98.28
    Gender                                                 99.69                99.62             98.39
    Marital Status                                         99.06                98.82             99.27
Inconsistencies
    Total Screenings                                        6,658               10,301            10,243
    # of Female Brothers/Sons                                   4                    2                29
    # of Male Sisters/Daughters                                 1                    4                 8
    # of Female Husbands/Fathers                                5                    6                13
    # of Male Wives/Mothers                                     9                   12                14
    # of Never Married Husbands                                 2                    2                 4
    # of Never Married Wives                                    5                   16                15
Total Inconsistencies                                          26                   42                83

Source: National Household Survey on Drug Abuse: Development of Computer-Assisted Interviewing Procedures; 1997 Field Experiment. 1997 National Household Survey on Drug Abuse: Quarter 4.

8.3 Revision and Additional Testing of Screener: Application Changes

After we reviewed the comments given by both interviewers and trainers and noted the problems logged by our technical support staff, we took measures to correct or minimize some of the problems associated with the Newton electronic screener. Specific problems and their respective remedies are summarized below.

Size of print on screens is too small. The print size was increased to the maximum possible on each screen. This resulted in dramatic improvements on many screens. Unfortunately, we are unable to control the print size in the "pop-up boxes."

Battery problems. Interviewers reported having to change their batteries frequently, along with other battery-related difficulties. In the future, rechargeable battery packs will be used; the packs are plugged into AC outlets every night, and a full charge lasts well beyond the amount of battery life needed for a full day's work.

Direct sunlight. Unfortunately, no cure-all solution is available to control this environmental factor. The Newtons are set to the maximum amount of screen contrast before they are given to the FIs. Nothing further can be done to the hardware, but more emphasis will be placed on training interviewers to position themselves so that lighting problems are minimized.

Flashcards. Complications were reported resulting from the flashcard "popping out" inadvertently. Alternative carrying cases were purchased for the 1999 NHSDA.

Correcting the roster after selections are made. By obtaining a unique case-specific code from their field supervisor (FS), an FI can re-enter a completed screening and make corrections as needed; some restrictions apply. Also, to help reduce the need for changes, a roster verification screen was added before the final selection can be made.

Correcting screeners performed on the wrong line. As with the problem above, an FI must obtain a case-specific code from the FS, with which he or she can then exchange information between two lines. To further ensure that an FI is entering data on the correct line, the line number and street address are displayed at the top of each screen.
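
The sketch below illustrates one way such a case-specific code might gate a line swap; the code derivation (a truncated hash of the case ID and a shared secret) and the helper names are hypothetical, not the mechanism actually used by the FS's CMS.

    # Hypothetical illustration of a supervisor-authorized line correction.
    import hashlib

    def case_code(case_id, secret="FS-SECRET"):
        """Derive a short case-specific code (assumed scheme, for illustration)."""
        return hashlib.sha1((secret + case_id).encode()).hexdigest()[:6].upper()

    def swap_roster_lines(roster, line_a, line_b, supplied_code, case_id):
        if supplied_code != case_code(case_id):
            raise PermissionError("supervisor code does not match this case")
        roster[line_a], roster[line_b] = roster[line_b], roster[line_a]
        return roster

    roster = {1: {"age": 44, "relationship": "householder"}, 2: {"age": 15, "relationship": "son"}}
    print(swap_roster_lines(roster, 1, 2, case_code("1234-05"), "1234-05"))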

Page scrolling and return to screen difficulties. Problems associated with page scrolling were corrected with special buttons that allow paging up or down without the use of the difficult Newton default controls. Also, when an FI now returns to the Select Case screen, the list remains in the same place as when the FI left the screen.

Question text for roster questions is bothersome. Many FIs did not like seeing the text of the roster questions after their first few screenings. They felt that they could recite the questions from memory and did not like having to tap "OK" to close the text box before entering the answer for each item. From a technical standpoint, it is easy to turn this presentation of questions on or off. However, RTI methodologists and field staff do not feel that it is advisable to allow FIs this option freely; they feel it would encourage FIs not to read the question text verbatim and to take "shortcuts." Upon further review, it was decided that an FI can submit a request to the FS to have this option turned off. After appropriate approval and a demonstration of reliable and correct screening practices, the question text presentation can be turned off for that FI via the FS's CMS.

Gender/relationship/marital status mismatches. To help resolve these problems, in situations where it is obvious, gender and marital status will be coded automatically. For example, if the relationship is "wife," the person will automatically be coded female and married. If the relationship is "brother," the gender will be coded as male.
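
A minimal sketch of this automatic coding is shown below. Only the mappings the text mentions (wife, brother, daughter) plus their obvious counterparts are filled in; the remaining relationship codes, and the field names, are assumptions for illustration.

    # Mappings beyond wife/brother/daughter are assumed counterparts, for illustration.
    DERIVED = {
        "wife":     {"gender": "female", "marital_status": "married"},
        "husband":  {"gender": "male",   "marital_status": "married"},
        "daughter": {"gender": "female"},
        "son":      {"gender": "male"},
        "sister":   {"gender": "female"},
        "brother":  {"gender": "male"},
    }

    def apply_derived_codes(person):
        """Fill in any gender or marital status implied by the relationship code,
        leaving fields the relationship does not determine for the interviewer to ask."""
        person.update(DERIVED.get(person["relationship"], {}))
        return person

    print(apply_derived_codes({"relationship": "wife"}))
    # {'relationship': 'wife', 'gender': 'female', 'marital_status': 'married'}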

The Newton application was modified to add these functions. Case management activities were included in the application, and the approach was modified so that the interviewers could transfer data directly from the Newton to the central office without going through the laptop computers. Records of calls and refusal reports were added as well.

In August 1998, the modified screening application was field tested. A total of 20 experienced field interviewers screened 836 households and listed 1,313 persons. Again, we closely monitored problems that were encountered. The most frequent Newton problems were problems with transmission to RTI (logged 36 times), problems with the screener (15 times), and problems with the CMS (6 times):

  1. Generally, transmission problems were solved by resetting the Newton or the modem or by repeating the transmission (16 times); some were problems due to the way in which phone numbers were set up by the machine (11 times).

  2. Newton screener problems included having the screen go blank or transmit slowly. These were generally solved by resetting the machine or removing/replacing the batteries. A couple of screener problems were due to the way in which phone numbers were set up by the machine.

  3. About half of the Newton case management problems required intervention from either FS or technical support.

The interviewers chosen for the August 1998 field test were very skilled in using the paper-and-pencil procedures for the NHSDA. In contrast to the interviewers who participated in the 1997 field experiment, these interviewers felt that the Newton interfered somewhat with the screening. They complained that the Newton slowed down the screening process unduly, was difficult to read in bright sunlight and in heat and humidity, disrupted eye contact with the respondent, and increased respondent impatience. They also had some problems with the complexity of the application. However, other comments were favorable to the technology. Interviewers noted that the Newton was light and more convenient to carry than the paper screener, that it was easier to pick up at the correct point on screening breakoffs and to get an overview of the pending interviews, that it was more convincing to the respondent that the selection was random, and that the reduction in paperwork associated with sending the data each night was welcome.

These comments and experiences were used to make final revisions in the application and training prior to fielding the application for the 1999 NHSDA.

8.4 Summary of 1999 NHSDA Electronic Screener Application

The 1999 NHSDA screening and electronic case management application contained the following features:

  1. case management system,

  2. household and group quarters rostering and sample selection application,

  3. English and Spanish translations of the screenings,

  4. collection of information that allows supervisors to verify selected final screening codes,

  5. record of calls and refusal reports,

  6. addition of dwelling units that were missed during the counting and listing, and

  7. summary of information from the weekly production, time, and expense reports (PT&E).

The interviewer assignments were loaded onto the Newton, and the interviewers had the ability to sort the cases by status, such as all pending, pending screening, pending interview, final interview, and so on.
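
As a rough sketch of that case-list view, the code below filters and orders an assignment by status; the status labels and record fields are assumed for illustration and are not the Newton application's actual codes.

    # Status labels and record layout are assumed for illustration.
    from operator import itemgetter

    cases = [
        {"case_id": "103", "address": "3 Elm Ave",  "status": "pending interview"},
        {"case_id": "101", "address": "12 Oak St",  "status": "pending screening"},
        {"case_id": "102", "address": "14 Oak St",  "status": "final interview"},
    ]

    def cases_by_status(case_list, status=None):
        """Return cases with the given status (or all pending cases when status is
        None), ordered by case ID so the list stays stable between views."""
        if status is None:
            matched = [c for c in case_list if c["status"].startswith("pending")]
        else:
            matched = [c for c in case_list if c["status"] == status]
        return sorted(matched, key=itemgetter("case_id"))

    print(cases_by_status(cases))                       # all pending cases
    print(cases_by_status(cases, "pending screening"))  # pending screenings only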

To screen a dwelling unit, the interviewer selected a case by tapping on a line in the list of dwelling units. An introductory screen appeared that guided the interviewer through the introduction and the identification of an eligible screening respondent (someone who is 18 or older and who lives there). The interviewer then was presented with an address verification screen. Once the address was verified, the informed consent statement that was to be read to the screening respondent appeared. The application continued by displaying the basic questions on occupancy and number of residents, followed by questions on age, relationship, gender, race, Hispanic status, and military status for all residents who were 12 years old or older. Once the roster was complete, the interviewer verified the listing and used an Edit Roster Record screen to make any necessary changes. The Make Selection button was then tapped, and the application made the random selection.

After the selection was made, a screen appeared that informed the interviewer of who, if anyone, was selected for the interview. The basic information identifying the persons selected (e.g., 18 years old, son) was displayed. This screen also displayed the questionnaire ID (QuestID) for each person. To begin an interview using the laptop computer, the interviewer entered this QuestID into the startup screen of the laptop computer. This QuestID provided the link between the questionnaire data, the screening data, and, thus, the sample selection probabilities. At the end of the interview, the interviewer entered a code into the Newton indicating that the interview was completed.
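
The sketch below illustrates the linkage role of the QuestID; the record layout and the example ID are assumptions, but the idea follows the text: the ID entered into the laptop ties the questionnaire data back to the screening roster and its selection probability.

    # Assumed record layout and example QuestID, for illustration only.
    screening_records = {
        "Q40017": {"case_id": "1234-05", "line": 3, "age": 18, "relationship": "son",
                   "selection_prob": 0.7},
    }

    def link_interview(quest_id, interview_data):
        """Attach interview data to the screening record sharing the same QuestID."""
        screener = screening_records[quest_id]  # a KeyError would signal a mistyped ID
        return {"quest_id": quest_id, **screener, "interview": interview_data}

    print(link_interview("Q40017", {"completed": True}))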

The CMS on the Newton was also used to monitor the visits that were necessary to conduct the interviews. A Record of Calls screen displayed the records of all prior calls to the dwelling unit. By pressing an Add button, the interviewer accessed an Add Call Record screen that allowed him or her to select a result code from a drop-down list and enter a comment. The date and time of the call were automatically recorded. If a refusal was obtained, a Refusal Report screen appeared for use in coding the reason for the refusal and providing comments. Additional case management functions included the ability to edit addresses, special screening questions for group quarters, procedures for adding missed dwelling units, transmittal programs, and procedures for transferring cases from one interviewer to another.
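
A small sketch of that Record of Calls bookkeeping is shown below; the result codes are example values, and datetime.now() stands in for the Newton's clock.

    # Result codes are example values; datetime.now() stands in for the device clock.
    from datetime import datetime

    def add_call_record(call_records, result_code, comment=""):
        """Append a contact attempt; the date and time are recorded automatically."""
        call_records.append({
            "timestamp": datetime.now().isoformat(timespec="minutes"),
            "result": result_code,
            "comment": comment,
        })
        return call_records

    calls = []
    add_call_record(calls, "no one home")
    add_call_record(calls, "refusal", "try a different time of day")
    print(calls)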

The training for the 1999 survey on the use of the electronic screening application was extensive. Interviewers received a computer manual with photographs of the parts of the Newton and explanations of its operation. The interviewer manual gave step-by-step instructions on the use of the Newton for case management and screening. During training, overhead projectors that could display the Newton screen were used, which allowed the instructors to dynamically illustrate use of the Newton.

18 This feature was added when the CMS was moved from the laptop to the Newton.
