Behavior Research Methods, Instruments, & Computers
1999, 31 (4), 572-577

Web-based administration of a personality questionnaire: Comparison with traditional methods

ROBERT N. DAVIS
University of Michigan, Ann Arbor, Michigan

The World-Wide Web holds great promise as a mechanism for questionnaire-based research. But are data from Web-based questionnaires comparable to data from standard paper-and-pencil questionnaires? This study assessed the equivalence of the Ruminative Responses Scale in a Web-based format and in a paper-and-pencil format among introductory psychology, upper-level psychology, and nonpsychology students. Internal consistency coefficients were comparable across the groups. The participants in the Web sample reported higher levels of self-focused rumination than did the other groups. Women in the Web sample reported more self-focused rumination than did women in the other groups. In the Web sample, results did not covary with access location. These results suggest that findings from Web-based questionnaire research are comparable with results obtained using standard procedures. The computerized Web interface may also facilitate self-disclosure among research participants.

This research was supported by U.S. Public Health Grant 51817 to Susan Nolen-Hoeksema. I thank Michael Bishop, Jamie Polito, and Andrew Ward for their assistance with data collection. I also thank Tom Buchanan for his helpful comments on an earlier draft of this manuscript. Correspondence concerning this article should be addressed to R. N. Davis, who is now at the University of Houston, Department of Psychology, Houston, TX 77204-5341 (e-mail: robnd@uh.edu).

The use of computers in psychological assessment has increased steadily over the last 30 years (Burke, 1992). The advent of the World-Wide Web offers unprecedented opportunities for researchers to benefit from the use of computers in data collection (Schmidt, 1997; Smith & Leigh, 1997). As a research medium, the Web offers researchers at least four major benefits. First, the Web permits individuals to send data to a researcher at their convenience, in terms of time and location, which increases the potential number of eligible research participants (Schmidt, 1997; Smith & Leigh, 1997). Second, the Web permits automatic transformation of raw data into an analyzable format, such as an SPSS data file, using procedures such as CGI scripts (see Kieley, 1996; Schmidt, Hoffman, & MacDonald, 1997). Third, Web-based research is efficient in terms of time and the resources it requires (Schmidt, 1997). Fourth, the Web provides a degree of anonymity to research participants, which decreases the influence of demand characteristics by facilitating self-disclosure and eliminating observer bias (Esposito, Agard, & Rosnow, 1984; Hewson, Laurent, & Vogel, 1996; Meszaros, Engelsmann, Meterissian, & Kusalic, 1995; Smith & Leigh, 1997).

Is research conducted over the Web equivalent to research conducted in more traditional settings, such as a psychology laboratory? Validity studies of Web-based research have recently begun to examine this question and have reported favorable results (e.g., Krantz, Ballard, & Scher, 1997). For example, the results from one experiment that was conducted both on the Web and in a laboratory were nearly perfectly correlated (Krantz et al., 1997).
Although only a few published studies have examined the equivalence question with respect to Web-based versus paper-and-pencil administration of personality questionnaires, nascent studies in this area have reported comparable results across domains (Buchanan & Smith, 1999; Pasveer & Ellard, 1998; Smith & Leigh, 1997; Stanton, 1998). For example, Buchanan and Smith administered the revised version of the Self-Monitoring Scale (SMS-R; Gangestad & Snyder, 1985) to Web users on line and also to a local student sample in paper-and-pencil format. The factor structure of the SMS-R and coefficient alphas were found to be comparable across the groups, with the Web version yielding some superior results (e.g., coefficient alpha), as compared with previously reported studies (Buchanan & Smith, 1999). Another study that compared Web-based and paper-and-pencil versions of an organizational justice scale found a similar factor structure across the two domains, as well as more item variability and less missing data in the Web-based condition (Stanton, 1998).

The advent of Web-based data collection is a direct extension of data collection on stand-alone computers (Buchanan & Smith, 1999). Given the current dearth of findings pertaining to Web-based data collection, studies of computerized (non-Web-based) versus paper-and-pencil data collection procedures may provide useful information. Research comparing computerized and paper-and-pencil assessments has generally found that research participants prefer computerized assessments and are equally, if not more, willing to report sensitive information to a computer than to a paper-and-pencil questionnaire or to a human interviewer (Booth-Kewley, Edwards, & Rosenfeld, 1992; Evan & Miller, 1969; Finegan & Allen, 1994; Hile & Adkins, 1997; King & Miles, 1995; Millstein, 1987; Petrie & Abell, 1994; Wilson, Genco, & Yager, 1986). In one study, recently admitted parasuicide patients preferred a computerized interview to an interview with a clinician (Petrie & Abell, 1994). In another study, recent suicide attempters disclosed more information to computerized depression and suicidal ideation questionnaires than to a clinician interviewer (Levine, Ancill, & Roberts, 1989). Moreover, the computerized measures were better predictors of suicidality than was the clinician interviewer (Levine et al., 1989). People also appear to be more willing to report behaviors such as substance use to a computer than to a clinician or a paper-and-pencil questionnaire (Lucas, 1977; Lucas, Mullin, Luna, & McInroy, 1977; Skinner & Allen, 1983). Although these findings are informative, the extent to which they generalize to Web-based research is an open question.

Despite the initial promising evidence in support of Web-based psychological research (e.g., Buchanan & Smith, 1999; Krantz et al., 1997; Michalak, 1998; Pasveer & Ellard, 1998; Smith & Leigh, 1997; Stanton, 1998), it is necessary to demonstrate the equivalence of Web-based and traditional data collection methods empirically before adopting Web-based methods extensively (Krantz et al., 1997; Smith & Leigh, 1997). There remain numerous potential threats to the reliability and validity of Web-based questionnaires (see Hewson et al., 1996; Schmidt, 1997, for reviews).
For example, participants recruited from the Web are likely to be more heterogeneous on demographic and other variables than participants recruited by traditional means (Buchanan & Smith, 1999). Moreover, researchers lose considerable control over the testing environment when conducting Web-based research. Some participants may send data from a quiet room, whereas others might send it from a noisy computer laboratory (Reips, 1996). In addition, Web-based research is open to exploitation by people who participate numerous times and/or provide disingenuous data (Schmidt, 1997; Smith & Leigh, 1997). The primary concern underlying these possibilities is that factors inherent in the Web interface, the Web sample, and/or the testing environment might obscure the extent to which an otherwise useful measure with good psychometric properties is able to measure the construct of interest. In light of these concerns, several relevant analyses were performed in the present study (see below).

The present study assessed the equivalence of a Web-based versus a paper-and-pencil administration of a personality questionnaire commonly used as a screening instrument: the Ruminative Responses Scale (RRS) from the Response Styles Questionnaire (RSQ; Nolen-Hoeksema & Morrow, 1991). The screening of participants for research and their subsequent selection on the basis of inclusion and exclusion criteria is a common practice among researchers on college campuses and in medical settings. Participants scoring at the upper and/or the lower end of an overall distribution on variables of interest are frequently selected for follow-up studies on the basis of results from an initial screening session (for example, on the basis of a tendency to engage in self-focused rumination when experiencing dysphoric affect; cf. Davis & Nolen-Hoeksema, in press). These procedures facilitate research on interactions between subject variables, such as personality traits, and experimental treatments (cf. Lyubomirsky & Nolen-Hoeksema, 1993, 1995), as well as cross-sectional studies of variables on which an experimenter wants to select participants (e.g., age or gender).

The measure used in the present study assesses the extent to which people tend to respond to feelings of sadness or depression with self-focused rumination. Self-focused rumination describes a tendency to engage in passive thoughts and behaviors that focus one's attention on one's depressed mood and on the implications of these symptoms, rather than taking action to alleviate one's symptoms (see Nolen-Hoeksema, 1990, 1991, for reviews). People who ruminate may tend to sit alone and think about how tired and unmotivated they feel or passively review all the negative happenings in their lives, without taking action to change their situation. The RSQ (Nolen-Hoeksema & Morrow, 1991) was developed to assess people's habitual responses to depressed mood, and the RRS consists of a series of items on the RSQ that assess the tendency to engage in self-focused rumination. A short (10-item) form of the RRS was recently developed and was used in the present study (cf. Davis & Nolen-Hoeksema, in press).

In the present study, the RRS was administered to four samples of participants: (1) introductory psychology students, (2) upper-level psychology students, (3) nonpsychology students, and (4) a Web-based student sample.
The nonpsychology and upper-level psychology student groups were included in order to determine whether they differed from introductory psychology students even when using the same method (i.e., paper and pencil) for filling out the questionnaire. The Web sample was deliberately drawn only from the local student population, in order to ensure comparability with the other groups on major demographic variables. This procedure enables a comparison of group differences as a function of the assessment medium (i.e., Web vs. paper and pencil), while limiting the heterogeneity that might result from a general Web sample. Moreover, within the Web sample itself, we analyzed possible differences as a function of the location from which the participants accessed the Web (e.g., at home vs. a campus computer laboratory). These analyses were performed to determine whether concerns about loss of experimental control over the testing setting were warranted, that is, whether responses to the questionnaire differed systematically as a function of the participant's location.

It was predicted that the Web sample would report a greater tendency to engage in self-focused rumination, as compared with the other three groups. These predictions were based on previous findings that people are generally more willing to report sensitive information to a computer than to a paper-and-pencil questionnaire. Moreover, it was expected that decreased observer bias and increased anonymity would facilitate self-disclosure in the Web sample. Gender was also included as an independent variable, in light of previous findings that women are more likely to ruminate when depressed than are men (e.g., Schwartz & Koenig, 1996; Thayer, Newman, & McClain, 1994). Moreover, there are gender differences in Web usage (GVU Center, 1998). Thus, the possible interaction between sample group and gender was also examined.

METHOD

Participants
Four groups of individuals participated in the study: (1) 128 students (56 males, 72 females) from the Web; (2) 118 students (46 males, 72 females) from upper-level psychology courses; (3) 113 students (50 males, 63 females) from nonpsychology courses; and (4) 1,012 students (427 males, 585 females) from introductory psychology courses. The participants ranged in age from 17 to 26 years and were drawn from the University of Michigan campus. In the nonpsychology sample, participants were recruited from three lecture courses: History of the United States: 1865-Present; Planets and Moons (Geology); and Introductory Biology. The upper-level psychology sample was also recruited from three lecture courses: Personality Psychology, Social Psychology, and Abnormal Psychology. The Web participants were recruited through flyers posted around the university campus that included the study's URL (Internet address). A Pearson chi-square test indicated that the groups did not differ significantly in sex composition [χ²(3) = 0.836, p > .10, N = 1,371].

Materials
Demographics questionnaire. The participants filled out a one-page demographics questionnaire and provided information about their age, sex, native language, and contact information.
Ruminative Responses Scale. A shortened (10-item) form of the RRS of the RSQ was used in this study. This scale assesses how participants tend to respond to their own symptoms of negative emotion. The instructions for this questionnaire read:
People think and do many different things when they feel sad, blue, or depressed. Please read each of the items below and indicate whether you never, sometimes, often, or always think or do each one when you feel sad, down, or depressed. Please indicate what you generally do, not what you think you should do.

The original RRS includes 22 items describing responses to depressed mood that are self-focused (e.g., I think "Why do I react this way?"), symptom-focused (e.g., I think about how hard it is to concentrate), and focused on the possible causes and consequences of one's mood (e.g., I think "I won't be able to do my job if I don't snap out of this"). Scores on this scale show high test-retest reliability (r = .80 at 6 months, r = .66 at 1 year; Nolen-Hoeksema & Larson, 1998; Nolen-Hoeksema, Parker, & Larson, 1994), internal consistency (Cronbach's alpha = .89; Nolen-Hoeksema & Morrow, 1991), and acceptable convergent and predictive validity (Butler & Nolen-Hoeksema, 1994; Nolen-Hoeksema & Morrow, 1991; Nolen-Hoeksema et al., 1994). The 10 items on this shortened scale were based on item analyses conducted with a community sample of 1,122 adults (Nolen-Hoeksema, unpublished data). The 10 items from the RRS that correlated most strongly with total scores on the longer scale, and on which at least 15% of the sample endorsed an answer other than never, were chosen for inclusion on the shortened scale. The internal consistency (Cronbach's alpha) of this 10-item scale in the community sample of 1,122 was .87, and the correlation between this 10-item scale and the total 22-item scale was .93.
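To make the selection rule concrete, the following Python fragment sketches how the two criteria described above (item-total correlation and at least 15% endorsement of an answer other than never) could be applied to a response matrix. It is a hypothetical illustration, not the analysis code used in the scale development, and it assumes responses coded 1 (never) through 4 (always).

```python
# Sketch of the short-form selection rule: keep the 10 items that correlate
# most strongly with the 22-item total, among items that at least 15% of
# respondents endorsed with something other than "never" (coded 1).
# The response matrix (participants x 22 items) is hypothetical.
import numpy as np

def select_short_form(responses: np.ndarray, n_items: int = 10,
                      min_endorsement: float = 0.15) -> np.ndarray:
    """Return the column indices of the items retained for the short form."""
    totals = responses.sum(axis=1)                     # 22-item total scores
    endorsed = (responses > 1).mean(axis=0)            # proportion above "never"
    item_total_r = np.array([np.corrcoef(responses[:, i], totals)[0, 1]
                             for i in range(responses.shape[1])])
    eligible = np.where(endorsed >= min_endorsement)[0]
    ranked = eligible[np.argsort(item_total_r[eligible])[::-1]]
    return np.sort(ranked[:n_items])
```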
Procedure
Paper administration. The investigators received permission from instructors in large lecture courses to visit their respective classes (listed above) in order to solicit student participation in a research study. Both the course lecturers and the participants were blind as to the purpose of the study. In all cases, all but one to three students in each lecture course participated in the study. All the participants provided informed consent and received a debriefing after completing the questionnaire. The participants in the introductory psychology class received course credit for completing the forms, and those in the other classes were paid $5 for completing the forms. In addition, the participants were instructed to print and sign their names on a research recruitment sheet at the end of the questionnaire packet if they were interested in possibly being selected to participate in other studies for pay.
Web administration. The participants responded to flyers on campus advertising a research study that required visiting a Web site and filling out a questionnaire on line. Interested participants tore off a piece of the flyer that included the study's URL and then used a Web browser to visit the Web site constructed for questionnaire administration. The Web site included a copy of the consent form, as well as the demographics questionnaire and the RRS. The Web page also informed participants that they would have a chance of being selected to participate in another study for pay by filling out the questionnaires. Thus, the participants in each of the four groups had some incentive for completing the forms. The participants typed the information requested by the demographics questionnaire onto the Web page and used a mouse to click on their responses to the RRS. Only one response per questionnaire item was permitted, and the participants had the ability to scroll up or down on the Web page.

After filling out the questionnaires, the participants used a mouse to click on one of two buttons, "Send Form" or "Clear Form." Clicking on "Send Form" sent the data to the experimenter by means of e-mail. When this occurred, the participants then saw a screen indicating that their message had been sent successfully. Clicking on "Clear Form" erased the data completely. This option was provided in the event that participants opted not to submit their data or wished to clear the page and start over. The Web page forms were converted to e-mail message format using the program Htmail (Disser, 1996).

In light of concerns that the participants from the lecture classes could also have participated in the Web portion of the study, and given that it was possible for some people to complete the questionnaire more than once on line, research assistants checked the data before the analyses to ensure that no data were entered for the same participant twice. These data-cleaning procedures resulted in the deletion of data from 2 participants who completed the questionnaire on line more than once (entries following the first instance of participation were viewed as spurious). Data were also deleted from 1 participant who completed the questionnaires both on line and in two of her classes. These procedures resulted in four independent samples.
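The duplicate-screening step just described can be sketched schematically. The Python fragment below is a hypothetical analogue of that kind of post-processing (parsing the e-mailed form submissions into records, keeping only the first on-line submission per participant, and dropping anyone who also completed the paper form). The field names, file layout, and "field=value" message format are assumptions; this is neither the study's actual code nor the Htmail program itself.

```python
# Hypothetical post-processing of e-mailed form submissions: parse each
# message into a record, drop repeat on-line submissions (keeping the first),
# and drop anyone who also appears in the paper-and-pencil data.
import csv
import email
from pathlib import Path

def parse_submission(raw_message: str) -> dict:
    """Turn one e-mailed form (body lines like 'field=value') into a record."""
    body = email.message_from_string(raw_message).get_payload()
    return dict(line.split("=", 1) for line in body.splitlines() if "=" in line)

def clean_submissions(mail_dir: str, paper_names: set, out_csv: str) -> None:
    seen, records = set(), []
    for path in sorted(Path(mail_dir).glob("*.eml")):     # assumed arrival order
        record = parse_submission(path.read_text())
        key = record.get("name", "").strip().lower()
        if not key or key in seen or key in paper_names:  # repeat, or also on paper
            continue
        seen.add(key)                                      # first instance is kept
        records.append(record)
    fieldnames = sorted({k for r in records for k in r})
    with open(out_csv, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(records)
```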
RESULTS

Internal Consistency
To examine the psychometric properties of the Web-based RRS, as compared with the paper-and-pencil version, the internal consistency (Cronbach's alpha) of the RRS was assessed separately for the four samples. Cronbach's alphas for the RRS were as follows: .82 (Web sample), .88 (upper-level psychology students), .88 (nonpsychology students), and .83 (introductory psychology students).

Group and Gender Differences
Sample group and gender differences were assessed using a 2 (gender) × 4 (group) analysis of variance (ANOVA), with sample group (Web sample, upper-level psychology students, nonpsychology students, and introductory psychology students) and gender as independent variables and RRS total score as the dependent variable. Table 1 presents the means and standard deviations for all scores by sample group and gender.

Table 1
Means and Standard Deviations of Scores on the Ruminative Responses Scale

                             Total          Men           Women
Group                       M     SD      M     SD      M     SD
Web                       11.31  4.76    9.64  4.47   12.61  4.60
Upper-level psychology    10.41  5.03   10.02  4.30   10.65  5.45
Introductory psychology    9.83  4.91    9.45  4.97   10.11  4.85
Nonpsychology              9.04  4.97    8.80  4.52    9.24  5.32

There were significant group differences on RRS scores [F(3, 1363) = 4.23, p < .01]. Post hoc follow-up tests, using Tukey's honestly significant difference (HSD) tests, revealed that students in the Web sample reported levels of rumination that were significantly higher than those reported by students in both introductory psychology (p < .01) and nonpsychology (p < .01) courses, but not those reported by students in upper-level psychology courses (p > .10). Significant gender differences were also obtained on RRS scores [F(1, 1363) = 8.47, p < .01], indicating that women reported significantly greater rumination than did men. In addition, there was a nonsignificant trend toward a group × gender interaction on RRS scores [F(3, 1363) = 2.22, p < .09]. Follow-up one-way ANOVAs within each gender separately revealed no significant group differences among males [F(3, 575) = 0.53, p > .10] but did reveal significant group differences among females [F(3, 788) = 6.70, p < .0001]. Post hoc follow-up tests, using Tukey's HSD, revealed that women in the Web sample reported significantly greater rumination than did women in introductory psychology (p < .0001) and nonpsychology (p < .0001) courses. Women in the Web sample also reported greater rumination than did women in upper-level psychology courses, but the difference was not statistically significant (p < .08).

Analysis of Web Data by Access Location
The 128 participants who filled out the RRS on the Web opted to do so from a variety of locations across campus. The server program used in this study (Htmail) also automatically detected and reported the terminal location from which the data originated. In the Web sample, 13% (n = 17) used a modem to access the Web page from home, 49% (n = 63) accessed the Web page from a computing site on campus, and 38% (n = 48) accessed the Web page from a computer on campus that was not part of a public computing site (e.g., from a science laboratory or an office). Follow-up analyses using a one-way ANOVA, with access location (home, campus computing site, or other campus computer) as the independent variable and RRS total score as the dependent variable, found no significant differences across access locations [F(2, 125) = 1.34, p > .10].
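For readers who wish to apply the same checks to their own data, the two core computations reported above (Cronbach's alpha within each sample and a one-way ANOVA across access locations) can be reproduced with standard tools. The Python sketch below uses simulated placeholder data and assumed group sizes; it illustrates the calculations rather than reproducing the analyses or results reported here.

```python
# Cronbach's alpha for an items matrix, and a one-way ANOVA of RRS totals
# across the three access locations. Data below are simulated placeholders.
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """items: participants x items matrix (e.g., 10 RRS items coded 1-4)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
web_items = rng.integers(1, 5, size=(128, 10))     # 128 participants, 10 items
print(f"alpha (simulated Web sample) = {cronbach_alpha(web_items):.2f}")

# One-way ANOVA of total scores across access locations (group sizes as
# reported: 17 home, 63 campus computing site, 48 other; order is assumed).
totals = web_items.sum(axis=1)
home, site, other = totals[:17], totals[17:80], totals[80:]
F, p = stats.f_oneway(home, site, other)
print(f"F(2, 125) = {F:.2f}, p = {p:.3f}")
```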
DISCUSSION

In this study, students who completed a personality questionnaire on the Web reported higher levels of self-focused rumination than did students who completed the same questionnaire in paper-and-pencil format. The internal consistency of the Web version was comparable with that of the paper-and-pencil versions. Although students in the Web sample participated in the study from varying locations, this difference did not affect their responses in any systematic manner. Taken together, these findings support the use of the RRS on the Web and provide tentative evidence that the Web is a viable mechanism for questionnaire-based data collection and/or subject screening.

The participants in the Web sample reported more self-focused rumination than did the participants who filled out paper-and-pencil questionnaires, although the Web sample did not differ from the students in upper-level psychology courses. Although women reported higher levels of rumination than did men in the general sample (collapsed across group), women in the Web sample also reported more rumination than did women who filled out paper-and-pencil questionnaires. This finding might reflect increased self-disclosure among women in the context of a Web-based questionnaire. Some women may feel more open to disclosing tendencies such as ruminative coping in a context that is arguably less evaluative than a typical data collection context. By allowing people to have more control over their environment when they disclose information about sensitive issues, such as how one responds to depressed mood, Web-based questionnaires may encourage increased frankness of response and self-disclosure (Hewson et al., 1996). A further possibility is that some people may gain greater access to their feelings and personality when completing a Web-based questionnaire, as compared with a questionnaire in a large lecture class. Given that women are more likely than men to ruminate when depressed, this tendency may be facilitated in the context of a Web-based questionnaire.

The internal consistency of the RRS was similar across the four groups. This finding indicates that these questionnaires consistently measure a single construct and that results from the Web version are comparable with those obtained in the other samples. The Web version of the RRS was also unaffected by differences in the location from which the participants accessed the Web. This finding suggests that the data people provide to a Web page do not differ significantly as a function of whether they choose to fill out questionnaires at home, at a public computing site, or on another computer on campus. Although permitting access from multiple locations decreases experimental control, it does not appear to affect questionnaire results adversely.

A highly publicized study recently suggested that Internet usage was associated with increases in depression and loneliness and with declines in family communication and the size of participants' social circles (Kraut et al., 1998). This finding raises the question of whether the present results reflect a sampling artifact, rather than differences that are a function of the assessment medium. Although such a possibility cannot be ruled out definitively, it is unlikely to have occurred for several reasons. First, the present study sampled from college students with a working knowledge of Web browsing, rather than from first-time Internet users. Second, the findings of Kraut et al. demonstrate only small relationships at best (e.g., the largest correlation between Internet usage and adjustment measures is .19). Third, although Kraut et al.'s data were interpreted as showing that participants used the Internet as their social and emotional functioning declined, Internet usage may have provided users with a newfound hobby or a constructive activity to engage in as their social and emotional functioning declined, rather than having caused the social and emotional difficulties. Thus, there remains no conclusive or causal evidence that Internet users have more social and/or emotional problems than non-Internet users, nor has it been established definitively that Internet use causes social and/or emotional problems. The lack of such evidence and the dissimilarity of the samples used in this study and in that of Kraut et al. suggest that the present results are not simply a sampling artifact.

It might be argued that the Web sample reported spuriously higher levels of self-focused rumination than did the other groups owing to computer anxiety (cf. Tseng, Tiplady, MacLeod, & Wright, 1998). This suggestion is unlikely for three reasons. First, the participants in the Web sample volunteered to participate in this study, which required filling out a questionnaire on line. It is not likely that computer-anxious people would volunteer for such a study. Second, previous research has found that greater levels of education are associated with lower computer anxiety (Igbaria & Parasuraman, 1989), and all the participants in the present study were college students attending a highly competitive university. Third, the RRS is not a symptom questionnaire and has no a priori relationship to computer anxiety. Rather, it measures the relatively stable tendency to respond to depressed mood with passive, emotion-focused thoughts and behaviors.
Alternatively, it could be argued that differences among the groups other than the assessment medium might account partly or fully for the observed results. Specifically, the Web sample replied to a general advertisement, whereas the other groups were solicited at a lecture. Thus, the Web sample may have been more motivated and more interested in participating. It should be noted, however, that flyers advertising the study were strategically placed near computer laboratories and other computer facilities to minimize inconvenience to potential participants. Many people were observed simply writing down the URL as they went to check their e-mail or to perform other routine computing tasks. Thus, both the Web sample and the paper-and-pencil samples were solicited at comparable, although not identical, moments of convenience.

This study provides some initial data that tend to support the use of the Web for questionnaire-based research. However, more validation of Web-based research is needed (Krantz et al., 1997). The results of the present study increase confidence in the use of the Web as a screening device and as a method for collecting questionnaire data. However, the participants were sampled from a college campus and were restricted in terms of age range. More data, using multiple measures and a broader sample, would be desirable in future studies. In addition, experimental studies of the mechanisms underlying participants' higher scores in the Web group would be useful. Identification of such mechanisms and the continued validation of the Web as a research tool appear to be growing areas of research (Schmidt, 1997; Smith & Leigh, 1997).

REFERENCES

Booth-Kewley, S., Edwards, J. E., & Rosenfeld, P. (1992). Impression management, social desirability, and computer administration of attitude questionnaires: Does the computer make a difference? Journal of Applied Psychology, 77, 562-566.
Buchanan, T., & Smith, J. L. (1999). Using the Internet for psychological research: Personality testing on the World-Wide Web. British Journal of Psychology, 90, 125-144.
Burke, M. J. (1992). Computerized psychological testing: Impacts on measuring predictor constructs and future job behavior. In N. Schmitt & W. C. Borman (Eds.), Personnel selection in organizations (pp. 203-238). San Francisco: Jossey-Bass.
Butler, L. D., & Nolen-Hoeksema, S. (1994). Gender differences in responses to a depressed mood in a college sample. Sex Roles, 30, 331-346.
Davis, R. N., & Nolen-Hoeksema, S. (in press). Cognitive inflexibility among ruminators and nonruminators. Cognitive Therapy & Research.
Disser, D. (1996). Htmail: A generic forms-to-email gateway [Computer program]. Ann Arbor: University of Michigan Information Technology Division.
Esposito, J., Agard, E., & Rosnow, R. (1984). Can confidentiality of data pay off? Personality & Individual Differences, 5, 744-745.
Evan, W. M., & Miller, J. R. (1969). Differential effects on response bias of computer vs. conventional administration of a social science questionnaire: An exploratory methodological experiment. Behavioral Science, 14, 216-227.
Finegan, J. E., & Allen, N. J. (1994). Computerized and written questionnaires: Are they equivalent? Computers in Human Behavior, 10, 483-496.
Gangestad, S. W., & Snyder, M. (1985). "To carve nature at its joints": On the existence of discrete classes in personality. Psychological Review, 92, 317-340.
Graphics, Visualization, and Usability Center (1998). GVU's 9th WWW survey results [On-line]. Atlanta: Georgia Institute of Technology, College of Computing. Available URL: http://www.cc.gatech.edu/gvu/user_surveys/
Hewson, C. M., Laurent, D., & Vogel, C. M. (1996). Proper methodologies for psychological and sociological studies conducted via the Internet. Behavior Research Methods, Instruments, & Computers, 28, 186-191.
Hile, M. G., & Adkins, R. E. (1997). Do substance abuse and mental health clients prefer automated assessments? Behavior Research Methods, Instruments, & Computers, 29, 146-150.
Igbaria, M., & Parasuraman, S. (1989). A path analytic study of individual characteristics, computer anxiety, and attitudes toward microcomputers. Journal of Management, 15, 373-388.
Kieley, J. M. (1996). CGI scripts: Gateways to World-Wide Web power. Behavior Research Methods, Instruments, & Computers, 28, 165-169.
King, W. C., & Miles, E. W. (1995). A quasi-experimental assessment of the effect of computerizing noncognitive paper-and-pencil measurements: A test of measurement equivalence. Journal of Applied Psychology, 80, 643-651.
Krantz, J. H., Ballard, J., & Scher, J. (1997). Comparing the results of laboratory and World-Wide Web samples on the determinants of female attractiveness. Behavior Research Methods, Instruments, & Computers, 29, 264-269.
Kraut, R., Patterson, M., Lundmark, V., Kiesler, S., Mukophadhyay, T., & Scherlis, W. (1998). Internet paradox: A social technology that reduces social involvement and psychological well-being? American Psychologist, 53, 1017-1031.
Levine, S., Ancill, R. J., & Roberts, A. P. (1989). Assessment of suicide risk by computer-delivered self-rating questionnaire: Preliminary findings. Acta Psychiatrica Scandinavica, 80, 216-220.
Lucas, R. W. (1977). A study of patients' attitudes to computer interrogation. International Journal of Man-Machine Studies, 9, 69-86.
Lucas, R. W., Mullin, P. J., Luna, C. B. X., & McInroy, D. C. (1977). Psychiatrists and a computer as interrogators of patients with alcohol-related illnesses: A comparison. British Journal of Psychiatry, 131, 160-167.
Lyubomirsky, S., & Nolen-Hoeksema, S. (1993). Self-perpetuating properties of dysphoric rumination. Journal of Personality & Social Psychology, 65, 339-349.
Lyubomirsky, S., & Nolen-Hoeksema, S. (1995). Effects of self-focused rumination on negative thinking and interpersonal problem solving. Journal of Personality & Social Psychology, 69, 176-190.
Meszaros, A., Engelsmann, F., Meterissian, G., & Kusalic, M. (1995). Computerized assessment of depression and suicidal ideation. Journal of Nervous & Mental Disease, 183, 487-488.
Michalak, E. E. (1998). The use of the Internet as a research tool: The nature and characteristics of seasonal affective disorder (SAD) amongst a population of users. Interacting with Computers, 9, 349-365.
Millstein, S. G. (1987). Acceptability and reliability of sensitive information collected via computer interview. Educational & Psychological Measurement, 47, 523-533.
Nolen-Hoeksema, S. (1990). Sex differences in depression. Palo Alto, CA: Stanford University Press.
Nolen-Hoeksema, S. (1991). Responses to depression and their effects on the duration of depressive episodes. Journal of Abnormal Psychology, 100, 569-582.
Nolen-Hoeksema, S., & Larson, J. (1998). A dynamic model of the gender differences in depressive symptoms. Manuscript submitted for publication.
Nolen-Hoeksema, S., & Morrow, J. (1991). A prospective study of depression and posttraumatic stress symptoms after a natural disaster: The 1989 Loma Prieta earthquake. Journal of Personality & Social Psychology, 61, 115-121.
Nolen-Hoeksema, S., Parker, L., & Larson, J. (1994). Ruminative coping with depressed mood following loss. Journal of Personality & Social Psychology, 67, 92-104.
Pasveer, K. A., & Ellard, J. H. (1998). The making of a personality inventory: Help from the WWW. Behavior Research Methods, Instruments, & Computers, 30, 309-313.
Petrie, K., & Abell, W. (1994). Responses of parasuicides to a computerized interview. Computers in Human Behavior, 10, 415-418.
Reips, U. (1996, October). Experimenting in the World Wide Web. Paper presented at the Society for Computers in Psychology conference, Chicago.
Schmidt, W. C. (1997). World-Wide Web survey research: Benefits, potential problems, and solutions. Behavior Research Methods, Instruments, & Computers, 29, 274-279.
Schmidt, W. C., Hoffman, R., & MacDonald, J. (1997). Operate your own World-Wide Web server. Behavior Research Methods, Instruments, & Computers, 29, 189-193.
Schwartz, J. A. J., & Koenig, L. J. (1996). Response styles and negative affect among adolescents. Cognitive Therapy & Research, 20, 13-36.
Skinner, H. A., & Allen, B. A. (1983). Does the computer make a difference? Computerized versus self-report assessment of alcohol, drug, and tobacco use. Journal of Consulting & Clinical Psychology, 51, 267-275.
Smith, M. A., & Leigh, B. (1997). Virtual subjects: Using the Internet as an alternative source of subjects and research environment. Behavior Research Methods, Instruments, & Computers, 29, 496-505.
Stanton, J. M. (1998). An empirical assessment of data collection using the Internet. Personnel Psychology, 51, 709-725.
Thayer, R. E., Newman, J. R., & McClain, T. M. (1994). Self-regulation of mood: Strategies for changing a bad mood, raising energy, and reducing tension. Journal of Personality & Social Psychology, 67, 910-925.
Tseng, H., Tiplady, B., MacLeod, H. A., & Wright, P. (1998). Computer anxiety: A comparison of pen-based personal digital assistants, conventional computer and paper assessment of mood and performance. British Journal of Psychology, 89, 599-610.
Wilson, F. R., Genco, K. T., & Yager, G. G. (1986). Assessing the equivalence of paper-and-pencil vs. computerized tests: Demonstration of a promising methodology. Computers in Human Behavior, 1, 265-275.

(Manuscript received November 16, 1998; revision accepted for publication April 27, 1999.)