Methodology

Observation Description

The following details a small-scale qualitative study conducted to compare short-term retention rates and satisfaction levels across passive and interactive multimedia formats among Generation Y users. To mimic natural viewing conditions as closely as possible, test subjects were not monitored during their interaction with the stimuli. Furthermore, the experimenter offered no commentary on the stimuli during the testing period that might have influenced performance on the subsequent tasks.

Variables

The two dependent variables studied were short-term retention rates and satisfaction levels. There was one main independent variable in this study: content classification as either passive or interactive. Two other independent variables – sex and race – were considered during the recruiting process, but were not integrated into the findings due to the small subject pool. Each test subject viewed four presentations, two of each content classification (P for passive, I for interactive). For example, test subject S1 (Figure 2) viewed the sites in the following order: P1, I1, P2, I2. Test subject S6 viewed the same sites, but in the opposing order: I1, P1, I2, P2. By organizing the study so that each subgroup viewed both types of stimuli but in opposing order (passive-interactive; interactive-passive), potential order effects – bias carried from the first format into responses on the second – were mitigated.

Figure 2: Test matrix showing the order and stimuli selection of each subject.

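To make the counterbalancing concrete, the short sketch below (in Python; the function name and list arguments are illustrative, not drawn from the study materials) interleaves a pair of passive and a pair of interactive stimuli into the two opposing presentation orders assigned to paired subjects:

    def counterbalanced_orders(passive, interactive):
        """Interleave two passive and two interactive stimuli into the two
        opposing presentation orders given to paired subjects."""
        passive_first = [s for pair in zip(passive, interactive) for s in pair]
        interactive_first = [s for pair in zip(interactive, passive) for s in pair]
        return passive_first, interactive_first

    # The stimuli seen by subjects S1 and S6 (Figure 2):
    p_first, i_first = counterbalanced_orders(["P1", "P2"], ["I1", "I2"])
    print(p_first)   # ['P1', 'I1', 'P2', 'I2']  -> subject S1
    print(i_first)   # ['I1', 'P1', 'I2', 'P2']  -> subject S6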

Twelve multimedia presentations were carefully chosen and ordered from simplest to most advanced, as determined by the number and complexity of media present and by the amount of user input required (Figure 3). Therefore, while subjects 1 and 6 arguably viewed the most basic types of passive and interactive multimedia, subjects 3 and 8 experienced the most advanced types. Furthermore, subjects 4 and 9 were tested on the extremes of each content classification, while subjects 2 and 7 saw the intermediate passive and interactive content.

Figure 3: Test stimuli ranked by number and complexity of media present

Continuum of Interactivity

Passive:
P1: The Roanoke Times: Warrior for the elderly (text and photos)
P2: BBC: British doctor returns from Haiti (audio and text)
P3: FlowingPrints: World progress report (graphic and text)
P4: LA Times: Their home base (audio, photo and text)
P5: MediaStorm: Global governance monitor oceans (audio, still and moving photos) * Only tested video, not entire site
P6: UNC: The truth about energy (motion graphics, text and audio) * Only tested video above the fold, not entire site

Interactive:
I1: NPR: Traveling down the Amazon road (photo, text and audio) * Only tested “The Road” slide show, not entire site
I2: USA Today: Presidential approval tracker (photo, graphics and text)
I3: Washington Post: Unemployment rate by county (graphics, map, and text)
I4: The Boston Globe: New York plane crash (graphics, map, photo, 3D and text)
I5: MSNBC: Battle of the bags (video, text, audio, photos, and user-submission tools)
I6: New York Times: Gauging your distraction (text, graphics, user-submission tools, and virtual simulation game)

Participants

A total of 12 participants were recruited for this study, although two participants’ results were excluded due to inconsistent testing procedures. Thus, findings from this qualitative study are based upon data from the remaining 10 subjects.

Recruitment took place within the Chapel Hill area. An informational email was sent via a student body listserv to the UNC undergraduate and graduate community. Upon email inquiry to the experimenter, potential participants were screened on three criteria: age, prior Internet usage, and fluency in English. Internet usage was a screening factor because the researcher wanted to ensure that a test subject was not experiencing a multimedia presentation for the first time. Once approved, participants were asked to make an appointment to be tested at a facility on UNC’s campus. The study lasted approximately one hour, and subjects received $15 in cash upon completion. Recruitment and testing for this study were conducted during March and April of 2010.

Of the 10 subjects, six were male and four were female; seven were Caucasian, one was Indian, one was African American, and one was Asian. They ranged in age from 19 to 30 and were all UNC undergraduate or graduate students. As the study progressed, later participants were recruited to fill specific slots so that sex could be analyzed as a potential independent variable (Figure 4). However, while reviewing the data, it was determined that a comprehensive analysis of the impact of sex on the findings could not be conducted with such a small subject pool. Nonetheless, it is included here as a suggestion for further thought and potential expansion in a future study.

Figure 4: The sex of each subject grouped against opposing test stimuli.


Procedure

At the pre-determined time, participants arrived at the facility and were given a consent form to read and sign. They were then asked to fill out a pre-test questionnaire, which collected more detailed demographic information, perceived definitions of multimedia and interactivity, personality traits, and detailed Internet usage (Appendix 1). This data was compared against Liu et al.’s (2002) findings that a person’s personality, presence of CMCA, and/or prior experience with technology might affect his or her satisfaction with interactive elements. Special care was also taken to ensure that none of the subjects had previously seen the stories, and subjects were instructed to answer “I don’t know” on recall questions rather than guess, so as not to skew the retention data.

Upon completion, subjects were asked to open the provided laptop, a Macintosh running Mac OS X at a resolution of 1280 x 1024 pixels. The experimenter verbally explained the procedure to the test subject, and upon verbal agreement, the first website was loaded on the participant’s screen. Synchronized passive presentations ran for a set time, and the user was asked to close the browser window and notify the experimenter once the presentation finished. Participants viewing the asynchronized passive and interactive presentations were told to explore the site as they normally would and to notify the experimenter when they finished. When the subject indicated that he or she had finished, or once a maximum five-minute period had elapsed, he or she was prompted to close the browser. The five-minute cap was chosen because it was the average time needed to view the synchronized passive testing stimuli.

Immediately afterward, all participants completed a brief retention exercise: a five-question multiple-choice quiz covering both the main concepts of the package and more minute details such as people’s names, dates, and places. Since 12 stimuli were tested – six passive and six interactive – 12 quizzes were prepared (Appendix 2).
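The report does not specify how quiz responses were scored; the sketch below is one plausible reading, where the retention rate is the fraction of correct answers and an honest “I don’t know” (recorded here as None) simply counts as incorrect rather than as a guess:

    def retention_rate(answers, answer_key):
        """Score one five-question multiple-choice retention quiz.
        An "I don't know" response (None) counts as incorrect."""
        correct = sum(1 for a, k in zip(answers, answer_key)
                      if a is not None and a == k)
        return correct / len(answer_key)

    # Hypothetical subject: Q1-Q3 correct, Q4 wrong, "I don't know" on Q5.
    print(retention_rate(["B", "D", "A", "C", None],
                         ["B", "D", "A", "A", "C"]))  # 0.6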

A satisfaction survey followed, using a modified version of the user satisfaction measure first developed by Bailey and Pearson in 1983 and expanded upon by Ives et al. later that year, as noted by Wildemuth (2009, pp. 273-275). Participants were asked to rate their satisfaction with each site on a seven-point scale between ten pairs of descriptive antonyms (Appendix 3).
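The report does not describe how the ten ratings were aggregated into a single satisfaction level. A minimal sketch follows, assuming each antonym pair is scored 1-7 and that any reverse-keyed pairs are flipped before averaging; both are assumptions for illustration, not documented study procedure:

    def satisfaction_score(ratings, reverse_keyed=()):
        """Average a semantic-differential satisfaction survey.
        ratings       -- one 1-7 rating per antonym pair (ten in this study)
        reverse_keyed -- indices of pairs whose positive pole sits on the
                         left, so the rating is flipped (8 - r) before averaging
        """
        keyed = [8 - r if i in reverse_keyed else r
                 for i, r in enumerate(ratings)]
        return sum(keyed) / len(keyed)

    # Hypothetical ratings for one subject on one site:
    print(satisfaction_score([6, 5, 7, 4, 6, 5, 6, 7, 5, 6]))  # 5.7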

The subject then examined a second site and completed the corresponding retention quiz and satisfaction survey. The procedure was repeated for the third and fourth sites. Upon completion, participants were asked to fill out a brief post-test questionnaire covering their overall impressions of the four sites visited and their general experiences with passive and interactive multimedia (Appendix 4). They were then led out of the study room and given their compensation.



About the Researcher

Tracy Boyer is an award-winning multimedia technology strategist, specializing in the intersection of digital media and interactive technology. Currently, Tracy is the first MBA/MSIS dual master’s candidate at UNC-Chapel Hill, where she is studying General Management at Kenan-Flagler Business School and Human-Computer Interaction in the School of Information and Library Science. She is also the managing editor of Innovative Interactivity, a widely read multimedia blog that she founded in 2007.

Previously, she reported on malnutrition in Honduras with The Pulitzer Center, was a multimedia producer at Roanoke.com, served as the UNC correspondent for CNN.com, and interned with The Atlanta Journal-Constitution. In 2007, she was selected to participate in the Poynter Summer Fellowship. Boyer graduated with a multimedia degree from UNC’s School of Journalism and Mass Communication.

Feel free to email Tracy directly with comments, suggestions, and any other feedback related to this research.

View more of her work at www.tracyboyerclark.com.