Stephanie Thompson, Seavoss Associates and Rick Bonney, Cornell Laboratory of Ornithology, USA
Abstract
eBird is an ongoing project that aims to document bird distribution and abundance in North America. The project allows participants to report any bird they see using one of several data-collecting protocols. Participants and other visitors to the site also view and manipulate data submitted to the eBird database to answer their own questions about bird distribution and abundance. Responses to these visitor queries can be viewed in the form of graphs and histograms as well as on maps. We were interested in learning about how eBird participants use these data tools, and how participating in the project might affect their thinking about the concept of conservation. The eBird New User Survey employed standard survey questions combined with exploration of live data and an on-line version of the Personal Meaning Mapping (PMM) technique to address these questions. Nearly ninety percent of eBird participants used the data tools to answer at least one type of question, and most participants used the tools in multiple ways.
Participant understanding of the resulting data varied. Most understood how to use map and frequency data; however, many were confused by other data types available. The on-line PMM protocol generated extremely rich data to which a typical coding framework could be successfully applied; however, respondents were unlikely to make changes to their PMM statements, making it difficult to assess changes in perspectives on conservation related to participation in the eBird project.
Keywords: evaluation, personal meaning mapping, citizen science, visitor learning, surveys
Introduction
Citizen science is a term coined by the Cornell Lab of Ornithology (CLO) in the mid-1990s to describe projects that involve participants directly in the process of research. Citizen science projects have at least two goals: first, to gather data for studying scientific questions at continental or even global scales, and second, to increase scientific and/or conservation literacy among project participants. By 2007, CLO was operating numerous citizen science projects that together had engaged hundreds of thousands of individuals in observing birds, collecting and submitting bird data, and even analyzing and interpreting those data themselves (http://www.birds.cornell.edu/LabPrograms/CitSci/).
In the past few years citizen science has started to gain acceptance as an authoritative method for addressing large-scale scientific issues. For example, data collected by CLO project participants and published in peer-reviewed scientific journals have shown how bird populations change in time and space (Hochachka et al. 1999); how bird breeding success is affected by environmental change (Hames et al. 2002a); how acid rain affects bird populations (Hames et al. 2002b); how infectious diseases spread through wild animal populations (Dhondt et al. 2005); and how bird life history varies with latitude (Cooper et al. 2006).
Citizen science also appears effective at increasing scientific literacy. Evaluations of CLO-based projects have shown that participants not only learn scientific facts (Brossard et al. 2005), but also discover that scientific investigation is neither a precise nor a linear process – rather, a complex procedure that involves careful observations, adherence to data-gathering protocols, cautious conclusions, and an abiding need for further investigation (Trumbull et al. 2000, Bonney 2004, Krasny and Bonney 2005, Phillips et al. 2006).
This paper addresses the educational impacts of eBird, a citizen science project developed by CLO and the National Audubon Society and operated completely on-line. Like other citizen science projects, eBird has dual goals. Its scientific goal is to increase understanding of the dynamics of year-round bird populations across North America and beyond. Its educational goal is to help participants understand the scientific process, first as they collect data that scientists can use to document bird distributions and population changes over time, and second as they investigate the information that project participants, including themselves, have collected and submitted to the on-line eBird database.
eBird is one of the first Lab projects to be operated solely on-line. Therefore, project evaluation has required development of new tools and techniques to assess impacts of on-line participation on a largely unseen audience. In addition to standard survey questions and tracking data from the Web site, we wanted to develop methods for collecting data on understanding of key concepts. This desire resulted in a survey that combined standard questions with a data visualization task and an on-line version of a Personal Meaning Mapping interview. This paper summarizes our evaluation findings and discusses the effectiveness of the methods.
eBird Overview
eBird (ebird.org) was launched in 2002 to provide a means for any birdwatcher to submit any bird sighting from anywhere in North America. Participants can report birds as often as they like from as many locations as they like. Some participants report birds only occasionally. Others submit complete checklists of all the species they see each day, often from several locations – home, office, vacation spot, favorite birding location.
Individuals who wish to report sightings for the first time must register by providing a name and e-mail address. Once registered, users can log in to report as many bird sightings as they like. Registration and log-in are not required to explore the on-line database, however, and many people view data without submitting data or even registering.
To report sightings, users choose a reporting location from drop-down menus of birding ‘hotspots,’ such as nature reserves, which have been entered into the database by project staff and other eBirders. Observers also can use eBird’s on-line mapping software to select any number of new reporting locations. For instance, many users pinpoint their home. That location is then stored in the database so that participants can report the birds they see there any time they choose. Users also can enter locations using latitude and longitude.
To enter data from a location, users first indicate the type of observation or observations they have made: casual observation, stationary count, traveling count, or area search. After they enter the date and time of observation and indicate whether they are reporting all of the birds they have seen or just a subset, the Web site presents them with a bird checklist for their state or region. Users then fill in the number of individuals of each species seen, or an “x” indicating simply that a species was present, and hit submit.
At this point, the data are vetted by an automatic filter, developed by regional bird experts, which reviews the submission for suspect information – for example, species that are unlikely to be seen in that region at the time they are being reported, or individual counts that are higher than expected. The vetted data are then summarized on a Confirmation and Notes page that indicates any information that might be suspect. Users can either change data entered mistakenly, or choose to submit data they are certain are accurate. After making this decision the user hits submit one more time and receives a Thank You page.
Any data that have been flagged by the filters are automatically sent to the regional experts for review. Often these experts contact eBird participants for more sighting details. Only after the regional expert decides that a sighting is accurate can it be viewed by the public in the on-line database.
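To make the vetting step concrete, the following is a minimal sketch in Python of how a regional checklist filter of this kind might operate. The data structures, species, and thresholds here are invented for illustration only; eBird's actual filters are built and maintained by regional experts and are more sophisticated than this.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    species: str
    count: int | None  # None stands for an "x" (species present but not counted)

@dataclass
class RegionalFilter:
    """Hypothetical per-region, per-season expectations set by expert reviewers."""
    expected_species: set[str]
    max_counts: dict[str, int]  # highest count considered plausible per species

    def flag_suspect(self, checklist: list[Observation]) -> list[str]:
        """Return a human-readable flag for each suspect record on a checklist."""
        flags = []
        for obs in checklist:
            if obs.species not in self.expected_species:
                flags.append(f"{obs.species}: not expected in this region at this season")
            elif obs.count is not None and obs.count > self.max_counts.get(obs.species, 10_000):
                flags.append(f"{obs.species}: count of {obs.count} is higher than expected")
        return flags

# Invented example values, for illustration only.
june_filter = RegionalFilter(
    expected_species={"Yellow Warbler", "American Robin"},
    max_counts={"Yellow Warbler": 30, "American Robin": 200},
)
checklist = [Observation("Yellow Warbler", 85), Observation("Snowy Owl", 1)]
for flag in june_filter.flag_suspect(checklist):
    print(flag)  # in eBird, flagged records go to a regional expert for review
```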
To explore the database, users first choose whether they wish to examine data for a certain species (for example, Yellow Warbler, Black-billed Magpie) or from a certain location (for example, a county, state, or bird conservation region, or their personal locations). The user can then create maps, histograms, and various types of graphs that combine all of the eBird data available for each species or location over any chosen time period. Users also can use the eBird site to keep track of their life list – a record of all of the bird species they have seen – sorted by location.
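Two of the summaries most relevant to this paper are frequency (the share of submitted checklists on which a species was reported, the quantity behind eBird's frequency graphs) and average count. The sketch below illustrates both computations; the checklist representation is our own simplification, not eBird's data model.

```python
# Checklists reduced to species -> count mappings; a simplification for illustration.
checklists = [
    {"Yellow Warbler": 2, "American Robin": 5},
    {"American Robin": 3},
    {"Yellow Warbler": 1, "American Robin": 4},
]

def frequency(species: str, lists: list[dict[str, int]]) -> float:
    """Fraction of checklists on which the species was reported."""
    return sum(species in cl for cl in lists) / len(lists)

def average_count(species: str, lists: list[dict[str, int]]) -> float:
    """Mean individuals per checklist, among checklists reporting the species."""
    counts = [cl[species] for cl in lists if species in cl]
    return sum(counts) / len(counts) if counts else 0.0

print(frequency("Yellow Warbler", checklists))      # 0.667: on 2 of 3 checklists
print(average_count("Yellow Warbler", checklists))  # 1.5 individuals per reporting list
```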
After five years of operation, the project has become a significant source of scientific information about North American bird populations. In 2006, participants submitted more than 323,500 checklists comprising more than 4.3 million observations (i.e., records of a bird at a location on a date). These data provide a remarkably detailed representation of the distribution and abundance of bird populations at scales from a single backyard to the entire continent.
These data are available not only through eBird but also via other applications developed by the global biodiversity information community. For example, eBird data are part of the Avian Knowledge Network (AKN) which integrates observational data on bird populations across the western hemisphere. In turn, the AKN feeds eBird data to international biodiversity data systems, such as the Global Biodiversity Information Facility (GBIF). In this way any contribution made to eBird increases scientific understanding of the distribution, richness, and uniqueness of our planet’s biodiversity.
In addition to its scientific objectives, eBird has several educational objectives. Specifically, the project is designed with the hope that participants will:
- Increase their interest in birds and birdwatching
- Learn techniques for counting birds
- Learn methods for gathering scientifically useful data (point counts, transects, area searches)
- Understand the scientific value of complete checklists versus casual counts
- Explore eBird data to learn more about birds in their own communities and to compare local bird populations to those in other areas
- Understand that bird populations change over time and space
- Understand the value of eBird data for informing bird conservation efforts
These are lofty objectives for a project in which participants are entirely self-instructed and self-directed. The primary way for eBird to help users meet these objectives is through on-line tutorials and FAQs that help users understand the site, report their sightings accurately, and view and explore data using methods appropriate for the questions being asked.
New User Survey
In 2005, project staff collaborated with Seavoss Associates, Inc. and eduweb to conduct an in-depth survey of new eBird participants. The survey was administered when a new participant joined eBird and again 8 weeks later. This pre-post design allowed evaluators to examine changes in attitudes toward, behaviors related to, and knowledge of birds and scientific research that might have occurred in tandem with project participation.
The eBird New User Survey was timed to coincide with the launch of eBird 2.0 in summer 2005. eBird 2.0 was the product of a three-year development and pilot-testing process that included extensive user testing and formative evaluation. Feedback from user e-mails, interviews, and on-line surveys informed changes that aimed to make eBird 2.0 more accessible and user-friendly than its predecessor.
The New User Survey focused on three of the previously stated educational objectives for participants:
- Increased interest in birds and birdwatching
- Exploration of eBird data to learn more about birds in their own communities and to compare local bird populations to those in other areas
- Understanding of the value of eBird data for informing bird conservation efforts
The survey was divided into three sections. Section one included standard questions about participant characteristics, including demographics, bird-watching expertise, and reasons for participation. Section two explored participant use and understanding of eBird’s View and Explore Data tools. Finally, the survey employed an on-line adaptation of the Personal Meaning Mapping (PMM) technique developed at the Institute for Learning Innovation (Falk et al. 1998). The PMM allowed project evaluators to learn a great deal more about how project participants viewed the concept of conservation, one of the project’s main goals, than could be determined by standard survey questions.
Survey respondents, all of whom volunteered to participate, were recruited in two ways. First, a direct link to the survey was placed on eBird’s final registration page. Second, e-mail invitations were sent directly to eBird participants who had created their first location or submitted their first checklist in the two weeks prior to the survey launch. This approach allowed us to capture responses from individuals who had recently registered as well as those who had registered previously but only recently submitted data. A total of 745 e-mail invitations were sent, excluding those returned owing to faulty e-mail addresses.
Each respondent was asked to complete two versions of the survey. The first completion (Time 1) was done at the time of eBird registration or first data submission and was intended to document individuals’ current birding habits as well as baseline information on participant ability to understand eBird’s View and Explore Data tools. The second completion (Time 2), administered approximately 8 weeks later, was similar but not identical to the first survey, and captured changes in participants’ birding behaviors and understanding of eBird’s View and Explore Data tools.
Substantially complete Time 1 surveys were submitted by 174 participants (a 23% response rate), while 82 participants completed both the Time 1 and Time 2 surveys (a 47% response rate to the Time 2 invitation, and an 11% overall response rate for both surveys).
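The reported rates follow from simple arithmetic on these counts, as the snippet below verifies (denominators as reported above; the Time 1 rate is computed against the 745 e-mail invitations):

```python
invitations = 745  # e-mail invitations sent (excluding bounces)
time1 = 174        # substantially complete Time 1 surveys
both = 82          # completed both Time 1 and Time 2

print(round(time1 / invitations * 100))  # 23 -> 23% Time 1 response rate
print(round(both / time1 * 100))         # 47 -> 47% of Time 1 respondents finished Time 2
print(round(both / invitations * 100))   # 11 -> 11% overall response rate
```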
Respondent Characteristics
Respondents were asked to complete questions about their background and birding interests. Demographic data for users who completed both the Time 1 and Time 2 surveys are summarized in Table 1.
| Category | % of responses | Number |
|---|---|---|
| Total surveys | – | 82 |
| Sex | | |
| Male | 51% | 42 |
| Female | 49% | 40 |
| Age | | |
| 18-24 | 5% | 4 |
| 25-34 | 5% | 4 |
| 35-49 | 31% | 25 |
| 50-64 | 48% | 40 |
| 65+ | 11% | 9 |
| Education | | |
| High school graduate or less | 7% | 6 |
| Some college or college graduate | 44% | 36 |
| Post-graduate study or degree | 47% | 39 |
| Did not respond | 1% | 1 |
| Self-reported bird identification skill | | |
| 1 (No experience) | 2% | 2 |
| 2 (Beginner) | 24% | 20 |
| 3 (Intermediate) | 49% | 40 |
| 4 (Advanced) | 23% | 19 |
| 5 (Expert) | 0% | 0 |
| Did not respond | 2% | 1 |
| Mean rating | 3 | |
| Participation in other CLO/NAS citizen science projects | | |
| The Birdhouse Network | 1% | 1 |
| Christmas Bird Count | 28% | 23 |
| Project FeederWatch | 23% | 19 |
| Great Backyard Bird Count | 29% | 24 |
| Other | 10% | 8 |
| Primary reason for participating | | |
| Learn more about bird distribution and abundance | 30% | 25 |
| Conservation interest | 11% | 9 |
| Contribute to citizen science | 16% | 13 |
| Fun with others | 7% | 6 |
| Track my own bird observations | 27% | 22 |
| Other | 5% | 4 |
| Did not respond | 4% | 3 |

Table 1: 2005 eBird New User Demographics
In addition to selecting their primary reason for participating, respondents also were asked to rate the importance of each of the 5 reasons in deciding to participate. These ratings are summarized in Figure 1 below. Conservation was considered either an important or a very important reason for nearly 80% of new users. Learning about birds and bird populations and obtaining the ability to track one’s own observations also were deemed important by a majority of new users. These responses indicate that the interests of eBird users appropriately coincide with the project’s objectives.
Fig 1: Importance of reasons for eBird participation
Active Users vs. Non-Users
At Time 2, respondents were asked how much of the data they had collected about birds during the past month they had submitted to eBird. Based on those responses, we created two categories of eBird usage. Respondents who had submitted data to eBird during the past month were categorized as active users (n = 37); those who indicated that they had not submitted any data in the past month were labeled non-users (n = 45). Active users and non-users did not differ significantly in terms of age, sex, education, self-reported bird identification skill, or motivation for eBird participation. The two categories were used in the analyses of behavior change that follow, with the non-users acting as a comparison group for active users.
Reasons for not submitting data varied. Most common were lack of time or forgetting about the project (70% of reasons fell into one of these two categories). Fifteen percent of participants could not figure out how to submit data or found the process too cumbersome. Ten percent felt that the birds they had seen were “common” and thus not worth reporting. One person decided she was no longer interested in the project, and one felt that her bird identification skills were insufficient.

We note that although the majority of non-users cited time as a reason for their non-participation, there was no difference between active users and non-users in the number of days of birdwatching activity in the month prior to the Time 2 survey (M = 14). Thus, individuals in the two groups appear to have been doing a similar amount of birdwatching; the difference between them was whether they submitted data to eBird. The reason for this difference might lie in experience with recording bird data. All active users had recorded at least one type of data prior to participating in eBird. Of the non-users, one-third had never collected any kind of bird data. Strikingly, of the non-users who lacked data-collection experience, not one submitted any data to eBird.
Behavior Related To Birding And Conservation
At both Time 1 and Time 2, survey respondents were asked about their membership in conservation organizations; the frequency of their birdwatching and data-recording activity; types of bird data recorded; and birding equipment owned. At Time 2 only, participants were asked if they had visited any new locations or identified any new (to them) species in the preceding month. Behaviors related to birding and conservation as reported by respondents prior to and following eBird participation are summarized in Table 2.
| Category | Active users (n = 37): Time 1 | Active users: Time 2 | Non-users (n = 45): Time 1 | Non-users: Time 2 |
|---|---|---|---|---|
| New memberships in conservation-related organizations | – | 27% | – | 27% |
| Birdwatching days in past month | M = 17.6 | M = 14.3 | M = 17.4 | M = 14.3 |
| Percent of birdwatching days on which data were recorded | 44% | 65% | 25% | 22% |
| Types of data recorded: | | | | |
| Location | 84% | 87% | 49% | 53% |
| Time spent | 11% | 25% | 11% | 15% |
| Species | 100% | 100% | 64% | 64% |
| Birds per species | 51% | 62% | 33% | 37% |
| Behavior | 27% | 38% | 31% | 35% |
| Keep life list | 70% | 73% | 29% | 29% |
| Visited new locations | – | 41% | – | 20% |
| Identified new species | – | 43% | – | 13% |
| Birding equipment: | Owned at Time 1 | Obtained in past month | Owned at Time 1 | Obtained in past month |
| Feeders/birdhouses | 73% | 35% | 73% | 31% |
| Field guides | 86% | 41% | 91% | 29% |
| Binoculars | 94% | 22% | 96% | 29% |
| Spotting scope | 39% | 5% | 36% | 15% |
| Listing software | 19% | 13% | 22% | 4% |

Table 2: Behavior related to birding and conservation
Key findings regarding birding behavior include the following:
New types of data collected. Active users were no more likely than non-users to begin collecting any particular type of new data. However, when the total number of new data types collected was calculated, active users showed a significant increase in the number of different data types collected between Time 1 and Time 2, t(36) = 3.60, p < .01. Non-users did not show a significant increase.
New species identified. Active users were three times as likely as non-users to have identified at least one species new to them in the previous month, χ²(1, N = 82) = 9.25, p = .002.
New locations visited. Active users were more likely than non-users to have visited a new-to-them location for birding in the previous month, χ²(1, N = 82) = 4.14, p = .042.
Birdwatching days on which bird data were recorded. Active users showed a significant increase in the percentage of days on which they recorded bird data, t(28) = 3.37, p < .01. Prior to participation, active users recorded data on an average of 44% of their birdwatching days; after participating they recorded data on an average of 65% of birdwatching days. Non-users showed no change in their data recording habits.
There were no significant changes for either active users or non-users in terms of birding equipment owned, membership in conservation-related organizations, or number of birdwatching days in the past month.
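The comparisons above are standard paired t-tests and chi-square tests of independence. For readers who want to run comparable analyses, here is a minimal sketch using SciPy; the paired arrays are placeholders rather than the actual survey responses, and the 2x2 cell counts are hypothetical values consistent with the percentages reported above.

```python
import numpy as np
from scipy import stats

# Paired t-test: number of data types collected at Time 1 vs. Time 2 by the
# same active users (cf. t(36) = 3.60 above). Placeholder values only.
time1_types = np.array([2, 3, 1, 4, 2, 3])
time2_types = np.array([3, 4, 2, 5, 2, 4])
t_stat, t_p = stats.ttest_rel(time2_types, time1_types)

# Chi-square test of independence on a 2x2 table: user group (rows: active,
# non-user) x identified a new species (columns: yes, no).
# Hypothetical counts consistent with the reported chi-square(1, N = 82) = 9.25.
table = np.array([[16, 21],
                  [ 6, 39]])
chi2, chi_p, dof, expected = stats.chi2_contingency(table, correction=False)

print(f"paired t: t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"chi-square: chi2 = {chi2:.2f}, df = {dof}, p = {chi_p:.3f}")
```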
To summarize respondent characteristics, eBird attracts new registrants with diverse backgrounds in terms of birding experience, education, and age (though participation among those under age 35 is lower than might be desirable). eBird participants cite personal learning (tracking their own observations and learning about distribution and abundance) as the most important reasons for using the site, but also display a strong commitment to conservation issues.
Participating in eBird appears to have the effect of broadening the scope of birding activity in ways directly related to eBird’s mission. Active users of eBird recorded data from more of their birding sessions, visited new locations, and identified new-to-them species after they began participating. Active users also began recording data on the time they spent observing and birds per species; both of these data types are among those requested on (but not required for) eBird’s checklists. Some new registrants, notably those with no prior data collection experience, did not submit data to eBird.
Use and Understanding of eBird’s View and Explore Data Tools
One of eBird’s primary goals is that users will explore its database to learn more about bird populations in their own communities and to compare local bird populations to those in other areas. To that end, eBird’s View and Explore Data Web pages guide users in creating data summaries by species or location for whatever time period(s) they choose. The New User Survey was designed to answer two questions about data manipulation. First, what types of information were participants interested in obtaining, and how did they plan to use it? Second, were participants able to determine which of the many available data visualizations – various maps and graphs – were best suited for answering their questions?
At Time 1, all participants stated that they planned to view at least one type of eBird data. Interestingly, at Time 2, half of the non-users had in fact viewed at least one type of data, indicating that some non-users access eBird features even if they do not submit data. However, active users viewed nearly twice as many types of data (M = 4.5) as non-users (M = 2.3), a significant if predictable difference, t(80) = 4.59, p < .01.
Planned and actual data use by active users is summarized in Figure 2. Responses indicate that eBird users’ interests are varied, with questions about the location or range of particular species tending to be most popular. Users were slightly less likely to look at changes in populations over time, or to compare populations at different locations.
Fig 2: Planned and actual use of View and Explore Data tools
To learn whether participants could identify appropriate data analyses for answering questions, survey respondents were asked to read three scenarios, each describing a question about bird distribution or abundance. After reading the scenarios, they viewed live eBird data (in a frame in the survey) and identified the analysis they believed would best answer the question. Available analyses included maps as well as graphs showing abundance, frequency, high count, average count, and birds per hour; respondents were able to toggle between the different displays and view the results for as long as they liked before selecting an answer. Respondents also were asked to rate, on a scale of 1-5, their confidence in the answer choice they selected.
‘Correct’ responses were identified by the eBird project manager as the best or most efficient analysis for a particular question. The questions were designed so that correct responses corresponded with the most commonly used analyses of eBird data: distribution maps, frequency graphs, and abundance graphs. Even so, many respondents skipped one or more of these three questions. Perhaps these respondents were confused by the questions, or maybe they encountered technical difficulties in viewing the live eBird data (although no such problems were reported to us). Or respondents may have felt that they lacked the knowledge to answer the questions, a possibility supported by two emails from respondents to the evaluation team.
Overall, 27% of respondents (41% of those who actually answered the question) were able to correctly identify a frequency analysis as the best answer to Question 1; 50% (or 73% of those who selected an answer) were able to identify the map as the best answer to Question 2, and 39% (61% of those who answered) were able to identify either abundance or average count analyses as the best answers to Question 3. Figures 3, 4 and 5 below summarize the responses to each question.
Fig 3: Participant responses to View and Explore Data question #1
Fig 4: Participant responses to View and Explore Data question #2
Fig 5: Participant responses to View and Explore Data question #3
There was no change in the rate of correct responses from Time 1 to Time 2 for either active users or non-users, nor were there significant changes in participant confidence in their answers. Because some non-users reported that they had in fact viewed some eBird data, we also compared the rate of correct responses for all of those who reported viewing data with those who did not; there were no differences in the rate of correct responses between these two groups.
To summarize use and understanding of eBird’s View and Explore Data tools: most respondents in our sample made use of these tools, even those who had not submitted any checklists (the non-users). Participants used the data tools to answer a variety of questions about bird distribution and abundance, indicating achievement of one of the key educational objectives of the eBird project. However, when respondents were asked to use live data analyses to answer some basic questions, it became clear that not all users knew which analyses are appropriate for those questions, indicating a need for additional support for users of these data analysis tools.
As an evaluation tool, we feel that the live ‘View and Explore Data’ questions yielded informative results. Initially, we thought that the question format itself might have been confusing, leading to the high non-response rate for those particular questions. However, given that the survey questions featured the actual View and Explore Data interface, with live data imported directly from the eBird site, we feel it is more likely that many users simply didn’t understand the different data types themselves.
Personal Meaning Mapping
To capture even more information about the range and impact of participant experiences, the New User Survey also included an on-line version of a Personal Meaning Mapping (PMM) protocol originally developed by John Falk and associates at the Institute for Learning Innovation for use during in-person interviews with visitors to the Baltimore Aquarium (Falk et al. 1998). Given the diversity of experiences possible for eBird participants, we felt that an open-ended tool would help us capture eBird’s learning outcomes. However, we desired an instrument that would yield responses constrained enough to be tractable to coding and analysis. PMM has been shown to be effective at eliciting rich, in-depth visitor experiences and responses that go beyond traditional survey or multiple-choice testing instruments but are amenable to quantitative analysis (Adelman et al. 2000).
Because PMM is typically facilitated by an interviewer, eduweb created a structured Web interface that guided respondents in sharing their thoughts. On the Time 1 survey, participants were asked what words, thoughts, images, or ideas came to mind when they read the prompt “Conservation.” Following the prompt, respondents were offered a series of text boxes into which they could type their thoughts. The prompt was repeated at Time 2, when respondents were invited to review their previous responses and then add to, edit, or delete any or all of them.
Our original intent was to examine PMM content at Time 2 compared with Time 1, looking for changes that might be attributable to project participation. However, few users made changes, and those who did typically corrected spelling or grammar. Thus the sample of changed PMM responses was too small to use for meaningful statistical analysis. However, we did examine all responses to learn more about participant perceptions of conservation. Responses were rich and varied, as described below.
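Operationally, “making changes” can be detected by storing each respondent's text-box entries at both times and comparing them. The sketch below uses a data model of our own invention (not the survey's actual schema) to show the idea.

```python
from dataclasses import dataclass

@dataclass
class PMMRecord:
    respondent_id: str
    time1_entries: list[str]  # one string per text box at Time 1
    time2_entries: list[str]  # the (possibly edited) set at Time 2

    def changed(self) -> bool:
        """True if anything was added, edited, or deleted at Time 2."""
        return ([e.strip().lower() for e in self.time1_entries]
                != [e.strip().lower() for e in self.time2_entries])

records = [
    PMMRecord("r01", ["protect habitat", "recycling"], ["protect habitat", "recycling"]),
    PMMRecord("r02", ["save the planit"], ["save the planet"]),  # spelling fix only
]
print([r.respondent_id for r in records if r.changed()])  # ['r02']
```

In our data, the few detected changes were almost all of the second kind shown here – surface corrections rather than conceptual additions – which is why a pre/post statistical comparison was not feasible.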
PMMs collected by Falk and his colleagues in 1998 were coded along four parameters: extent of vocabulary used, number of different concepts invoked (“breadth”), depth of understanding, and emotional intensity. The on-line PMM responses generated by the eBird New User Survey lent themselves to coding of extent, breadth, and depth; in this paper, we will discuss coding of breadth only.
Responses from the Time 1 survey were coded into 10 categories similar, but not identical, to those used by Falk et al. in the Baltimore Aquarium study. Categories were not mutually exclusive (i.e., a single response could receive more than one code, although this happened only rarely); a minimal sketch of how such multi-label coding works follows the category list below.
Response Categories
- Emotional content (anger, sadness, hope, skepticism)
- Nature (General reference to nature, environment; appreciation/respect of same)
- Specific issues and resources (clean water, spotted owl, invasive species)
- Involvement or action (recycling, reducing consumption, government policies)
- Urgency (conservation is important, critical, essential)
- Specific people, places, and organizations (Everglades, Audubon, Rachel Carson)
- Future orientation (saving Earth for the children, planning for the future)
- Spiritual or religious references (to God, a “Creator” or others)
- Need for education/awareness (teaching children, getting others interested)
- Other (responses that did not fit into any of the above categories)
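The sketch below, referenced above, makes the mechanics of this non-mutually-exclusive coding concrete. The keyword lists are invented stand-ins for human judgment: the actual PMM responses were coded by a person, not by keyword matching.

```python
# Hypothetical keyword lists; real coding was done by a human coder.
CATEGORY_KEYWORDS = {
    "Emotional content": ["anger", "sadness", "hope", "skeptic"],
    "Nature": ["nature", "environment"],
    "Specific issues and resources": ["water", "owl", "invasive", "habitat", "pollution"],
    "Involvement or action": ["recycl", "consumption", "policy", "policies"],
    "Urgency": ["important", "critical", "essential"],
    "Future orientation": ["children", "future"],
}

def code_response(text: str) -> set[str]:
    """Assign every matching category; a single response may receive several codes."""
    lowered = text.lower()
    matched = {cat for cat, keys in CATEGORY_KEYWORDS.items()
               if any(k in lowered for k in keys)}
    return matched or {"Other"}

# "Breadth" for one respondent = number of distinct categories across all entries.
entries = ["habitat preservation is critical", "recycling", "hope for the future"]
codes_per_entry = [code_response(e) for e in entries]
breadth = len(set().union(*codes_per_entry))
print(breadth)  # 5 distinct categories invoked across this respondent's entries
```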
Most respondents offered multiple comments (M = 4.5, sd = 2.4), and the typical respondent's comments spanned more than two distinct categories (M = 2.6, sd = 1.2). There was no significant difference between active users and non-users in terms of the total number of comments entered, nor was there a difference in the number of unique concepts entered. The number of responses in each category is summarized in Figure 6 below.
Fig 6: “Conservation” PMM responses by category
Respondents were most likely to offer comments on specific conservation issues, such as habitat preservation or pollution. Predictably, many comments directly concerned issues related to birds in general or particular species, but overall the responses covered a diverse array of content areas. Specific actions were frequently mentioned, including recycling, reducing consumption, and planting trees, along with government policies related to conservation.
PMM as an Evaluation Technique
At this point we have several thoughts about the use of on-line PMM as an evaluation technique. First, the format of the on-line PMM (a prompt followed by a series of blank text boxes) invites participants to make many discrete comments rather than the single stream-of-consciousness paragraph we suspect a single large text box would produce. This format also seems less overwhelming for the user, and it appropriately conveys that respondents may offer multiple thoughts and ideas rather than just the first thing that comes to mind in response to the prompt. The responses from eBird users were rich and detailed, yet the format lent itself to relatively easy coding because the thoughts were already parsed into distinct text boxes.
There are several possible reasons why respondents tended to leave their Time 1 responses unchanged. First, in a traditional PMM protocol the presence of an interviewer encourages respondents to review their previous responses; on-line respondents may have been less likely to re-read what they previously wrote and thus less likely to change it. Second, because our on-line PMM was combined with a traditional survey, respondents may have felt pressed for time and skimmed through the PMM, especially on the Time 2 survey. Third, in museum settings where PMMs are frequently employed, many of the additional comments that participants generate at Time 2 are attributable to “text echo” – phrases that directly reflect text presented as part of an exhibit. eBird offers no such text that might be echoed in PMM responses. A final possibility is that the concepts of conservation held by eBirders at the time they registered were already quite rich, leaving little room for change within the period covered by the New User Survey.
Overall, we believe that the rich responses we received, coupled with the ability to apply the general coding structure developed by Falk et al., indicate that on-line PMM has tremendous potential. However, the technique requires additional refinement and testing to determine whether it can be used for measuring changes in knowledge and attitudes. Alterations to the format might help elicit a greater number of participant changes and additions to the Time 2 PMMs. For example, it might be helpful to more closely replicate the physical layout of an in-person PMM by allowing on-line respondents to enter their thoughts into text boxes, and then physically arrange them on the page via a drag-and-drop method. We are currently experimenting with changes to the on-line PMM format in evaluation of other projects.
Conclusion
Through the use of a mixed-methods survey, we documented changes in the birding behavior of active eBird users that appear to be directly related to eBird participation – namely, increases in the frequency and types of bird data collected, in new species identified, and in new birding locations visited. We also learned that eBirders use the site’s View and Explore Data tools to seek answers to a variety of questions about bird populations and distributions. However, if we had not incorporated live eBird data into the survey, we might have incorrectly inferred that the majority of participants were using the tools successfully. Instead, we learned that some eBird users may be arriving at inappropriate conclusions about bird populations because they do not know which analyses are most appropriate for answering certain types of questions. Finally, while the on-line PMM data were not useful in answering questions about changes in conservation concepts related to eBird participation, they provided extensive insight into the diverse conservation concepts and attitudes that participants bring to the project, and indicate that eBird tends to attract people who are already knowledgeable about and invested in conservation. With refinement, we feel that the on-line PMM technique could become a powerful tool for understanding both the baseline knowledge and attitudes that users bring to on-line interactives, and the impact of such interactives on complex conceptual understandings.
Acknowledgements
We would like to thank Steven Allison-Bunnell at eduweb for his conceptual contributions to the evaluation project and development of the survey interface. We also thank Susan Earle for her role in coding the PMM data, and Chris Wood and Steve Kelling for reviewing portions of this manuscript. The work described in this paper is supported by the National Science Foundation under Grant ESI-0087760. Any opinions, findings and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the National Science Foundation.
References
Adelman, L., J. Falk and S. James (2000). “Impact of National Aquarium in Baltimore on visitors’ conservation attitudes, behavior and knowledge”. Curator, 43(1), 33-60.
Bonney, R. (2004). “Understanding the process of research”. In D. Chittenden, G. Farmelo, & B. Lewenstein (Eds). Creating Connections: Museums and the Public Understanding of Current Research. Altamira Press, CA.
Brossard, D., B. Lewenstein & R. Bonney (2005). “Scientific knowledge and attitude change: The impact of a citizen science project”. International Journal of Science Education. 27(9): 1099-1121.
Cooper, C.B., W. Hochachka, T. Phillips & A.A. Dhondt (2006). “Geographic and seasonal gradients in hatching failure in Eastern Bluebirds reinforce clutch size trends”. Ibis. 148: 221-230.
Dhondt, A.A., S. Altizer, E.G. Cooch, A.K. Davis, A. Dobson, M.J.L. Driscoll et al. (2005). “Dynamics of a novel pathogen in an avian host: Mycoplasmal conjunctivitis in House Finches”. Acta Tropica. 94(1): 77-93.
Falk, J., T. Moussouri & D. Coulson (1998). “The effect of visitors' agendas on museum learning”. Curator 41 (2): 107-120.
Hames, R., K. Rosenberg, J. Lowe, S. Barker & A. Dhondt (2002a). “Effects of forest fragmentation on tanager and thrush species in eastern and western North America”. Studies in Avian Biology. 25: 81-91.
Hames, R., K. Rosenberg, J. Lowe, S. Barker & A. Dhondt (2002b). “Adverse effects of acid rain on the distribution of the Wood Thrush Hylocichla mustelina in North America”. Proceedings of the National Academy of Sciences. 99: 11235-11240.
Hochachka, W., J. Wells, K. Rosenberg, D. Tessaglia-Hymes & A. Dhondt (1999). “Irruptive migration of common redpolls”. Condor. 101: 195-204.
Krasny, M., & R. Bonney (2005). Environmental education through citizen science and participatory action research. In E.A. Johnson & M.J. Mappin (Eds). Environmental education or advocacy: Perspectives of ecology and education in environmental education. Cambridge University Press.
Phillips, T., B. Lewenstein & R. Bonney (2006). “A case study of citizen science”. In D. Cheng, J. Metcalfe, & B. Schiele (Eds). At the human scale: International practices in science communication (pp. 317-334). Science Press, Beijing.
Trumbull, D.J., R. Bonney, D. Bascon & A. Cabral (2000). “Thinking scientifically during participation in a citizen-science project”. Science Education. 84: 265-275.
Cite as:
Thompson, S., and R. Bonney. “Evaluating the Impact of Participation in an On-line Citizen Science Project: A Mixed-Methods Approach”. In J. Trant and D. Bearman (eds). Museums and the Web 2007: Proceedings. Toronto: Archives & Museum Informatics, published March 1, 2007. Consulted at http://www.archimuse.com/mw2007/papers/thompson/thompson.html