Building Informal Science Education (BISE) Project

Within the field of evaluation, there are only a limited number of places where evaluators can share their reports. We are fortunate that one such resource exists in the informal learning community: informalscience.org. The site gives evaluators access to a rich collection of reports they can use to inform their practice and to learn about the wide variety of designs, methods, and measures used in evaluating informal education projects. In what ways might the evaluation and research community use a collection of evaluation reports to generate and share useful new knowledge? The Building Informal Science Education (BISE) project set out to answer this question.

BISE was an NSF-funded collaboration between the University of Pittsburgh Center for Learning in Out-of-School Environments, the Science Museum of Minnesota, and the Visitor Studies Association. The BISE project team spent five years diving deep into the evaluation reports uploaded to informalscience.org through May 2013 in order to begin to understand what the field can learn from such a rich resource. We are happy to share our project resources and white papers here with the informal learning field.

This project helps VSA address its core mission: to build and share a research base in informal learning and to use evaluation outcomes to improve practice in the field.

BISE Project Resources

The BISE project offers a wide range of freely available resources to help inform your own evaluation practice. The BISE team created an extensive coding framework and used it to code all 520 reports included in the project database. Coding categories and related codes were created to align with key features of evaluation reports and with the coding needs of the BISE white paper authors.

BISE Coding Framework


BISE NVivo Database 

BISE’s NVivo database includes all of the coding applied by the BISE team based on the BISE Coding Framework. This includes codes applied to specific sections of a report (referred to as “nodes” in NVivo) and codes applied to an entire report (referred to as “attributes” in NVivo).

There are three separate NVivo projects available for use: NVivo 9 for PC, NVivo 10 for PC, and NVivo 10 for Mac. Unfortunately, older versions of NVivo cannot open these files. Note: The database is over 1 GB, so it will take a while to download.

BISE NVivo Database for Mac NVivo 10  |  BISE NVivo Database for PC NVivo 9  |  BISE NVivo Database for PC NVivo 10

How To Use the BISE NVivo Database

This document provides examples of questions you can answer in NVivo by running matrix queries, running coding queries, and creating sets. A sketch of how exported query results might be explored outside NVivo follows the link below.

How To Use the BISE NVivo Database
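NVivo queries run inside the NVivo application itself, but query results can be exported to a spreadsheet and explored further with ordinary data tools. Below is a minimal Python sketch, not part of the BISE materials: it assumes a matrix query has been exported as a CSV named matrix_query_export.csv, with reports as rows and codes as columns. The file name, layout, and the “Surveys” column are all hypothetical.

```python
# Minimal sketch: exploring an NVivo matrix-query export in Python.
# Assumptions (not part of the BISE materials): the query was exported as
# "matrix_query_export.csv", rows are reports, columns are codes, and cell
# values are coding counts. The "Surveys" column is hypothetical.
import pandas as pd

matrix = pd.read_csv("matrix_query_export.csv", index_col=0)

# Reports coded with a given code at least once.
surveys = matrix[matrix["Surveys"] > 0]
print(f"{len(surveys)} of {len(matrix)} reports were coded 'Surveys'")

# Code-by-code co-occurrence: how often pairs of codes appear in the
# same report.
presence = (matrix > 0).astype(int)
co_occurrence = presence.T @ presence
print(co_occurrence)
```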

 

Excel File of BISE Report Level Codes

This Excel file includes the report-level codes applied to each of the 520 reports, based on the BISE Coding Framework. A sketch of how the file might be explored programmatically follows the link below.

Excel File of BISE Report Level Codes
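For readers who prefer scripting to spreadsheets, here is a minimal sketch for tallying the report-level codes with pandas. The local file name and the “Evaluation Type” column are assumptions, so check the actual spreadsheet headers before running.

```python
# Minimal sketch: tallying report-level codes with pandas.
# The file name and the "Evaluation Type" column are assumptions; inspect
# the actual spreadsheet headers first.
import pandas as pd

codes = pd.read_excel("BISE_report_level_codes.xlsx")  # requires openpyxl

print(codes.shape)             # expect 520 rows, one per report
print(codes.columns.tolist())  # see which report-level codes exist

# Frequency of each value of a hypothetical report-level attribute.
print(codes["Evaluation Type"].value_counts())
```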


BISE Reports 

This zip file includes the 520 reports that were downloaded from informalscience.org and coded as part of the BISE project. Each report is referred to by a project ID number that is used across all of the BISE resources, so the reports can be linked to the other files; a sketch of one way to do this follows the link below. Note: The zip file is close to 1 GB, so it will take a while to download.

Folder of BISE Reports
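Because the same project ID appears in the Excel file of report-level codes, the downloaded reports can be joined to the codes programmatically. The sketch below is hypothetical: the folder name, spreadsheet name, “Project ID” column, and the assumption that each file name starts with its project ID would all need to be checked against the actual files.

```python
# Minimal sketch: linking report PDFs to report-level codes by project ID.
# Folder name, spreadsheet name, the "Project ID" column, and the
# file-naming convention (ID at the start of each file name) are all
# assumptions; adjust them to match the actual zip contents.
import re
from pathlib import Path

import pandas as pd

codes = pd.read_excel("BISE_report_level_codes.xlsx")
report_dir = Path("BISE_reports")  # the unzipped folder

found_ids = set()
for pdf in report_dir.glob("*.pdf"):
    match = re.match(r"(\d+)", pdf.name)  # leading digits as project ID
    if match:
        found_ids.add(int(match.group(1)))

coded_ids = set(codes["Project ID"])
print(f"{len(found_ids & coded_ids)} PDFs matched to report-level codes")
print(f"{len(coded_ids - found_ids)} coded reports without a matching PDF")
```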

 

BISE Project's Resources Worksheet 

This worksheet helps you think through ways you might use the BISE project’s resources to plan your own evaluation or learn about evaluation practices in the informal learning field.

Exploring the BISE Project’s Resources Worksheet

 

BISE EndNote Library

The EndNote library includes citations for all 520 reports that were coded as part of the BISE project. PDF copies of each report are included with the citations. Note: Since the EndNote file includes all of the reports, it is quite large (close to 1 GB), so it will take a while to download.

BISE EndNote Library

 

BISE Citations

This is a file exported from EndNote that can be imported into Mendeley citation management software. Disclaimer: Citations may need to be cleaned up after import, as the transfer from EndNote to Mendeley may not be perfectly clean. A sketch of one way to spot-check records before importing follows the link below.

BISE Citations to Import into Mendeley
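If the export turns out to be in RIS format (one common EndNote export format; this is an assumption, as the actual format is not stated here), a quick pre-import check like the sketch below can flag records that are missing basic fields before they reach Mendeley.

```python
# Minimal sketch: flagging RIS records that lack common fields before
# importing into Mendeley. Assumes the exported file is RIS-formatted and
# saved locally as "citations.ris"; both are assumptions.
REQUIRED_TAGS = {"TI", "AU", "PY"}  # title, author(s), publication year

def check_ris(path):
    """Yield (record number, missing tags) for incomplete RIS records."""
    record_tags, record_num = set(), 0
    with open(path, encoding="utf-8-sig") as f:
        for line in f:
            tag = line[:2]
            if tag == "ER":  # "ER  -" marks the end of a record
                record_num += 1
                missing = REQUIRED_TAGS - record_tags
                if missing:
                    yield record_num, sorted(missing)
                record_tags = set()
            elif tag.strip():
                record_tags.add(tag)

for num, missing in check_ris("citations.ris"):
    print(f"Record {num} is missing tags: {', '.join(missing)}")
```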

Questions about any of the BISE resources? Contact Amy Grack Nelson, Evaluation & Research Manager at the Science Museum of Minnesota.

 

Publications 

As part of the BISE project, a number of VSA members used the BISE database to produce papers about the informal learning field. The papers, shared below, illustrate what can be learned from synthesizing the evaluation reports in the database. Shorter summaries of the papers can be found in the BISE blog series on informalscience.org.

Reporting for Evaluator Audiences

Amy Grack Nelson and Zdanna Tranby

There are a number of places where evaluators can share their reports with each other, such as the American Evaluation Association’s eLibrary, the website informalscience.org, and organizations’ own websites. Even though opportunities to share reports online are increasing, the evaluation field lacks guidance on what to include in evaluation reports meant for an evaluator audience. If the evaluation field wants to learn from evaluation reports posted to online repositories, how can evaluators help ensure the reports they share are useful to this audience? This paper explores this question through an analysis of 520 evaluation reports uploaded to informalscience.org. The researchers created an extensive coding framework aligned with features of evaluation reports and evaluators’ needs, and used it to identify how often key elements were included in, or missing from, the reports. The analysis resulted in a set of guiding questions for evaluators preparing reports to share with other evaluators.

Download the complete synthesis paper

 

Websites: A Guiding Framework for Focusing Website Evaluations

Carey Tisdal

The aim of this study was to explore 22 website evaluation reports, or sections of larger evaluation reports centering on a website, in order to identify, define, and provide examples of the range of evaluation focus areas that can inform the design of website evaluation studies. The sample consisted of reports contributed to the informalscience.org online database. Prior to this study, staff members at the Science Museum of Minnesota organized and coded the database of evaluation reports as part of the Building Informal Science Education (BISE) project funded by the National Science Foundation (NSF). In this analysis, grounded theory methodology and the constant comparative method (Glaser & Strauss, 2009) were used to identify and define nine major evaluation focus areas that appeared in one or more of the 22 reports: Target Audiences, User Characteristics, Awareness, Motivation, Access, Usability, Use, User Impacts, and System Effectiveness. In addition, the analysis identified connections among these elements, yielding a guiding framework for website evaluation design. The guiding framework presents seven of the major evaluation focus areas as sequential, necessary steps toward accomplishing User Impacts and System Effectiveness.

Download the complete synthesis paper

 

Museums & Social Issues: A Research Synthesis of an Emerging Trend

Kris Morrissey, Kaylan Petrie, Katherine Canning, Travis W. Windleharth, Patricia Montaño

Museums are increasingly engaging with their communities to understand and address the complex questions of our society. How is this effort manifested in museum practice, and what is the impact of this work? Our study explored the boundaries of these questions by reviewing and synthesizing reports on informalscience.org. The work was part of the NSF-funded Building Informal Science Education (BISE) project. We selected a small set of reports on projects that aligned with our definition of social issues: conditions that are harmful to society, complex, and characterized by a lack of agreement.

The review and synthesis of the selected studies suggests that museums are addressing a very limited number and scope of social problems, largely ignoring those grounded in mathematical or economic knowledge. When museums do address social problems, the efforts appear to be generally successful. Two promising trends and strategic opportunities revolve around the use of dialogue in museum programming and the design for collective impact across institutions and organizations.

Synthesizing and aggregating what is known about this emerging area of practice is limited by the lack of a shared vernacular and of platforms where these conversations can take place. Articulating and operationalizing a definition of social issues was the most difficult, and perhaps the most useful, part of the challenge.

Download the complete synthesis paper

 

A Review of Recommendations in Exhibition Summative Evaluation Reports

Beverly Serrell

Summative evaluations of museum exhibitions are generally conducted with the aims of measuring whether an exhibition met its goals, identifying areas for improvement, and assessing impact. In many cases, evaluation studies also serve to advance the field by providing lessons for funders, policy makers, and practitioners beyond the project. This report draws on summative evaluations that included recommendations, particularly those that might be useful as lessons learned and as suggestions for improving the exhibitions that were evaluated. Using a bottom-up method of review, the issues that emerged as most common included orientation, conceptual communication, boundaries, the need for prototyping, and utilization (both under- and over-). These issues have wide applicability across institution types and exhibition topics. This report concludes with the author’s recommendations for making recommendations in summative evaluations of exhibitions.

Download the complete synthesis paper

 

Data Collection Methods for Evaluating Museum Programs and Exhibitions

Amy Grack Nelson and Sarah Cohn

Museums often evaluate various aspects of their audiences' experiences, be it what they learn from a program or how they react to an exhibition. Each museum program or exhibition has its own set of goals, which can drive what an evaluator studies and how an evaluation evolves. When designing an evaluation, data collection methods are purposefully selected to provide the data needed to measure those goals and answer the evaluation questions at hand. This article provides an overview of the data collection methods commonly used in museum-related evaluations, informed by the work of the Building Informal Science Education (BISE) project.

Access the article in the Journal of Museum Education