Stephen Brown, De Montfort University, United Kingdom
This paper examines the extent to which user-centred design can be applied to the development of effective learning experiences in museum Web sites. If it is good practice to formatively test Web designs during development to ensure they deliver what they are supposed to, then arguably this applies as much to learning activities as to any other aspect of the design. But we cannot test predictively if we don't know what the design is intended to achieve. So if the purpose is to support and encourage learning, then specifying learning outcomes is essential. While some authors argue that specific learning outcomes are undesirable because they limit the usability or relevance of an activity, this paper suggests that they are essential because they offer a benchmark against which designers, Museum directors and funding bodies can measure how effective the investment of time, money, and imagination has been. Prediction is not the same as prescription, and well designed learning activities based on predictive learning outcomes need not result in learning activities that proscribe alternative uses.
Keywords: user-centred design; return on investment; learning outcomes; GLOs; user testing; user requirements; objectives
The Importance Of Learning In Museums
Debate about the importance of education and learning in museums has a long history. As far back as 1942, The Committee on Education of the American Association of Museums commissioned a report on the educational problems of museums (Low 1942). While acknowledging that museums have a threefold purpose: collection, scholarship, and education, Low recommended that museums should base their future identity and purpose on education. This view was echoed 40 years later when the American Association of Museums’ report of the Commission on Museums for a New Century (American Association of Museums 1984) concluded that education would be among the primary issues facing museums in the 21st century.
Of course, this is not just an American perspective. The potential of museums to support and encourage learning is generally recognised. The UK Museums Libraries and Archives Council (MLA) notes that museums “can make a real difference to people’s lives by using their collections for inspiration, learning and enjoyment” (MLA 2001). Between 1999 and 2004 the UK Department for Education and Skills invested £4 million specifically in schools-focused museums and gallery learning through the Museums and Galleries Education Programme (MGEP). In April, 2004, the Department for Education and Skills and the Department for Culture, Media and Sport announced a joint investment of more than £7 million over the following two years to strengthen the capacity of museums and galleries to support children and young people's education (DfES 2004). In a joint introduction to the Department for Education and Employment’s report on the learning power of museums, the then (2000) UK Secretaries of State for Education and for Culture Media and Sport said:
Learning is at the heart of this Government’s agenda because it is the key to a rich life for individuals and prosperity for the nation…. the Government is seeking to create the ‘learning habit’ across the country, so that people of all ages can understand and enjoy the great cultural achievements of the past and the present, and gain the skills, attitudes and knowledge they need to contribute to and share in the information and communication age of this new century (Smith and Blunkett 2000).
In 2006, 86% of UK museums were used by formal educational groups and 88% by informal education groups; 40% of museums reported outreach to community groups, and 29% of museums reported outreach to older visitors (MLA 2006).
On-line exhibits attract visitors from a wide range of backgrounds (Marable 2004). Not surprisingly, therefore, we are seeing increasing forays into learning by museums through their Web sites. In 2006, 27% of museums reported on-line activity / e-learning by formal education groups.
Museums are being reinvented as physical and virtual spaces in which people engage and learn, interacting with objects and discovering their stories. Interweaving the real and the virtual creates a powerful brand, enabling museums to occupy centre stage in cultural cyberspace (MLA 2001).
Why Return On Investment Matters For Museum Web Designers
Web site investment costs are non-trivial. At the V&A, the on-line Museum costs for 2005/6 were £452,266, including staff (V&A 2005a). The Making the Modern World exhibition at the National Science Museum, London, alone cost £500,000. Even leaving aside staff costs, annual non-staff budgets for Web development at major UK national museums range from £20,000 to £150,000 (NMOLP 2006). The consequences of getting it wrong can be damaging in terms of not only waste of money but also public perception of and attitudes towards the institution. The importance of Web sites to Museums’ strategies is evident from the level of traffic they attract and the budgets allocated to them. For example, visits to the Natural History Museum, London, Web site (http://www.nhm.ac.uk) rose from 1.7 million in 2000 to 3.1 million in 2001 (NHM 2001). The Victoria and Albert Museum (V&A) Web site (http://www.vam.ac.uk) had 1.1 million user sessions in 2001/2002, rising to over 6 million in 2004/2005 (V&A 2005b). Significantly, 20% of the V&A Web site visitors in 2003 had never visited the physical Museum, demonstrating the powerful outreach effects of the Web site (V&A 2003). Return on investment is an important issue, therefore, that most Museum managers cannot ignore. This has important implications for Museum Web developers. It is important that we develop sites that work well and that we do so as efficiently as possible. But how can we demonstrate that our designs are effective and, more important, how can we predict reliably, before a significant investment has been made, how effective a particular design is likely to be?
Using User-Centred Design To Predict Design Performance
User-centred design (Katz-Haas 1998, Vergo et al 2001) offers a methodology for ensuring designs are effective by closely matching them to user expectations and needs, based on user requirements analysis and testing. This approach makes the user rather than the product the focus of attention:
User-Centered Design (UCD) is a user interface design process that focuses on usability goals, user characteristics, environment, tasks, and workflow in the design of an interface. UCD follows a series of well-defined methods and techniques for analysis, design, and evaluation of mainstream hardware, software, and Web interfaces. The UCD process is an iterative process, where design and evaluation steps are built in from the first stage of projects, through implementation (Henry and Martinson 2003).
The results of the user trials are fed back into the design process to shape the design ideas formatively. A typical sequence of design and evaluation is shown in Figure 1, and Figure 2 shows low-cost early paper prototypes used for testing.
User-centred design is not a new concept in museums (Arseneault and Robert 2003; Jefsioutine et al 2004; Mitroff et al 2003; Peacock et al 2004; Vergo et al 2001).
Implicit in this approach is the idea that what the final product should be able to do is both knowable in advance and measurable. By measuring typical user responses to successively more sophisticated design drafts, we can adjust the design to accommodate user requirements as closely as possible. Therefore, provided we can specify the intended learning outcomes of museum Web sites, user-centred design should enable us to predict with some confidence how well the final product will perform.
Using Learning Outcomes As Measures Of Performance
The idea of measuring the learning outcomes of cultural organisations is not new. Increasingly, museums are seen as instruments to implement policies on social inclusion, cohesion and access (Lawley 2003, Sandell 2003) and are required to present evidence of their contributions (Hooper-Greenhill 2004). In the UK this has become a serious issue for publicly funded museums, galleries and archives. Funding levels across the sector are contingent on being able to present such evidence (Selwood 2001).
In formal education, learning outcomes, or learning “objectives” as they are sometimes called, are used to assess the extent to which the learner, the teacher and the learning activity have been successful. The essence of a well-formulated learning outcome is that it should be specific, objective and measurable (Bloom et al 1956, Mager 1984). That is to say, it should define unambiguously what the learner should be able to do, in terms that make it feasible for the learners themselves and for others to reliably measure their performance. For example, “at the end of this paragraph you, the reader, should be able to explain what learning outcomes are in terms of their distinguishing characteristics, purpose and context” is a learning outcome that could be applied to the activity of reading this paragraph. Specified learning outcomes allow educationalists to apply user-centred design principles to curriculum development. Learning activities can be tested against their intended outcomes in draft form with the help of volunteers or sample learners to assess how well they work. Distance learning and on-line teaching institutions such as the UK Open University make considerable use of this kind of iterative developmental testing (Brown et al 1981) because courses, once offered to fee-paying students, have to be robust, fit for purpose and right first time.
So can we apply the concept of learning outcomes to learning activity design in museum Web sites? While attempts have been made to draw upon theories of learning to inform the design of learning activities in museums (Cassels, 1992; Davis and Gardner, 1993), some commentators have pointed out that there are limits to the transferability of such theories because of significant differences between learners in formal contexts, such as schools, and informal contexts such as museums (Low 1942, American Association of Museums 1984, Hooper-Greenhill 1991, 1992, Hein 1998, Clarke 2001). Moussouri (2002), for example, argues that the museum audience is so much more diverse in terms of age, interests, knowledge, skills and motivations than a registered student that what the visitor wants from a learning experience is essentially unknowable in advance and therefore learning outcomes for activities cannot be specified except very broadly. But if we cannot specify learning outcomes, then arguably we cannot measure them, and if we cannot measure them, then how can we demonstrate the overall impact that museums, archives and libraries have on people's informal, lifelong learning (Hooper-Greenhill 2004)?
Alternative Ways Of Measuring Learning Impact: Emergent Vs Predictive
The stakes are high, and in the UK the search for appropriate metrics has led to the development of a system of Generic Learning Outcomes for measuring the impact of museums on learners that attempts to side-step the difficulty of matching intended learning outcomes with learner requirements by distinguishing between intended learning outcomes and actual learning outcomes (Hooper-Greenhill et al 2004). Actual learning outcomes, it is argued (Hooper-Greenhill 2004), can be grouped into five generic types:
- Knowledge and Understanding
- Skills
- Attitudes and Values
- Enjoyment, Inspiration, Creativity
- Action, Behaviour, Progression
Knowledge and Understanding is about acquiring facts but also about making connections.
Skills include intellectual, social, emotional and physical skills.
Attitudes and Values cover feelings, perceptions, opinions, attitudes and awareness.
Enjoyment, Inspiration, Creativity is a combination of feelings, such as enjoyment, and evidence of creative activity, such as exploration, experimentation and making.
Action, Behaviour, Progression refers to what people do during or after the activity, or what their intended further actions will be.
Tables 1 and 2 show typical sets of results of learning impact assessments carried out using Generic Learning Outcomes.
Table 1:
- 91% of KS2 and below pupils agreed ‘I enjoyed today’s visit’.
- 64% of KS3 and above pupils agreed ‘A visit to the museum / gallery makes school work more inspiring’.
- When pupils at KS2 and below were asked if they had learnt some interesting new things, 90% agreed. There were a number of questions about knowledge and understanding for the older pupils:
- 89% of KS3 and above pupils agreed ‘I discovered some interesting things from the visit today’.
- 77% of KS3 and above pupils agreed ‘The visit has given me a better understanding of the subject’.
- 77% of KS3 and above pupils agreed ‘Today’s visit has given me lots to think about’.
- 74% of KS3 and above pupils agreed ‘I could make sense of most of the things we saw and did at the museum’.

Table 2:
- 88% of all participants, including teachers, judged that pupils had learnt either something or a lot about a specified topic (n=6065).
- For Key Stage 2 and above, 47% of pupils judged that they had learnt a lot about a specified topic (n=3993).
- 70% of teachers judged that pupils had learnt a lot about a specified topic (n=536).
- 61% of Key Stage 1 pupils reported that they had learnt a lot about a specified topic (n=1413).
Focused as they are on user-defined, post hoc measures of impact, Generic Learning Outcomes can be characterised as “emergent”. That is to say, they emerge from the interaction between learner and learning experience, focusing on the actual learning outcomes rather than any intentions on the part of the designers.
This shift in focus to emergent learning impact has very real advantages for museum directors and funding bodies seeking to assess overall value for money and to justify allocations of resources. (Within two years of its launch, around half of UK museums were using a Generic Learning Outcomes based evaluation framework: MLA 2006.) It provides a way of assessing the extent to which the museum experience has had a positive effect on the visitor that goes considerably beyond basic measures such as visitor numbers and “happy sheet” questions (i.e. “How much did you enjoy your visit to our museum today?”). But from a design perspective, it has a major flaw. The way these Generic Learning Outcomes are measured tends to be subjective. They are judgments made by the participants either about the impact of the experience on themselves or on others (for example, by teachers about their students). So, for example, in Tables 1 and 2, the results reported are all opinions rather than objective measures. Under this regime, the best learning activity design will be the one that engenders the most positive responses, but until the design has undergone extensive testing with large numbers of participants, its overall impact on users will not be apparent: there will not be enough interactions for the overall pattern to emerge. “Actual learning outcomes in this model may not emerge until a period of time has elapsed” (Clarke 2001: 26). Given the tight budgetary and time constraints that surround most Web development projects, lengthy field trials are usually out of the question during the development phase. This effectively leaves the designer without a tool for testing the effectiveness of the design until most of the investment has been made.
So while emergent learning outcomes are useful for retrospective assessment of overall impact, they are of little use for supporting a developmental user-centred design approach. We need “predictive” measures to assess performance iteratively against intended learning outcomes.
The argument against building museum learning activities around predictive learning outcomes is that “since each individual learns in their own way, using their own preferred learning styles, and according to what they want to know, each individual experiences their own outcomes from learning” (Hooper-Greenhill et al 2004). It follows from this that attempts to specify predictive learning outcomes are undesirable and impractical: undesirable because they restrictively prescribe learner behaviours, and impractical because the visitor will use learning activities in quite unpredictable ways (Hooper-Greenhill 2004). “Learners construct meaning on their own terms no matter what teachers do” (Richardson 1997). We reject this view, not only because of its internal contradiction but also because it represents a misunderstanding of the purpose of predictive learning outcomes. The fact that a well-designed learning activity is set up to support a particular learning outcome does not prevent it from being used to deliver a range of different outcomes, depending on the users’ needs and imaginations – just as a cup can be relied upon to contain hot coffee safely, but it can also be used for cutting pastry circles, for propping open the door, for throwing as a missile, or for providing hardcore for your new driveway. Well-designed learning activities will help learners achieve particular goals if those goals are what the learners want to achieve. But if the learners feel inspired to do something different with the resources that make up the learning activity, then, unless it’s a piece of programmed learning, they should be no more disadvantaged than if they were presented with an unstructured set of resources and invited to construct their own learning activity.
Clarke (2001: 26) suggests that: “If you see your project as a means to meet individual learning needs and meaning-making, you will need to plan for open-ended, flexible and multi-layered outcomes.” This could be interpreted as meaning that the designer should not be concerned with devising activities that support particular kinds of outcome. But this would be to miss the point. Designing for flexible, multiple outcomes is not the same as planning for no outcomes at all, and so it still requires the developer to think carefully about what outcomes the activity could engender and what support the learners might need to achieve those outcomes. In other words, we need to “provide for differentiated outcomes for a range of types of learner” (ibid) by “scaffolding” their learning experience. Scaffolding (Vygotsky 1978) refers to the creation of a structure that supports learning by anticipating difficulties and offering guidance and feedback so that the learners may learn exploratively but within a framework that helps them to
- understand what they are doing and why (i.e. it supports metacognition and purposeful behaviour)
- check on their understanding
- overcome typical misunderstandings and barriers to learning
Resistance to traditional learning outcomes seems to stem from confusion between “predictive” and “prescriptive”. Prescriptive activities have predetermined outcomes such that the learner is either right or wrong. In other words, “incorrect” use is proscribed by the design of the activity itself.
Figure 3 shows two activities from the learning modules in the UK National Science Museum Web site Making the Modern World (http://makingthemodernworld.org). Typically they comprise some content plus an exercise of some kind. The content comprises narrative media, such as text, and various multimedia components, such as animations and slide shows. The associated exercises variously invite learners to “explore” (i.e. read, watch and listen to) some narrative and then perform some actions such as answer multiple choice questions, match up some phrases, or take notes. For example, in the activity: ‘Examining the Evidence of Child Labour’, learners are asked:
From the interview, what phrases could be used to justify the regulation or limitation of child labour? To answer: Write five or six short sentences in note form, or alternatively a paragraph as your answer. You must include words and phrases from the extract. Then submit. You can print out and keep your answer, so try and make it long and comprehensive enough to be a good record of your work.
From the requirement to “include words and phrases from the extract” and the advice to “make it long and comprehensive enough to be a good record of your work”, we may infer that the learners’ role is to read the narrative and make such notes as will enable them to recall as closely as possible that narrative. The emphasis on using words and phrases from the extract is significant because it discourages learners from internalising the content, relating it to other knowledge or experiences and re-expressing it in their own terms.
But learning activities don’t have to be so prescriptive. AccessArt (http://www.accessart.org.uk/index.php), an on-line learning resource for “pupils, students, and lifelong learners as well as teachers, gallery educators and artists”, is a good example of how on-line resources can be used to support clear learning objectives without prescribing the precise outcomes. The on-line drawing workshop for 16+ learners encourages and supports exploration, understanding, seeing and drawing. Another good example is the set of learning “challenges” in the British Museum’s Ancient Egypt interactive learning Web site (see, for example, the temple challenge at http://www.ancientegypt.co.uk/temples/home.html). In both cases the activities provided carefully “scaffold” the learning experience.
As we know, learning is more than passive receipt of information. It entails “active manipulation, assimilation and the application of information” (Clarke 2001: 6). The art of effective learning design is therefore to go beyond mere provision of information, or even of opportunities for interaction. It entails anticipating learner difficulties and devising activities that scaffold the learning while leaving the learners sufficient freedom to pursue their own ideas and methods.
So rather than eliminating the need to specify predictive learning outcomes, the open-ended flexibility required of a museum learning activity requires the designer to construct a scaffold that supports a range of possible learning outcomes. This complexity makes it all the more important to be able to test predictively the performance of designs before too much investment has been made.
Most museum projects have some educational element (Clarke 2001). (In 2006, 87% of UK museum curators contributed to museum education [MLA 2006].) Not surprisingly, museums are finding it increasingly important to provide access to and support for explicit learning activities via their Web sites. The high cost of developing quality Web sites means that Museum directors and Web designers alike need to be confident, and able to assure others, that their designs represent a good return on investment. In the absence of reliable predictive methods, museums and Web designers are exposed to the risk of censure, or at least criticism, for failing to make best use of their resources. User-centred design is an established methodology for producing Web sites that work well. But it relies on being able to specify in advance what the site should actually do. In a learning context, this means specifying the learning outcomes. While there has been some debate about how appropriate or feasible it is to specify predictive performance measures in the context of informal, adult learning, this paper has argued that clearly specifying what a particular Museum Web site design is intended to achieve in terms of learning outcomes does not limit its usability or relevance, and it is essential in order to ensure maximum return on investment in the design. Unambiguously specified designs can be tested iteratively during development to ensure that they are performing as required. Vaguely defined design goals that cannot be tested predictively leave Web developers and Museum managers exposed to the risk of failure.
Emergent learning outcomes such as Generic Learning Outcomes are not practical metrics for formatively developing Web sites, therefore. Predictive learning outcomes are needed to support user-centred design. Prediction is not the same as prescription, and well-designed learning activities based on predictive learning outcomes need not result in learning activities that proscribe alternative uses. The advantage of predictive learning outcomes for designers is that they provide us with some kind of benchmark against which to test prototype designs before too much investment has been made. And, although large-scale field trials are possible, they are not necessary for user-centred design (Nielsen 1989). Satisfactory testing can be conducted with as few as three or four participants (Krug 2000), so testing can, and should, be done both early and often.
American Association of Museums (1984). Museums for a new century: A report of the Commission on Museums for a New Century, American Association of Museums. Washington, DC.
Arseneault, C. and J-M. Robert (2003). Having fun or finding information? Usability for kids sections of Web sites. In D. Bearman and J. Trant (eds.). Museums and the Web 2003: Proceedings. Toronto: Archives & Museum Informatics, 2003. Last updated March 2003. Consulted December 18, 2006. http://www.archimuse.com/mw2003/papers/arseneault/arseneault.html
Bloom, B. S., M.D. Engelhart, E.J. Furst, W.H. Hill, & D.R. Krathwohl (1956). Taxonomy of Educational Objectives—Handbook I: Cognitive Domain, New York: Longman.
Brown, S., G. Kirkup, M. Lewsey, M. Nathenson & I. Spratley (1981). Learning from evaluation at the Open University 1: A new model for course development. British Journal of Educational Technology 2 (2), 39.
Cassels, R. (1992). Mind, heart and soul: towards better learning in heritage parks. New Zealand Museums Journal 22 (2), 12-17.
Clarke, P. (2001). Museum Learning On Line. London: Resource: The Council for Museums, Archives and Libraries.
Davis, J. and H. Gardner (1993). Open windows, open doors. Museum News, January/February 1993.
DfES (2004). London: Department for Education and Skills. Last updated April 22, 2004. Consulted December 18, 2006. http://www.dfes.gov.uk/pns/DisplayPN.cgi?pn_id=2004_0055
Evans, M., K. Knight, D. Boden, N. MacGregor, S.Davies, N. Serota, D. Fleming, R. Sheldon, J. Glaister (2001). Renaissance in the Regions: A New Vision for England's Museums. London: re:source The Council for Museums, Archives and Libraries 2001.
Hein, G. E. (1998). Learning in the Museum. London: Routledge.
Henry, S.L. and M. Martinson (2003). Accessibility in User-Centered Design quoted in Notes on User Centered Design Process (UCD) W3C Web Accessibility Initiative. Last updated April 1, 2004. Consulted December 18, 2006. http://www.w3.org/WAI/EO/2003/ucd
Hooper-Greenhill, E. (1991). Museum and gallery education. Leicester, UK: Leicester University Press.
Hooper-Greenhill, E. (1992). Museums and the shaping of knowledge. New York: Routledge.
Hooper-Greenhill, E. (2004) Measuring Learning Outcomes in Museums, Archives and Libraries: The Learning Impact Research Project (LIRP). International Journal of Heritage Studies, 10(2) 151-174.
Hooper-Greenhill, E., J. Dodd, M. Philips, C. Jones, J. Woodward, H. O’Rian (2004). Inspiration, Identity, Learning: The Value of Museums. Leicester: Research Centre for Museums and Galleries, Leicester University, UK.
Jefsioutine, M., J. Arthur, M. Bawa (2004). Designing the User experience: An Evolving Partnership for Collaborative Research and Development. In D. Bearman and J. Trant (eds.). Museums and the Web 2004: Proceedings. Toronto: Archives & Museum Informatics, 2004. Last updated November 14, 2006. Consulted December 27, 2006. http://www.archimuse.com/mw2004/papers/jefsioutine/jefsioutine.html
Katz-Haas, R. (1998). Usability Techniques: User-Centered Design and Web Development. Last updated unknown. Consulted November 25, 2006. http://www.stcsig.org/usability/topics/articles/ucd%20_Web_devel.html#what_is_UCD
Krug, S. (2000). Don’t Make Me Think. Indianapolis: New Riders Publishing.
Lawley, I. (2003). Local authority museums and the modernizing government agenda in England. Museum and Society, 1(2) 75-86.
Low, T. (1942). The museum as a social instrument: A study undertaken for the Committee on Education of the American Association of Museums. New York: Metropolitan Museum of Art.
Mager, R. F. (1984). Preparing instructional objectives (2nd ed.). Belmont, CA: Pitman.
Marable, B. (2004). Experience, Learning, And Research: Co-ordinating the Multiple Roles of On-Line Exhibitions. In D. Bearman and J. Trant (eds.). Museums and the Web 2004: Proceedings. Toronto: Archives & Museum Informatics, 2004. Last updated November 14, 2006. Consulted December 27, 2006. http://www.archimuse.com/mw2004/papers/marable/marable.html
Mitroff, M., M. Misunas, S. Wise (2003). Bringing It All Together: A User-Centered Search Experience On the SFMOMA Web Site. In D. Bearman and J. Trant (eds.). Museums and the Web 2003: Proceedings. Toronto: Archives & Museum Informatics, 2003. Last updated March 2003. Consulted December 18, 2006. http://www.archimuse.com/mw2003/papers/mitroff/mitroff.html
MLA (2004). More About the Generic Learning Outcomes. London: Museums Libraries and Archives Council. Last updated 2004. Consulted December 12, 2006. http://www.inspiringlearningforall.gov.uk/measuring_learning/learning_outcomes/why_do_we_need_glos/_217/default.aspx
MLA (2006). Museum Learning Survey 2006: Final report. London: Museums Libraries and Archives Council. Last updated 2006. Consulted January 3, 2007. http://www.mla.gov.uk/resources/assets/M/museum_learning_survey_2006_10480.pdf
Moussouri T. (2002). A Context for the Development of Learning Outcomes in Museums, Libraries and Archives. London: re:source The Council for Museums, Archives and Libraries/Learning Impact Research Team, Research Centre for Museums and Galleries, University of Leicester, United Kingdom.
Nielsen, J. (1989). Usability engineering at a discount. In Proceedings of the Third International Conference on Human-Computer Interaction on Designing and Using Human-Computer Interfaces and Knowledge-Based Systems (2nd ed.), Boston, MA, September 1989, 394-401.
NHM (2001). The Natural History Museum annual report 2001. London: Natural History Museum. Last updated 2001. Consulted December 15, 2006. http://www.nhm.ac.uk/about-us/corporate-information/annual-reports/report/report2001/pages/textonly/00-textonly.html#communicating
NMOLP (2006). National Museums On-line Learning Project. Project Implementation Plan. Work in Progress V2. 2006. London: V&A Museum.
Peacock, D., D. Ellis, J. Doolan (2004). Searching For Meaning: Not Just Records. In D. Bearman and J. Trant (eds.). Museums and the Web 2004: Proceedings. Toronto: Archives & Museum Informatics, 2004. Last updated November 14, 2006. Consulted December 27, 2006. http://www.archimuse.com/mw2004/papers/peacock/peacock.html
Richardson, V. (ed) (1997). Constructivist teacher education: building a world of new understandings, London: Falmer Press. p.62
Sandell, R. (2003). Social inclusion, the museum and the dynamics of sectoral change. Museum and Society, 1 (1) 45-62.
Selwood, S. (ed) (2001). The UK cultural sector: profile and policy issues. London: Cultural Trends and Policy Studies Institute.
Stanley, J., P. Huddleston, C. Grewcock, F. Muir, S. Galloway, A. Newman, S. Clive (2004). Final Report on the Impact of Phase 2 of the Museums and Galleries Education Programme. London: DfES. Last updated September 2004. Consulted January 2, 2007. http://www.dfes.gov.uk/research/data/uploadfiles/RBX09-04.pdf
Smith, C. and D. Blunkett (2000). The Learning Power of Museums – A Vision for Museum Education. London: DCMS/DfEE.
V&A (2003). Victoria and Albert Museum Board of Trustees Minutes 11 September 2003. Last updated September 2003. Consulted December 18, 2006. http://www.vam.ac.uk/files/file_upload/5767_file.doc
V&A (2005a). Victoria and Albert Museum Strategic Plan 2005-2010. London: V&A. Last updated unknown. Consulted December 18, 2006. http://www.vam.ac.uk/files/file_upload/13138_file.pdf
V&A (2005b). Victoria and Albert Museum Board of Trustees Minutes 17 March 2005. Last updated March 2005. Consulted December 18, 2006. http://www.vam.ac.uk/files/file_upload/13472_file.doc
Vergo, J., C-M. Karat, J. Karat, C. Pinhanez, R. Arora, T. Cofino, D. Riecken, M. Podlaseck, T.J. Watson (2001). “Less Clicking, More Watching”: Results From the User-Centered Design Of A Multi-Institutional Web Site for Art and Culture. In D. Bearman and J. Trant (eds.). Museums and the Web 2001: Proceedings. Toronto: Archives & Museum Informatics, 2001. Last updated unknown. Consulted December 15, 2006. http://www.archimuse.com/mw2001/papers/vergo/vergo.html
Vygotsky, L. (1978). Mind in Society. Cambridge, MA: Harvard University Press.
Brown, S. (2007). Let's Be Specific: Predicting Return On Investment In On-line Learning Activity Design. In J. Trant and D. Bearman (eds.). Museums and the Web 2007: Proceedings. Toronto: Archives & Museum Informatics. Published March 1, 2007. Consulted http://www.archimuse.com/mw2007/papers/brown/brown.html