
Museums and the Web

An annual conference exploring the social, cultural, design, technological, economic, and organizational issues of culture, science and heritage on-line.

Beyond Cool: Making Mobile Augmented Reality Work for Museum Education

Shelley Mannion, Samsung Digital Discovery Centre at The British Museum, United Kingdom

http://www.britishmuseum.org/samsungcentre

Abstract

In fall 2010, the British Museum’s education team embarked on a multi-year plan to explore the potential of Augmented Reality (AR) in our galleries. We have since run four AR projects for young people, each of which focused on a different aspect of the technology. Our first project focused on revealing content on markers; the second on creating alternative views on the collection; the third on location-based AR for navigation; and the fourth on exhibiting virtual art in the galleries. These projects have both challenged and surprised us. Technical problems like the instability of compasses for positioning and poor 3G reception have challenged us to find workable solutions. Working with different age groups in formal and informal education settings led to discoveries about AR’s ability to delight and engage young learners and to teach fine motor skills. Shoestring budgets forced us to be creative and resourceful in our approach to mobile app development. Here, we share our experiences and lessons learned in integrating AR into museum and gallery learning programs.

Despite widespread curiosity about AR, in 2010 few museums had used it in a serious way with large numbers of visitors. AR is essentially a new style of interaction which belongs to the wider family of natural user interfaces (NUI). Other natural interactions include touch, gesture, eye tracking and emerging interaction styles that accept user input through intuitive physical actions and movements rather than the keyboard and mouse. The elegant touch screens of Apple products like the iPad are the best examples of this new style of interaction; they have dramatically changed users’ expectations about how user interfaces should work. Increasingly, children who attend our digital learning events expect these styles of interaction. Driven by the demands of our young audiences, the British Museum’s education team decided to experiment with AR and explore how it could be used to deliver learning programs for our collection. Thinking about AR in education forced us to go beyond the cool factor. We knew it was an exciting technology, but we wanted to understand what it could contribute to learning.

We were not the first to experiment with AR in this context. Among the pioneers was media artist Hugo Barroso, whose interactive artwork Pret-a-Porte (2005) was installed at the Centro Nacional de los Artes in Mexico City. Children donned wearable AR markers and stood in front of a two-way mirror. The mirror displayed unusual costumes, armor and extra appendages depending on which markers they wore. In 2009, London’s V&A collaborated with openFrameworks developers to create an AR experience triggered by facial recognition. Children sat in front of a webcam and saw a mask generated over their face, dynamically assembled from Baroque patterns in the collection. No two masks were the same. In May 2010, the Getty Museum created an AR activity for the Augsburg Display Cabinet. Users downloaded a marker from a Web page, printed it out at home and used a webcam to see a 3D reconstruction of the cabinet which they could manipulate. An ambitious tablet-based AR application at the Natural History Museum in London populates the Attenborough Studio with fish, birds, dinosaurs and other animals as part of an interactive film about evolution.

Figure 1: Media artist Hugo Barroso’s interactive installation Pret-a-Porte (2005) explored the potential of wearable AR in a learning context.

AR for Ancient Egypt

These examples were compelling, but none fulfilled our goal of getting children into the galleries with mobile devices. The arrival of 20 Samsung Galaxy Apollo phones in November 2010 provided the opportunity for our first mobile project. The delivery of the phones coincided with a major temporary exhibition about the ancient Egyptian Book of the Dead. With a tiny budget of roughly $2,500 USD, building a fully-featured AR app was out of the question. Instead, we concentrated on building a simple and inexpensive AR interaction into a wider activity for families, which also included a paper-based gallery trail and a desktop publishing task on the computer. The activity, called Passport to the Afterlife, had two parts:

1) Families followed a trail around the Egyptian galleries using mobile phones (Android phones supplied by us) to scan AR markers. The 3D objects that displayed when the markers were scanned represented missing words they needed to collect. Collection here involved writing down the word on a paper worksheet.

2) Families brought their finished trails back to the Samsung Digital Discovery Centre and used the information they had gathered to create a colorful document on a computer.

Figure 2: In Passport to the Afterlife, AR interaction was integrated into a two-part activity that included a gallery trail followed by a desktop publishing task.

The surprise of kinesthetic learning

Passport was a success. Children and parents enjoyed the AR interaction despite slow loading due to poor network reception and overcrowded galleries which made finding markers difficult. Reinforced by the computer activity, children got the point: they understood that the missing words completed spells from the ancient Egyptian Book of the Dead. For younger children aged 6-9, AR delivered an unanticipated learning outcome. At first these children struggled with the physical coordination required to scan markers. They stood too close to or too far away from the marker. They had trouble holding the phone steady with one hand while tapping the screen with the other. In the context of Csikszentmihalyi’s theory of Flow (1991), they began the activity with low skill and high challenge. After seeing the interaction modeled by adults, nearly all managed to master it, and they enjoyed an overwhelming sense of accomplishment when they succeeded. Scanning markers made children more attentive to their bodies and improved their coordination – two surprising outcomes associated with kinesthetic learning.

Not all 3D animators are created equal

Our aim in designing the AR interaction for Passport was to convey the delight of decoding mysterious-looking codes like hieroglyphs. The bulk of the budget was spent on animated 3D models, some of them exact replicas of objects in the collection, which appeared when the markers were scanned. Finding the right 3D animator for mobile AR projects can be tricky, since models must be produced according to the technical restrictions of your chosen platform. This can be difficult for animators used to working in film, who typically produce high-resolution output that is too large for mobile AR; gaming animators are more familiar with the optimizations AR requires. We saved money by using a combination of custom-built and purchased models. The freelance animator we hired adapted models we bought from Turbo Squid (http://www.turbosquid.com), reducing the number of vertices and simplifying texture maps to make them suitable for mobile delivery. The rest of the budget was spent on web hosting and materials to produce the paper trail.

Figure 3: Scanning an AR marker in the gallery reveals an animated 3D object. This object is a replica of an ancient Egyptian heart scarab amulet.
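As a rough illustration of the kind of check this optimization involves, the sketch below counts vertices and faces in a Wavefront OBJ export and compares them against a mobile polygon budget. The budget figures and the file name are hypothetical examples, not limits set by Junaio or by our animator's workflow.

```python
# Rough sanity check for the mobile-readiness of a 3D model, assuming a
# Wavefront OBJ export. Over-heavy models can be sent back for
# decimation before they ever reach a device. Budget figures below are
# illustrative only.

MAX_VERTICES = 5000   # illustrative mobile budget
MAX_FACES = 8000      # illustrative mobile budget

def obj_stats(path):
    """Count vertex ('v ') and face ('f ') records in an OBJ file."""
    vertices = faces = 0
    with open(path) as f:
        for line in f:
            if line.startswith("v "):
                vertices += 1
            elif line.startswith("f "):
                faces += 1
    return vertices, faces

if __name__ == "__main__":
    v, f = obj_stats("heart_scarab.obj")  # hypothetical file name
    ok = v <= MAX_VERTICES and f <= MAX_FACES
    print(f"{v} vertices, {f} faces -> {'OK for mobile' if ok else 'needs decimation'}")
```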

The technical side

Having invested our budget in the development of 3D models, we had to tackle the programming ourselves. Of the two main styles of AR, marker-based and location-based, the majority of apps are location-based. Common examples are “what’s near me?” applications which display information about nearby restaurants, gas stations or movie theaters. Marker-based AR applications like the Getty’s Augsburg Cabinet use printed images to reveal hidden content. Unlike location-based AR, they work indoors and do not require GPS. For Passport we needed a cheap, marker-based solution for Android. We evaluated four potential platforms:

1) Layar was eliminated early on because, at the time, it supported only location-based AR (http://www.layar.com);

2) Second Sight, a robust AR platform used widely in UK schools, ran only on Sony PSPs (http://www.mysecondsight.com);

3) The open source AR Toolkit had the features we needed, but required too much development time (http://www.hitl.washington.edu/artoolkit);

4) Junaio, a free, cross-platform AR browser, supported markers and offered a developer API (http://www.junaio.com).

Junaio suited our low-budget, rapid-development plan. It was free to register as a developer, and the API came with sample code in different languages. Using an existing web hosting platform and one staff programmer, we built, tested and launched the Junaio channel for Passport in three weeks. One crucial decision was to use AR markers rather than images as triggers for the 3D content. Although Junaio supported the use of images as markers, testing favored the black and white markers. Light levels in the gallery were very low, which meant the high-contrast, two-color markers were picked up more reliably than images. The markers also provided good reference points for participants because they stood out in the busy visual landscape of crowded galleries. Finally, decoding mystery characters on the markers was a good metaphor for the activity, which involved completing magic spells and translating hieroglyphs. Many exhibit designers view AR markers and QR codes as unwelcome interventions, but there may be legitimate reasons to use them, even after image recognition and indoor location-based AR become more feasible.

Figure 4: AR markers worked more reliably than images in galleries with low light conditions.
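To give a flavor of what a channel of this kind involves, here is a minimal sketch of a server endpoint that maps scanned marker IDs to hosted 3D content. It is written with Flask, and the URL path, element names and model links are made up purely for illustration; it is not the actual Junaio API or the schema our channel used.

```python
# Minimal sketch of an AR-channel-style endpoint: the client reports
# which marker it scanned and the server replies with the content to
# overlay. Illustrative only; not the real Junaio/AREL schema.
from flask import Flask, request, Response

app = Flask(__name__)

# Hypothetical mapping of marker IDs to hosted 3D models and trail words.
MARKER_CONTENT = {
    "1": {"model": "https://example.org/models/heart_scarab.zip", "word": "heart"},
    "2": {"model": "https://example.org/models/shabti.zip", "word": "servant"},
}

@app.route("/passport/pois")
def pois():
    marker_id = request.args.get("marker", "")
    entry = MARKER_CONTENT.get(marker_id)
    if entry is None:
        return Response("<pois/>", mimetype="text/xml")
    xml = (
        "<pois><poi>"
        f"<id>{marker_id}</id>"
        f"<model>{entry['model']}</model>"
        f"<word>{entry['word']}</word>"
        "</poi></pois>"
    )
    return Response(xml, mimetype="text/xml")

if __name__ == "__main__":
    app.run(port=8000)
```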

Young people make their own AR trails

After building a Junaio channel ourselves, we wanted to help others to create them. Our second AR project, Talking Objects: Museum as Object, ran in December 2010. The participants, fifteen young adults aged 16-18, designed and built their own AR trails through museum galleries. The young people toured the museum with an historian and then split into teams to design interpretive trails on themes which interested them. The content of their trails included videos, images and texts. Like Passport, content was triggered when AR markers were manually scanned. As the trails took shape, we discovered the limitations of 3G connections in the galleries. Upper floor galleries had a reasonably strong signal, especially those with skylights. Lower floor galleries containing large stone sculptures were dead zones. In areas of patchy coverage, strategically placed markers which tested well initially did not work on the day students presented their trails to museum staff. This experience informed future projects in which we only included galleries with consistently strong signals.

Even when markers scanned correctly, videos stuttered because they were streamed from YouTube. Video slideshows of objects with solid backgrounds, simple transitions and music soundtracks worked best. While it was not an option for us in this project, caching video content locally on the phones before a live event would avoid the problem. Issues also arose around the placement of markers in low lighting conditions, since some of the objects included in the trails were displayed in dimly lit cases. Repositioning the markers or asking participants to choose different objects solved these problems. On the final day of the week-long project, the young people led museum staff on their trails, demonstrating the use of the phones to scan markers.

Figure 5: A Talking Objects participant places an AR marker on a display case as part of a student-designed trail through the Ancient Egypt gallery.

Despite technical glitches, the project was an exciting leap forward. With a budget of under $1000 and less than a week’s development time, it proved we could easily and cheaply integrate AR experiences into our education work with young people. It also showed how AR might provide an alternative to the traditional multimedia guide and meet the aims of participatory engagement by allowing visitors to create their own themed trails through the galleries.

Testing the limits of AR with Cultures in Contact

Building on our experiences with Passport and Talking Objects, a third and more ambitious AR project got underway in late spring 2011. Funding for Cultures in Contact came from corporate sponsorship, and Samsung contributed 20 3G-enabled Galaxy Tabs. The arrival of the tablets offered us the chance to work with larger (seven-inch) screens. Not quite as big as the iPad’s 9.7-inch screen, the Galaxy Tabs were light, compact and easy for small hands to grasp. The tablets were at the heart of a gallery activity for over 700 teenagers who visited the museum in summer 2011. Students worked in groups to first research objects in the gallery and then make a short film about them. The AR interaction was integrated into a learning framework which included:

• An interactive lecture and film screening explaining the purpose of the workshop

• A four-part gallery activity in groups:

  • Finding objects with AR
  • Reading and discussing questions on screen
  • Answering questions on a worksheet
  • Rating the objects against five thematic categories

• Analyzing the results of the ratings in a classroom space

• Making a short film informed by the gallery activity

Figure 6: In Cultures in Contact, location-based AR was used in a gallery activity where students collected objects that were used to make films in the second half of the workshop.

AR and indoor positioning

From a technical point of view, Cultures in Contact offered the opportunity to experiment with location-based AR. In Passport and Talking Objects, we had used marker-based AR to reveal or deliver rich media content; now it became a navigation tool for finding physical objects. True location-based AR was not an option because GPS positioning does not function indoors. Other solutions such as wifi triangulation or even sonic sensors can be used to obtain users’ locations, but we could not afford these. With no way of automatically determining location, we decided to use a hybrid approach. Within Junaio, users manually scanned a marker to tell the mobile device where they were. Once their location was established, users looked through the live camera view to see five objects near them. The objects appeared as animated 3D cubes with pictures of real objects mapped onto them. The positions of the cubes on the screen corresponded to the locations of the real objects they represented. Users chose an object, approached it in the gallery, and tapped its picture to launch a dialogue box (superimposed on the live view of the gallery) with a set of questions that were answered on a paper worksheet.

Figure 7: Two screenshots from the AR module of Cultures in Contact. Above, the live view of the China gallery with 3D cubes displayed near real objects. Below, the question screen superimposed over the live view.
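The positioning logic behind this hybrid approach is simple enough to sketch. Below is a minimal illustration in Python, assuming each scanning point and each showcased object has known floor-plan coordinates (the gallery, object names and coordinates are hypothetical, not values from the actual channel): scanning the marker fixes the user at the scanning point, and each nearby object is then expressed as a bearing and distance from that fixed spot, which is also why the display cannot follow the user once they walk away.

```python
# Sketch of hybrid positioning: a marker scan pins the user to a known
# scanning point, and object positions are resolved relative to it.
# All coordinates below are hypothetical floor-plan values in metres.
import math

SCAN_POINTS = {"china_gallery": (12.0, 4.5)}   # x, y of the floor sticker

OBJECTS = {
    "ming_vase":   (14.0, 7.0),
    "bronze_bell": (9.5, 3.0),
}

def relative_placement(scan_point, obj_pos):
    """Return (bearing in degrees, distance in metres) from the scan point."""
    dx, dy = obj_pos[0] - scan_point[0], obj_pos[1] - scan_point[1]
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy)) % 360  # 0 = gallery "north"
    return bearing, distance

for name, pos in OBJECTS.items():
    bearing, dist = relative_placement(SCAN_POINTS["china_gallery"], pos)
    print(f"{name}: bearing {bearing:.0f} deg, {dist:.1f} m from the scanning point")
```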

Using AR for indoor navigation proved challenging. First, we discovered that when markers are used to determine position, the user cannot move freely around the gallery. The display of all content (here, the 3D cubes with pictures of objects) is based on hard-coded coordinates calculated from the spot where the user performs the original scan. When the user moves, the display is not updated to show the position of objects relative to his or her new location. This made the activity less compelling from a user experience point of view. It also posed a logistical problem, since 5-6 groups of students needed to scan from the same point in each gallery. If one group scanned and then moved away to make room for the next, it would invalidate the location of objects on their screen. We addressed these problems through a variety of workarounds. To improve the user experience, we turned static scanning into a game by placing a colorful “scanning point” on the gallery floor. We involved live volunteers in each gallery who held the markers, helped users to scan and explained how the process worked. To reduce the number of groups needing to scan at the same time, we staggered their start times by a few minutes.

The biggest challenge was the positioning itself. The digital compasses in our Galaxy Tabs (and nearly all the other devices we tested, including iPhones) were incredibly unreliable in calculating position. Testing 20 identical devices, we found that only a handful gave the same reading when placed in the same location. The compasses were easily disrupted by other mobile devices nearby and by electrical currents in the galleries. In the India gallery, compasses were thrown off by an electrical cable under the floorboards running straight through the center of the room. The instability of the compasses caused the 3D cubes to jitter and jump around the screen, sometimes wildly. Calibrating the compasses helped stabilize them; this was accomplished by vigorously waving the tablets in a figure-eight pattern, a motion that became known on the project team as the “AR wiggle” (Samsung Digital Discovery Centre, 2011).
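One generic way to damp jitter of this kind, sketched below, is to run the raw heading stream through a low-pass filter that always takes the shortest angular path between readings so values near the 0/360 boundary do not swing wildly. This is a common smoothing technique offered as an illustration, not something Junaio exposed to us or something we implemented on the project.

```python
# Sketch of exponential smoothing for a noisy compass heading stream.
# The alpha value and the sample readings are illustrative only.

def smooth_headings(readings, alpha=0.15):
    """Yield smoothed compass headings (degrees) from raw readings."""
    smoothed = None
    for heading in readings:
        if smoothed is None:
            smoothed = heading
        else:
            # Signed shortest angular difference between the new reading
            # and the current estimate, in the range [-180, 180).
            diff = (heading - smoothed + 180) % 360 - 180
            smoothed = (smoothed + alpha * diff) % 360
        yield smoothed

raw = [358, 2, 5, 355, 8, 12, 350]          # jittery readings around north
print([round(h, 1) for h in smooth_headings(raw)])
```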

The erratic compasses forced us to reconsider our original interface design, which called for arrows to hover above the real objects in their cases, helping students to find them. Testing soon showed that arrows would not be accurate enough, so they were replaced with pictures of the objects mapped onto 3D cubes. As long as the cubes appeared close to the originals, users found it somewhat easier to locate real objects by comparing them against the pictures. They delighted in seeing virtual representations of objects next to real ones and, suddenly, the AR interface took on a game-like character. Finding objects was like fishing on the tablet’s touch screen, and, ironically, this interaction benefitted from the jitteriness of the compasses. In some cases, though, the compasses malfunctioned and the object pictures did not correspond at all with their real counterparts. No doubt compasses will improve with the next generation of mobile phones and tablets, but users with older devices are likely to struggle. These issues highlight the challenge of using AR for indoor navigation.

Live view: We’re loving it!

The biggest surprise with the AR activity was students’ utter fascination with the live camera view. It was so entrancing that, from the moment the AR module went live halfway through the project, camera usage on the devices dropped dramatically. During the first half of the project, run with just the web-based ratings app, cameras were used constantly. Photography was not part of the activity, and students were never instructed or encouraged to take pictures. Even so, they used the camera at some point during the gallery session. The content of the photos fell into two categories: 1) photos of each other; 2) photos of objects. As educators, we were pleasantly surprised to see so many relevant photos, especially considering that this photography was technically off-task. Despite investing considerable effort in taking the photos, students seemed unconcerned about receiving copies of their pictures afterwards.

Camera usage changed immediately when the AR module was introduced. One facilitator writes:

Once [the AR component] came onboard I didn’t see anyone using the camera function of the phone. They didn’t try to save images, but they did use Junaio as a viewer.

The live view of the Junaio browser replaced the camera as a way to mediate looking and exploring in the galleries. It seemed to hold an irresistible fascination for users, who held the tablet at different angles, swung the device around and shouted at classmates to move into or out of their live view. Unfortunately, because no images were saved, we don’t know what kinds of subjects students tried to capture, but simply viewing objects and friends through the camera’s lens produced visible enjoyment. More research is needed to understand this phenomenon, but it suggests that applications combining live camera views with people and virtual objects are appealing to users. The Exploratorium did this successfully with Junaio at its Get Surreal event (Rothfarb, 2011).

Using tablets with worksheets in the gallery

The model of a gallery activity followed by a classroom-based one worked well. Other elements were trickier, particularly the integration between the AR module and question screens, both of which ran inside the live camera view, and the rating exercise, which ran inside the web browser. The worksheet used to capture answers created confusion because it competed with the tablet. The worksheet was not part of the gallery activity as first conceived. It was introduced to address a problem observed in testing: users were not taking enough time to look carefully at objects before rating them. Testing also indicated that large groups of 5-6 students had difficulty staying focused around a single tablet. Passport had already demonstrated that worksheets can complement a mobile AR activity. In that context, the worksheets provided a perfect way for families to share the task of collecting objects. Parents looked after the worksheets while children used the phones to scan markers. Once the scanning was completed, parents helped children to write down their answers.

In Cultures in Contact things worked differently. For many groups, the worksheet led the gallery experience, whereas we had intended the tablets to lead. When facilitators intervened and encouraged students to use the tablet as their primary tool, there was often lingering confusion. Some groups split up into pairs or threes, taking the worksheet and tablet in opposite directions. Without more data, it is difficult to say why the students struggled to understand the role of the worksheet even after it was repeatedly explained. Perhaps this is a legacy of traditional museum education sessions, which are often centered on completing worksheets. Having attended other workshops where worksheets were their only outcome, students may have assumed this activity worked in the same way.

Worksheets and tablets in group work

The interplay between worksheet and mobile device was one of the most puzzling aspects of the project. Comparing the use of worksheets here with Passport (where they worked effectively) suggests that the dynamics between group members influence how interpretive media are used. Passport is a family activity in which parents or carers willingly take on a subordinate role to assist their children. John Falk (2009) writes that when adults visit museums with young children they become facilitators whose priority is their children’s experience. Facilitators are happy to perform any task that supports the child, such as holding the worksheet. In a group of teens, however, everyone is equally keen to have a good experience and less likely to take on a role that would help the group at the expense of their own enjoyment. When tablets were handed out, minor arguments broke out about who should hold them. These frictions often continued for the duration of the gallery session and led to teams splitting up or teammates fighting amongst themselves. This sent the tablet and worksheet in different directions, severing the functional link between them.

Figure 8: Participants in Cultures in Contact using a tablet and worksheet in a gallery. The two competed with each other for students’ attention.

Overall, friendship groups worked more effectively than those assigned by teachers or adults. Self-selected friendship groups were successful because the natural camaraderie among friends facilitated the sharing of the device. Even in friendship groups where sharing did not occur, individuals willingly took on the roles that seemed natural to them. The adoption of roles was usually prompted and readily accepted by teammates with phrases like “You’re good at technology, so you should hold the tablet” or “You have neat handwriting, so you do the worksheet”. These groups cooperated well and worked through problems harmoniously. Non-friendship groups tended to split up under the pretense of “doing things faster,” with one half of the group taking the worksheet and the other taking the tablet.

Group size and mobile learning

We suspected from the outset that groups of 5-6 students were too large and the more we observed students working, the more obvious it became. We concluded that three students per group is the maximum for this style of activity. In a different mobile learning project with smaller groups of teenagers, each student was given a device. Despite our initial concerns, having their own device did not discourage students from interacting with their peers during the course of the session. Even in highly structured sessions where students had few opportunities to chat, most students felt they talked “loads” with their classmates. When they captured or created media such as photos, videos and voice recordings, they were anxious to share. Sharing led to productive, on-topic discussions. Sadly, one device per student is rarely possible. Practical realities dictate that groups of 3-6 students must share a single mobile device. Data from Cultures in Contact suggests one way of coping with large groups is to build distinct phases or “chapters” into the activity. These present an opportunity for a facilitator or even the device itself to suggest or insist on role swapping. Within friendship groups, switching roles and sharing the device occurred spontaneously at these transition points.

The rating activity and other people’s ratings

The rating activity is where we hoped crucial learning would take place. Each object was given a star rating (1-5 stars) for five thematic categories linked to the culture it belonged to. Ratings were saved by group name and accessible on a web page in the afternoon session when students created their films. Nearly all the groups rated objects successfully; though some struggled to understand the principle behind it, they quickly got the hang of it. Groups took 5-12 minutes to rate their first objects, but sped up significantly (70-80%) on subsequent ratings. The final ratings took 30 seconds to three minutes. Surprisingly, very few groups (less than 5%) changed their initial ratings after seeing how others rated. Because of its similarity to ratings on YouTube and Amazon.com, we hoped students would appreciate this feature. We thought it would trigger discussion, particularly when their own ratings differed significantly from what others said, but this was not the case. The few times that someone attempted to revise ratings, he or she was immediately discouraged by teammates, who asserted complete confidence in the team’s own ratings and encouraged them to move on. As a result, nearly all groups kept their original ratings, and the comparison screen became simply another page to click through.

Figure 9: Screenshot of the ratings comparison screen from Cultures in Contact. Students dismissed what others said and kept their original ratings.
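For readers curious about the mechanics, the sketch below shows a minimal version of the underlying data model: each group's 1-5 star ratings are stored per object and per category, and the comparison screen averages every other group's ratings. The category, object and team names here are hypothetical; the real activity ran as a web app on the tablets.

```python
# Sketch of the ratings data model: per-object, per-group star ratings
# across five thematic categories, plus the "what others said" average.
# Category, object and team names are illustrative only.
from collections import defaultdict
from statistics import mean

CATEGORIES = ["trade", "belief", "power", "technology", "art"]  # illustrative

# ratings[object_id][group_name] = {category: stars}
ratings = defaultdict(dict)

def save_rating(object_id, group, stars_by_category):
    ratings[object_id][group] = stars_by_category

def others_average(object_id, group):
    """Average rating per category from every group except `group`."""
    others = [r for g, r in ratings[object_id].items() if g != group]
    if not others:
        return {}
    return {c: round(mean(r[c] for r in others), 1) for c in CATEGORIES}

save_rating("ming_vase", "Team A", {"trade": 5, "belief": 2, "power": 4, "technology": 3, "art": 5})
save_rating("ming_vase", "Team B", {"trade": 4, "belief": 1, "power": 5, "technology": 3, "art": 4})
print(others_average("ming_vase", "Team A"))   # what Team A sees on the comparison screen
```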

Using the tablet

From a usability point of view, the design of the tablet was problematic. The most common problem users encountered was jumping out of the application back to the home screen. In AR applications, devices are usually held in landscape orientation rather than portrait, which forces users to grasp the device in a way that easily triggers the built-in function buttons. The function buttons on the Galaxy Tab are embedded in the frame of the device and positioned close to the edge, making it almost impossible not to touch them while holding it in landscape mode. This has been a recurring usability issue with all our mobile activities: users became frustrated because they unintentionally and repeatedly exited the application. Fortunately, the newest generation of Android devices reduces this problem by incorporating function buttons into the screen rather than the frame of the device. We anticipate the issue would be resolved if students used their own mobile devices, simply because they would be familiar with the idiosyncrasies of operating their own equipment.

Re-thinking the role of AR

Cultures in Contact could be called a successful failure. Although we set out to use AR as a navigation tool, the poor performance of the tablets’ digital compasses paired with the limitations of marker-based positioning forced us to rethink the role of AR in the gallery activity. When working smoothly, AR did provide some help in locating objects, but it proved more useful in enriching the user experience. It delivered a game-like interaction which involved users “fishing for” and “catching” objects on the screen. It offered a delightful new way to look for and at real objects by comparing them with their virtual counterparts. And it provided an intensely enjoyable twist on traditional photography by allowing users to view live scenes and events, people and objects through a layer of virtual content.

The project yielded many practical insights about how to run mobile learning activities, the most important of which concern the impact of interpersonal dynamics in groups on the use of interpretive media such as worksheets and tablets, and the way larger groups can be managed by creating breaks or transitions in the activity. As a result of the technical challenges we encountered, we have a clearer understanding of the effort required to implement a large-scale AR project on a shoestring budget. Using Junaio was a sensible choice, but the integration between the live view and the web module (with the rating activity) needed more attention to streamline the user experience. Ideally, students and teachers would have been able to access their ratings back at school through a secure system during the follow-up visits by teaching staff that took place in the weeks after the students’ museum visits.

A History of the Future: Timepieces

Since Cultures in Contact, we have continued to pursue indoor location-based AR. Inspired by the work of the Stedelijk Museum in Amsterdam (Schavemaker et al., 2011) and the artists’ collective Manifest AR (http://www.manifestar.com), which has staged several installations of AR artwork, we ran a workshop for young people which culminated in a virtual exhibition. In A History of the Future, a collaboration with science fiction writer and game designer Adrian Hon (2011), young people created their own futuristic timepieces based on the museum’s collection of clocks and watches. They made digital drawings of their inventions, written descriptions and short videos about them, all of which became part of an AR exhibition in the Clocks and Watches gallery. Looking ahead, we hope to use the recently launched Junaio Creator to teach children to build their own AR channels and install virtual content around the museum.

Figure 10: Using Junaio’s live view on the Samsung Galaxy Tablet to locate virtual installations created by participants in A History of the Future.

Lessons learned

Eighteen months’ worth of experimentation with AR has left us with an appetite for more. Despite the technical and logistical challenges involved, AR has repeatedly surprised us with its ability to engage and delight our audiences. The list below summarizes our main insights about working with AR in a mobile learning context:

AR is an inherently engaging interaction style. It’s magical. Kids like it. Grown-ups like it.

Users love AR’s live camera view. The live camera view alone is satisfying enough to keep users engaged. Try to find a way for them to capture views that include objects, people, museum spaces and virtual objects.

AR markers are not a bad thing. They have an interesting retro quality. Try to design metaphorically to take advantage of AR’s “hide and reveal” quality.

Design for kinesthetic learning. AR offers a novel way for children aged 6-9 to develop fine motor skills. Challenge them to do several things at once: hold the phone steady, scan, tap, rotate, etc.

Integrate AR interactions into a learning framework, and use paper-based tools sparingly and wisely. While the Passport activity integrated a paper trail in a helpful way, the experience with Cultures in Contact indicated that students expected the worksheet rather than the tablet to lead. If the point is to get students to make use of the technology, then consider scrapping the paper to ensure the device leads. On the other hand, paper worksheets for families give carers something to do.

Reduce group size as much as possible when working with school children and teenagers. Even tablets with screens everyone can see are unlikely to be shared easily in non-friendship groups. Create groups of 2-3 students maximum. One device per student is ideal.

Design activities where students capture or create media on their own devices. Teenagers especially enjoy sharing what they have made and engage friends or classmates in discussion about it. This promotes much more natural interactions than forced group work.

If you have to work with groups, provide clear phases or breaks in the activity where the mobile device can change hands.

Where possible build a native app. If that is beyond your budget and expertise, then use free tools and builders. Be wary of cobbling together multiple platforms, however, as this introduces usability problems when users move between different parts of the application.

If you need 3D models, hire a game animator who knows how to optimize models for mobile delivery. Reduce costs by purchasing models online and adapting them.

Do thorough, realistic testing of 3G or wifi signal strength in galleries long before development begins. Identify sweet spots where the signal is especially strong and be aware of potential disruptions from existing gallery media, cabling or other infrastructure elements.

Compasses are wonky. If you are working with indoor location-based AR, then design carefully and test with as many devices as possible. Don’t rely on accurate positioning, but create an experience that users find engaging.

Acknowledgements

A huge thank you to all of the interns, volunteers and facilitators in the Samsung Digital Discovery Centre who worked hard to make the AR projects happen. Special thanks to Elena Saura Ramos, Katherine Biggs and Alessandra von Aesch, whose expert and dedicated assistance made these projects possible. Thanks to colleagues Faye Ellis, Katharine Hoare, Sarah Longair, Emma Poulter and Richard Woff at the British Museum, who supported our AR work with their project budgets. Thanks to Olly Venning for excellent 3D models for Junaio GLUE. Thanks to Frank Angermann and the whole team at Metaio for being incredibly supportive and responsive to bug reports and change requests.

References

Barroso, H. (2005). Pret-a-Porte. Centro Nacional de los Artes. Mexico City, Mexico. Published September 14, 2008. Consulted June 1, 2011. http://www.youtube.com/watch?v=IVrQ7VpB8Fw

British Museum, London. Talking Objects: Museum as Object. Consulted March 20, 2012. http://www.britishmuseum.org/channel/object_stories/talking_objects/video_the_museum_as_object.aspx

Csikszentmihalyi, M. (1991). Flow: The Psychology of Optimal Experience. New York: HarperPerennial.

Falk, J. (2009). Identity and the Museum Visitor Experience. Walnut Creek, California: Left Coast Press.

Hon, A. (2011). Update #8: A History of the Future at the British Museum. Published September 20, 2011. Consulted March 20, 2012. http://www.kickstarter.com/projects/adrian/a-history-of-the-future-in-100-objects/posts/120807

J. Paul Getty Museum. (2010). Augmented Reality of the Augsburg Display Cabinet. Published May 17, 2010. Consulted October 15, 2011. http://www.getty.edu/art/exhibitions/north_pavilion/ar/index.html

Natural History Museum, London. Interactive film - Who do you think you really are? Consulted April 2, 2012.
http://www.nhm.ac.uk/visit-us/darwin-centre-visitors/attenborough-studio/interactive-film/index.html

Rothfarb, R. (2011). Mixing Realities to Connect People, Places, and Exhibits Using Mobile Augmented-Reality Applications. In J. Trant and D. Bearman (eds). Museums and the Web 2011: Proceedings. Toronto: Archives & Museum Informatics. Published March 31, 2011. Consulted February 20, 2012. http://conference.archimuse.com/mw2011/papers/mixing_realities_connect_p...

Samsung Digital Discovery Centre at the British Museum. (2010). Passport to the Afterlife: A Collection on Flickr. Published December 20, 2010. Consulted April 2, 2012. http://www.flickr.com/photos/britishmuseum_samsungcentre/collections/72157625445071736/

Samsung Digital Discovery Centre at the British Museum. (2011). Testing AR Gallery Explorer. Published June 29, 2011. Consulted March 20, 2012. http://vimeo.com/25782400

Samsung Digital Discovery Centre at the British Museum. (2011). Mobile Explorer on Galaxy Tabs. Published November 25, 2011. Consulted March 20, 2012. http://vimeo.com/25782400

Schavemaker, M., et al. (2011). Augmented Reality and the Museum Experience. In J. Trant and D. Bearman (eds). Museums and the Web 2011: Proceedings. Toronto: Archives & Museum Informatics. Published March 31, 2011. Consulted March 10, 2012. http://conference.archimuse.com/mw2011/papers/augmented_reality_museum_experience

Victoria & Albert Museum. (2009). Mirror, Mirror – A User Generated Performance. Published July 22, 2009. Consulted October 15, 2011. http://www.youtube.com/watch?v=AmgqwtOmsOM