
Museums and the Web

An annual conference exploring the social, cultural, design, technological, economic, and organizational issues of culture, science and heritage on-line.

Implementing Mobile Augmented Reality Applications for Cultural Institutions

Deborah Boyer and Josh Marcus, Azavea, USA


In Spring 2010, the City of Philadelphia Department of Records received an NEH Digital Humanities Start-Up grant to investigate mobile augmented reality technology for use in displaying overlays of historic photographs on the current urban landscape. The project utilized the resources of PhillyHistory.org, a collaborative online database of historic photographs and maps from five Philadelphia area institutions. This paper will examine the state of mobile augmented reality technology and its current and possible applications within cultural institutions. Through an exploration and evaluation of the successes and challenges of the Department of Records’ augmented reality project, we will investigate how this technology can be applied to other archives and museums.

Keywords: augmented reality, historic photographs, GIS, Philadelphia, mobile technology

1.   Background

Creation of PhillyHistory.org

For more than 140 years, the City of Philadelphia has employed City photographers to capture images of building projects, public works activities, and other events occurring throughout the city. Although many of the photographs were originally taken for risk management purposes, the passage of time has made them an invaluable source of insight into the City’s rich past. With images depicting everything from the Sesquicentennial Exposition celebrations of 1926 to a simple residential street in the 1950s, the photos provide visual insight into the people and places of Philadelphia’s history.

While the images were of interest to both the general public and scholars, accessing them was difficult. The estimated two million photographs are held and maintained by the Philadelphia City Archives under the administration of the City of Philadelphia Department of Records (DOR). Individuals who wished to view the images were required to travel to the City Archives during open hours, page through an index, and submit requests for certain photographs – a process which only a few hundred people undertook each year. In the early 2000s, the DOR began investigating the best method for digitizing the images and making them more easily accessible to the general public through an online presence.

To implement the project, the DOR partnered with Azavea, a local software company specializing in Geographic Information Systems (GIS) technology, to develop PhillyHistory.org, a web-based digital asset management system that provides both public access to the images and administrative management of the archival metadata. Launched in 2005, PhillyHistory.org includes a publicly visible search page that enables users to search through the images by geographic location (address, intersection, place name, neighborhood), keyword, date, topic, and other criteria. Each image is visible as a thumbnail and in a larger detail view that includes a variety of metadata fields, a small map indicating the location of the photo, and the ability to view that location in Google Street View or Google Earth. The site also includes many interactive features that encourage public engagement with the images.

Fig 1: The map-based search page emphasizes the geographic information available for many of the images

On the administrative side, authorized users can add, update, and delete records in a simple web-based system. Administrators can also respond to user-submitted error reports, scan requests and comments, as well as leave internal management notes, create featured photos and searches, answer licensing requests, and view site statistics.

Throughout the nearly six years of its existence, PhillyHistory.org has been updated regularly in response to changing technology, administrative requirements, and public feedback. Since 2008, the site has also served as a portal for users to access images from organizations other than the City Archives. In the past three years, the Philadelphia Water Department, the Free Library of Philadelphia, the Library Company of Philadelphia, and the Philadelphia Office of the City Representative have contributed thousands of historic photographs and maps to PhillyHistory.org, all searchable through the search page. A series of watermarks and customizable metadata fields identifies each organization, and each image includes a link back to the original record on the organization’s site. Various access levels ensure that each organization can only review and edit its own data. The result is a system that provides the public with unified access to images from a consortium of organizations around the city while giving those organizations the opportunity to introduce their collections to new audiences. The DOR adds new images to PhillyHistory.org regularly and looks for other opportunities for collaboration. What began as a simple website with only 90 photographs has grown to an online database with over 93,000 images from five organizations.

Perhaps the most innovative feature of PhillyHistory.org is its focus on geography. The majority of the images in the City Archives connect to a geographic location such as an address or intersection, enabling the Archives staff to geocode (assign latitude and longitude to) the images. When confronted with thousands of images, it seemed likely that users would search first for photographs connected to their personal history – the homes, businesses, schools, and other locations they associated with themselves and their family and friends. A study conducted by the Dutch government (Liberge & Gerlings, 2008) identified a broad public interest in local and family history, a finding that influenced the creation of a map-based system for historical information. Our experience with PhillyHistory.org has also demonstrated this frequent public focus on personal history. In a January 2010 survey of 213 users, 151 users selected “neighborhood” as one of the search categories they were most likely to use on the website, and 149 users selected “address.” Keyword and time period were the next most frequently selected options, with 81 users responding that they would be likely to use those categories. Activity on the website confirms these survey answers. Map- and address-based searches have consistently been the most often used search criteria on PhillyHistory.org, with over 682,000 address searches performed in the last year compared to only 160,000 keyword searches. Users who leave feedback on the site often comment on how much they enjoyed seeing images of their childhood homes or of a relative’s house. As noted in a previous paper on the use of geography in PhillyHistory.org (Heckert, 2009), the users of the website are “sharing very personal reactions to the images, illustrating the stories of places that are very dear to them and that anchor them to their city and community’s history.”

Development of mobile access to PhillyHistory.org

As the public embraced PhillyHistory.org and clamored for more images, the DOR and Azavea sought other ways to utilize the geographic element of the site. A mobile application seemed a logical step. If users enjoyed looking at historic location-based images on their computers, how much more would they enjoy the chance to look at those images while standing at the original locations where they were taken? In summer 2007, the DOR launched PhillyHistory.org Mobile, making the entire collection accessible via cell phone and other Internet-capable mobile devices. The simple interface enabled users to search the images by location, keyword, or date. The search results were displayed as flags on a map, with a limited description for each flag linked to a larger version of the image.

Fig 2: The original version of PhillyHistory.org Mobile created in 2007

While the application may seem rudimentary now, it enabled the team to undertake some initial research into the best way to search and display historic photographs in a geographic context within the confines of a small screen. In 2009, the team improved on this mobile access by launching a smartphone version of the Web application that provides access to PhillyHistory.org via Apple iPhone and Google Android devices. As a Web application, the smartphone version is tailored to the design of a smartphone and works on multiple mobile platforms, but it is not available through a platform-specific app store. Learning from the first iteration of PhillyHistory.org Mobile, the developers adopted a screen-per-function layout that focused on the geographic search capabilities of the website. Users can search for images based on location, view the search results as flags on a full-screen map, and select a flag to view a much larger image with complete metadata. The application also makes use of location technology. If a phone has an internal GPS system available, a user can simply visit the search page and the mobile version will load historic photos near their current location.

Fig 3: A version of PhillyHistory.org optimized for smartphones provides users with geographic search access to the entire collection of images

Mobile access to the images on PhillyHistory.org created another opportunity to introduce these historic images to new audiences who might not previously have known of their existence. It also led the DOR and Azavea to question how else mobile technology could be used to engage with students, historians, and the general public and assist them in experiencing and investigating the history of Philadelphia. Some of the most striking opportunities appeared to be in the field of augmented reality, loosely defined by the PhillyHistory team as the overlaying of digital data or computer-generated imagery on a live view of the physical world. In March 2010, the Department of Records received a Digital Humanities Start-Up Grant from the National Endowment for the Humanities Office of Digital Humanities to research and develop a prototype mobile phone application that would enable users, via their smartphones, to view historic photographs of Philadelphia as overlays on the current landscape – a form of augmented reality. By using the geographic coordinates tracked as part of the digitization process, the DOR hoped the prototype application could more accurately place the images in 3D space and create a better user experience. The final goal was a prototype application that would enable a more immersive experience with these historic images and encourage the public to look at Philadelphia’s history and the connections between the past and the present.

2.   Augmented reality in cultural institutions

The DOR is certainly not the first cultural institution to investigate the use of augmented reality in promoting historic materials. In the past five years, advances in augmented reality technology, and mobile technology specifically, have made augmented reality applications more common and easily accessible. Apple iPhone/iPod Touch devices and phones running the Google Android platform were accommodating augmented reality applications by the summer of 2009. Since both platforms provide robust sets of software development tools, many software developers have produced a variety of mobile augmented reality applications that increased public familiarity with the technology.

Several institutions have begun experimenting with mobile augmented reality as an educational and outreach tool. In 2009, the Powerhouse Museum in Sydney, Australia created an augmented reality application using Layar to display historic photographs of Sydney from its collection. In 2010, the Museum of London released Streetmuseum, an augmented reality application that provided access to hundreds of historic images of London as both 2D and 3D overlays. The application received a large amount of media coverage and, demonstrating the public’s increasing interest in augmented reality, was downloaded over 50,000 times in two weeks (Ellis, 2010). Additional augmented reality applications have been created by the Netherlands Architecture Institute and The Andy Warhol Museum. On a more local level, the Battle of Franklin Sites application provides augmented reality access to sites related to the Battle of Franklin, fought on November 30, 1864 in Tennessee. Additional research into using historic images with augmented reality, including the work done by Gene Becker and Adriano Farano in San Francisco, is being undertaken by several cultural institutions as interest in the technology spreads.

Each of these applications and others like them connect to the public’s interest in comparing the past to the present. Based on the popularity of PhillyHistory.org and the public’s enthusiasm for submitting error reports, requesting additional scans, and otherwise interacting with the images, we felt that an augmented reality application could be an exciting opportunity to make Philadelphia history more widely accessible.

3.   Implementing augmented reality on PhillyHistory.org

Original plan

In the original NEH grant application, the DOR proposed researching and developing a prototype that would provide mobile access to approximately 500 images as overlays on the current urban landscape. Since each image is already geocoded, the goal was that users be able to point their phones at a location and view the historic images as an overlay on the view shown through the phone’s camera display. These coordinates would hopefully enable the project developers to place the photos in the 3D space where they were taken, rather than requiring the user to align the two views. Each image would be accompanied by minimal descriptive text, including title, date, collection name, and location. The DOR would also work with an Advisory Committee, which includes the co-editors of the Encyclopedia of Greater Philadelphia project, to create additional interpretive text for at least fifteen of the images. The images would be selected from the collections of the five organizations contributing to PhillyHistory.org, although the majority of the images would come from the City Archives. While the prototype application would focus on the neighborhoods in the downtown area, the project team also would include images from neighborhoods throughout the city in order to evaluate accuracy issues related to tree cover, building height, and other multi-path errors that could affect the display.

With this overall goal in mind, the DOR proposed investigating three different options for creating the augmented reality prototype. The approaches reflect the myriad and swiftly changing options that currently exist for augmented reality development.

  • Use an existing proprietary framework from one of the leading companies working in augmented reality, such as Layar, Metaio, or Wikitude
  • Create a custom application that will run on the Android platform
  • Create a custom application that will use iOS and run on the Apple iPhone.

For each of these options, the DOR hoped to use the geographic data tracked in PhillyHistory.org to place the photos in the 3D space where they were originally taken. To do this, the project team intended to use the Google Street View information associated with many of the images. In early 2009, Azavea added support for Google Street View to PhillyHistory.org. If 360-degree street-level views were available for the location where an image was taken, users could select a button to view that location in Google Street View. This feature provided a simple method for users to compare a historic photo to the contemporary landscape where it was taken without requiring physical travel to that location. The management features of PhillyHistory.org also included an option for administrators to adjust the angle (pitch and yaw) and zoom of the Google Street View imagery to match the angle and zoom of the historic photograph. With the coordinates of pitch, yaw, and zoom in addition to the latitude and longitude for each image, the DOR felt that there was enough geographic information to facilitate the display of the images as an augmented reality style overlay.

While the project seemed possible, especially based on the success other cultural institutions had experienced with somewhat similar augmented reality applications, the DOR recognized several significant questions that needed to be addressed.

  • Will the level of GPS accuracy inherent in smartphones allow true point and view functionality in the crowded urban built environment?
  • Will the available technology allow for the images to be placed in the 3D space where they were taken?
  • What is the most effective user interface for displaying and browsing historical images and for displaying additional text?
  • Will processing times be fast enough to provide the augmented reality data in real-time fashion?

We hoped to answer these questions and more by experimenting with available augmented reality technology, constructing a prototype application, and publishing our findings in a white paper to be distributed through NEH.

Initial research

Upon receipt of the Digital Humanities Start-Up Grant, the DOR partnered with Azavea, the software development firm that created PhillyHistory.org, to begin research and development of the augmented reality prototype. Before embarking on development, the team researched the current status of augmented reality technology. Those initial findings are summarized by Josh Marcus (2011) as part of a developer journal entry published on the Azavea Labs blog.

Marcus (2011) notes that the majority of augmented reality applications fall within two categories: GPS-based and computer vision-based. GPS-based applications make use of a phone’s GPS and accelerometer, gyroscope, and other technology to determine the location (particularly in urban areas), heading, and direction of the phone. The result is an augmented reality application that takes “the form of little floating balls or symbols on the horizon in the direction you are looking at.” This type of technology, however, still faces several major drawbacks, including imprecise location data, difficulty discerning the heading of the phone, and data points that appear jittery rather than stable – a serious drawback when hoping to create photo overlays. Computer vision based applications “use powerful computer vision libraries to help the computer identify what it is seeing through a digital camera” (Marcus, 2011). In many cases, this is done through the creation of a unique symbol that the computer identifies and then uses to create a 3D object. While visually stunning, these applications have several limitations, including limited practical applicability and the necessity of large amounts of processing power that may overwhelm the capabilities of a phone.

The development team also surveyed the available augmented reality technology libraries, specifically looking at open source libraries and toolkits such as OpenCV and ARToolkit. Working with open source technology would potentially enable the prototype to be replicated more easily by another institution seeking to implement an augmented reality project.

The world of augmented reality technology development is certainly much more complex than this overview. Technology changes quickly and the release of a new update in a framework like Layar or a library like OpenCV can determine whether or not a particular project is possible. Much of the technology that the project team summarized in the original grant application submitted only a year ago has already been superseded by new development.

Creating data services

After this initial research, we turned to the idea of creating data services. Since the project also included experimenting with multiple client applications, we wanted to build an architecture that separated the data services from the client-side augmented reality viewers. Regardless of the end technology, we wanted to be able to create a single source of digital asset information. After reviewing the available technology and standards, the team chose to build Web services that conformed to the standards of Layar, a mobile augmented reality platform developed in the Netherlands. Launched in 2009, Layar has quickly become ubiquitous as a platform for augmented reality applications. To implement an augmented reality layer in Layar, one publishes the "augmentations," the points of interest that are visible in the augmented reality application, by creating a Web service that client applications can query for information about what's around them. A Web service is simply a standard method of allowing two computer programs to request and exchange data in a structured way. For example, a request to this "augmentation" database might ask for all of the points of interest within 200 meters of a given latitude and longitude. While the Layar Web service format has some limitations, it is relatively simple to implement both server- and client-side support for it. While not strictly what is often called a "RESTful" Web service, a lightweight style of Web service that in many ways resembles loading a Web page, the service can be implemented with a simple Web application that can read POST variables. As there is no independent standard for requesting and publishing augmented reality points of interest, the Layar service is as close as we could find. It had the additional advantage that we could directly test the result in the Layar clients available for both the Android and iPhone platforms.
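For illustration, the core logic of such a service can be sketched in Python as follows. This is a minimal sketch, not our production code: the hotspot field names ("id", "title", "distance", "errorCode") are loosely modeled on the Layar getPOIs response rather than copied from its specification, and distance filtering uses a standard haversine calculation.

```python
import math


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))


def get_pois(photos, lat, lon, radius_m):
    """Return the geocoded photos within radius_m of (lat, lon) as a
    Layar-style response body (field names are illustrative)."""
    hotspots = []
    for p in photos:
        d = haversine_m(lat, lon, p["lat"], p["lon"])
        if d <= radius_m:
            hotspots.append({
                "id": p["id"],
                "title": p["title"],
                "distance": round(d),  # meters from the requesting device
                "lat": p["lat"],
                "lon": p["lon"],
            })
    hotspots.sort(key=lambda h: h["distance"])  # nearest first
    return {"hotspots": hotspots, "errorCode": 0, "errorString": "ok"}
```

A Web application wrapping this function would read the latitude, longitude, and radius from POST variables and serialize the returned dictionary as JSON.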

While there are always challenges involved in any data translation project, some challenges we faced are specific to creating augmented reality services. As discussed earlier, we used Google Street View as a tool to identify and select the desired angle in 3D space at which we wished to place each photograph. However, Google Street View and Layar specify this angle differently. Both represent a viewer facing north with a value of 0 degrees (in Layar this is the "angle" parameter; in Google Street View it is called "yaw"), but in Google Street View rotations go clockwise (so 90 degrees is East and -90 degrees is West), whereas in Layar rotations go counter-clockwise (so 90 degrees is West and -90 degrees is East). The effort required to make these transformations was worthwhile. By January 2011, the database management team had “pinned” more than 10,850 images to their Google Street View coordinates, providing a large subset of materials with which to test the 3D space options.
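The sign flip between the two conventions reduces to one line of arithmetic; the function name below is our own illustrative choice.

```python
def streetview_yaw_to_layar_angle(yaw_degrees):
    """Convert a Google Street View yaw (degrees clockwise from north)
    to a Layar-style angle (degrees counter-clockwise from north),
    normalized to the range [0, 360)."""
    return (360.0 - yaw_degrees) % 360.0
```

For example, a yaw of 90 (East, clockwise) becomes 270 in the counter-clockwise convention, and a yaw of -90 (West) becomes 90.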

Some image processing also needs to occur before images can be displayed on the small screen of a mobile device. The file size of all images must be smaller than 75 kB, and there are specific resolution limitations (e.g. full images in Layar must be less than 640x480). Given that mobile device screens have significantly smaller resolution than 640x480, that resolution is probably much higher than necessary. Additionally, some clients (like Layar) do not support making images transparent. It is therefore necessary to set the alpha channel of the photos in a pre-processing step. For example, using the open source ImageMagick package, the following command line invocation could perform the necessary scaling and transparency conversion on an incoming image stream: "convert - PNG32:- | convert - -scale 240x180 -channel Alpha -evaluate Multiply 0.8 output.png" (each "-" tells convert to read from standard input; the first pass forces a 32-bit PNG with an alpha channel, the second scales the image and reduces its opacity).
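The sizing arithmetic behind that pre-processing step can be sketched in Python. This is an illustrative helper of our own, assuming the 640x480 cap quoted above; it computes target dimensions that respect the cap while preserving aspect ratio, which is what a batch pre-processing script would feed into its resize call.

```python
def scale_to_fit(width, height, max_width=640, max_height=480):
    """Return (new_width, new_height) that fit within the stated Layar
    resolution limits while preserving the source image's aspect
    ratio.  Images already within the limits keep their size."""
    scale = min(max_width / width, max_height / height, 1.0)
    return (round(width * scale), round(height * scale))
```

A typical archival scan of 3000x2000 pixels would come out at 640x427, well within the limit, before the file-size check against the 75 kB ceiling.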

There are already a number of open source platforms for publishing Layar content, most notably PorPOISe (PHP), django-layar (Python), and LayarDotNet (C#). Unfortunately, none of these platforms has support for 3D objects yet. The beta release of an online service called Hoppala Augmentation does support 3D layers, but we were unable to get the 3D service to work and found the documentation and usability to be underdeveloped. It is certainly necessary to have a full understanding of the Layar protocol to use the Hoppala service (at this point), as the API allows developers to set a range of settings without explanation or checks on invalid or conflicting settings. Given these limitations and our desire to implement our own interactive capabilities and user settings, we developed our own data Web services in Python.

Layar and branding

Before beginning the project, we harbored a number of concerns regarding the Layar technology. When we initially reviewed the Layar infrastructure, it required the user to access an institution’s augmented reality layer by first searching for it in the Layar mobile app. The resulting layer contained the assets selected by the institution but offered few opportunities for skinning or packaging the layer so that it was easily identifiable as connected to the institution. Since PhillyHistory.org is a project that exists almost solely as an online presence, it was crucial that our users be able to quickly identify our augmented reality prototype as being connected to the PhillyHistory website. A recent Layar update, however, has both extended the technical capabilities of the platform and provided additional branding and packaging options that allayed some of our fears.

Requiring users to access materials through the Layar app first also made accessing an institution’s augmented reality layer a multiple-step process that might confuse users. Even willing users might not understand each of the intermediate steps or why the application was branded 'Layar' with no reference to the content they thought they were trying to load. But two developments - one publicly released during the period of our research project - have removed several of these steps and made Layar much more customizable.

On the Android platform, it is possible to create an application - with one's own branding and content - that launches the user into Layar. The user must download Layar, which is a process unto itself, but the user can at least be prompted to download Layar and be sent to the Android Market. The user experience would be as follows. The user downloads the augmented reality application from the Android Market. The application can have its own branded icon and any desired custom content, such as an opening, informational splash screen. The application can then send the user directly to the Layar layer at a designated point (e.g. when the user taps a button or after a certain number of seconds). The Layar logo fills the screen as the Layar application loads, at which point the user is brought to the 'settings' screen of the Layar layer, which can include up to five custom parameters. While this requires custom Android platform development, there exists an application called the ‘Layar Launcher Creator,’ documented on the Layar wiki, which can create the basic skeleton of an Android application that launches a Layar layer. The core technology is the Android ‘intent,’ which would also allow a Web page to link directly to a Layar layer if the Layar application has been installed.

While not perfect, this functionality allows for limited branding that can frame the user experience and significantly simplify the process of bringing a user into the augmented reality experience.

On January 27th, a new service was launched for the iPhone platform called the ‘Layar Player’ which contains a software development kit (SDK) for including Layar as a component within a custom application similar to the Layar launcher for Android. The service has some features that were not previously available, including the ability to provide information from your external application in the Layar client's requests to the Web services that provide the point of interest information. This SDK also obviates the need for the user to download Layar as a separate application. Overall, the idea is very similar to the Android launcher - one can 'sandwich' the Layar experience with one’s own branding and content.

Layar and 3D points of interest

Another core limitation of Layar in the past was that while it offered a 3D-like visualization of nearby points of interest, it did not in fact support the visualization of 3D objects placed in 3D space. Given that a key goal of our project was to explore the feasibility of placing historic photos in 3D space with an orientation that matched the original perspective of the photographer, we needed the ability to place the 2D photos as three-dimensional ‘billboards’ with a specific fixed placement and angle. Often augmented reality applications will place an image in the visual plane of the user, but the ‘billboard’ (if you imagine the image or icon as a billboard being viewed by the user) always faces the user. Sometimes a 3D effect is created by changing the apparent placement of these ‘billboards’ at the edges, but we wanted to have a fixed placement of images as if they were real, physical billboards with a specific fixed location and orientation.

We were very excited to discover that Layar has now implemented support for 3D models and 2D images as 3D billboards. The 3D object support does not yet seem to be fully stable. Several significant bugs were found and fixed during our development period, and there are still known bugs in the Layar Player SDK regarding 3D objects. Despite the issues, we were able to create 3D ‘billboards’ with our photos that were placed at a specific location, altitude, and angle. While placing 3D models in Layar requires both additional tools for 3D model editing and Layar's own tool to convert standard formats into their own 3D model format, it was possible to create 3D ‘billboards’ from 2D photos by implementing some basic image processing and a Web service that conformed to Layar's specifications.

Fig 4: A screenshot demonstrating the use of 3D ‘billboards’ in the augmented reality prototype
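To make the billboard idea concrete, the sketch below builds one point-of-interest record carrying an image "object" and a fixed "transform" angle. The field names and structure are modeled loosely on our understanding of Layar's 3D content format and should be checked against the current Layar specification rather than taken as exact; the URL is a placeholder.

```python
def make_billboard_hotspot(photo_id, title, lat, lon, image_url,
                           angle_degrees, size_m=10.0):
    """Build one illustrative point-of-interest record describing a
    2D photo placed as a fixed 3D 'billboard'.  Field names are
    hypothetical, loosely modeled on Layar's 3D content format."""
    return {
        "id": photo_id,
        "title": title,
        "lat": lat,
        "lon": lon,
        # The image to render as a flat plane in 3D space.
        "object": {
            "contentType": "image/png",
            "url": image_url,
            "size": size_m,  # approximate rendered width in meters
        },
        # rel=False: the angle is absolute (measured from north), not
        # relative to the viewer, so the billboard stays fixed in
        # place instead of always facing the user.
        "transform": {
            "rel": False,
            "angle": angle_degrees,
            "scale": 1.0,
        },
    }
```

A data service like the one described earlier would emit one such record per pinned photograph, with the angle derived from the stored Google Street View yaw.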

Developing a custom application

While Layar seems to be the leading augmented reality platform, there are key limitations to keep in mind. Although some configuration is enabled, fully custom controls are neither possible nor allowed, even when using Layar as an integrated SDK. This prevents the creation of a custom experience or interactions beyond what the Layar clients already offer. Layar is also a platform in rapid development, and bugs are not uncommon. While it is possible to integrate Layar into your own application to maintain your own brand, the integration is not seamless, and your users will still see a full-screen Layar logo before reaching your content. As developers, we were often frustrated as issues arose that we could not debug directly; for example, when Layar had cached our content and didn't seem to be querying the database on our servers.

But the most significant limitation goes beyond Layar and extends to most augmented reality platforms. An unfortunate fact of most augmented reality applications is that screenshots of an application give a much better impression of the functionality than does actual use. In the real world, augmented reality applications are often very frustrating. Images do not inhabit a fixed place in your view of the world around you; they wobble, bounce, jitter, fly away, or disappear entirely. Sensors on mobile devices still have very significant limitations. We are exploring the feasibility of a next generation of augmented reality applications that can more successfully create the illusion of a virtual object in a fixed location in the 3D space around us. Some newer phones, like the iPhone 4, and some Android-based phones, like the Google Nexus S, include a gyroscope that can be leveraged to create a more stable understanding of the phone's orientation in 3D space. While the gyroscope has its own limitations and error, there are techniques to integrate the gyroscope's data with the data from the accelerometer, compass, and GPS.

We are experimenting with modifying existing open source augmented reality frameworks to use the gyroscope to improve the overall user experience. 'mixare' is an open source augmented reality engine with code for both the iPhone and Android platforms. While it is relatively immature and does not support 3D objects, it is the most advanced open source framework we were able to identify. Given our experimentation thus far, we think there is a promising approach in which the compass and GPS are only occasionally sampled – to prevent the inconsistency of GPS readings from suddenly shifting the user's location in a way that disrupts the illusion – and the gyroscope and accelerometer are used in an integrated fashion to establish a less frustrating and more consistent user experience. We have not yet found good documentation on using the gyroscope and accelerometer together to act as a six-degrees-of-freedom sensor, but we hope to produce a tutorial for future use.
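The fusion idea described above is often implemented as a complementary filter: the gyroscope's fast but drifting rate signal is integrated, then gently corrected toward the accelerometer's noisy but drift-free gravity estimate. The one-axis sketch below is purely illustrative – it is not the mixare or PhillyHistory code, and the blend factor `alpha` is an assumed tuning value.

```python
import math

def accel_tilt_angle(ax, az):
    """Tilt angle (radians) from a gravity reading: noisy but drift-free."""
    return math.atan2(ax, az)

def complementary_filter(angle, gyro_rate, ax, az, dt, alpha=0.98):
    """One-axis complementary filter (illustrative sketch).

    angle     -- previous fused angle estimate (radians)
    gyro_rate -- angular velocity from the gyroscope (rad/s)
    ax, az    -- accelerometer components in the rotation plane
    dt        -- time step in seconds
    alpha     -- weight given to the integrated gyro (assumed tuning value)
    """
    gyro_angle = angle + gyro_rate * dt      # responsive, but drifts over time
    accel_angle = accel_tilt_angle(ax, az)   # slow correction, no long-term drift
    return alpha * gyro_angle + (1 - alpha) * accel_angle
```

The same principle, extended to three axes and combined with occasional compass and GPS samples, is what would let a virtual overlay hold a fixed position instead of jittering with each sensor reading.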

4.   Response to augmented reality application

Based on our work with Layar, we hope to release a publicly usable version of the augmented reality prototype. The public response to PhillyHistory.org has been overwhelmingly positive. The site currently has over 6,400 registered users and regularly receives an average of 13,000 unique visitors per month. The response to our initial promotional materials for the augmented reality application has been extremely encouraging. At the time of this writing, the application is not yet available to the public. However, we have posted a number of blog articles and press releases that include mock-ups of the application design (Boyer, 2011; Marcus, 2011).

Fig 5: An artist’s depiction of a possible augmented reality prototype

In a demonstration of the power of social media, those articles were quickly referenced by online news media outlets, including Philebrity, Technically Philly, and Philadelphia. We have given a number of presentations at conferences and workshops on the project, and they have been met with great enthusiasm. Since the prototype has not been released, we are unable to offer specific statistics related to its use. The response to our articles, presentations, and blog entries, however, has led us to believe that the augmented reality prototype will be warmly embraced.

5.   Opportunities for augmented reality in cultural institutions

Throughout the process of creating an augmented reality application, we have considered how our research findings and technology experiments can be applied to the larger community of cultural institutions. Our final whitepaper, to be distributed through NEH, will be publicly available online and will hopefully provide some guidance to other individuals investigating the use of augmented reality as an educational and outreach tool. Although our research is not yet complete, we have drawn several conclusions regarding the benefits and challenges of implementing augmented reality applications in libraries, archives, and museums.

Possible benefits of investing in augmented reality

1. Generate excitement and introduce collections to new audiences

Perhaps one of the greatest benefits of augmented reality is its simple ability to generate excitement about an institution’s collections. While augmented reality is becoming more common, it still maintains an aura of science fiction mystery for many people. Upon announcing receipt of the grant, the DOR received an e-mail from an area archivist declaring that the project “sounds like magic – wonderful, wonderful magic.” That enthusiasm was mirrored in the responses we received upon releasing our first mock-ups of the application. The combination of technology and history is a powerful tool for engaging individuals who may not see the benefit or excitement of looking at the physical negatives or travelling to an archive.

The difficulty for many institutions will be in extending this initial enthusiasm for an augmented reality application into a more lasting relationship between the users of the application and the institution. In some ways, simply making the public aware of the institution, perhaps through a ‘flashy’ technology like augmented reality, is the first step in building that relationship. Many institutions struggle to attract younger visitors, particularly archives, which are often associated with nothing more than tedious property deeds and uninteresting genealogy records. For the DOR, PhillyHistory.org has served as an opportunity over the last five years to introduce an amazing photo collection to a broader audience than the few thousand people who visited the City Archives annually. Since augmented reality applications are still unique enough to attract broad attention among those interested in both the technological and cultural communities, this prototype also serves as an initiative to introduce the photograph collection and the work of the Philadelphia Department of Records to more national and global audiences.

2. Take advantage of growing smartphone use

Since the introduction of the Apple iPhone in 2007 and the Android mobile platform in 2008, the use of smartphones has become increasingly prevalent. According to Nielsenwire (2010), smartphone use is growing rapidly. “As of the third quarter of 2010, 28 percent of U.S. mobile subscribers now have smartphones” and of those people “who acquired a new cellphone in the past six months, 41 percent opted for a smartphone over a standard feature phone, up from 35 percent last quarter.” Mobile augmented reality applications serve as a method for engaging with smartphone users as they conduct their daily tasks, rather than requiring them to visit a physical building or invest time in a laptop or desktop computer.

3. Attract a younger and more diverse demographic

While smartphone use is growing across all demographics, it is increasing most quickly among college students and individuals in their 20s and 30s. A study by Michael Hanley at Ball State University (Truong, 2010) found that 49 percent of the 500 students surveyed owned smartphones. Previous studies in October 2009 and February 2009 had shown 38 percent and 27 percent, respectively, of students using smartphones. Smartphone users are also more diverse than other mobile phone users. According to Nielsenwire (2010), Caucasians comprise 76% of feature phone users but only 62% of smartphone users. Hispanic users, by contrast, comprise 9% of feature phone users but 19% of smartphone users.

Cultural institutions that have chosen to focus on attracting audiences from a younger and more diverse demographic may find augmented reality and other smartphone applications to be a powerful tool in these efforts.

4. Draw on interest in local history collections

As stated previously, both research by the Dutch government (Liberge & Gerlings, 2008) and our experience with PhillyHistory.org have shown that people have a great interest in local and family history. Collections of assets connected to a particular locale may prove the most useful for an augmented reality application. Not only does the local focus increase the likelihood that the application will be used, but these collections are also more likely to have the level of geographic detail required for use with augmented reality.

5. Open up additional educational and interpretive possibilities

Augmented reality applications introduce several new areas of educational and interpretive programming for adults, families, and students. Augmented reality applications could supplement existing walking tours or lead to the creation of new tours. An institution could also create self-guided tours using the augmented reality application to lead visitors to select locations related to a theme, neighborhood, or time period. The application could be an especially valuable tool for connecting with students. Learning about the history of a neighborhood in a classroom is educational, but actually standing in a neighborhood and comparing historic images to the present landscape can inspire students to engage more deeply with the past.

An educational team could easily develop many other ways to use an augmented reality application. Any programming, however, must be designed with the knowledge that the intended audience will be limited to those with smartphones or will require the museum to provide smartphones for the group.

6. Allow for collaborative opportunities

Since mobile augmented reality applications are intimately connected to geography, organizations have the opportunity to collaborate with other institutions in their area. A project might include assets from a number of institutions, with a logo visible on the map or in the camera view identifying the originating organization. In the case of PhillyHistory.org, the prototype augmented reality application will include images from five institutions, and the information available for each asset will indicate the collection and organization to which it belongs.

The challenges of using augmented reality

While there are many benefits to investigating augmented reality, there are significant challenges related simply to how new the technology is. The rapid changes in development capabilities and the frequent changes in the smartphones that support mobile augmented reality technology indicate that this is not yet technology that cultural institutions will be able to easily embrace.

1. Advanced technical knowledge

Because the underlying technology changes rapidly – particularly the techniques used in mobile augmented reality and in placing objects in 3D space, as in the case of PhillyHistory.org – implementing an augmented reality project requires an advanced level of technical knowledge. The internal IT staff at a cultural institution may need to commit a significant amount of their time and resources to becoming familiar with the libraries and toolkits associated with creating an augmented reality application. This resource burden can quickly become prohibitive or detrimental to the creation or maintenance of other technical projects. Contracting with an outside firm that specializes in augmented reality applications is an option, although the associated expenses may make this an unaffordable alternative.

2. The necessity of location-based assets

Mobile augmented reality applications are dependent upon location-based assets. Since the purpose of such an application is to place digital data in a physical space, the application has limited usability if geographic information is not available for the assets. While many organizations track some geographic data, augmented reality applications require a level of detail (often at the address level) that may not be readily available. This information can be gathered, but it may extend the time required to complete the project.
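Once assets carry address-level coordinates, the application can restrict itself to records near the user's current position. The sketch below illustrates one common way to do this, using the haversine great-circle distance; the function and field names are hypothetical, not drawn from any particular collection management system.

```python
import math

EARTH_RADIUS_M = 6371000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def assets_near(assets, user_lat, user_lon, radius_m=250):
    """Return geocoded assets within radius_m of the user, nearest first.

    Assets without coordinates are skipped, which is why address-level
    geocoding matters: ungeocodable records simply never appear.
    """
    located = [a for a in assets
               if a.get("lat") is not None and a.get("lon") is not None]
    ranked = sorted(
        ((haversine_m(user_lat, user_lon, a["lat"], a["lon"]), a)
         for a in located),
        key=lambda pair: pair[0],
    )
    return [a for dist, a in ranked if dist <= radius_m]
```

Note how records lacking coordinates drop out silently; in practice this is the concrete cost of incomplete geographic metadata described above.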

3. Smartphone requirement is an inherent barrier to use

Augmented reality applications have an inherently limited user base. A true augmented reality application requires a smartphone, tablet computer, or other mobile device that includes a camera and GPS. The application will perform better if the device also has an accelerometer and a gyroscope, features that are only available on the latest generation of smartphones. Although the smartphone market is growing rapidly, many individuals do not own a device that would enable them to access an augmented reality application. Institutions must weigh the benefits of embarking on such an innovative technology project with the realization that the project will be inaccessible to a significant number of people.

4. Institutional sign-off on making materials available online

Any digitization project that makes digital materials publicly available raises issues of copyright and public use. The images included in an augmented reality application are usually small and of low enough quality that copying them is a minor concern. Since augmented reality is a relatively unfamiliar technology to many people, however, a project team working on an augmented reality application should plan on holding discussions with an institution’s administration and other stakeholders to explain the project and address their concerns.

5. Difficulty in attracting repeat visitors

Augmented reality may have a certain excitement factor that will attract people to the application, but garnering repeat users can be difficult. If the actual application is disappointing compared to the initial marketing campaign, users may interact with it once but never return. As Seb Chan notes (2010), “the problem with a lot of these augmented reality and mobile apps that museums are doing is that they face a huge user motivation hurdle – ‘why would you bother’?” Overcoming that hurdle and getting users to visit the application not once but multiple times can be a formidable task. In the case of PhillyHistory.org, we have tried to encourage repeat visitors by focusing on the connection to local and personal history, partnering with other organizations with their own active users, and providing content from the Encyclopedia of Philadelphia that is not available elsewhere online. Despite these efforts, actual usage and repeat visits to the site are a major concern as we prepare for a public launch.

6.   Conclusion

The 2010 Horizon Report: Museum Edition distributed by the New Media Consortium (Johnson, Witchey, Smith, Levine, & Haywood, 2010) identifies augmented reality as a technology that will become relevant in the field of museum education and interpretation within the next two to three years. They argue that “applications that convey information about a place can open the door to powerful forms of discovery-based learning.” Despite the difficulties outlined above, augmented reality applications appear to have many potential benefits for museum collections and programming. As both the technology and smartphones improve, further experimentation by cultural institutions will assist in determining if and how these projects can be used to further the outreach and educational goals of libraries, archives, and museums.

As of January 2011, we are planning to release the augmented reality application to the public by April 2011. Depending on the success of the prototype and the public response, the DOR may seek additional funding to experiment with the creation of an independent system that would not require the use of Layar. We also hope to investigate whether an augmented reality application could be scaled to include the entire collection of over 93,000 photographs. Whether or not our efforts are successful, creating an augmented reality prototype has provided us with an excellent opportunity to research this new technology and its applicability in the archival field.

7.   Acknowledgements

We would like to acknowledge Commissioner Joan Decker at the City of Philadelphia Department of Records for her ongoing and enthusiastic support of PhillyHistory.org. We would also like to acknowledge the National Endowment for the Humanities Office of Digital Humanities for their support of this project in the form of a Digital Humanities Start-Up Grant. Any views, findings, conclusions, or recommendations expressed in this paper and accompanying presentation do not necessarily reflect those of the National Endowment for the Humanities.

8.   References

Boyer, D. (2011, January 17). Augmented Reality Coming Soon! In Azavea Atlas. Consulted January 28, 2011.

Chan, S. (2010, October 26). On augmented reality (again) – time with UAR, Layar, Streetmuseum & the CBA. In Fresh + Newer. Consulted January 28, 2011.

Ellis, M. (2010, June 1). Streetmuseum: Q&A with Museum of London. In Electronic Museum. Consulted January 28, 2011.

Heckert, M. (2009). “Putting Museum Collections on the Map: Application of Geographic Information Systems”. In J. Trant and D. Bearman (eds). Museums and the Web 2009: Proceedings. Toronto: Archives & Museum Informatics, 2009. Published March 31, 2009. Consulted January 26, 2011.

Johnson, L., H. Witchey, R. Smith, A. Levine and K. Haywood (2010). The 2010 Horizon Report: Museum Edition. Austin, Texas: The New Media Consortium. Consulted January 26, 2011.

Liberge, L. and J. Gerlings (2008). “Cultural Heritage on the (Geographical) Map”. In J. Trant and D. Bearman (eds.). Museums and the Web 2008: Proceedings, Toronto: Archives & Museum Informatics, 2008. Published March 31, 2008. Consulted January 26, 2011.

Marcus, J. (2011, January 24). PhillyHistory Augmented Reality: Developer Journal 1. In Azavea Labs. Consulted January 28, 2011.

Nielsenwire (2010, November 1). Mobile Snapshot: Smartphones Now 28% of U.S. Cellphone Market. Consulted January 28, 2011.

Truong, K. (2010, June 17). “Student Smartphone Use Doubles; Instant Messaging Loses Favor”. The Chronicle of Higher Education. Consulted January 28, 2011.

Cite as:

Boyer, D. and J. Marcus. Implementing Mobile Augmented Reality Applications for Cultural Institutions. In J. Trant and D. Bearman (eds). Museums and the Web 2011: Proceedings. Toronto: Archives & Museum Informatics. Published March 31, 2011.