
Museums and the Web

An annual conference exploring the social, cultural, design, technological, economic, and organizational issues of culture, science and heritage on-line.

Mobile Means Multi-Platform: Producing Content for the Fast-Changing Mobile Space

Erica Gangsei and Tim Svenonius, San Francisco Museum of Modern Art, USA


Starting in early 2009, SFMOMA embarked on an ambitious plan to bring the production, publishing and distribution of its mobile tour in-house. This meant investing in a fleet of Apple iPod Touches, scripting and producing hours of new audio and multimedia content, and adopting a new authoring system to custom craft the user experience.

In the months since we began developing the new tour, innovations in the mobile app space have grabbed tech headlines everywhere. More and more, users of iPhones and other mobile devices want to know: can I download this to my own phone? (or: Why can't I?) Had we started our development a year later, we might have revised our approach in light of the rapid evolution taking place. Teeming masses of developers have descended on the app space and quickly made it clear that many distinct approaches to the mobile problem space are available and within reach.

Now, with our home-grown mobile tour in place, we need to design a streamlined approach to serving our visitors in-house both with our devices and their own. We also need to keep in mind the thousands of Web visitors who use and enjoy our online content via SFMOMA's website or RSS feeds – an audience potentially neglected as the emphasis shifts to the mobile space. How can we produce and publish multimedia content effectively in the mobile sphere? And how can we optimize content designed for in-gallery use when it may be experienced, via apps or the Web, far from the museum?

Finally, most of our mobile stops are published to multiple delivery platforms: 1) the in-house mobile multimedia tour, 2) download via RSS, 3) Guide-by-cell, and 4) an app for iPhone and Android platforms. Can we create a one-size-fits-all solution for all mobile platforms? In this paper we'll explore the viability of platform-independent content for museums, discuss how content can be tailored to specific delivery platforms, and share some key considerations in repurposing Web content for mobile use and vice versa.

Keywords: Apps, RSS, in-house production, multi-media tour, multi-platform, on-site vs on-line

1. Navigating the new platform proliferation

In recent years, producers of online content have been facing a new and different predicament. Since the iPhone's debut in 2007, the number of popular delivery platforms has been increasing, while the differences between the platforms are widening. Though we have been challenged before, many times, to make our content available to multiple platforms and operating systems, the new mobile proliferation is something else entirely. The emerging platforms – the iPhone, iPad, and Android – are all growing quickly, even as the venerable desktop computer maintains its formidable foothold.

The streamlining that we witnessed in the last decade, where the emerging monopolies reduced the number of options in many cases to a pair of heavyweights (Windows vs. Mac, Explorer vs. Firefox, etc.), made life easier for developers. Now, we're facing a fragmentation which is bound to continue, because the new platforms are not all redundant, but address different desires and needs.

At SFMOMA, all of this has been cast into sharp relief over the last two years as the interpretive media team took the production of the Museum's mobile tour in-house, supplanting the legacy audio tour. In conjunction with SFMOMA's 75th anniversary reinstallation of its permanent collection in January 2010, we launched the new mobile tour on a fleet of iPod Touch devices. The transition, requiring new hardware, a new publishing system, new production processes and new content, may qualify as the interpretive media team's (IET, hereafter) most ambitious undertaking to date.

Producing the tour content for the iPod Touch, a consumer device, should theoretically pave the way for us to offer the same content to visitors who have their own iPhones or iPod Touches. But in fact the expectations of the iPhone owner are very different from those of the museum mobile tour user. When the IET began to envision consumer apps that could deliver the same information, significant differences immediately surfaced between the environment we had created for our audio-tour users and one which would ideally suit the iPhone owner.

While the emphasis on mobile media leaves our core practices relatively unchanged – interviewing artists and curators, then surfacing the content with the greatest value to our audiences – the production processes which surround that practice are now multiplied. Becoming familiar with the particulars of each new device takes time as well.

Since we cannot reasonably abandon development on the legacy systems (i.e. Web browsers for desktop and notebook computers), we find ourselves facing the proliferation predicament: unless we ignore the emerging platforms, we must add production processes to serve them.

2. Tailoring content streams to new platforms

The content which comprises the mobile tour came largely from three sources:

  1. SFMOMA purchased the rights to all the audio content produced during our long partnership with Antenna Audio;
  2. we conducted numerous new interviews with curators and artists; and
  3. we re-purposed or re-crafted many hours of audio and video content which we had previously published in other formats.

In an ideal world, all of our online content could be stored in a single content management system (CMS) and published to multiple platforms with minimal effort. The reality that we're seeing, however, is that often a single media element requires special considerations for each delivery platform.

Fig 1: Quintuple publishing of a single media element

Consider the lifecycle of a single audio clip (fig 1):

  1. For the in-house mobile tour we craft an immersive story, one to two minutes in length. To encourage the use of sub-level content, we might include a closing audio cue, which says "Choose Go Deeper to learn more." If a supplemental image or slideshow is included, we may have other cues, prompting the user to look at the screen at certain moments. These call-outs are necessary because the mobile tour is primarily an audio experience, and any visual interaction with the device is best prompted with audio. The supplemental slideshow images are added to the in-house tour using a proprietary authoring tool.
  2. When the same clip is published on the Museum's website, any image or slideshow component must be embedded into the file using GarageBand. Removed from the structure of the mobile tour, the relationships of main or sub-level are dissolved, so navigational ("Go Deeper") cues must be removed.
  3. If the clip is included as part of an enhanced podcast (an .m4a file containing audio and images), we retain verbal cues to look at the screen.
  4. Each podcast is also made available in an audio-only .mp3 format, with no verbal cues.
  5. Porting the content to a cell phone tour, we find that the music and effects which we add for texture and dimension provide little more than interference when heard through a cell phone. Therefore, we strip out the music in this version as well as verbal cues.
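
The five derivatives above amount to a small rules table: which cues, music, and image treatments each delivery platform keeps. As a purely hypothetical sketch (the platform names, flags, and output formats are ours, not an actual SFMOMA system), that logic might look like this in Python:

```python
# Hypothetical rules table: the five derivatives of one master audio clip.
# Flags mirror the lifecycle described above; all names are illustrative.
DERIVATIVE_RULES = {
    "in_house_tour":    {"format": "aac", "nav_cues": True,  "screen_cues": True,  "music": True,  "images": "authoring_tool"},
    "website":          {"format": "m4a", "nav_cues": False, "screen_cues": True,  "music": True,  "images": "embedded"},
    "enhanced_podcast": {"format": "m4a", "nav_cues": False, "screen_cues": True,  "music": True,  "images": "embedded"},
    "audio_podcast":    {"format": "mp3", "nav_cues": False, "screen_cues": False, "music": True,  "images": None},
    "cell_phone_tour":  {"format": "mp3", "nav_cues": False, "screen_cues": False, "music": False, "images": None},
}

def processing_steps(platform):
    """Derive the edit list needed to turn the master clip into a platform variant."""
    rules = DERIVATIVE_RULES[platform]
    steps = []
    if not rules["nav_cues"]:
        steps.append("strip 'Go Deeper' navigation cues")
    if not rules["screen_cues"]:
        steps.append("strip look-at-the-screen cues")
    if not rules["music"]:
        steps.append("strip music and effects bed")
    if rules["images"] == "embedded":
        steps.append("embed slideshow images in the file")
    steps.append("encode as ." + rules["format"])
    return steps
```

Even in this toy form, the table makes the underlying point: the variants differ in editorial substance (cues, music), not merely in encoding, which is why a CMS alone cannot generate them.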

In this example, the same content is republished in five slightly different formats. This presents issues both in areas of workload and of file management. When we consider these permutations, it seems that a CMS can only manage the profusion of derivative files; creating the variants is unavoidable. We encounter similar complications when re-purposing Web content to the mobile tour, a process which also requires case-by-case consideration of each story and of each media resource.

Fig 2: A video and its derivatives published to eight environments

Each time a media asset is published to a new environment, we have to take into account what context it requires. A segment from an interview conducted with Jeff Wall has been published in eight different environments (fig. 2).

In the gallery, audio can be a very powerful medium. Many highly successful mobile tour stops are ones that encourage visual engagement, or even guide the visitor's eye through the work. Offsite, however, video is without doubt vastly more effective. Where JPEG images provide woefully inadequate surrogates for real artworks, a video can often more effectively convey the particular presence of a work.

Fig 3: Cross-platform usage of audio and video

If we measure the popularity of audio versus video on the Web, it is abundantly clear that video is more attractive to our Web visitors (fig. 3). We can publish videos to YouTube or other environments far removed from the Museum and its members, still confident that they will be understood. Audio does not give us the same affordances; it is far more dependent on the frameworks where it is published.

Broad coverage, sharp focus

Multimedia features designed for Web or kiosk use may offer multiple points of view on a topic, even when that topic is a single artwork. Within these confines, the user is encouraged to roam, to explore through a sprawl of cross-referenced content.

When thinking about in-gallery mobile content, we must recalibrate our instruments to focus on the few moments when the visitor is engaged face-to-face with the work. In this light, the multi-threaded interactive features of our past resemble panoramas: expansive scenes that provide a macroscopic context for a work, or for a body of work. In contrast to these wide vistas, a stop on the mobile tour must provide a snapshot: a tightly cropped, self-contained story with a small cast of characters.

Can we repurpose our Web content for the mobile user? We've learned from experience that there certainly is no easy answer, just as there is no "magic bullet" format that will fit all contexts. We can trim, edit, and repackage preexisting audio and text, but in many instances we find the experiential difference between target platforms is greater than we anticipated.

Fig 4: Web "hotspot" screen adapted for in-gallery use

As we bring content from our Web features to the mobile tour, we are not just pulling media assets straight out of Web interfaces, but in many cases rethinking the content and producing it in a new format. We have sometimes repurposed text content as audio (fig 4), but not without re-scripting it for the ear.

Another issue that arises is that of specificity. While we know the ideal artwork stop addresses the qualities of the work which the visitor is in front of, we achieve greater coverage in the galleries if we include the aforementioned "panoramic" content – overviews of artists or series.

Much of our collection remains in a slow, perpetual rotation, with even the most beloved or iconic works occasionally going out on loan. In many cases, the Museum owns a broad selection of an artist's work, which it rotates frequently for curatorial or conservation needs. Given these uncertainties, we may produce artist overviews at the expense of artwork-specific stories. But often we find that these broad strokes may fall short of satisfying the visitor's curiosity.

Fig 5: Silhouette Paintings by Ed Ruscha

In a typical overview stop, we must identify lowest common denominators between works. A tour stop on the silhouette paintings by Ed Ruscha (fig. 5) places the series in the context of Ruscha's other work, and explains the technique used. The works are unified by style, not iconography; therefore the commentary leaves out any mention of the paintings' subjects.

Conversely, if we only create artwork-specific tour stops, our content may languish unused for long periods while the relevant works are not on view, which amounts to wasted effort.

3. Toward platform-independent content

Much of this exploration concerns the evolving expectations of consumers. In recent years we have become acquainted with a new kind of user: one who owns and uses more than one kind of web-enabled device. While the user may perform distinct tasks on a mobile platform versus a desktop, there are many areas of crossover.

The assumption of the user who performs task x on platform A is that he/she should also be able to perform task x on platform B. This is a relatively new mindset that has emerged concurrently with the mobile explosion.

Consumers of media increasingly move toward small, discrete, media bytes that they can share in a variety of ways. Posting a link on one's public forum of choice is now such a vital part of user engagement that any media which doesn't permit this kind of usage will simply get less exposure, since it cannot be disseminated through the social and viral channels.

To address this, we anticipate a two-stage approach:

i. Atomize first, ask questions later.

While we've typically produced media for use in a particular context, we may need to relinquish some of the control we've grown accustomed to. Rather than saying, "this requires context, so we shouldn't atomize it," perhaps we should atomize it all, and trust users to provide their own context as they repurpose our content.
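
To make the idea concrete, here is a hypothetical sketch (the class, URL scheme, and slugs are all invented for illustration, not part of any SFMOMA system) of atomizing one long feature into discrete, individually addressable items that users can share and recontextualize:

```python
# Hypothetical "atomize first" sketch: break a monolithic multimedia feature
# into discrete media items, each with a stable permalink it can be shared
# and syndicated under. All names and the URL scheme are illustrative.
from dataclasses import dataclass

@dataclass
class MediaItem:
    slug: str        # stable identifier that survives re-publishing
    title: str
    media_url: str   # the atomized asset itself

    def permalink(self, base="https://example.org/media"):
        # One canonical URL per atom: this is what users post and share.
        return base + "/" + self.slug

def atomize(feature_slug, segments):
    """Turn one monolithic feature into shareable atoms, one per segment."""
    return [
        MediaItem(slug="%s-%02d" % (feature_slug, i), title=title, media_url=url)
        for i, (title, url) in enumerate(segments, start=1)
    ]
```

For example, `atomize("jeff-wall-interview", [("On staging", "a.mp4")])` would yield a single atom whose permalink is `https://example.org/media/jeff-wall-interview-01` — a unit small and stable enough to travel through social and viral channels on its own.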

ii. Provide a learning layer.

Users need transparent relationships between content elements in order to effectively understand and navigate our content.

A key point to remember: an archive is not a context. In the same way, our art storage facilities are not a context, nor are our Web archives. A user cannot learn from adjacencies in these collections of elements. The gallery – a curated collection of works – functions, ideally, as a learning layer. The proximities and adjacencies are orchestrated to influence and inform the ways we see and understand the objects.

With each new user interface we produce, we conceive new sets of relationships, meant to provide the user with paths of navigation and ways of looking at content. We attempt to surface connections between works or artists that aren't necessarily obvious. These networks of relationships are all approaches to providing a learning layer.
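
One hedged sketch of such a layer: a typed relationship graph from which each interface draws its own adjacencies. The class, relationship types, and item names below are illustrative assumptions, not SFMOMA's actual data model.

```python
# Hypothetical learning-layer sketch: a symmetric, typed relationship graph.
# Each interface (gallery view, Web page, app) queries the types it needs.
from collections import defaultdict

class LearningLayer:
    def __init__(self):
        # item -> relationship type -> set of related items
        self._edges = defaultdict(lambda: defaultdict(set))

    def relate(self, a, b, how):
        """Record a symmetric, typed relationship (e.g. 'same_artist', 'same_gallery')."""
        self._edges[a][how].add(b)
        self._edges[b][how].add(a)

    def related(self, item, how=None):
        """All adjacencies for an item, optionally filtered by relationship type."""
        if how is not None:
            return set(self._edges[item][how])
        return {other for group in self._edges[item].values() for other in group}
```

A spatially organized mobile view might query only `same_gallery` relationships, while a Web page surfaces every type — the same underlying graph serving different navigational frameworks.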

Fig 6: Using adjacencies

For Full Circle: 75 Years of Wood Chairs, an Architecture and Design gallery in SFMOMA's Anniversary Show, an overview tour stop draws conceptual relationships between a group of objects arranged in close proximity. This tour stop could only be used for the duration of one installation, after which these objects will likely never converge again.

Fig 7: Mapping spatial relationships to mobile media

In the app SFMOMA Mobile: The Fisher Collection, audio content could be browsed in list views that reflected the spatial arrangement of the artworks.

Fig 8: Related Multimedia in the Web space

SFMOMA's website recommends "related multimedia" for visitors browsing exhibitions and artworks. These artist and exhibition relationships are defined and managed within the Web CMS.

Fig 9: Content Recommendations

YouTube suggests "related" videos with varying degrees of success. The content creator – in this case SFMOMA – has no control over what videos YouTube will show as related.

As these examples illustrate, a set of relationships that is useful in one case may not be useful in another. In galleries, we navigate spatially. On the Web, we navigate by hyperlinks. The interfaces designed for each need to work within these frameworks. Additionally, when producing content for the mobile space, we must always remember that more isn't necessarily better, and refrain from deluging the user with options.

Because both platforms and publishing methods are moving targets, we must suspend judgment on whether platform-independent content is a realistic goal or a utopian ideal. Our conclusion at this stage is that we can streamline our process to reduce duplication of effort, but that each platform will continue to demand special considerations.

4. The curation question

You could choose to think of mediation, or the curation of content, as antithetical to interactivity. YouTube automatically selects the video it wants you to watch next – the technology may be sophisticated enough to make this choice effectively, but the aggressive recommending removes a degree of interactivity. The user's actions are accounted for in a generalized way, and the recommendations are much more populist than personal. For example, YouTube does not (yet) adjust its recommendation for those who watch only a fraction of a video!

In contrast to the YouTubes and the Netflixes of the world, think of any magazine-style site which hand-picks interesting new things and places for its readers – Daily Candy, Cool Hunting, or almost any blog which showcases current stuff on the Web. Human curation provides a seemingly personalized experience, but in the unbounded space of Web content, there is no real concern about how-much-is-too-much: we, the consumers, browse until we are sated.

Museums, of course, favor curation, and for good reason. Our institutions display strange and rarefied objects which beg explanation and often require context. Our museums are also accustomed to controlling the volume of information in the gallery; this is where mobile devices become so crucial.

It would be a mistake to think that we must remedy this curated experience with interactivity, or that interactive = good while linear = bad. Many visitors who take a mobile tour want a guided experience – they wish to be shown the highlights. Regarding mobile tours in particular, we could say that some users appreciate an interactive experience, but not everyone does.

Why are the demands of mobile media so different from those of popular media channels? It all comes down to the way we perceive time when we're on our feet, versus in our seats. It's the very nature of mobile media to assume the user is a multitasker – who is already doing something while he/she accesses this content. In an unfamiliar place, I look at Google maps. Caught in indecision in a video store, I consult IMDB. Hungry, I might try Urbanspoon, and so on. We can use these tools at home or at our desks, in preparation, but the true reward of mobile computing often comes to us in these multitasking moments.

How does this apply to museum interpretation? First, remember that mobile tours make our visitors into multitaskers. The visitor's primary purpose is to see the museum, and the mobile device is an accessory to that purpose.

The explanation a visitor wants or needs is fundamentally different from questions like, Is this restaurant any good? or How do I get from here to there? What we're seeking in a museum is often far less quantitative. The visitor may want the explanation, not three different points of view. Visitors may not have the patience for a lot of back story, but if we serve them the basics and they are hungry for more, we must be sure to provide a thread for them to follow.

5. Making difficult choices

It is possible to say that content producers without a long history of media publishing have a leg up in the mobile world. Those entering the field in the 2.0 era – the twitterers, bloggers, vloggers, etc. – are creating discrete media items which are easy to share, bookmark, and distribute; content that's created at an already atomized level passes easily between the mobile and desktop worlds.

However, for SFMOMA, or anyone else who has a legacy of creating media content for Web and kiosk use, the recent platform proliferation has resulted in multiplying production tasks, and in some cases radically reconceiving the ways we tell stories. It has also introduced a host of new file management challenges.

We're coming from a paradigm where a single delivery platform – the Web – dominated, and we've relied on systems designed to publish browser-based content. Now we've entered an age where browsers are just part of the big picture. Even before SFMOMA had entered the mobile arena, our team's resources were chronically stretched to their limits. Increasing our personnel or our productivity is simply not an option.

In their 2009 paper, After the Heroism, Collaboration: Organizational Learning and the Mobile Space, Peter Samis and Stephanie Pau identified our team's need for "a flexible authoring and publishing platform for mobile devices." Now, three years later, we have a publishing system for mobile devices, but the flexibility needed is still not in easy reach. We do not yet have a system with which we can produce both the in-house mobile tour and the consumer app we envision. Furthermore, our tour publishing system still doesn't communicate with other in-house systems, such as our Web CMS.

In the months ahead we need to determine how much value there is for our visitors in catering to each emerging platform, knowing of course that the stakes will continue to change. We understand we must be agile in adapting to these platforms as well as others yet unseen. More than anything, we are learning how our content management systems, our production systems, and not least our content, need to be responsive and receptive to the continually shifting landscape.

6. References

Samis, P., and S. Pau (2009). "After the Heroism, Collaboration: Organizational Learning and the Mobile Space". In D. Bearman & J. Trant (Eds.), Museums and the Web: Selected papers from Museums and the Web 2008. Toronto: Archives & Museum Informatics. 77-88.

Cite as:

Gangsei, E., and T. Svenonius. "Mobile Means Multi-Platform: Producing Content for the Fast-Changing Mobile Space". In J. Trant and D. Bearman (eds). Museums and the Web 2011: Proceedings. Toronto: Archives & Museum Informatics. Published March 31, 2011.