1. Introduction and motivations
This paper deals with three relevant issues in the domain of cultural heritage communication via ICT: 1) reuse of content; 2) content adaptation; 3) content authoring.
1. Reuse of Content
In cultural heritage communication, there are different kinds of content: exhibits’ descriptions, exhibits’ interpretations, general introductions (e.g. to an artistic movement or a technique), historical overviews, artists’ biographies, etc. Generally speaking, content is expensive, since producing it involves knowledgeable people; therefore, the possibility of ‘reusing’ it is quite appealing, in two respects:
- in view of providing a richer variety of formats and user experiences, over different technologies, and fitting different contexts of use;
- in view of repurposing it for a different occasion (e.g. a new exhibition, another multimedia guide, a multimedia catalogue, etc…)
2. Content adaptation
In most cases, content can’t be reused ‘as is’. One possibility is to recreate it from scratch, but this is quite expensive and not always feasible. Another possibility is to readapt it: for example, a catalogue entry could be placed in the audio-guide of a specific exhibition by adding some cultural information to frame it in the new context, with instructions on how to physically locate it in the gallery. Another form of adaptation may involve technological issues (e.g. the screen size) or the situation of usage (e.g. the user can’t look at the screen). More difficult adaptations involve user profiles (i.e. their background): when presenting a work by Botticelli, should a brief description of ‘Italian Renaissance’ be introduced? Different versions of a multimedia-guide, aiming at different audiences, could take different options.
3. Content authoring
The practical feasibility of adaptively reusing content depends on three crucial elements:
- The design model (i.e. the methodology for conceiving multimedia interactive applications, in a variety of versions fitting a variety of purposes).
- The authoring process (the workflow defining the different authoring steps).
- The authoring/delivery tools (specifically oriented to multiple versioning of the same application and explicitly supporting reuse and adaptation).
In this paper we discuss the above-mentioned issues in the light of the experience gained with Nippon, a family of multi-format, multichannel, adaptive applications on four exhibitions held in Lugano (Switzerland) from October 2010 to January 2011 (www.nipponlugano.ch).
2. The issue of adaptivity
Today, users have at their disposal a growing number of devices and channels: PCs, smartphones (e.g. iPhone), MP3 players (e.g. iPod), tablets (e.g. iPad), multi-touch tables, cell phones, social spaces, etc., in most cases connected, but sometimes with off-line content. In addition, users may have different profiles, ‘special needs’ (e.g. visual impairment) and various purposes (e.g. getting an introduction to a subject). Moreover, they may find themselves in different situations: sitting at a desk, travelling on a train, waiting at a station, walking in an archeological park, etc. In this complex scenario, users are developing the expectation of finding the ‘same’ application ‘adapted’ across all the situations and devices available. The current situation, however, is quite disappointing: most interactive multimedia systems today are unable to satisfy the heterogeneous needs of users accessing the same application on different devices and in different contexts of use. For example, Web courses present the same learning material to students with widely differing backgrounds; multimedia mobile guides offer the same ‘guided tour’ to visitors with very different goals. A remedy for the negative effects of this traditional ‘one-size-fits-all’ approach is to develop systems able to adapt their behaviour to the goals, interests, devices, and other features of individual users or groups of users. This issue is addressed by a wide research area known as Adaptive Software Systems, which has specialized into a number of subareas such as Adaptive Hypermedia, the Adaptive Web, and Context-Dependent Computing.
But in spite of many valuable attempts, most multimedia applications today do not adequately support adaptivity: most of the time, they are available over a few devices only; if over several, the application is either clumsily ‘squeezed’ to fit them, or ‘redesigned’ from scratch for each one. The other adaptive features are, in most cases, neglected: one standard situation is assumed, user profiles are grossly standardized, the visual medium is largely predominant, contexts of usage are oversimplified, special needs are dealt with superficially, etc.
One of the reasons for this failure is that the current generation of authoring environments is quite unsatisfactory. Many authoring tools are explicitly aimed at specific technologies for specific situations (for example, an iPhone in a gallery). Other tools are apparently aimed at multiple technologies and situations of usage, but they are actually biased towards a limited set, especially as far as situations of usage are concerned (e.g. try listening to a mobile guide for a gallery while sitting at home), and their information architecture cannot easily be bent to fit new needs. The challenge for designers and developers is to face this complex and multifaceted issue in a cost-effective way, i.e. without starting from scratch for every new device, situation, purpose, etc. In this paper, we discuss an initial solution, as applied to a concrete case study: Nippon Multimedia, described in the next section.
Nippon Multimedia
NIPPON is a set of four exhibitions, plus a number of collateral events on Japanese culture, held in Lugano (Switzerland) in the winter of 2010-11. The four exhibitions are dedicated to:
- the world-renowned photographer Nobuyoshi Araki (Araki. Love and Death)
- albumen photography (Ineffable Perfection)
- erotic prints from the 17th to the 19th century (Shunga)
- the Gutai artistic movement (Gutai. Painting with Time and Space).
Nippon Multimedia is the result of cooperation between the city of Lugano, its museums, and the Università della Svizzera italiana. It is a family of multimedia applications, available in various formats over several devices/channels: Web (also from mobile devices, like iPhone, iPad and smartphones), podcast, CD-ROM, social spaces, etc. (figures 1 and 2).
Fig 1: Nippon Multimedia: the (Web) home page of the section of the exhibition: “Araki. Love and Death”
Fig 2: Nippon Multimedia, “Araki. Love and Death” exhibition on YouTube
To effectively support a number of different user experiences, four different communication formats were developed, exploiting an innovative approach to adaptivity.
1. Thematic narratives
For each exhibition, a thematic multimedia narrative was developed, providing information about the exhibition’s main themes, the artist(s) involved, the artistic movement, the historical context, etc... Each thematic narrative is organized as a sequence of pieces of content, each about a specific subject (figure 3).
Fig 3: The sequence of topics of the thematic narrative for the exhibition “Araki. Love and Death”
Each piece of content consists of an audio track lasting approximately one minute, plus a slideshow of five or six images with their captions (fig. 4). Thus, in 8-10 minutes, the user can get a complete overview of the exhibition’s themes.
Fig 4: A screenshot from the thematic narrative of the exhibition “Ineffable Perfection”, about albumen photography. On the left, the list of the narrative’s pieces of content; in the middle, the slideshow of images; on the right, the captions and the links to the relevant highlights
The users can either listen to the entire sequence automatically or select what they are interested in. Some highlights from the exhibition are offered as additional links (figure 5).
Fig 5: The user can access a highlight while consuming a thematic narrative. Once the short description of the highlight is over, the user is brought back to the thematic narrative
The thematic narratives act as introductions, to be used before the visit (at home, while driving to the exhibition, on a train, etc.) or after the visit, to enhance recollection.
2. Catalogue (of highlights)
For each exhibition, a number of highlights (circa 20 each) were selected for a ‘closer view’ (figure 1, on the right). Each highlight comes with some images and a comment. The highlights can be consumed either in sequence or by selection. Possible scenarios of use include preparation before the visit (a preview of the best exhibits) and recollection after the visit (searching for specific exhibits). If accessed via a mobile device (either online or as a downloaded podcast), they can be used during the visit as audio-guides or interactive guides (see below).
3. Interactive guides
The pieces of content developed for the thematic narratives and the multimedia catalogue can be re-purposed to work as interactive guides (over mobile devices). The highlights comment on the exhibits; the pieces of content of the thematic narratives introduce the background.
Fig 6: A visitor using the catalogue on iPhone as interactive guide at the Araki exhibition
4. Mash-up narrative
The content developed for the thematic and catalogue applications was repurposed to fit another format, a ‘mash-up narrative’ that we called “Nippon at a glance”. Each element (e.g. a topic from a thematic narrative, a highlight…) is represented by a thumbnail in an attractive mosaic of scattered pieces (figure 7). Users select the elements that draw their attention. This format fits devices like the iPad (fig. 7), tablet PCs, multi-touch tables and the like very well. It is particularly suitable for users looking for a serendipitous experience. The mosaic can also be explored using a word cloud and a tag cloud.
Fig 7: The ‘mosaic’ of “Nippon at a glance” over iPad
All these different formats were developed thanks to an innovative approach to adaptivity, described in what follows. Basically, one single effort is required, in terms of content authoring and technology, to get the four different versions.
3. A model for adapting content
In this section we propose a practical model for adapting content (in cultural heritage) to the different needs of the different versions of the same application. The model provides guidelines for authors and requirements for the authoring environment (see next section) all at once. In order to model adaptivity, we need to model content first. Simplifying the issue, we can assume that content items in the cultural heritage domain fall into one of the following categories:
- α-Alpha: a general cultural observation, e.g. “geometry is deeply rooted in Japanese artistic culture”
- β-Beta: general “factual” information, e.g. “albumen prints are obtained using egg albumen and are easy to paint over”
- γ-Gamma: an interpretation of factual information, e.g. “mountains’ profiles resemble triangles”
- δ-Delta: specific factual information about an exhibit, e.g. “there are mountains on a sequence of planes at different depths”.
These categories can be found across different subjects: the artistic interpretation of an exhibit, an artist’s biography, the historical context, subject, technique, etc. In a sense, any cultural heritage communication artifact (catalogues, educational materials, audio-guides…) can be modeled as a set of α-β-γ-δ statements. Different artifacts adopt different strategies: some freely intermix the above categories; some gather all the α/β-statements in separate sections (e.g. at the beginning of a catalogue); some provide δ(γ)-statements only, assuming that users are already in command of the background; some start with δ-statements, building up from there to γ, β and possibly α.
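As a rough sketch, this content model can be expressed as a small data structure (the names and fields below are illustrative assumptions, not the actual 1001stories schema):

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    ALPHA = "general cultural observation"
    BETA = "general factual information"
    GAMMA = "interpretation of factual information"
    DELTA = "specific factual information about an exhibit"

@dataclass(frozen=True)
class Statement:
    text: str
    category: Category
    exhibits: tuple = ()  # exhibits this statement is relevant to (empty when tied to none)

# A communication artifact (catalogue, audio-guide, ...) is then a set of statements:
audio_guide = [
    Statement("Geometry is deeply rooted in Japanese artistic culture", Category.ALPHA),
    Statement("Japanese photographers were always fascinated by geometric shapes",
              Category.BETA, exhibits=(1, 4, 5, 13)),
    Statement("Mountains' profiles resemble triangles", Category.GAMMA, exhibits=(1,)),
    Statement("There are mountains on a sequence of planes at different depths",
              Category.DELTA, exhibits=(1,)),
]
```

Modeled this way, an artifact is just one particular selection and ordering of statements, which is what makes reuse and adaptation tractable.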
Whatever the strategy is, some common problems exist:
- How much α/β knowledge should visitors be assumed to already possess?
- To what extent should α/β content be conveyed to visitors?
- When should α/β content be conveyed to users, and how often?
Let us consider now an example from the exhibition “Ineffable Perfection”.
- δ-Delta
- S1_d: There are mountains on a sequence of planes at different depths
- S2_d: Notice the symmetric shape of the building
- S3_d: Notice the wheels, the silhouette of the umbrella and the hat
- γ-Gamma
- S1_g: mountains’ profiles resemble triangles
- S2_g: the building looks like a cylinder surmounted by a cone
- S3_g: wheels, hat and umbrella create the suggestion of circles
- β-Beta
- S1,2,3_b: Japanese photographers were always fascinated by geometric shapes
- α-Alpha
- S1,2,3_a: geometry was deeply rooted in Japanese artistic culture, being important in painting and printmaking as well.
In a linear medium such as an audio-guide, the δ- and γ-statements can easily be paired for each relevant exhibit; but what about the β-statement? Let us imagine that the β-statement is relevant for exhibits 1, 4, 5 and 13 (in a guided tour). Should it be provided independently of any exhibit, for example at the beginning, together with the other β-statements? Should it be provided with exhibit 1, and not repeated later? Or should it be provided along with every relevant exhibit (1, 4, 5 and 13)? Similar considerations apply to α-statements.
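The placement options just raised can be made concrete with a small, hypothetical helper (not part of the toolkit): given a guided tour as an ordered list of exhibits, a β-statement relevant to several of them can be delivered upfront, at its first relevant exhibit only, or at every relevant exhibit.

```python
def place_beta(tour, beta_exhibits, strategy):
    """Return the positions (or 'intro') where a beta-statement is played.

    tour: ordered list of exhibit ids in the guided tour
    beta_exhibits: set of ids the beta-statement is relevant to
    strategy: 'upfront' | 'first' | 'every'
    """
    relevant = [e for e in tour if e in beta_exhibits]
    if strategy == "upfront":
        return ["intro"]     # with the other beta-statements, before the tour
    if strategy == "first":
        return relevant[:1]  # with the first relevant exhibit only
    if strategy == "every":
        return relevant      # repeated at each relevant exhibit
    raise ValueError(strategy)

tour = list(range(1, 15))                          # exhibits 1..14
first = place_beta(tour, {1, 4, 5, 13}, "first")   # [1]
```

Which strategy is right depends on the medium and the user: repetition is safest for visitors joining mid-tour, but tedious for those following the whole sequence.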
As a follow-up to the above content modelling, we can now discuss adaptation. Adaptation means that the multimedia application needs to be ‘adjusted’. We see four major reasons for adapting content (which often intermix with each other).
Adaptation to devices
Besides the strictly technological aspects (formats, players, etc.), the type of device determines the relative weight of the media and the amount of information that can be delivered at once. On a device with a large screen, a proper combination of images (video), audio, text and links can be used. On a small device, instead, text must be used sparingly, audio becomes very important, and images can be used, but not many at once. On a small device (say, a mobile phone) content items can be delivered one at a time and few links can effectively be used. As a consequence, the information architecture will have different levels of complexity for different devices. If several links can be displayed at once, greater control (with a larger number of options) can be left to the user. For small devices, simpler structures (such as sequences in semi-automatic playing) should be preferred.
Solution:
- Each content item (α/β/γ/δ) should be available in different versions, each one suitable for a class of devices.
- In order to support the same “user experience”, different information architectures fitting different devices should be defined.
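A minimal sketch of these two requirements (all names are assumptions, not the toolkit’s actual API): each content item stores one variant per device class, and the delivery layer picks the appropriate one, falling back to an audio-only variant.

```python
# Hypothetical sketch: each item carries per-device-class variants of its media mix.
ITEM = {
    "id": "araki_beginnings",
    "variants": {
        "large_screen": {"audio": True, "text": "full",   "images": 6, "links": "many"},
        "small_screen": {"audio": True, "text": "sparse", "images": 1, "links": "few"},
        "audio_only":   {"audio": True, "text": None,     "images": 0, "links": None},
    },
}

def variant_for(item, device_class):
    """Fall back to the audio-only variant when a class has no dedicated version."""
    return item["variants"].get(device_class, item["variants"]["audio_only"])
```

The per-class media mix (sparse text and few links on small screens) mirrors the constraints discussed above; the information architecture would be selected in a similar, per-device way.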
Adaptation to user experiences
The same application may be tuned to fit different needs. As argued elsewhere (Paolini & Rubegni, 2009), four main situations of use must be distinguished.
- Before the visit
- users browse through the application, in order to decide if the exhibition is worth a visit
- users prepare for the exhibition they have decided to visit
- At the exhibition
- users want to understand the overall subject of the exhibition
- users want to get information about a specific exhibit
- users want to be ‘taken around’ in a path across the exhibition
- After the visit
- users want to better understand the overall subject of the exhibition
- users want to ‘virtually reenact’ the visiting experience
- Independently from any visit
- users want to understand the overall subject of the exhibition
- users want to be ‘taken around’ in a virtual tour of the exhibition
Many variants of the above list could be considered; in addition, the other adaptation elements (device, profile and context) and the physical situation come into play. For instance, can users look at the screen (or are they, say, driving a car)? Can they use audio? Content modeling is very important for the quality of the experience, since it determines the relative relevance of α/β items (more important for overall understanding) versus γ/δ items (more important for understanding specific exhibits).
Solution:
- decide what user experiences are to be supported
- for each user experience:
- shape a different (adapted) information architecture
- select the most relevant α/β/γ/δ items and place them within the architecture
- (possibly) adapt the content items and the architecture to fit different devices
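The steps above can be sketched as a small assembly function (illustrative only; the experience names and structures are invented): each supported user experience determines which α/β/γ/δ items are selected and which information architecture holds them.

```python
# Illustrative mapping from user experience to content mix and structure;
# the 'include' sets reflect the alpha/beta (overview) vs gamma/delta (exhibit) balance.
EXPERIENCES = {
    "before_visit_overview": {"include": {"alpha", "beta"}, "structure": "sequence"},
    "at_exhibit":            {"include": {"gamma", "delta"}, "structure": "by_exhibit"},
    "after_visit_reenact":   {"include": {"alpha", "beta", "gamma", "delta"},
                              "structure": "guided_path"},
}

def build_architecture(items, experience):
    """items: list of (category, text) pairs; returns the adapted selection."""
    spec = EXPERIENCES[experience]
    selected = [text for cat, text in items if cat in spec["include"]]
    return {"structure": spec["structure"], "items": selected}

items = [("alpha", "geometry in Japanese culture"),
         ("delta", "mountains on planes at different depths")]
arch = build_architecture(items, "before_visit_overview")
```

The final step of the solution (adapting the result to different devices) would then be applied to the architecture this function returns.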
Adaptation to user profiles
User profiles may need different content for different reasons:
- different ethnic/national backgrounds: e.g. basic historical elements about the Renaissance in Italy are probably irrelevant to a European audience but may be crucial for an East Asian audience;
- different individual backgrounds: e.g. an expert is likely to know that a krater was used for mixing wine and an oinochoe for pouring it. Terminology must be clarified for lay users. A more sophisticated solution could be to describe the way the Greeks used to drink wine (warming it up in a krater with spices and pouring it into cups with an oinochoe). This β-statement might be crucial for fully appreciating the exhibition.
Solution:
- decide which users’ profiles need to be supported;
- define the γ/δ-items to be provided for each user profile (few variations are foreseen);
- define the α/β-items to be provided for each user profile (more variations are foreseen, since previous knowledge of α/β is what actually characterizes the different backgrounds);
- create the proper information architecture to hold the items. This may be very difficult, since straightforward solutions cannot easily be adopted. For example, it would be easy, but costly, to create different audio-guides for different backgrounds. Another option is to use a single architecture with user-controlled variants: e.g. the audio-guide could ask users whether they want to know about the oinochoe. But users do not like being offered too many options and tend to neglect them.
The common solution today is to take a ‘generic profile’ as the point of reference, to which everyone has to accommodate. For example, the oinochoe is labeled a “vase for wine”, avoiding the technical term but also without explaining how the vase was used (mixing or pouring?) and why (because the wine was so bad that spices were needed to make it drinkable). This solution is easy to implement but quite unsatisfactory for the user: no one is a real ‘generic’ user. Each user, in a given situation, is more or less qualified with respect to the information provided. We should strive for truly adaptive information architectures instead.
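A sketch of the selection step (all names hypothetical): since backgrounds differ mostly in which α/β items are already known, profile adaptation can be modeled as subtracting the profile’s assumed knowledge from the full α/β set.

```python
# Illustrative alpha/beta items and the background each profile is assumed to know.
ALPHA_BETA_ITEMS = {
    "renaissance_intro": "basic historical elements about the Italian Renaissance",
    "krater_use": "a krater was used for mixing wine (warmed up, with spices)",
    "oinochoe_use": "an oinochoe was used for pouring wine into cups",
}

PROFILES = {
    "european_lay":   {"renaissance_intro"},
    "east_asian_lay": set(),
    "expert":         {"renaissance_intro", "krater_use", "oinochoe_use"},
}

def items_for(profile):
    """alpha/beta items to deliver: everything the profile is not assumed to know."""
    known = PROFILES[profile]
    return sorted(k for k in ALPHA_BETA_ITEMS if k not in known)
```

This captures the asymmetry noted above: γ/δ items vary little across profiles, while the α/β selection shrinks as the assumed background grows.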
Adaptation to the “context of discourse”
Content, in general, is created in a specific context for a specific purpose. The context could be: “overall subject”, “section of an exhibition”, “an exhibition”, “date of the exhibition”, “institution where the exhibition is happening”, “city”, “country”, etc. A specific content item, for example, could be placed in the context of “Araki”, “the beginning [of Araki’s career]”, the exhibition “Araki” [in the context of the event “Nippon”], “October 2010”, “Museo d’Arte”, “Lugano”, “Switzerland”, etc. Reusing the same content item in different contexts may disorient users, from a mild to a severe degree. We illustrate the tricky issue of “context” with a few examples from the award-winning website ARTBABBLE (www.artbabble.org):
- www.artbabble.org/video/ima/directors-journal-sebastiano-mainardi
A video about a specific restoration project in a specific museum (IMA), with a skilful context definition. A minor problem is the time context: what does “current activities” mean? The video will still be interesting 20 years from now, but not very current.
- www.artbabble.org/video/kqed/both-here-and-there-february-2008
The context is provided, but it is probably useless: the subject is so general that it transcends the specific situation for which the video was created.
- www.artbabble.org/video/ngadc/lions-peter-paul-rubens
No context is provided, and this seems appropriate, since the subject is independent of any specific situation.
- www.artbabble.org/video/art21/allan-mccollum-shapes-copper-cookie-cutters
No context is provided in the video, and the “sub-context” is provided in the text. On its own, the video would be a little puzzling. The sub-text does not work entirely either, since it assumes that the overall context is known (that context is provided in the text associated with the next example). If users access this item from the ‘artists’ list, the episode number mentioned in the text is very puzzling.
- www.artbabble.org/series/art21exclusive
Here the text provides the global context of the series, where the episode number makes sense.
Were the same content items placed in the vacuum of a more neutral situation (e.g. YouTube), the loss of context could be even stronger.
Solutions:
- If possible, make content items context independent. Doing this would work well only in a few cases, where the subject is so neutral that it can be placed anywhere.
- Make the context description part of the content. This would not work well if the users have to go through a series of items all sharing the same context.
- Use different media (as ARTBABBLE does) for the content (video) and the context (text). This will not work well when text cannot be displayed (e.g. small screens, visual impairment, a user who is driving, etc.).
- A more general solution, which we are working on, is to create two different pieces of content (one ‘neutral’ and context-independent, the other describing the context) and then blend them fluidly for the users: in a sense, obtaining something like the first two examples above by combining two different pieces in one video.
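The blending solution can be sketched as follows (a hypothetical structure, not the implemented one): each item keeps a neutral body plus optional per-context trailers, and delivery prepends the trailer for the current context when one exists.

```python
def render(item, context):
    """Prepend the context trailer (a few seconds of framing) to the neutral body."""
    trailer = item["trailers"].get(context)
    segments = [trailer] if trailer else []
    segments.append(item["body"])
    return segments

# Illustrative item: a neutral commentary plus a trailer for one reuse context.
item = {
    "body": "neutral_commentary.mp3",
    "trailers": {
        "nippon_at_a_glance": "from_the_araki_exhibition_trailer.mp3",
        # no trailer needed inside the exhibition's own narrative
    },
}
```

The same item thus stays context-independent in its ‘home’ narrative while being framed when it appears elsewhere, without duplicating the body.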
Adapting NIPPON
Some of the above adaptive strategies were used for Nippon Multimedia (www.nipponlugano.ch); only partial solutions were implemented, either through the power and flexibility of the authoring environment or through careful cut-and-paste.
A. We generated slightly different applications for different devices (PC, iPad, iPhone, podcast). These versions differ (mildly) in information architecture, relative weight of the media (audio vs. text or visualization) and interaction capabilities. All the different versions were obtained at low cost, thanks to the support of the 1001stories toolkit (see next section).
B. We tried to separate α/β items from γ/δ items. For each exhibition we have two ‘narratives’: thematic narrative (for α/β) and highlights (for γ/δ, about selected exhibits). We tried to reach several goals at the same time:
- User profiles: we assumed that users well acquainted with the subject would not need the thematic narratives, which are meant as an introduction for less experienced users.
- User experience: the ‘highlight’ narratives were intended for double use. They could be used as audio-guides or interactive guides at the exhibition. They could also be used, via PC, to better understand the thematic narratives through examples (and to this end, the two applications were interlinked, as shown in figure 5).
C. In ‘Nippon at a glance’ all the content items were mashed-up together, with a visual interface combining images with word and tag clouds, with a view to two different user experiences:
- After a visit, for a leisurely recollection of images, keywords and artists.
- Independently from any visit, as leisure browsing.
Since we combined content items from the four exhibitions, a problem of content adaptation surfaced: users could not know a content item’s context. We therefore added to each content item a small trailer, lasting a few seconds, explaining which exhibition the content came from and what it was about. As a result, users accessing the same content item via the exhibition narratives or via the visual interface got (slightly) different versions. All of the above was made possible by the support of the 1001stories toolkit, described in the next section.
4. An innovative authoring environment
The production of content items ‘adapted’ to different devices, user experiences, user profiles and contexts requires a good design methodology and a good authoring environment. HOC-LAB (Politecnico di Milano) and TEC-LAB (USI) are currently working on the transformation of the 1001stories toolkit (Di Blas, Bolchini & Paolini, 2007). Developed as a simple tool to create multimedia narratives, it is now becoming a full authoring/generation/delivery environment, still easy to use but supporting a variety of options and features. The two most relevant features are sketched in figures 8 and 9.
Fig 8: Content items are first authored and later ‘adapted’ to different devices and user experiences, generating different applications.
A. One authoring for several applications
The core idea (figure 8) is to split ‘authoring’ in the strict sense (text, images, audio creation) from the generation of specific applications, tuned for specific devices and/or specific user experiences, user profiles or contexts. Generating a specific application means shaping the interface and the interaction mechanisms, selecting the content items, adapting them, and organizing them into an information architecture. Some of the above can be 100% automated; some requires additional authoring (to be kept at a minimum, however). Let us consider, for example, the Araki highlights, which can be delivered in several versions:
- An online catalogue, accessible via the Web from a PC or iPad
- An online catalogue, accessible via a mobile device (like an iPhone)
- An online complement to the thematic narrative (to which they are linked)
- A podcast, so that the playlist can be taken along to the exhibition
- An element of ‘Nippon at a glance’, supporting a post-visit presentation
- The basis for a complete audio-guide or an interactive multimedia guide.
The first four versions can be generated 100% automatically. The fifth requires a small authoring addition for each content item, which was easily done. The sixth (building a complete audio-guide) would require a small amount of additional authoring to provide directions at the exhibition (it was not implemented).
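The authoring/generation split can be sketched as a tiny generator (names invented for illustration, not the toolkit’s API): one authored repository, several target specifications, with targets that need extra authoring reported explicitly rather than generated silently.

```python
# Illustrative targets for the Araki highlights: four fully automatic,
# two requiring a small additional authoring step.
TARGETS = {
    "web_pc":         {"auto": True},
    "web_mobile":     {"auto": True},
    "narrative_link": {"auto": True},
    "podcast":        {"auto": True},
    "at_a_glance":    {"auto": False, "extra": "per-item context trailer"},
    "audio_guide":    {"auto": False, "extra": "directions at the exhibition"},
}

def generate(repository, target):
    """Produce an application from the authored repository, or report what is missing."""
    spec = TARGETS[target]
    if not spec["auto"]:
        return ("needs_authoring", spec["extra"])
    return ("generated", list(repository))  # fully automatic generation

repo = ["highlight_01", "highlight_02"]
status, payload = generate(repo, "podcast")
```

The point of the sketch is the workflow, not the mechanics: the expensive step (authoring) happens once, and each target either consumes it directly or declares the small delta it still needs.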
Fig 9: Content items can be reused, adapted and combined in order to provide different user experiences
B. Supporting reuse and adaptation
The idea conveyed by figure 9 is to maximize the support for reuse and adaptation. Individual content items can be adapted (as explained in the previous sections) and rearranged into different information architectures. Information architectures can be morphed, fragmented, merged, mashed-up, etc. The ultimate goal is to make it possible to take the content developed for the catalogue of a permanent collection and reuse it (possibly with some adaptation) for various exhibitions, a retrospective, a touristic guide of a city, an encyclopedia, etc. The 1001stories toolkit makes reuse and adaptation feasible for any institution, automating what can be automated and making the rest easy and straightforward.
5. Conclusions
Proper reuse of content means substantial benefits:
‘Horizontally’, at a certain time:
Authoring is intrinsically expensive, since knowledgeable people are involved in conveying information and emotions to users. It is therefore important to maximize the outcome: create content that
- Supports a variety of devices
- Supports a variety of user experiences and profiles
- Supports a variety of contexts of usage
- Does all the above, combining quality with low costs and short development time.
‘Vertically’, over time:
Where have all the exhibitions gone? Wonderful content, developed for a specific event, is ‘locked’ into audio-guides or multimedia guides and thrown away when the event is over, or archived, becoming practically invisible, never to be used again. ARTBABBLE is a clear attempt at keeping content alive over time, but it is based on a specific format (of content) and a specific way of organizing it. It would be desirable for each institution to reuse its own content over time, repurposing it for different situations (a different exhibition, a catalogue, etc.).
Nippon Multimedia is an initial attempt at reuse and adaptation. The next step will be to reuse the materials from past exhibitions in a ‘multimedia album’ of cultural life in Lugano, accessible via the Web, via multi-touch tables [www.ideum.com] and via mobile devices. For forthcoming exhibitions, we would like to expand the number of formats and user experiences, while still maximizing the variety of technologies supported and minimizing the authoring effort and resources needed.
In order to achieve these goals, we are still improving the design-authoring methodology, and making the toolkit evolve to become a truly adaptive one: A-1001stories.
6. Acknowledgements
Our thanks to the people of HOC-LAB (Politecnico di Milano, Italy) and TEC-LAB (Università della Svizzera italiana) who passionately work on the development and deployment of 1001stories, to all the “Dicasteri” of the city of Lugano, and to all who participated in the creation of NIPPON, in particular Matteo Agosti and Alberto Terragni.
7. References
Bolchini, D., N. Di Blas, F. Garzotto, P. Paolini, A. Torrebruno (2007). “Simple, Fast, Cheap: Success Factors for Interactive Multimedia Tools”. PsychNology Journal, Volume 5, Number 3, 253-269.
Brusilovsky, P. & M.T. Maybury (2002). “From adaptive hypermedia to the adaptive web”. Communications of the ACM, Volume 45, Issue 5, 30-33.
Brusilovsky, P., O. Stock, C. Strapparava (Eds.) (2000). Adaptive Hypermedia and Adaptive Web-Based Systems (AH 2000). Lecture Notes in Computer Science, Springer-Verlag, Berlin.
Caporusso, D., N. Di Blas, P. Franzosi. (2007). “A Family of Solutions for a Small Museum: The Case of the Archaeological Museum in Milan”. In J. Trant & D. Bearman (Eds). Museums and the Web 2007. Selected Papers from an International Conference, Archives & Museum Informatics: Toronto, Canada. http://www.archimuse.com/mw2007/papers/caporusso/caporusso.html
De Bra, P., et al. (2003). “AHA! The adaptive hypermedia architecture”. Proceedings of the ACM Hypertext Conference, Nottingham, UK, 81-84.
Di Blas, N., D. Bolchini, P. Paolini (2007). “Instant Multimedia: A New Challenge for Cultural Heritage”. In J. Trant & D. Bearman (Eds.) Museums and the Web 2007. Selected Papers from an International Conference. Archives & Museum Informatics, Toronto, Canada. http://www.archimuse.com/mw2007/papers/diBlas/diBlas.html
Di Blas, N., P. Paolini. (2010). “Multimedia for Cultural Heritage Communication: Adapting Content to Context”. In Proceedings of the Esa Culture Bocconi, 2010 conference, Milano, October 7-9, 2010.
Franciolli, M., P. Paolini, E. Rubegni (2010). “Multimedia Communication Issues: Why, What and When”. In J. Trant & D. Bearman (Eds.). Museums and the Web 2010: Proceedings. Archives & Museum Informatics: Toronto, Canada. http://www.archimuse.com/mw2010/papers/francioli/francioli.html
Marchetti, C., B. Pernici, B. Plebani (2004). “A quality model for multichannel adaptive information”. Proceedings of the 13th International World Wide Web Conference, 2004, 48-54.
Rubegni, E., N. Di Blas, P. Paolini, and A. Sabiescu. (2010). “A Format to design Narrative Multimedia Applications for Cultural Heritage Communication”. In Proceedings of SAC’10, March 22-26, Sierre, Switzerland.