About Mark Tebeau

I am an urban history professor and digital humanist. I study how memorials, public art, and gardens reveal the changing nature of cities and community identity in the twentieth century. I wrote a book that explored the history of cities through the work of firefighting and fire insurance. As a digital humanist, I am also interested in curation, public interpretation, and mobile technologies. I directed the development of the mobile application Cleveland Historical.

Mobile Visualization

(PS on this one. Links to follow over the conference…)

Mobile matters. Mobile internet users will soon exceed desktop internet users, and by 2016 as many as 80% of all internet users will access the internet from mobile devices. According to Cisco and others, internet data calls are increasingly coming from mobile devices, and traffic will increase exponentially. This “paradigm shift” toward mobile, predicted by Pew in 2010, presents both extraordinary challenges and possibilities for digital humanists and scholars.

As we think about visualization in the digital humanities, this shift is worth considering. Specifically, how do practices of visualization change when confined to the space of a 3×5-inch phone screen or a tablet such as the iPad? Perhaps more important questions involve goals and audience. How do we think about visualization in a mobile context? Who precisely is our audience? How do we handle complexity? In what ways can we simplify? Although I won’t answer these questions, I will begin with some examples of best practice, which are by no means comprehensive. Great mobile apps tell stories; they evoke places and ideas. Visualization strategies are, obviously, a huge part of that.

Examples of great visualization strategies abound, though it is clear no best practice has yet fully emerged. Some of the most effective include the following. Time Travel Explorer London uses background maps effectively to connote time, even if the particular narratives attached don’t excite. (Note also, this is on an iPad.) Halsey Burgund’s Scapes captures place and art through the effective use of sound. The London Street Museum app allows users to superimpose photographs over views of a place, evoking a sense of how a particular locale has changed. Time Machine Manchester takes historic video and geo-locates it. The PA Historic Markers app captures the sensibility of historic markers and enriches them with context, providing substantial text as well as rich images. My own Cleveland Historical (and the tool Curatescape) supports many different sorts of multimedia interpretive materials and meta-analytical tours. We’ve capitalized on this technical capacity by interpreting through multi-layered stories, with visualizations that allow audiences to move deeply into a single interpretive narrative, peripherally across a wide variety of stories, or through an experience mediated by a meta-interpretive theme. Museums, such as the Indianapolis Museum of Art, have also produced ways to visualize their collections on mobile devices, often in conjunction with visits to their institutions.

Arguably the best example I’ve seen of brilliant visualization is the NYPL’s Biblion World’s Fair project for iPad, on the New York World’s Fair. An entirely app-based experience, this project provides a unique non-linear path through the fair, mimicking (I think) how you might have experienced the fair itself. With a view that changes frequently, deep interpretive analysis, and a collection of interconnected visual and sound materials, Biblion suggests the possibilities of mobile to transform how we see the past.

If the above apps suggest some best practices, what other differences might present themselves for visualization in mobile environments?

Technology matters, and while it can impress, it is not necessarily an end in itself. For example, augmented reality is seductive but emphasizes only technology, and especially the visual. Essentially, augmented reality is the ability to juxtapose a camera view (taken from your phone) over historical or other images. Apps that do this exceptionally well are surely magical. Check out London Street Museum or LookbackMaps. More broadly, platforms for doing this have emerged, including Layar, which lets you see not just images but all sorts of information: restaurants, directions, shopping possibilities, even fellow users. This sort of augmentation is meant to enhance our experience of landscape by adding information to it. As neat as this is when it works, it is not necessarily interpretive. Interpretation is more than merely seeing an image; it is about “seeing” and understanding in new ways.

Interpretation is about building connections, representing ideas in new and original ways, or, among historians, contextualizing the past. In this regard, technology may augment, but so do interpretive process and expression.

In other words, interpretation *is* augmentation.

When thinking about visualization on mobile devices, we should recognize that *mobile* matters. Mobile allows us to take the power of computers into the spaces of exhibits or city streets, so our work should use the physical landscape as part of the interpretation. Examples include the brilliant movies in the Gettysburg Battle App (and others by the National Trust), in which a live storyteller is filmed in situ, placing the physical space where the story occurred within the narrative. An approach to avoid is embedded in most historic-marker apps, which show images of historic markers (literally) and display their text (literally). What’s the point of this? I have no idea. If you are at a site, you can already read this information on the marker itself; the apps add nothing to it. Although they do alert you that a marker is nearby, that is little consolation because they offer nothing of analytic value.

Engagement with landscape, object, or visualized data is enhanced by tools like QR codes (and, in the very near future, near-field communication). The ability to interact with the code gives interpreters another way of pushing information and ideas to audiences. One often-overlooked aspect of the QR code is that the information and knowledge it links to has to be sought out by users. As a result, it allows projects to call users to action, to seek more information, potentially making it possible to provide deeper and richer interpretive visualizations than you might otherwise offer. It is worth noting that other, not necessarily technological, strategies for engagement can also work. For example, we can invite user comment and feedback (Scapes), crowdsource the creation of content (Broadcastr), or, as we do in Cleveland, build content in collaboration with publics. QR codes (and NFC) also underscore the value of borrowing the tactics of game makers, interactive storytelling, or even geocaching to draw users into your interpretive narratives and to enhance their interaction with the material world, whether artifacts or the landscape. QRator, a University College London project, connects people to physical museum objects through a quasi-mobile, quasi-crowdsourcing strategy that reimagines museum artifacts in entirely new public dimensions. If this is not precisely visualization, the act of engagement is vital to successful visualization. Likewise, the app Comic Street Paris accomplishes the same thing by inviting users on a journey of discovery through Paris. Mobile is more than just the visual.
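One small, practical corollary of the point that users must seek out a QR code: the URL the code encodes can carry a marker for where the scan happened, so a project learns which physical sites actually call users to action. A minimal sketch, using Python’s standard library; the domain, path, story identifiers, and `source` parameter are all hypothetical:

```python
from urllib.parse import urlencode, urlunsplit

def story_link(story_id, source):
    """Build the URL a printed QR code would encode. Tagging the source
    lets a project see that a visitor arrived via a particular physical
    marker, rather than through the app's own navigation."""
    query = urlencode({"story": story_id, "source": source})
    return urlunsplit(("https", "example.org", "/stories", query, ""))

# A hypothetical marker on Euclid Avenue:
print(story_link("euclid-avenue", "qr-marker-12"))
```

The same URL-building step works unchanged for an NFC tag, which is why the two are often discussed together.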

Sound matters. (See my post on the challenges of visualization itself and its privileging of sight over sound and the other senses in the digital humanities.)

For mobile apps that are outdoors and place-based, the use of Google Maps as a background is pervasive. But consider using historic map layers or stylized maps instead. Precise representations of reality can help you find something on a map, but they don’t necessarily enhance interpretation. OpenStreetMap, vector maps, or rectified historical map backgrounds might be of greater help in evoking place.
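Swapping in an alternative basemap is less daunting than it sounds, because virtually all “slippy map” clients (Google Maps, OpenStreetMap, Leaflet-style viewers) address tiles by the same zoom/x/y scheme; pointing at a rectified historical tileset is mostly a matter of changing the tile URL. A sketch of the standard Web Mercator tile math, with an invented historic-tileset URL standing in for a real tile server:

```python
import math

def lat_lon_to_tile(lat, lon, zoom):
    """Convert WGS84 coordinates to standard slippy-map tile indices."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Hypothetical rectified 1912 map of Cleveland -- any server that
# exposes {z}/{x}/{y} tiles can be dropped in here.
HISTORIC_TILES = "https://example.org/cleveland-1912/{z}/{x}/{y}.png"

def tile_url(lat, lon, zoom, template=HISTORIC_TILES):
    x, y = lat_lon_to_tile(lat, lon, zoom)
    return template.format(z=zoom, x=x, y=y)

# Near Public Square, Cleveland, at a street-level zoom:
print(tile_url(41.50, -81.70, 15))
```

Because the addressing scheme is shared, the same coordinates fetch the modern basemap or the historic layer interchangeably, which is what makes before/after map toggles cheap to build.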

Curiously, relatively few apps have embedded visualizations such as those produced by scholars in the context of their scholarship or digital projects. On desktops and laptops, our strategies for visual and other storytelling techniques have been much richer. Perhaps it is because of the differences in mobile? I doubt it, however. More likely, the technologies (both for visualization and for mobile representation) are relatively new and only slowly coming together. Even so, it is clear we need to imagine our visualizations being encountered by audiences in mobile contexts and begin figuring out how best to present in this environment. I am confident there is room for more richly nuanced and detailed visual strategies on mobile, but they have not yet emerged. Indeed, I am in favor of having scholars lead the way in reimagining how techniques of visualization can be adapted to mobile technologies, whose pervasiveness will reshape how we communicate with one another as well as with our students.

Mobile, as I hope I have suggested, produces a different sort of user experience—neither better nor worse, just different. Those differences don’t preclude more traditional approaches, but I would recommend that we focus our time, efforts, and resources (financial and otherwise) on building new interpretive strategies for mobile. Many of these exist within our professional practice already, but they will surely have to be re-imagined in the context of mobile expression. More importantly, visualization depends on building and responding to audiences within the framework of their experience.

Visualizing Oral History?

The oxymoron embedded in the title reveals the contradiction behind any attempt to “visualize” oral history for historical curation. One could argue that oral history, and sound more broadly, are such fundamentally aural experiences that they can’t be visualized at all. Even so, for historians the meaning and magic of oral history has long been hidden behind the veil of the visual. Typically this has happened by representing oral history as text transcription or encasing it in long-form video interviews. Sadly, such presentation prioritizes one sense, sight, above the others. This point was made clearly in the emergence of sensory history and has more recently been emphasized by oral historians seeking to recover the meanings embedded in aural experience and expression.

Digital practice has uncritically adopted that trope in discussing and valorizing evidence for historical curation. And, arguably, our efforts to deal with visual evidence in archives, online exhibits, and other presentations have far outstripped our work with sound (much less other sensory-based) materials. To a degree this is ironic, because some of the premier digital tools that have so altered daily life–cell phones and portable music players–have their origins in transforming voice and sound experiences, making them pervasive and deeply individualized.

In May, the IMLS-funded Oral History in the Digital Age initiative will be releasing recommendations that rethink oral history, folklore, and ethnographic practice in terms of digital innovation. Convened by MATRIX, the Oral History Association, and the Library of Congress, this project ventures into a host of important questions that are emerging in our era of digital scholarship and interpretation.

As we await that report, the challenges of curating sound materials can be suggested by highlighting some well-known digital oral history projects and newly emergent tools for curating oral history.

On public radio, we regularly hear the moving narratives collected as part of StoryCorps. The project reveals the extraordinary power of oral history in those so-called driveway moments in which we find ourselves unable to stop listening to a StoryCorps interview on National Public Radio. Developed as a way to document America, à la the New Deal WPA projects, StoryCorps was primarily designed for radio audiences, but it also captured some of the ethos of the digital age, with individuals interviewing people well known to them, using scripts and approaches suggested by StoryCorps. Long before the term crowdsourcing emerged, StoryCorps had set about capturing narratives by empowering ordinary folks. Even so, StoryCorps has been widely criticized by oral historians for its methods, especially the lack of training of interviewers and the conversational quality of the materials collected.

If these critiques have merit, StoryCorps’ major failing is that it has relegated most of its “born digital” oral histories to an archival Hades in which they are accessible, with some exceptions, only through a visit to the Library of Congress. Only small snippets of interviews, generally those chosen for radio, appear on the project website, which promises to archive the materials but never mentions a more sophisticated approach to making them discoverable in digital environments. Imagine, if you will, more than 40,000 hours of oral history from across America stuck on a server, unavailable and disconnected. By funding collection over curation, StoryCorps provides a model of opportunities lost.

By contrast, the Veterans Oral History Project has taken a more sophisticated approach to crowdsourcing and curation. Like StoryCorps, VOHP collects materials from the crowd, but it provides a detailed training kit that emphasizes collecting not just the story but the metadata surrounding it. Even better, VOHP takes the processing and archiving of the oral histories seriously. As digital objects, the oral histories can be discovered and searched, though only at the interview level (with some exceptions).

The HistoryMakers project offers yet another view into how we might process and connect oral histories. Distinctive for having methodically collected over 8,000 hours of interviews with African Americans, The HistoryMakers is also distinctive in its sophisticated approach to digital archiving, creating archival meaning and linking at the level of audio segments (as opposed to just the entire interview), including common-sense indexing. Developed with desktop tools, the index is not publicly available, but the project nonetheless points toward a future in which oral history is indexed and connected in more sophisticated ways.

Indeed, over the last decade, new techniques for processing and imagining oral history have emerged, such as those from Mike Frisch and his team at the University at Buffalo’s Randforce Associates, which use common-sense tagging and interpretive metadata associated not merely with an entire interview but with particular clips. Moreover, indexing interviews in this way creates a network of meaning that crosses interviews and collections.
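The clip-level approach can be made concrete with a small data-structure sketch: an inverted index that maps each interpretive tag to every clip, across every interview, where that theme appears. The interview identifiers, timestamps, and tags below are invented for illustration; this is a toy model of the technique, not any project’s actual schema:

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Clip:
    """A tagged segment of one interview, addressed by time offsets."""
    interview_id: str
    start: float               # seconds into the recording
    end: float
    tags: list = field(default_factory=list)

def build_index(clips):
    """Invert clip-level tags so a theme can be followed across interviews."""
    index = defaultdict(list)
    for clip in clips:
        for tag in clip.tags:
            index[tag].append(clip)
    return index

# Hypothetical clips drawn from two different interviews:
clips = [
    Clip("interview-001", 120.0, 185.0, tags=["neighborhood", "migration"]),
    Clip("interview-001", 610.0, 655.0, tags=["work"]),
    Clip("interview-002", 45.0, 98.0, tags=["migration", "family"]),
]

index = build_index(clips)
# Every clip about "migration", regardless of source interview:
print([(c.interview_id, c.start) for c in index["migration"]])
```

The point of the sketch is Frisch’s: once meaning is attached to clips rather than whole interviews, a single query surfaces thematically related moments from across a collection.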

Following Frisch’s theorizing, oral history centers have explored the development of new tools that provide for the richer production of meaning within and across oral history collections. These endeavors have taken different forms. Concordia University’s Stories Matter project follows Frisch’s model: it allows users to create clips according to personal and/or interpretive criteria and then build personalized playlists of clips that speak to specific themes. The Nunn Center for Oral History at the University of Kentucky created the OHMS tool, which allows for transcription and indexing of interviews within CONTENTdm environments, connecting audio to searchable transcripts and indexes. More recently, Annotator’s Workbench, a tool for ethnographic research developed at Indiana University, obtained an NEH ODH start-up grant to connect the tool to Omeka databases. Meanwhile, my research colleagues at the Center for Public History + Digital Humanities have been exploring a hybrid approach that involves indexing interviews in Google Documents and then creating compound archival objects in Omeka that link thematic indexes to archived sound. The list of innovations in this category is expanding dramatically, and I expect we’ll see lots of new and exciting work in this area over the next 24 months.

More recently, technology projects have been exploring how another generation of digital tools, in particular mobile, could use oral history as the basis for richer interpretation. These include mobile-based projects like the beta effort at Broadcastr, which seeks to link oral testimony to place by crowdsourcing the collection of memories. Unfortunately, a sophisticated approach to metadata that would make such memories discoverable does not appear to be part of the project. Still, there is no doubt that sound can be used to recover place in dramatic ways, as I’ve learned through our efforts with the Cleveland Historical project to curate cities in a mobile environment using oral history. Listening–not just reading text or looking at photographs–can offer evocative sensations of place. Indeed, Halsey Burgund’s Scapes project might offer the best example of how sound curation can transform a museum exhibit.

Meanwhile, SoundCloud is a tool built for curating sound. In addition to making sound sharable, SoundCloud offers a variety of features: an open API, an emphasis on social sharing, availability across multiple platforms, and community commenting on sound clips. Like scholars’ efforts to curate oral histories at the segment or clip level, SoundCloud allows users to connect to sound and oral history at the level of meaning through rich annotation. Moreover, because SoundCloud files can be embedded in digital projects, the tool allows for rich use of sound both to build community and to explore how communities can contribute to interpretive digital humanities. Finally, the open API means SoundCloud is being extended into a variety of ancillary sound projects, making it a potentially valuable tool in the digital humanities.

If no single best practice has yet emerged from these various efforts, we can see the outlines of how we might think about curating oral history in a manner that offers a richer perspective on sound. These points of agreement might include the following:

a) recognition that oral history is fundamentally an aural experience and not just a text;
b) oral history should be evaluated for meaning at a clip or segment level, not just at the level of the 60- or 90-minute interview;
c) clips and segments should be connectable across interviews or even collections (or archives);
d) our metadata schemes, as well as our work in representing oral history to public audiences, have to account for such rich metadata (think linked open data);
e) collecting oral history is one thing; making it open and accessible is another goal that the oral history community should embrace;
f) efforts at linked open data have to account for segment- and clip-level metadata;
g) it is vital that we involve communities in processing and connecting oral histories together, much as we are learning to involve those same crowds in collecting oral interviews.
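To suggest what segment-level, linked-data-friendly metadata might look like in practice, here is a sketch of a single record. The identifiers, URIs, and vocabulary are invented for illustration and do not follow any established standard, though the `#t=120,185` fragment echoes the W3C Media Fragments convention for addressing a time span within a recording:

```python
import json

# Illustrative segment-level record: the URIs and field names are
# hypothetical, sketching how a clip (not a whole interview) could be
# addressed, described, and linked to shared subject vocabularies.
segment = {
    "@id": "https://example.org/oralhistory/interview-001#t=120,185",
    "partOf": "https://example.org/oralhistory/interview-001",
    "startTime": 120,
    "endTime": 185,
    "subjects": [
        "https://example.org/vocab/neighborhood",
        "https://example.org/vocab/migration",
    ],
    "transcript": "We moved to the near west side in 1946...",
}

record = json.dumps(segment, indent=2, sort_keys=True)
print(record)
```

Because each subject is a URI rather than a free-text label, two segments in different archives that point at the same vocabulary term become linkable, which is the payoff points c), d), and f) are driving at.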

As for the other senses, I fear that the literature is less well developed. And although I can’t imagine how the digital age will recover smell, the advent of three-dimensional printing might soon present a new tactile experience in which historical artifacts could be reprinted and re-imagined in physical space.

That said, perhaps I protest too much about the over-emphasis on visual metaphors in our digital humanities conversation. I might be a bit like Cassandra, and sort of like the musicians who organized to protest against recorded sound in the 1930s. (On the image below, and those protests, check out this dandy link from Smithsonian.com.)