Women Architects and World Fairs

Author: Emily Hynes (Page 1 of 2)

I’d Like to Thank the Academy… (or not?)

Kidding. Well, not entirely – I am grateful every day for the chance to study here. Yet, as technology and digital concerns become increasingly prevalent in everyday life and discourse, it is incredibly discouraging to see how far behind the academy in both art history and musicology is when it comes to embracing digital methods and recognizing the validity of digital scholarship. Ayers notes in "Does Digital Scholarship Have a Future?" that the academy and the path to tenure have not changed, despite huge changes in the technological world around us. This is not encouraging. Sydni Dunn echoes these concerns in "Digital Humanists: If You Want Tenure, Do Double the Work," an article in the Chronicle of Higher Education.

This is especially difficult because the scholarly community is developing online in ways it doesn't in person. Musicology and digital humanities scholars maintain very active Twitter presences, where they critique, promote, and discuss new scholarship. Not only that, but they are finding their voices in critiquing the state of academic life and the issues inherent in the structures of the academy.

More than that, I have found Twitter in particular to be a great way to gauge interest in ideas, test the waters, and announce fellowships. This community building is incredibly important for scholars. Just last week, a couple of students in a seminar I'm in spread the word about the results of a group project we had done. The project, a history-of-musicology reader, was our updated, modern approach to a historically elitist field, highlighting interventionary texts in the discourse. People responded quickly and with great excitement! One person's post had 253 likes, 93 retweets, and 70 comments, while another had over 100 likes, retweets, and comments. That is overwhelming feedback from a community of scholars, a response that rivals that of a conference presentation or paper.

The means through which we communicate with each other are changing. Actually, I’d venture to say they have already changed permanently.

Publishing has changed significantly as well, as more blogs and strictly online repositories and archives gain MLA classification and recognition. More importantly, scholars are simply using these sources more often. They get the foot traffic of left clicks. They have people's interest and (sometimes) ease of access. And scholars such as Joan Fragaszy Troyano are working to publish guides like "Discovering Scholarship on the Open Web: Communities and Methods" so that this new world of online publication and research is supported, ethically sound, reputable, and trustworthy.

But how long will it take us to reach that point? How long will we sit in acquiescence, keeping up with a world of new technology while still conforming to the monograph-based tenure approval process?

I have no idea. Because really, we can talk all we want about the issue and never actually fix the problem. I've never had a conversation about this, privately or in class, that produced viable solutions or suggestions for alleviating this incredible pressure on young scholars to be hip and new while also continuing the time-honored tradition of working alone in a dark room, driving themselves crazy writing a book they aren't sure they care about, all in order to have some job security and maybe buy a home or start a family.

To keep all this in perspective, though: there are worse things to have to deal with in order to have a job and get paid. I don't mean to complain. I'm merely throwing my opinion out into the ether while hypocritically offering no solution other than that we need one (or two, or three).

A Musicologist Trying to Crowdsource

This week, as we tackle crowdsourcing in digital art history projects, I'm still at a loss for how the crowd mentality can work in musicology. Since most academics in musicology who need transcriptions either do them themselves or pay someone else, I'm unfamiliar with cases of widespread public textual transcription for musicological purposes. Moreover, musical transcription is something all musicologists should be trained in, and the transcription of aural events into written notation (while problematic) is usually done by the researcher, since it isn't usually done for large-scale projects.

Crowdsourcing sheet music, I believe, has been done before, for example through the Sheet Music Consortium, though I'm not sure quite how that relates to art history. Certainly, the crowdsourcing of songs themselves has long been part of ethnomusicology, with ethnographic research practices originating in anthropology. So crowdsourcing in ethnomusicology does exist, though, as in many other crowdsourcing projects, the people helping aren't always properly credited. Still, there may be no musicological equivalent to the crowdsourced transcription projects found in digital art history.

The issues and opportunities of crowdsourcing are more prevalent in DAH (digital art history) projects, I think. The Tate and Smithsonian museums have already implemented crowdsourcing as part of their projects, much of it in the form of audiences contributing photos or objects and doing transcription. This can be a way to circumvent issues of hegemony in museum display work: broader audiences are represented and reached through information curated in part by their peers. There are issues here, however. Crediting contributors is important, and it is difficult to make sure that people with expertise, whether or not they hold degrees in these fields, have a chance to contribute and organize their work in a way that is useful for their CVs. While I don't like the idea of experts being possessive over their work as a form of elitism, I do support valuing experts for the work they do and the unique experience and perspective they bring. The notion that experts are the only ones who can contribute valuable and informed information, though, seems incredibly exclusionary to me. Embracing crowdsourcing as a means not only of getting interested people involved but also of incorporating a multiplicity of perspectives strikes me as incredibly valuable.

This segues into what I think is a more practical form of crowdsourcing for musicology projects – edit-a-thons for Wikipedia or other public online knowledge sites. Wikipedia is the best example I have of this, as it has become ubiquitous for nearly everyone who uses the internet regularly. Wikipedia also has a couple of ways in which it tries to get information from experts onto its pages. Contributing through GLAMwiki or edit-a-thons is a good way for academics to get involved. After all, if professors know their students will go to the wiki page, they might as well take advantage of the fact that they can help shape what's on those pages!

While the art library at UNC hosts edit-a-thons, I haven't seen such opportunities in the music library, though I'm sure they could be very exciting for those who love music! GLAMwiki is also a great opportunity for scholars and institutions to propose and execute projects that contribute to public knowledge of a subject. The GLAMwiki initiative – GLAM being an acronym for "galleries, libraries, archives, and museums" – is a place for "cultural professionals" and "Wikimedians" to contribute published research, images, artworks, biographies, video and audio archival objects, and bibliographic references. GLAM events include edit-a-thons and other gatherings where contributors are encouraged to participate in curating this online information.

Since wiki bios have to be about people notable enough to merit one, I would love to see an edit-a-thon for female composers that includes uploading plenty of archival proof of their achievements, so that their work is highlighted and used as evidence of their notability.

Overall, I think that I need to brainstorm more ways in which crowdsourced transcription will either help or hurt musicological research.

Turns Out, Networks Aren’t Just for Wifi

This week, I write of nodes, edges, centrality, bimodality, and more – concepts that are new to me and, to be honest, particularly confusing. So take my analysis with a teaspoon of salt. My previous experience with networks includes "what's your wifi network password?" and "social networking is a strange form of delicious evil," so needless to say, I'm a newbie. That doesn't mean, though, that I can't find ways to engage with network theory and digital humanities projects, so hold on to your hats, folks: it's going to be a bumpy ride with no in-flight wifi.

To start, I'll cover some key concepts – the first of which is the node. In networks, nodes are points of information, usually shown connecting to one another. Nodes with more connections have higher centrality (at least in the simple, degree-based sense): the more often a node appears linked to others in the data, the more central it is. Higher centrality does not necessarily mean a node is more important, though – just that it has more connections to other nodes.

The connections between nodes are called edges. Contrary to what I thought, edges aren't just lines – they themselves carry data. For example, if you were connecting Tolstoy (node type 1 – author) to War and Peace (node type 2 – book), the edge would mean "is author of." But if you were connecting Audrey Hepburn (node type 1 – actor) with War and Peace, the film (node type 2 – film), the edge would mean "stars in." Connecting two types of information – stars and movies, authors and books, and so on – makes a network bimodal, the "bi" prefix signaling two kinds of nodes. This understanding of networks as many interrelated pieces is new to me, and it helps me see them as living organisms rather than confusing, static visuals.
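
To make this concrete, here is a minimal sketch in Python using the networkx library (my own choice of tool, not something from class) that reproduces the Tolstoy and Audrey Hepburn examples above, storing the relation labels on the edges themselves.

import networkx as nx

G = nx.Graph()

# Two kinds of nodes: people and works (a bimodal network).
G.add_node("Tolstoy", kind="author")
G.add_node("Audrey Hepburn", kind="actor")
G.add_node("War and Peace (novel)", kind="book")
G.add_node("Anna Karenina (novel)", kind="book")
G.add_node("War and Peace (1956 film)", kind="film")

# Edges carry their own data: the relation between the two nodes.
G.add_edge("Tolstoy", "War and Peace (novel)", relation="is author of")
G.add_edge("Tolstoy", "Anna Karenina (novel)", relation="is author of")
G.add_edge("Audrey Hepburn", "War and Peace (1956 film)", relation="stars in")

# Degree centrality just counts connections (normalized), so the node
# with the most edges scores highest.
print(nx.degree_centrality(G))

Tolstoy comes out with the highest degree centrality simply because he has the most connections, which echoes the point above: centrality measures connectedness, not importance.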

A challenge with these visuals is embracing their living aspect without sacrificing complexity – a sacrifice the very nature of nodes tends to demand, as Weingart notes in his article "Demystifying Networks." I recommend it to others in the humanities grappling with the uncertain nature of our data and the certainty that forms of analysis like these require.

This week, as I worked with both Palladio and Gephi, my concerns about relaying uncertainty and my desire to capture the action colored my interactions with both the software and the networking examples I analyzed.

Palladio is certainly fun to use. It creates connections in a similar way to Tableau, but its mapping, timeline, and graph features are helpful, and it is more intuitive than Tableau, which also looks a little less refined. You cannot recolor or resize nodes in Palladio, which is a feature I'd have liked to see. You can download the results of your network analysis, but what you are downloading is essentially a screenshot of part of your network. Since you don't need an account, it is user friendly at first, access-wise, but you cannot save your work for later, which is a considerable drawback, in my opinion.

Gephi is far more complicated than Palladio, but it gives you more control. To begin, open Gephi, open a graph file, and get to work! When you make the graph, you'll have circles and lines, which can be given different colors. These nodes and edges are movable, depending on the layout and the tightness you give the overall network. It is decidedly not user friendly – to change the appearance, for example, you must click through Partition, Modularity Class, palettes, generate a new palette, and on from there – and none of those labels are terms I, as a non-expert, would associate with changing pink to blue. Not that I could do that anyway – the color options do not offer a wide variety.

Gephi, as I have said, is not intuitive, but it is great for editing specifics. You can spread things out more in the Layout panel – the Fruchterman-Reingold layout arranges the network into a pretty ring and spreads it out, which is actually quite beautiful. But getting your data into Gephi is a whole process and very complicated, so if you want your pretty data ring, be prepared to work for it.
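
For anyone curious what that data-preparation step can look like, here is a minimal sketch in Python that writes an edge list as a CSV with Source and Target columns, the kind of spreadsheet Gephi's import wizard recognizes; the composer-and-work pairs are only illustrative examples, not data from a real project.

import csv

# Invented example edges: composer -> work.
edges = [
    ("Clara Schumann", "Piano Trio in G minor"),
    ("Clara Schumann", "Three Romances for Violin and Piano"),
    ("Fanny Hensel", "Das Jahr"),
]

with open("edges.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["Source", "Target"])  # column headers for an edge list
    writer.writerows(edges)

From there, the import wizard should turn the rows into nodes and edges for you.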

The one redeeming feature of Gephi's difficult interface is the online user manual, which contains a lot of very specific information useful to anyone hoping to learn the software. You can save Gephi projects, which is a step up from Palladio, but to present at a conference you'd likely need to screenshot the static network, the same process as in Palladio. Since Gephi is not web-based, it's unlikely you'd be able to embed it as an interactive network within a presentation.

I Dream of 3D

Now, I must admit: I am very excited about 3D modeling. As a person who loves 3D printing, I was thrilled this week to dip a toe into the process of scanning a real object, making it digital, and setting it up with the potential to be 3D printed! My dreams can become a reality! As a musicologist, I can't immediately find many ways this is useful, since 3D-printed instruments tend to work questionably at best and I don't know of many other solid objects closely tied to musicology. For art history, however, I can see so many ways in which this could be useful! As a method of complicating discussions surrounding aesthetics and historical reproduction, 3D scanning and modeling can certainly act as a research tool.

Briefly, though, I would like to describe my experience using the scanning software. The program itself is fairly intuitive, which pleasantly surprised me. After our earlier work on networks exposed us to some complicated software interfaces, my hopes weren't high. But the software grays out the sections you don't need at the beginning, which I found very helpful. I'm easily overwhelmed by tons of options and buttons with no clear purpose, so this felt intuitive to me. Uploading the images of the side of Person Hall on the UNC campus was straightforward, and the software walks you through uploading the images, creating a preliminary set of points, expanding that into a denser point cloud and mesh, and finally creating the 3D model in rudimentary and then more advanced detail. Even putting the texture back on the model was relatively easy! Most of these steps took just a few clicks, and I'm very pleased with the finished product. Once a project is finished, it can be exported to PDF for presentation, which I also found helpful.

The Agisoft Photoscan software worked all right for my own personal project, but not as well as the demo we did in class. I had a really difficult time figuring out how to get it to recognize all of the photos, even though there was overlap. I sectioned out parts and removed them from the picture to eliminate confusing shadows that might throw off the software, but did not have much luck. Regardless of whether I used an all-white or an all-black background, the program still would not recognize about half of my pictures, at best. I spent a while trying to troubleshoot these issues, to no avail. Below is the best that I could do with the antique pencil sharpener I used as my object. I followed the directions as best I could and still wound up with a sharpener that looks more like shrapnel.

Directions:

  • No wide or fisheye lenses, JPG is fine
  • Send photos in original resolution to your phone
  • Avoid smooth, shiny, mirror, or transparent objects – hair too
  • Avoid unwanted foregrounds
  • Avoid moving objects within the scene
  • Avoid flat objects or scenes. Don’t edit the photos
  • Take as many pictures as you can. Try doing entire circle high, straight on, and low
  • Object should take up entire area possible
  • Overcast days are great for this – if you can’t get soft light, get golden hour. No flash
  • Keep camera straight on for the object, which means you do the moving
Sharpener model (download)

I'm interested to learn the next steps that take it from 3D model to printed object – from what I understand, there are quite a few more steps to get it to that stage in MeshLab or other software. That in and of itself is daunting, but I'm very excited and up to the challenge. My biggest concern for 3D modeling is understanding the implications of 3D images as research and pedagogical tools. I'm so excited to use and create with this software, and I feel like my enthusiasm for creating is crowding out my ability to be critical of 3D modeling at a higher level. I think that will come with more time and experience!

Hold Me Closer, Time Mapper

I will start off by saying that I love digital mapping. I love the coalescence of text, media, geolocation, history, images, and prose to tell stories and relay histories to a wider audience in a more interesting package than a research paper. Our explorations this week were a great chance to delve into another kind of mapping with TimeMapper, a tool closely related to KnightLab's TimelineJS. While I embrace this exciting and relatively painless way to quickly throw information into the interactive, digital ether, I really do wish the software would embrace me back and let me personalize more of the features it has to offer.

The idea of TimelineJS is intriguing enough – timelines without the mapping aspect, that is. If we see timelines as Michael Goodchild does – as the potential for mapping lifespans – timelines become something much more relevant to the real, lived experiences of people. Our place in time and the chronology of history's unfolding is integral to the human experience. Part of the reason I like the TimeMapper application, though, is that it incorporates not only the chronological aspects of existence but also the location of experience. As much as our sense of place in time matters to living, our actual place in space carries much cultural and social meaning as well.

The thing about timelines is that if they're not intuitive, the reader can easily get stuck on a snapshot, a single point of the timeline, with little sense of where the timeline goes or ends. The great thing about adding mapping capabilities is that you can often see all of the points of the timeline not arranged in a linear fashion, which can be difficult to navigate on screen, but clustered by location, which adds a layer of understanding beyond the purely chronological.

Below is my TimeMapper, a map of about 23 points detailing some gulag music research I'd done a while back. I read memoirs and diaries of former Gulag prisoners and pulled out names (when I could) of people participating in musical events in the camps. I then added geolocation and prose to the names to create this timeline. The difficult thing about putting this information into TimeMapper is that I'm not sure the timeline tells a new story or is an effective way of presenting the information. The timeline doesn't let me easily show categories of musicking or types of people, or visually group people in any way, because it is, after all, a timeline. In the past, I've been interested in finding common themes across these people's stories. I hadn't thought before about whether the chronology of all of their experiences made a difference, however, and this timeline does make me consider it. I'm just not sure someone else would find it useful for informational purposes, as I think they'd get bored fairly quickly.

Altogether, I think the Timeline feature could be useful depending on the project, and can help us approach our research data from new perspectives that may or may not be obvious.

Tableau, end of “Digital Art History,” Act I sc. ii

One of the things I like most is when a data visualization tool encourages play; the more intuitive, the better! If all the world's a stage, with our parts, entrances, and exits, should "digital humanities" not, by the very nature of its name, promote digital methods through which we see the complicated storylines, character arcs, and acting methods? I believe the best ones do. There are, however, static aspects of a theatrical play, such as the tableau: a still frame, easily photographed, visually pleasing, and in some cases telling of the story. The Tableau software we worked with this week accomplishes just that, but for data sets. And while I believe Tableau does well enough at accomplishing its purpose (a still visualization of a complex, real-life system), we should not get too bogged down in analyzing it. If we do, we run the risk of losing sight of the intricacies of the human system – or stage, in our metaphor – the nervousness, the adrenaline, the facade, the spectacle, the introspection, the emotion… the part that makes it as much a living, human thing as anything else. The part "the humanities" clings to, quite desperately.

Now, I don't mean for this to be a heavy critique of Tableau, griping about what it doesn't do; I would rather engage with it on its own terms. It is more the attitude surrounding digital humanities critique that I would like to, well, critique. To start our show, though, let's just walk through Tableau.

Tableau

Map of Ballads Collected by John and Ruby Lomax on their 1939 Field Recording Trip

My experience with Tableau was not remarkable compared to other data analysis methods I have used in the past. I am impressed with the ease of operations in Tableau and the ability to switch between different modes of visualization, such as polygon map, point map, and basic chart. I was also impressed with the options for demographic map layers supplied by Tableau – I used one above, called "Percentage of Population, Black/African American." This was an interesting layer to put under my layer of pin drops showing the frequency of "negro ballads" recorded by John and Ruby Terrill Lomax on their 1939 Southern Field Recording Trip. Notably, the layer helps show that, in their search for "negro ballads," the Lomaxes left out a state with a considerably sized black population. The only issue with these Tableau-supplied layers is that they don't provide the source or the specific numbers, so it is difficult to say exactly what the statistic is or from what year it was taken. Since Georgia had a sizeable black population in 1939, I still find the visualization helpful for my purposes, but the point would need to be researched more heavily.

The Tableau maps and charts, while fairly quick to make and easy to modify, are not interactive. Since the goal of Tableau is not to make interactive maps and visualizations, I will not fault it for this; that would be unfair. I will, however, note that there are other mapping platforms, such as ArcGIS, CartoDB, and Omeka, that perform similar functions and let the user (not just the creator) play with map layers and pin-drop pop-ups. While those platforms are less intuitive in the map-making process, Tableau is great for creating maps and thinking intentionally about the data behind the charts, as opposed to batch uploads that magically produce drops on a map.

I hope that in my comparison of these mapping tools I have emphasized a form of writing most scholars are unused to – compare and contrast, with no objective "best" software that does everything. There is no software that does everything, so depending on your needs, different software will be best for you, and that doesn't mean something else isn't extremely useful to someone else. That said, there are some forms of digital tool critique that I find quite helpful and illuminating. The first concerns accessibility and readability.

Especially where data visualization is concerned, we have to keep our readers' visual literacy in mind. Nessa's "Visual Literacy in an Age of Data" offers a nice critique of the ways in which increasingly complicated means of depicting data actually create more distance between the researcher and the audience, losing readers by making the work visually inaccessible. Practical solutions include opting for bar graphs instead of pie charts; writing clean, simple text that explains even complex arguments in as much layman's terminology as possible; and omitting arbitrary visualizations.
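
To make the first of those suggestions concrete, here is a minimal sketch of a plain bar chart in Python with matplotlib (my own example, not one from Nessa's piece); the genre labels and counts are invented placeholders.

import matplotlib.pyplot as plt

# Placeholder data: number of recordings per genre.
categories = ["Ballads", "Work songs", "Hymns", "Dance tunes"]
counts = [34, 21, 12, 8]

fig, ax = plt.subplots()
ax.bar(categories, counts, color="steelblue")
ax.set_ylabel("Number of recordings")
ax.set_title("Recordings by genre (placeholder data)")
plt.tight_layout()
plt.show()

A reader can compare bar heights at a glance, which is exactly the kind of low-friction legibility the article argues for.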

We also have to be careful about the way we frame the possibilities and practicalities of digital tools. The Physics arXiv Blog's article "When a Machine Learning Algorithm Studied Fine Art Paintings, It Saw Things Art Historians Had Never Noticed" makes it sound as if the software is capable of things human art historians were unable to do – this is untrue. The algorithm made one connection in particular between two seemingly unrelated artworks that hadn't been made before (presumably whoever verified this has read all of the available literature on these artworks and can therefore confidently say the connection is new). The title of the post is misleading, as even the article itself says the AI could in no way replace a real, human art historian. The flashy headline, meant to catch readers' attention for a DH project, ended up negatively affecting the overall tone of the article and came across as off-putting to every art historian I've discussed it with.

“Well, well well, how the turntables…” (A humanities scholar cleaning big data)

As someone who has catalogued thousands of music performances, I find that big data is no mystery to me. Neither is data mining or tidying data. This week, however, I not only learned some new tools and tricks of the trade but was also able to further my musicological case for the importance of big data in humanities work. As I explored these (to me) new methodologies for data mining, in both a theoretical and a practical sense, I strengthened my ability to argue for the usefulness of these seemingly disparate methods.

The debates over the usefulness of data mining in the humanities are ubiquitous in the work of those who conduct it – if an article concerning digital humanities and art history, musicology, or any other humanities field is published, it is often polemical, reacting to or pushing away from the idea that the digital and the human are mutually exclusive. This binary view of humanities and technology, setting the emotion and experience of the human against the binary-coded computer, has been disputed for nearly fifty years: Jules Prown argued for the usefulness of computational analysis in art history as far back as 1966.

The troubling human/technological binary is too much to unpack here, but it is the elephant in the room as soon as someone mentions digital humanities. It often leads to questions like "so what?", "where's the buck?", or "how does this do anything more than what we can already do in prose?" I will address these questions below by discussing a couple of digital humanities tools.

Text Mining

Now, text mining might sound intimidating, but it is useful for both research and educational purposes. Say, for example, your work hinges on the fact that someone was the first to coin a term, or that a term doesn't appear until a certain time. Or you want to confirm that a word has, or once had, a specific usage different from what is assumed. How can you possibly know this information and be fairly certain of its accuracy? While text mining through tools like the Google Ngram Viewer and Voyant is by no means 100% accurate, it is at least a step toward discovery and toward validating or disqualifying claims.

Using tools like the Google Ngram Viewer and Voyant, you can enter a word or group of words, and the software will show the prevalence of that word in charts generated after the algorithm searches through every OCR'd word in its arsenal – millions of books, more than a person could read in ten lifetimes. Still, just because a word shows up on the usage chart doesn't mean the method isn't problematic. To say nothing of its neglect of oral history and its language barriers, it's extremely important to acknowledge that context is key here: is a book using a term as part of its own vocabulary? Or saying something outdated? Is it written at a time when the term is in colloquial use? Or under scrutiny? All of these contexts are essential to consider. A book with a talking dog as its main character will have a different context from an analysis of Pavlov's dog, but both will appear in the search.

If you control the variables enough, however, this can be an immensely helpful research and educational tool. Perhaps you want to use it the way the DataBasic site does, where text mining a spreadsheet can generate network maps. It can also be helpful for personal use: you can search through any OCR'd research materials you have for keywords or phrases in common, or, very simply, find terms and quotes you remember vaguely but cannot seem to locate on the page. Control/Command + "F" in an OCR'd PDF is just as much text mining as anything else. Overall, text mining can be helpful, with the potential to corroborate or challenge research, lead to new questions, and act as a research and educational tool.
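
To show how low-tech this do-it-yourself end of text mining can be, here is a minimal sketch in Python that counts occurrences of a single term across a folder of already-OCR'd plain-text files; the folder name and search term are placeholders of my own, and this is not how the Ngram Viewer or Voyant work internally.

import re
from collections import Counter
from pathlib import Path

term = "ballad"   # placeholder search term, matched as a whole word
counts = Counter()

# "corpus" is a hypothetical folder of plain-text versions of your OCR'd materials.
for path in Path("corpus").glob("*.txt"):
    text = path.read_text(encoding="utf-8", errors="ignore").lower()
    counts[path.name] = len(re.findall(rf"\b{re.escape(term)}\b", text))

for filename, n in counts.most_common():
    print(f"{filename}: {n}")

The same caveat about context applies, of course: a raw count tells you where to look, not what the word is doing there.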

Data Analysis and Display (Charts, oh my!)

Most forms of research are no stranger to charts. Even musicological work can include music-theory form charts and conceptualization charts. These help readers take in ideas and see trends in data – but they are also often given a lot of trust by uncritical readers, so we must be careful about how we present data and enter it into a chart.

Tidy Data

The cleaner the data, the better the chart. Knowing how mapping software will find geolocations can help you format your Excel sheets, for example, and knowing what fields you want to display helps you chart your course through the organization of those sheets. In other words, knowing something about where you're going helps you keep your data as clean as possible. It is unfortunate to get a third of the way through entering spreadsheet data and realize you forgot a column for the year of each piece, now that you've decided you want to map the pieces chronologically. It's all in the details.
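
As an illustration, here is a minimal sketch in Python of tidy rows, one per piece, with the year and the coordinates each in their own column so nothing has to be re-entered later; the column names and values are invented placeholders of mine, not a schema required by any particular mapping tool.

import csv

rows = [
    {"title": "Example piece A", "year": 1892, "city": "Vienna",  "lat": 48.2082, "lon": 16.3738},
    {"title": "Example piece B", "year": 1905, "city": "Leipzig", "lat": 51.3397, "lon": 12.3731},
]

with open("pieces.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["title", "year", "city", "lat", "lon"])
    writer.writeheader()   # one header row, one observation per row below it
    writer.writerows(rows)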

Graphs and charts, whether pie charts or flow charts, bar graphs or scatter plots, can make statistical arguments that corroborate claims. In an often anecdotal field like musicology or art history, it is sometimes easier to make a claim that applies to many things if you can show that it does, in fact, apply just by looking at the numbers. This corroboration of specific stories through big data is legitimizing in many contexts – it does not have to make a new argument or create some breakthrough to be relevant, which is often what the people who say "so what?" demand. Odds are, if you're saying "so what," you aren't thinking creatively enough.

How are new theories created? How are new methodologies established? It takes a great deal of creative thinking to connect disparate themes, to tackle binaries, to legitimize something that once seemed impossible. We will not expand or contribute to our fields by working only with what is comfortable and obviously relevant. I believe we should lean into these moments of seeming irrelevance to discover something truly original, something outside the box that destroys the box altogether. In embracing our disinterest and discomfort, we may fail, but we may also discover greatly.

A Map has no name. A Map knows many faces.

Before our map has a name and a detailed display, what exactly is it? What is its core? Now, we don't need to be a stealthy assassin like Arya Stark to appreciate the many faces we can put on our maps to present them to the world the way we want, and we don't need to go to Braavos to learn these skills. This week, we venture into an exciting world of . . . knowns. Fairly charted territory. Nevertheless, our ventures into mapping nearly always feel exciting, since digital mapping combines visual and spatial ideas with the relatively easy-to-use internet. That combination gives us the instant gratification of creating something which, before digital technology, would have taken far longer to produce by hand or with pre-digital computing processes.

There are several ways we can approach digital mapping: as cartographic maps, with any combination of prose, pin drops, chronology, images, aggregated data, and other media; as conceptual maps, which display ideas, theories, and connections between other ideas and things; or as narrative maps, which tell stories with any amount of extra information from the two other kinds mentioned above. All of these contain a certain amount of data, and all analyze it in some way, through a recipe of aggregation, explanation, and presentation. The key difference between them is what lies at their core, behind the face we give it.

I have enjoyed mapping in both cartographic and narrative contexts. This week, I mainly stuck to the StoryMapJS platform (not to be confused with ArcGIS StoryMaps) from Northwestern University's KnightLab. While it's possible to make geolocated maps in StoryMapJS, I opted to try something new – a gigapixel image map. The work I chose to showcase, Thomas Wilmer Dewing's The Spinet, acted as the core of the StoryMap. In a gigapixel image map, your narrative can move through points on the image and superzoom into those spots. This is especially helpful for those in art history who want to illustrate small parts of paintings, like details, brushwork, and flaws. My StoryMap walks the viewer through the painting with prose and visual additions throughout to assist in the exploration of the work.

The StoryMap process, while not intuitive at first, is well explained on the KnightLab website, and once I had the hang of it, I could easily edit the entire project by choosing a different image to use as the base of the map (more on that later). Once you have an image (try to find the biggest one you can – I got mine from the Smithsonian Institution), you can edit it in Photoshop and export it to Zoomify; once you do this, zip the file and upload it to your file server so that you have a URL to paste into the "gigapixel url" box when prompted. This does require access to Photoshop and a file server, and both tend to cost money, but the good news is that they are useful for far more than just StoryMaps. More tools in the toolbox, as it were.

The biggest struggle I had when creating this StoryMap was actually not an issue with the software but with finding a good image to use as the base of my map. I went through three images (below) before finding one that didn't seem to have drastically different coloring from the original. Two of them were even from the Smithsonian!

The Final Image I chose, via Smithsonian Institution.

Via Wikimedia Commons
Via Smithsonian Institution

Technical aspects aside, the broad scope of digital mapping possibilities, with so many different combinations of cores, displays, and names, provides us with endless opportunities to present research and data. So, what will you do? Create a map of the 19th-century London art market? Explore how the environment of a painting – a landscape or a busy street, for example – tells us about its subject? Or is the landscape the core? Will you map Digital Harlem? These projects have already been done, BUT what if we were to take the same content, the same core, and display it differently, call it by a new name, work the data to our liking to make a different argument? Is that not what scholars do all the time with information from books and archival material? I am merely playing devil's advocate here, as the issue is clearly more complex than this, and questions of intellectual property, copyright, fair use, and more are at play in decisions like these. But it is interesting to see how the many names and faces we can give our work, and the many cores they mask, create new arguments, new moments, new explorations in time and space.

Practicalities of Becoming a DH Influencer

When an Instagram influencer posts a mirror pic, a reverse slow-motion video, or a digitally aged photo made with a special app, followers take notice, and both the traffic to that influencer's page and the use of those digital tools (mostly apps) skyrocket. I'm interested in the ways our digital humanities project pages are similar to Instagram pages in this respect – so this week we venture into the exciting world of adding content to a website, and of creating truly jazzy exhibits with some simple, practical tools that work fairly intuitively and work well. In this post, I'll go through some fun tools we can all use to create exciting online presences, and then follow with some "DH influencers" who have created excellent content of their own.

Adding your own captions and annotations to videos

Ahh, the enviable ability to simply add text to videos without using iMovie, Adobe Premiere, or some other editing software… this in and of itself is a tricky business. However, if you are looking for a free way to do it with relative ease, YouTube's YouTube Studio (still in beta) provides a way to caption and annotate videos. Because the Studio is still in beta, a few aspects are not intuitive – captions sit under an odd part of the "Transcriptions" tab, and going from the Studio to the editor can take some finessing. The platform, however, serves its purpose. Another alternative is Timelinely.

Chuck Tryon, a professor of English at Fayetteville State University, has written a column for the Chronicle of Higher Education that outlines the platform SocialBook for editing and annotating videos for educational purposes. Like ThingLink (which I discuss below), this platform seems beneficial to students and is geared toward teachers encouraging student use. It does let users connect their social media accounts, like Twitter and Facebook, which could be either a draw or a deterrent for some students.

My personal favorite is ThingLink, which allows for the annotation of video and still images, along with other media, largely for educational purposes.

ThingLink: Educational Tools and Snazzy Exhibit Pieces

The above tour from ThingLink is an example of how you can make an annotated and tagged gallery-like tour through selected works and media. You can include websites, audio, and text in your annotated exhibit, and the exhibit itself is easily embedded in your project page (like the one above). This was quick and easy to make and serves my needs quite nicely – I will definitely be returning to ThingLink in the future and look forward to incorporating video and audio from my own work in musicology into narrative exhibits.

Other DH display tools

Now, there are plenty of other things people include in their DH displays to show research, evidence, and methods. For example, those working with oral histories might include video or audio of interviews with transcripts. One convenient way to do this is to use the Oral History Metadata Synchronizer (OHMS) from the University of Kentucky Libraries. This downloadable software lets you divide a transcript of your interview into sections and make it searchable, which is a huge benefit to people trying to engage with interviews. With OHMS, you can also have timestamps along the side of the scrollable transcript, which makes it possible to reference them directly in analysis. And as far as displaying the oral histories we research, the Archives of American Art has a very compelling presentation of oral histories, laid out attractively in an interface that is easy to read and navigate. It can serve as an exemplary "influencer" in oral history presentation.

Another tool to have in your DH toolbox is not necessarily one your audience will see explicitly, but it can help you curate exhibits with proper credit and assist the research displayed on your site. That tool is reverse image search – most people already know how to do this, but it is worth covering a couple of ways to go about it. The first, and most obvious, is Google's reverse image search, which is great for finding 3D objects and artworks in particular. The second is TinEye, which is not quite as effective as Google Images and is far less effective when searching for images of 3D objects than for images of paintings and other flat works. Still, it may be worth a look if you are struggling to identify an image you have no information about.
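
For those who want to experiment locally, here is a minimal sketch in Python using the Pillow and imagehash libraries to flag visually similar images with perceptual hashes. It is a rough, small-scale analogue to a service like TinEye's MatchEngine, not a description of how that service actually works, and the file and folder names are placeholders.

from pathlib import Path
from PIL import Image
import imagehash

# Hash the image you are trying to identify.
query_hash = imagehash.average_hash(Image.open("query.jpg"))

# "photo_archive" is a hypothetical folder of candidate images.
for path in Path("photo_archive").glob("*.jpg"):
    distance = query_hash - imagehash.average_hash(Image.open(path))
    if distance <= 8:   # smaller Hamming distance = more visually similar
        print(f"Possible match: {path.name} (distance {distance})")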

Reverse image lookup has also been used in revolutionary ways. John Resig's article "Using Computer Vision to Increase the Research Potential of Photo Archives" outlines how he and the Frick Art Reference Library "utilized TinEye's MatchEngine image similarity service and developed software to analyze images of anonymous Italian art in their photo archive." He goes on to say:

“The result was extremely exciting: it was able to automatically find similar images which weren’t previously known and confirm existing relationships. Analysis of some of the limitations of image similarity technology was also conducted.”

John Resig (hyperlinks in the original)

Resig does note what kinds of results TinEye brought, however, and some of the results are a mixed bag. The breakdown is below:

Clearly, the technology is not error-free, or anywhere close to as finessed as the human eye; however, it is useful and can bring about very satisfying results!

DH Influencers to Follow (not by any means a comprehensive list)

I would be remiss not to mention other DH scholars and initiatives that have helped spearhead trends and display methods in DH digital exhibits. Michigan State University, the Institute of Museum and Library Services, and the Library of Congress have partnered with other organizations to create the Oral History in the Digital Age website, which is easy to navigate and includes a great "Best Practices" page that explains not only their methods but also their suggestions for others using their resources. This clear list is extremely helpful, both in its organization and in its content.

The practice of cultural analytics – using computation to analyze and display "massive cultural data sets and flows" – can also be useful. The Software Studies Initiative (now housed on a new website under Cultural Analytics) has published an online article on the possibilities of this approach, which proves to be a very promising, and at the very least visually stunning, way of presenting data. Below are some examples of these visualizations of big cultural data:

Visualization of one Million Manga Pages

4,535 Time Magazine Covers

These visualizations, while overwhelming at times, also offer new opportunities for researchers:

“- we are offering a new way for both museum visitors (both online and physical, if we have installation in a museum) to connect to the collections;
– visualizations which show all collection organized by different criteria complement currently dominant search paradigm;
– visitors can discover patterns across all of museum holdings or particular collections – actively making new discoveries themselves as opposed to only being recipients of expert knowledge;
– visitors can discover related images using variety of criteria;
– visitors can discover images by other artists similar to their already favorite works;
– visitors can navigate through collections in many additional ways (in contrast to a physical installation allows only one way to go through the exhibits);
– our techniques are scalable – from large super high resolution displays to desktops to tablets and mobile phones.”

Software Studies Initiative Page (no author credited) 01/2014

If you are interested in working with big data, cultural analytics just might be the way to combine your interests in tech and the humanities – more information about this kind of research and display can be found at culturalanalytics.org, home of the Journal of Cultural Analytics.

Overview

Overall, there are many tools we can use to become a little more tech-savvy, a little more trend-setting, a little more forward-thinking in our digital displays. But if we want to be DH influencers, we need to go above and beyond and seek out new ways of displaying our data and our research, so that we can show just how much DH can do for presenting research and educating students.

