

Digital Art History & Crowdsourcing: A Look at Art+Feminism Wikipedia Edit-a-Thons

Throughout the semester, one of the trends that has continuously arisen in our discussions is the idea of collaboration. One of the central tenets of scholarly production in the humanities is work authored by a single scholar. Unlike the sciences, where multiple people (scholars, graduate student assistants, lab assistants, and so on) are expected to contribute to the final publication of a research article, the humanities disciplines expect one singular author to produce the entire work. Integral to the 'gold standard' of the scholarly monograph is the idea that a single author wrote it. That is why many scholars in the humanities were skeptical when thinking about the transition to digital art history: how would projects open up in this collaborative manner? This focus on the single-authored work has often meant that those who contributed to a digital project, including librarians, IT specialists, and graduate students, went unrecognized for their labor.

This tight hold on the single-authored monograph has loosened a bit, though certainly not completely. The ‘gold standard’ Digital Art History article that we have talked about throughout this semester is, of course, “Local/Global: Mapping Nineteenth-Century London’s Art Market” by Pamela Fletcher and Anne Helmreich. Not only are there two authors working together to produce this article, but they also acknowledge the work of those who helped create the tools that they used to develop the article. But what happens when collaboration goes beyond the work of multiple scholars or other individuals within the ‘Academy’? What happens when the collaboration brings in the public?

This question was the topic of our discussions this week. Mass collaboration with the public, or crowdsourcing as it is called, is a feature of the digital world that has been growing for the past few years. In the article "Digital Humanities and Crowdsourcing: An Exploration," the authors offer a thoughtful definition of the term:

“Crowdsourcing is a type of participative online activity in which an individual, an institution, a non-profit organization, or company proposes to a group of individuals of varying knowledge, heterogeneity, and number, via a flexible open call, the voluntary undertaking of a task.”

L. Carletti, G. Giannachi, D. Price, and D. McAuley, "Digital Humanities and Crowdsourcing: An Exploration," in MW2013: Museums and the Web 2013, April 17–20, 2013.

Within this definition, crowdsourcing can span a variety of projects, both related to Digital Art History and beyond its scope. One of the more common types of crowdsourcing project deals with transcription. The New York Public Library, Tate, and the Smithsonian Institution are all examples of cultural heritage institutions that make collections of material available online so that members of the public can engage with them directly. This also allows more transcriptions to occur and (ideally) brings the materials to a wider audience.

At first glance, looking at these types of projects, the immediate answer that comes to mind (to the question: is crowdsourcing a good, positive thing?) is often yes! It engages the community, reaches a wider audience, and more work is 'getting done.' Yet the flip side to this line of thinking is: does the work of professionals (namely us, as art historians, librarians, and museum professionals) become undervalued if we open these types of projects to the public? Will administrators think that our work can be done by just anyone with a computer if we open up these gates? Most of these questions arose when thinking specifically about our discussion of crowdsourced exhibition curation, and they are all valid. Instead of focusing on the negative aspects of crowdsourcing, though, for this blog post I want to focus on one of the positive examples: the Art+Feminism Wikipedia Edit-a-Thons.


For those who don't know, Art+Feminism is an incredible non-profit organization committed to increasing diverse representation in the arts and art history. Its mission statement is as follows:

“Art+Feminism is a campaign improving coverage of gender, feminism, and the arts on Wikipedia. From coffee shops and community centers to the largest museums and universities in the world, Art+Feminism is a do-it-yourself and do-it-with-others campaign teaching people of all gender identities and expressions to edit Wikipedia.”

http://www.artandfeminism.org/

One of the aspects of their mission statement that I find particularly important is that of the people who are editing Wikipedia. Not only is the organization committed to editing or creating better representation of women artists on Wikipedia, but it is also committed to diversifying the population of those who edit. This mission stems from the fact that less than 10% of editors on Wikipedia are women. Ten percent! And the numbers only go down when you add in race and ethnicity.


At this point, many of you may be asking: okay, yes, this all sounds great, but Wikipedia? Haven't we been taught for most of our lives that Wikipedia is not a reliable source?

Well, yes and no. We still urge our students not to cite (or copy!) Wikipedia as a resource for their research papers, but how many times have we looked up a fact on Wikipedia? When was the Seven Years' War? What's the capital of Azerbaijan? Who was the twelfth president of the United States?

In an increasingly digital world, Googling someone's name is often our first step in researching their work. Admit it: we all use Wikipedia in our day-to-day lives. I even use it as a starting point for research, since each page usually has an elaborate list of bibliographic sources.

So what happens if a student can’t find someone on Wikipedia?

What happens if a student is intrigued by a femme or genderqueer artist they learned about in class and is interested in writing about them for a research paper, but when they Google the artist's name, nothing comes up? Often, that student will turn elsewhere to look for a figure who is more well-known, someone who has a Wikipedia page. Does this happen every day? Probably not. But when it does happen, it continues the cycle of underrepresentation. With every article added, edited, or improved, more underrepresented people get their voices, and work, shown to a wider audience.

Are Art+Feminism's Wikipedia Edit-a-Thons perfect? No, of course not. Just as in any other avenue of life, there are editors on Wikipedia who try to bring down those who are new to editing, or who remove pages that don't completely follow Wikipedia's 'pillars.' But overall, I think this is an excellent example of positive crowdsourcing. The results speak for themselves.


“Since 2014, over 14,000 people at more than 1,100 events around the world have participated in our edit-a-thons, resulting in the creation and improvement of more than 58,000 articles on Wikipedia. We’ve created and improved pages for artists like Tina Charlie, LaToya Ruby Frazier, Ana Mendieta, Augusta Savage, and Frances Stark.”

https://artandfeminism.org/about/

3D Modeling: Google SketchUp and Replicas in Museums

This week has been all about 3D modeling. We looked at a lot of examples of scholars recreating ancient or medieval architecture and objects. There are so many benefits to 3D modeling in those realms, but I want to focus my post on how I have used the tools and how I can envision using them in my own work as I continue to get better at them.

I want to begin this post with the only experience I had with 3D modeling prior to this class: working with Google SketchUp. I used it during a variety of internships at multiple museums as part of exhibition planning. In those internship contexts I didn't appreciate how much goes into using the program. It is easy to "hang" works in the galleries in SketchUp and to populate the architecture with works of art (you can adjust proportions and manipulate placement very easily). Because the museums already had exact models of their gallery spaces, I didn't realize how much background work goes into building the physical space that I was then putting art into. That part is the real work. Since I don't have access to most of the SketchUp files I created in those contexts, I'll show another example that I made using SketchUp for a class.

For a project in an undergraduate seminar, I was tasked with reimagining a way to engage with Confederate monuments. I used New Orleans as a case study because of how many news stories were coming out of the city on the topic at the time. I also looked at previous museum exhibitions on colonial and military histories that I felt offered relevant strategies to incorporate into this example.

Screenshot of my "exhibition" of New Orleans's Confederate monuments

After trying in vain to build a museum space myself, I ended up borrowing the architectural rendering from one of the museums I had worked at. In the gallery space shown, you can see the empty pedestals of the Beauregard equestrian statue and of the statue of Confederate President Jefferson Davis. Behind both are photographs of either the vandalized original statues or edited photographs of what the monuments could become. For example, behind the Jefferson Davis pedestal is an artist's reimagining of the statue as a monument to Angela Davis. The literal absence of the physical statues emphasizes the possibility of reimagining them, decentralizes the figures from the narratives, and instead underscores the response from the community.

Back to the point of this example, though: you can see that my use of SketchUp is pretty limited. It is easy to incorporate flat images (see the images on the walls), but I had difficulty demonstrating that objects were three-dimensional. I wanted to show that I was including the actual pedestals (not the sculptures, just the pedestals with graffiti), but since I couldn't include an actual 3D model, I simply added a box with the same image on all four sides. I consider this a low-tech solution. Remember that when you're in the actual program you can drag yourself through the space, so when you're "walking" around the center pedestal, for example, you do get some sense of what you're seeing even with just the pictures.

Let’s turn to objects…

I'd like to pivot now to a discussion of 3D scanning and the modeling of objects rather than spaces. In their article "3D Scanning and Replication for Museum and Cultural Heritage Applications," Melvin J. Wachowiak and Basiliki Vicky Karas write that "3D scanning neither replaces nor is fully comparable to photography, structural imaging such as radiography, computed tomography (CT scan), colorimetry, and other measurement techniques." There are already so many tools at museums' disposal for cataloging and recording information about their collections. Incorporating 3D modeling into this data collection is a simple next step.

Beyond keeping thorough records, I think there are a number of ways in which models and replicas that are scanned and 3D-printed can be used and incorporated into museum collections. Just one example of how replicas have been used in museums to improve visitor experience is their use in allowing visually-impaired visitors to interact with the art by actually touching the recreation. The Smithsonian Magazine has a great article on this. In that article, David Hewitt writes, “The solution, the curators concluded, was not simply offering audio or braille guides, but to create elaborate 3-D replicas of key works, which visitors could touch.” 3D modeling allows curators to go beyond what can be conceived of as traditional solutions to allow for greater accessibility to collections by visitors who would otherwise be left out of traditional art museum contexts.

In addition, 3D modeling and scanning can be used in object repatriation cases and in the study of indigenous art and artifacts. Again, Smithsonian Magazine has a great article on how the tool can be used in this way. The article discusses a collaboration between the museum and the Tlingit tribe of southeastern Alaska. As a fun shout-out to my university, University of North Carolina at Chapel Hill student and photogrammetry specialist Abigail Gancz was part of this project. During a conference on the topic of 3D modeling, multiple clan artifacts were digitally scanned and replicated as "insurance" for the clan against future loss. The Tlingit cited instances where important objects were lost or damaged and had to be recreated from memory. Now, with the help of this new technology, there will be thorough records that can be used to recreate these important objects.

I'd be interested to look more into how many museums have used 3D modeling as a solution for repatriation issues. By making replicas of original objects, museums that acquired items in less-than-admirable ways could keep the information in their collections while still sending the originals back to their countries or peoples of origin. I'd also be interested to see how 3D scanning and modeling would work on classical African artifacts. Many masks and sculptures are made of multiple materials and have had multiple substances applied to them over the years, so I wonder if scans could adequately capture those specificities.

Data visualization: can art historians use Excel?

This week has been all about data visualization and its ability to clarify abstract data and aid our ability to read and absorb large amounts of it. I'll admit I was skeptical when we began our workshop in this section of the course with Excel, but I am now convinced that these tools actually do have something to offer art history. It's important to note that although I associate Excel with middle school science projects and finance spreadsheets, both the information (the data) that art historians are displaying and, generally, the types of charts or visuals we are creating are quite different.

When I think about data visualization in the context of art, I think immediately of the Guerrilla Girls. I wasn't going to focus on this connection, since in my blog posts I have tended to focus on artists using digital tools rather than art historians (see my last post here), but Taylor's comment on my last post made me realize that this is actually important: artists' use of these tools will serve as an important impetus for art historians to get on board the digital art history train. Anyways, back to the Guerrilla Girls' use of infographics and data visualization. Take, for example, their "Bus Companies Are More Enlightened Than NYC Art Galleries" graphic, which shows the percentage of women in various jobs. The percentages themselves are easy to understand, but I think it is an instance where a graph may help to really show the discrepancies. Many of their charts and "report cards" have the potential to be visualized in this way as well. For now, I've taken the liberty of making a very rudimentary graph for this one graphic.

I’m definitely not providing new information or really asking any new questions with the graph of the “data” from the image, but I think it is perhaps easier to read. Having both images is redundant, but perhaps incorporating data visualizations into their infographics would be a good strategy for the Guerrilla Girls.

Let’s take a (small) step back into some theory

I think the question of "am I asking or answering any new questions" is important. In my Guerrilla Girls example, I was not, and honestly I'm struggling to think of a way that a lot of these data visualizations would pose new research questions in and of themselves. A good way to think about this conundrum is the set of questions posed by Shazna Nessa in "Visual Literacy in an Age of Data":

  • Who am I creating this for?
  • What journalistic impact should the visualization have?
  • If I opt for novel graphical/interaction styles, what guidance will I provide the audience?
  • Should I blend exploratory aspects with explanatory aspects?
  • How will I expose the story?
  • Can I add a narrative, causation information, or a news peg?
  • Although I’ve edited the data already, is there superfluous data that I can still edit out?

Although these questions aren’t necessarily specific to art history, I think they are interesting and vital to interrogating the role of visualizations in the field. I’d propose the addition of a few other questions: Is this visualization asking a new research question or answering an established one in a new way? Is the information that it is sharing already explained clearly enough in my writing and therefore is it redundant? There are so many visualization tools — charts, word maps, image charts, the list goes on– that it is tempting to include at least one in your project. You can easily make one of the visualization types work for your project, but should you? I’m not convinced that just because these tools can work for our discipline that they belong there. They seem to live squarely in the history side of the field rather than the art. To me, if we are to include graphics in our research, it seems best to include images of the objects we are exploring rather than graphics that visualize what we are saying about them.

So I tried to make a few visualizations…

And honestly, they didn't turn out too well. In class we played around with the Tate's data on the artists and artworks in its collection. This is a lot of data to handle, so we usually broke it up into more manageable groupings. For example, I tended not only to focus on the "A's" (meaning artists whose last names start with A), but even on just a small subset of those artists. First I poked around in Excel and couldn't really make any visual aids that I thought were useful enough to include here. We did make a pie chart of male versus female artists, which could be helpful. However, we had to change the data input to be able to chart this: we had to switch the word "male" or "female" in the column to a numerical datapoint that the computer could add up, which wasn't necessarily hard, but definitely took up time. Next we worked with Tableau. In some ways I found Tableau a bit more intuitive, but I still struggled with the assignment. A lot of these struggles may stem from the fact that I didn't have control over the data collection and the data set. It might have been easier had I gathered my own data and chosen the fields more carefully, so that I could structure my visualizations around a certain argument. In the end I made only a few visual aids that I thought could be useful. I managed to make the following graph, which looks at how many pieces in a certain medium various artists have in the Tate collection. Including ALL the artists in the data set was unwieldy, as was even just focusing on the A's, so in this instance I included only thirty-some artists whose last names start with A.

My attempt at a graph showing how many pieces in a certain medium are in the Tate collection by artist.

My main issue with the graph is aesthetic. The way the artists’ names appear on the top is unclear and hard to read. I could have used fewer artists to alleviate this, but then I don’t get to compare as many artists which limits the scope of my research. It is interesting to see the distribution of media in the collection, and this graph definitely does show that pretty clearly in the length of the bars, but I’m not sure it was worth the data manipulation. A simple chart or a paragraph of text could probably achieve the same result.
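For what it's worth, both of these summaries are also quick to script outside of Excel and Tableau. Below is a minimal sketch using pandas and matplotlib, assuming local copies of the Tate collection CSVs; the file and column names ("gender," "artist," "medium") are assumptions based on Tate's public data set.

```python
# A sketch of both summaries with pandas, assuming local copies of Tate's
# public collection CSVs. The file and column names ("gender", "artist",
# "medium") are assumptions based on that data set.
import pandas as pd
import matplotlib.pyplot as plt

artists = pd.read_csv("artist_data.csv")
artworks = pd.read_csv("artwork_data.csv")

# The male/female pie chart: no recoding of text labels to numbers is
# needed, because value_counts() tallies the strings directly.
artists["gender"].value_counts().plot.pie(autopct="%1.1f%%")
plt.ylabel("")
plt.show()

# The medium-by-artist counts, limited to artists whose names start
# with "A" to keep the chart legible.
a_works = artworks[artworks["artist"].str.startswith("A", na=False)]
counts = a_works.groupby(["artist", "medium"]).size().unstack(fill_value=0)
counts.plot.bar(stacked=True, figsize=(12, 6), legend=False)
plt.tight_layout()
plt.show()
```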

I want to return to those questions posed by Nessa to evaluate my graphic. Who am I creating this for? I could be creating this graphic for an acquisition committee. It could be useful for the board to see what holes there are in the collection and to determine whether another oil painting or print by a certain artist is really a necessary purchase. This visualization may be useful in that boardroom setting when making decisions, if the members don't have a firm grasp of all the items in the collection (which is nearly impossible with a collection the size of the Tate's). What journalistic impact should the visualization have? Continuing with the acquisition committee example, this graphic should demonstrate the breadth of the collection and act as a simple representation of the distribution of media and artists' works. If I opt for novel graphical/interaction styles, what guidance will I provide the audience? I think this is an important question for this particular graph. I would perhaps need to supplement it with text outlining where the pieces came from (if groups of prints were bequeathed together, for example) and when they were acquired by the museum. This historical acquisition data would be necessary to understand the graph. How will I expose the story? I would include that contextualization first and then turn to this graph to reiterate a point, rather than begin with it. This would incorporate the narrative quality raised in another of Nessa's questions. Although I've edited the data already, is there superfluous data that I can still edit out? Here I think I've edited out the superfluous data. But even if I hadn't, Tableau requires a certain number of fields to create certain graphic types, so I needed to include what I did.

Data Sutra: The Many Forms of Data Visualization

Unlike most other disciplines, especially in the humanities, art historians have one thing that unites them all: the image. There might be fights over methodologies, historiographies, interpretations, and countless other things, but underneath it all is the privileging of images. No matter the genre of art or the field of scholarship, every art historian uses images as an integral part of their work, whether in their own research, in publishing endeavors, or as pedagogical tools. That is why, when it comes to data visualizations, you would expect art historians to be on the cutting edge of these tools. Yet, once again, art historians appear to be slightly behind the curve in this aspect of the digital humanities. These ideas are best illuminated by Diane M. Zorich's presentation "The 'Art' of Digital Art History."

Zorich "consults on information management and digitization issues in cultural and educational organizations" and is perhaps best known (at least in the realm of digital art history) for her 2012 Kress Foundation report, "Transitioning to a Digital World: Art History, Its Research Centers, and Digital Scholarship," which we looked at earlier this semester. Her presentation, delivered a year after the report was published, in some ways acts as a response to it. One of the biggest takeaways, and one that I have written about in almost all of my blogs this semester, is once again the difference between digitized and digital art history, a distinction that Johanna Drucker draws in her article "Is There a Digital Art History?" In the responses that Zorich received to her report, it is clear that people within the field are still grappling with the true meaning of digital art history. One response that Zorich highlights in the presentation basically asserts that if scholars use technology, such as Google searching or library databases, they are engaging in digital art history. Yet, as Zorich reasserts from Drucker's article, simply using digital resources doesn't make you a digital art historian: the technology has to alter the way you approach your research, or even inform your research question. Zorich writes:

“I think the reason for these sentiments is that art history has been slow at adopting the computational methodologies and analytical techniques that are enabled by new technologies. And until it does so, art historians will never really be practicing digital art history in the more meaningful sense that Drucker implies. They will only be moving their current practices to a digital platform, not using the methodologies unique to this platform to expand art history in a transformational way.”

Diane M. Zorich, “The ‘Art’ of Digital Art History” (presented at The Digital World of Art History, Princeton University, June 26, 2013), https://ima.princeton.edu/pubs/2013Zorich.pdf

Afterwards, Zorich proceeds to highlight and reflect on some new computational methodologies and the ways in which they can be incorporated into digital art historical scholarship. In her presentation, Zorich includes many of the tools that we have looked at in class: Google's N-Gram Viewer, the Software Studies Initiative from Lev Manovich's Cultural Analytics Lab, Pamela Fletcher and Anne Helmreich's "Local/Global" mapping of nineteenth-century London art markets, and "Mining the Dispatch" from the University of Richmond. While not all of these are art historical projects, they all offer examples in which computational methodologies have been used in ways that could be applied to art historical projects.

One of the interesting areas Zorich highlighted, and one that caught my attention, is the potential of text mining in art historical studies. Text mining, or distant reading, was one of the first (perhaps the first?) digital humanities tools to really impact the disciplines of the humanities, yet it is an area I have largely associated with the discipline of English, and perhaps History. But, as Zorich astutely highlights in her presentation, art historians could use topic modeling as a new tool, and she presents possible corpora: the Getty Portal, journals in the discipline, the oeuvres of icons in the field (Panofsky, Gombrich, etc.), oral histories, and perhaps even images, although the technologies are not quite there yet. Personally, I would absolutely love to do some text mining of these corpora, especially the different journals in the field. While the data would most likely show what we already know (namely, that journals wrote mostly about Western white male artists), it would be interesting to find the outliers in the data, something you might not be able to find without these new technologies.
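To make the idea concrete, here is a toy sketch of topic modeling with scikit-learn. The three "documents" are invented placeholders; a real run would feed in full articles from the kinds of corpora Zorich lists.

```python
# A toy sketch of topic modeling with scikit-learn. The three "documents"
# are invented placeholders; a real run would use full articles from the
# corpora Zorich lists (journals, the Getty Portal, etc.).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "iconography of the altarpiece in quattrocento florence",
    "dealers, prices, and the art market in nineteenth century london",
    "gender and representation in surrealist photography",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Print the top five words in each discovered topic.
terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```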

But First: Coffee (and data cleanup!)

But, before we can even get to the data visualization, you have to clean up your data! We talked about this last week as well, but it is crazy how much work goes into creating and maintaining tidy data. Last year, when I was working on my SILS Master's Paper, I had a very small amount of data to work with: I was doing a content analysis of three different art history digital publishing platforms, which totaled just under fifty publications. When I went to make my visualizations, I thought it would be extremely simple, since I had used the same codes across the platforms and tried to use the same standardized language throughout my note-taking process. But I was promptly shown how wrong I was when I met with the Digital Visualization Services Librarian, Lorin Bruckner (who is absolutely amazing! You can check out her work here). Simply using different capitalization (i.e., "male" versus "Male") would create utterly new categories in any type of chart I tried to create. Having that opportunity, especially with a dataset that was relatively small and easily fixed, was a great experience early in my 'career' (if we can call it that), as it made me realize how important having a clear idea of tidy data at the beginning of a project is to its success, especially when you publish the data or try to create visualizations from it.
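That capitalization problem is also a nice illustration of how little code this kind of cleanup can take once you spot it. A minimal sketch with pandas, using invented values:

```python
# A minimal sketch of the capitalization problem: "male" and "Male" chart
# as separate categories until the strings are normalized. Values invented.
import pandas as pd

df = pd.DataFrame({"gender": ["male", "Male", " MALE", "female", "Female"]})
print(df["gender"].value_counts())  # five "categories," one per spelling

# Normalizing case and stray whitespace collapses them into two.
df["gender"] = df["gender"].str.strip().str.lower()
print(df["gender"].value_counts())  # male: 3, female: 2
```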

Show Me the Images!

As this was a blog post about data visualization, it would be pretty sad if I didn’t offer some images!

Word cloud of "Ornament and Crime," created at TagCrowd.com

This first visualization is from TagCrowd, which lets you create "word clouds" showing the frequency of certain words in a text. The one above is from Adolf Loos's lecture-turned-article "Ornament and Crime," published in 1908. While some words do not surprise me (ornament, man, modern, produced, culture, decoration), I was surprised by Beethoven, child, and food (perhaps a reminder that I need to read the essay again for my thesis...)
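The counting behind a word cloud is simple enough to sketch in a few lines of Python, assuming a plain-text copy of the essay; the filename and the (trimmed) stop-word list are placeholders.

```python
# A sketch of the frequency counting behind a word cloud, assuming a
# plain-text copy of the essay. The filename and the trimmed stop-word
# list are placeholders.
import re
from collections import Counter

text = open("ornament_and_crime.txt", encoding="utf-8").read()

words = re.findall(r"[a-z]+", text.lower())
stopwords = {"the", "and", "that", "this", "with", "for", "are", "was"}
freq = Counter(w for w in words if len(w) > 3 and w not in stopwords)

for word, count in freq.most_common(20):
    print(f"{word}: {count}")
```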

Beagle Puppies from Google Image Search, Created through ImageQuilts

This second visualization is (obviously) much cuter and is made through ImageQuilts, a Google Chrome plug-in that lets you take a large batch of images from a multitude of sources (WikiMedia, Google Image Search, etc.) and create a manipulable "quilt" of images. While I like looking at lots of images of cute baby beagles, you could also use quilts as visualization tools for class, such as for Pablo Picasso's work:

or even a ~meta~ quilt of the quilts from Gee’s Bend:

which are both images created by the founders of ImageQuilts, Edward Tufte and Adam Schwartz. They created some amazing images with this software, including these two with which I will conclude my post:

Eadweard Muybridge
Josef Albers

How can we best use huge amounts of data?

This week in class we are discussing data, data "tidying" and visualization, and data mining. We looked at theory and a variety of examples of how scholars have used amalgamations of huge data sets to reach conclusions and visualize trends. We noted that some of these examples were more successful than others, and as a whole the class seemed to reach a rather pessimistic conclusion: so what? What do these data sets really tell us that furthers our understanding? We looked at the example of organizing paintings by color. I wholeheartedly agreed with a classmate's questioning of how useful a data set of 40,000 blue images could be. Sure, she argued, we could look at the spread of pigment geographically, iconography associated with the color, or a host of other topics, but does a massive collection of images really help a scholar on that quest? I wasn't convinced either. And I hadn't even thought about the ways these large data sets can be skewed. Professor Bauer brought up "color pollution," the idea that the background color of an object's photograph is also mined for these color sets. This means that many coins are placed in "black" sets because of the black velvet drapes they are photographed against, or that sculptures are often inaccurately tagged because of the wall color they are photographed against. So, if we were to run with the hypothetical collection of 40,000 mainly blue images, not only is this huge collection perhaps not useful to me as an individual scholar trying to make a claim, but it may not even be accurate.
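Color pollution is easy to see if you imagine how naive color mining works. Here is a sketch of the simplest possible approach, averaging every pixel in the photograph, which mines the backdrop as much as the object; the filename and output are hypothetical.

```python
# A sketch of naive color mining: averaging every pixel treats the backdrop
# as part of the object. The filename and the output are hypothetical.
from PIL import Image
import numpy as np

def dominant_rgb(path):
    """Mean RGB of the whole photograph, background included."""
    pixels = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    return tuple(pixels.reshape(-1, 3).mean(axis=0).round().astype(int))

# A gold coin shot against black velvet would come back as nearly black,
# e.g. something like (34, 28, 24), and be binned with the "black" works.
print(dominant_rgb("gold_coin_on_black_velvet.jpg"))
```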

Data mining is also used to identify trends in textual sources. Dan Cohen's "Searching for the Victorians" is a great example of this, though it also raises the "so what" question from skeptics. Cohen and his fellow researchers were able to code over a million books (!!) thanks to the widespread digitization of Victorian-era literature by projects like Google Books and HathiTrust. Below is a graph of the number of books that reference "Revolution" in their titles (for now, only titles are analyzed, but analysis of full text is in the pipeline for the project):

Graph showing the frequency of the word “Revolution” in the title of Victorian books from Dan Cohen’s “Searching for the Victorians”

The graph is interesting in that it lets us see how much revolution (and therefore perhaps political (in)stability and social unrest) was present in the consciousness of society. The spike in the middle of the graph seems interesting and draws the viewer's attention, but any historian would immediately know that this spike coincides with the French Revolution, about which a great deal was published and discussed. So again, you may be left with the question: "so what? what does this actually tell us?" In fact, some commenters asked just that in regard to Cohen's post.
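The mechanics of this kind of counting are not mysterious, which is part of the appeal. A sketch of the core operation, assuming a hypothetical CSV of book records with "year" and "title" columns:

```python
# A sketch of the counting behind this kind of graph, assuming a
# hypothetical CSV of book records with "year" and "title" columns.
import pandas as pd
import matplotlib.pyplot as plt

books = pd.read_csv("victorian_books.csv")

hits = books[books["title"].str.contains("revolution", case=False, na=False)]
per_year = hits.groupby("year").size()

# Normalizing by the total books per year avoids mistaking a boom in
# publishing for a boom in interest.
share = per_year / books.groupby("year").size()
share.plot(title="Share of titles mentioning 'Revolution'")
plt.show()
```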

I don't mean to be pessimistic about the use of data in the humanities; I think there is huge potential to incorporate it into research in art history and beyond. Returning to Cohen's revolution example, I actually think there is value in simply visualizing trends. Being able to look at not just a small sample but virtually all examples of Victorian literature, and to plot trends in the words used, shows the general attitudes of the population and what was important to it. Sometimes just showing data and trends is as valuable to scholarship as making distinct arguments.

Forensic Architecture at the Whitney Biennial as Another Case Study

Film still from “Triple Chaser” by Forensic Architecture on view at the 2019 Whitney Biennial

I want to shift back toward collecting and mining images for a brief discussion of the piece made by Forensic Architecture included in this year's Whitney Biennial. Forensic Architecture is an agency of about 20 full-time researchers, filmmakers, and technologists, along with a team of fellows, that investigates global violence, corruption, and conflict. They provide an interesting example of the ways in which image recognition and data amalgamation can be useful: as a journalistic pursuit (they try to showcase the role of a Whitney board member in profiting from violence), as a tool to recognize very different images and sort through huge sets of them, but also simply to create art (they are exhibiting in the Whitney Biennial, after all!).

Forensic Architecture has enlisted artists, filmmakers, writers, data analysts, technologists, and academics in an intensively collaborative process. Maps and digital animations often play a critical role in the group’s work, allowing for painstaking recreations of shootings and disasters, and images are often culled from social media and scrutinized for information. Forensic Architecture’s work suggests a union of institutional critique and post-internet aesthetics, and it exists in many forms. On the group’s website, it lives as design-heavy interactive presentations. In museums, their work takes the form of installations dense with videos, diagrams, and elements of sound.

Alex Greenberger, "For Whitney Biennial, One Participant Targets Controversial Whitney Patron"

I encourage you to look more into how Forensic Architecture made the video that was on display at the Whitney, because my lack of understanding of the machine learning processes that made it possible hinders my ability to talk insightfully about the piece. Very simply, though, Forensic Architecture trained AI to identify images of Safariland tear gas canisters. To train image recognition software you need A LOT of images; it's one of the major barriers to use. To get around this, they crowdsourced images of the canisters (and received a disturbing amount from activists around the world). They then put these canisters against various backgrounds and repositioned them at various angles to train the software further. Again, this hugely simplifies the process, and the video they produced, which was displayed at the museum, goes through it in much better detail.
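For a rough sense of the synthetic-data idea, here is a sketch that composites a cut-out object photo onto varied backgrounds at random rotations and positions to multiply a small training set. The filenames are hypothetical, and this drastically simplifies Forensic Architecture's actual pipeline.

```python
# A rough sketch of the synthetic-data idea: paste a cut-out object photo
# onto varied backgrounds at random rotations and positions. Filenames are
# hypothetical, and this drastically simplifies the real pipeline.
import os
import random
from PIL import Image

canister = Image.open("canister_cutout.png")  # RGBA, transparent background
backgrounds = ["street.jpg", "grass.jpg", "rubble.jpg"]

os.makedirs("synthetic", exist_ok=True)
for i in range(100):
    bg = Image.open(random.choice(backgrounds)).convert("RGB")
    obj = canister.rotate(random.uniform(0, 360), expand=True)
    x = random.randint(0, max(0, bg.width - obj.width))
    y = random.randint(0, max(0, bg.height - obj.height))
    bg.paste(obj, (x, y), obj)  # the alpha channel keeps the cut-out shape
    bg.save(f"synthetic/canister_{i:03d}.jpg")
```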

I bring up this example both because I think it’s an amazing work of art and incredibly thought provoking, but also because I think this sort of image recognition training is how I can envision using large amounts of data most effectively. I can see how useful it would be to identify objects (like a teargas canister) or symbols and then train machines to find them in huge collections of images. On a grand scale this could show cross-cultural connections if we see objects or symbols in use across large geographic or temporal divides, but also in a logistical sense help viewers make sense of blurry or degrading images that the human eye may have trouble discerning.

I know that in my own work, when I look at colonial photographs, many photographers used the same props in multiple photos in order to create "authentic" portraits that satisfied what the colonists envisioned of the "primitives" they controlled. Using image recognition, I could potentially find all the instances in which a certain prop (or type of prop) was used and use this to highlight the fictitious nature of these photographs. Perhaps with the current state of machine learning this wouldn't be possible; after all, I would need a huge data set to train the machine. But as opposed to some of the examples we looked at in class, this type of image recognition project may help us answer that nagging "so what" question. I'm not sure I'll ever be able to code this type of software, although I could definitely find wonderful scholars to collaborate with. Perhaps text data would be most useful and realistic for me. I could easily chart biographical data of subjects or photographers using the basic Excel skills I already know, or use existing text mining software to go through records and pull out relevant information for my research. I've been the intern who has to "tidy" this type of data for projects before, so I am used to the work that goes into amassing data in a way that is useful for these tools. Although I have not used text mining services in the past, I would love to work with these tools in the future, as they would greatly improve my ability to get through vast archives of information. Perhaps these text-based approaches are a better place for me to start as an amateur digital art historian.
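Even that lower-tech, text-based route can start very small. A sketch of pulling a date and a prop keyword out of free-text catalog records with regular expressions; the records here are invented examples.

```python
# A sketch of regex-based extraction from free-text catalog records; the
# records here are invented examples.
import re

records = [
    "Portrait of a chief, studio photograph, Kingston, 1891; carved stool at left.",
    "Unidentified sitter with spear and leopard skin, c. 1902.",
    "Group portrait, mission school, 1888; painted backdrop and carved stool.",
]

for rec in records:
    year = re.search(r"\b(18|19)\d{2}\b", rec)
    has_prop = "stool" in rec.lower()
    print(year.group(0) if year else "n.d.", "| stool prop:", has_prop)
```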

Mapping Bauhaus’ Influence

Mapping is one of the great areas of research in the Digital Humanities and Digital Art History. There are a variety of tools one can use, including StoryMap JS and Google Maps. While Google Maps is a technology that countless people use in their daily lives, it is also a tool that can be used in art historical scholarship. One way to do this is by creating a collaborative map to share: you can map art museums, artists' houses, an architect's designs, and countless other features.

In order to play around with and learn the tool, I decided to create a map of the legacy of the Bauhaus, showing a variety of places related to the school's afterlife. After the school closed in Germany in 1933, its students and teachers relocated all over the world, illustrating the profound impact of this modern school. While I mostly focused on sites in the United States, I also included some international examples as points of contrast. This is a work in progress and shows just a few examples; in no way does it encapsulate the totality of architectural works related to the Bauhaus internationally.

The map includes images, descriptions, links, and videos related to the sites located on the map.

Google Map of the Legacy of the Bauhaus
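For those who would rather script such a map than build it by hand in Google My Maps, a minimal sketch using the folium library is below; the three sites stand in for the full map and the coordinates are approximate.

```python
# A sketch of scripting a similar map with the folium library. The three
# sites stand in for the full map, and the coordinates are approximate.
import folium

sites = [
    ("Bauhaus Building, Dessau", 51.84, 12.23),
    ("Gropius House, Lincoln, Massachusetts", 42.43, -71.31),
    ("White City, Tel Aviv", 32.07, 34.77),
]

m = folium.Map(location=[40.0, 0.0], zoom_start=2)
for name, lat, lon in sites:
    folium.Marker(location=[lat, lon], popup=name).add_to(m)
m.save("bauhaus_legacy.html")  # open in a browser to explore
```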

Mapping Digital Art History

If you have read any of my blog posts from this semester, it will come as absolutely no surprise that I consider Johanna Drucker’s article “Is There A Digital Art History?” to be one, if not the, most foundational texts in the field of digital art history. Drucker articulately summarizes all of the issues that …

The Potential of Photo Matching for Archives

When I think of what I'd like this blog to be, or how I'd like it to be used, John Resig's post "Using Computer Vision to Increase the Research Potential of Photo Archives" comes to mind as a model. In a marathon post, Resig lays out an entire experiment on the efficacy of image recognition tools. His post provides not only the results of this inquiry but a thorough yet concise summary of what tools are out there, why they are important, and how to use them most effectively. He opens the post with a brief summary and an explanation of the blog format:

I’m taking the approach of publishing my results openly on my site so that they can reach a wider audience. Please feel free to share this with whomever you think will find it useful. If you have any questions, comments, or concerns I welcome an email with your feedback

John Resig

Not only is his project of digital image recognition and matching a digital humanities one, but the way in which Resig conceives of it and shares it furthers the accessibility aims of digital art historical projects.

Now that I've made my side comment on the accessibility of digital humanities projects and how fabulous it is that digital humanists are so willing to put their work out there openly, I want to turn to the specifics of the project. Much of the nitty-gritty technological detail of the tools Resig references still eludes me, but he does a great job of writing in plain enough language that I was able to grasp the essentials. By tweaking existing open-source image matching systems, Resig was able to quickly go through incredibly vast numbers of images (think tens to hundreds of thousands) with much higher accuracy than human archivists could ever accomplish, given input and cataloging errors. He claims that utilizing these types of image matching systems would help in analysis and error correction, expedite the digitization process, and facilitate the merging of vast archives.

I'd like to first turn to the assertion that these services could help in analysis and error correction. While I assume that the majority of metadata input by archivists is accurate (and that Resig perhaps overemphasizes human error), I will concede that older archives especially could benefit from the implementation of these systems. For one, old duplicate images could be eliminated or at least grouped. Alternate views of the same work (one including the frame in the image, for example) can also be grouped using this tool. Aside from this perhaps more obvious benefit, I think the point Resig brings up regarding the matching of images of works before and after restoration is interesting. This could help us track changes to works over time, and it is something that I agree may be harder for the human eye to detect. Similarly, the detection of detail shots from larger works is important, as the more closely cropped images may be so disarticulated from the original that an archivist might misinterpret them as works in their own right. Resig's discussion of the various images of portions of the work he labels "Florentine, 13th century, Uffizi Museum in Florence" demonstrates this capability really well (see the screenshot below). These portions have no overlapping segments, but the image matcher was able to accurately group them so that users can better understand the whole work and its context.

Second, I'd like to comment on the use of these tools in the merging of archives. I hadn't thought of this issue prior to the reading, as I'd always assumed archivists would somehow know where the overlap was in their collections and avoid double-digitizing. This, I realize now, was naive. I hadn't fully comprehended the scale of some of these archives. That the Frick Photoarchive alone has 1.2 million photographs of works of art is mind-boggling to me. With this quantity, no archivist could truly have a handle on the contents of the entire archive, making a seamless merge between two huge collections nearly impossible. I also hadn't taken into account the language barriers that would impede merging metadata efficiently between archives from various countries.
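Resig's actual system is far more sophisticated, but the core move, reducing each photograph to a compact fingerprint and grouping images whose fingerprints nearly match, can be sketched with the imagehash library; the folder name is hypothetical.

```python
# A sketch of near-duplicate grouping with perceptual hashes via the
# imagehash library; the folder name is hypothetical.
from collections import defaultdict
from pathlib import Path

import imagehash
from PIL import Image

groups = defaultdict(list)
for path in Path("photoarchive").glob("*.jpg"):
    h = imagehash.phash(Image.open(path))
    # Bucketing by exact hash; a real matcher would also pair hashes whose
    # Hamming distance (h1 - h2) falls under a small threshold.
    groups[str(h)].append(path.name)

for h, files in groups.items():
    if len(files) > 1:
        print("possible duplicates:", files)
```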

Making Moves: From Idea to Creation

This week’s class topics really excite me because we are moving towards the actual creation of a digital project. As I have said in some of my different posts throughout the weeks, while I have a background in digital art history in terms of its scholarship and some familiarity with various projects, I myself have …

GALA Archive: A Case Study

This week I'd like to take a look at a specific resource that I've used in the past for my own research. While doing that research I realized the limitations of the archive but was unable to really articulate them. Now that I'm looking into the digital humanities more, I've realized concrete ways in which this particular resource could expand and be of more scholarly use if it implemented more digital tools.

Gala is a South African archive and platform formed in 1997 to address the lack of representation of LGBTIQ (lesbian, gay, bisexual, transgender, intersex, and queer) folks in (South) Africa. Originally called the Gay and Lesbian Archives, Gala works to collect and preserve local African LGBTIQ narratives from both the public and private realms. The collection is primarily made up of objects on paper: namely letters, diary entries, legal documents, and photographs.

The majority of the objects in the archive are not digitized. Thus, a meeting with the archivist and a visit to the reading room are necessary to utilize the resource to its full capacity. This is obviously a major drawback for scholars of African queer history who do not find themselves in Johannesburg, myself included. While Gala does have an Archival Guide that lists and explains the holdings in the collection, a lack of search capabilities makes it cumbersome to use. Although you can use "control F" searches on the PDF, this unofficial search tool relies on the user knowing the exact keywords they are looking for and requires a lot of sifting through unnecessary additional information. While we have seen that digitizing holdings represents a significant cost, since Gala already employs an archivist, digitizing at least some of its collections seems like a vital step in its growth. Alternatively, a searchable collections tool or inventory could help users understand the holdings even if the objects themselves are not digitized (a sketch of such a tool follows below).
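Even without digitizing a single object, a keyword search over the archival guide's entries would be a big usability win. A minimal sketch, assuming the guide were exported to a hypothetical CSV inventory with "collection" and "description" columns:

```python
# A minimal keyword search over a hypothetical CSV export of the archival
# guide, with "collection" and "description" columns.
import csv

def search_inventory(path, query):
    query = query.lower()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if query in " ".join(str(v) for v in row.values()).lower():
                yield row["collection"]

for name in search_inventory("gala_inventory.csv", "pride march"):
    print(name)
```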

As a digital resource, Gala is perhaps most useful for its resource list. This list is made up of thumbnails of a handful of books. When a thumbnail is clicked, a new tab opens with information on the book, including publication details and a summary. While, again, this page is not searchable, its comparably small scale and use of thumbnails make it easier to use.

In addition to this list of resources, there is a physical library associated with the project, the Cooper-Sparks Queer Community Library and Resource Center. The library is in the process of being inventoried and cataloged, which will greatly increase its ease of use and the functionality of the library itself. It is important to note that a few books listed in the publications tab are no longer in print; however, they are available to read through the library. This encourages physical interaction with the organization itself, which is a great resource for those in Johannesburg. These resource lists and collections of reading materials are complemented by Gala's publishing branch, MaThoko's Books, which seeks to provide publishing support to those who would like to further queer narratives and marginalized voices. Although this particular facet of Gala does not fall under the digital humanities umbrella, I think it is vital that Gala have both tangible and digital resources available, particularly because much of its audience in South Africa may have limited or unreliable internet access.

Taking a step back, part of what I appreciate so much about Gala is how it has transformed over the 22 years it has been operating. On the basic level of its name, the archive has continually adapted to better serve the community it represents. It consciously moved away from the title "Gay and Lesbian Archives" in order to be more inclusive of a range of identities and orientations. Since that original change, the organization remains open to changing its name again if the LGBTIQ community's needs and desires change and are not met by the archive. This sort of flexibility and community-centered approach is what makes Gala so unique as a research tool. While the objects in its collection remain the same, Gala is constantly reframing its approach and bringing new insights to the objects to reflect societal changes and attitudes toward the project.

One may compare this archive to the Lesbian Herstory Archives in New York City, which is considered the world's largest collection of materials about lesbians and their communities. Like Gala, its collection is not fully digitized, although digitization has begun and is further along, with audio files and many photographs already digitized. They do say that if you are unable to visit the location in NYC, "In order to use the Archives from a distance it is best if you have a specific request, such as a certain article in a specific journal, along with the author, date and title, rather than a broad request such as "Do you have any material on Lesbian mothers?" in which case the answer would be "yes, a tremendous amount." We can point you towards published periodical listings as well." Put simply, and without delving into why this might be the case, the Lesbian Herstory Archives has a lot of the same limitations that Gala does half a world away.
