Source: Visualization as a research method

From the beginnings of art history as a distinct discipline, art historians have employed means to visually demonstrate their arguments, often in ways that conceptually fit their line of argumentation. A prime example of this is Heinrich Wölfflin’s essay on the linear and painterly styles in all manner of art, including painting, sculpture, and architecture.1 While I won’t go into too much detail about this essay, Wölfflin argues for a classification of two dominant modes of art making in early modern Europe, the linear and the painterly, a designation that also generally lines up with the distinction between Renaissance and Baroque styles. Wölfflin’s primary method of making this argument is to compare two works of art, one exhibiting characteristics of the linear and the other characteristics of the painterly. Wölfflin’s argument would not have been as compelling without this visual demonstration, but part of what makes the demonstration so persuasive is the way in which it mirrors and models the structure of his rhetorical and textual argumentation: pairs of traits defined in opposition, illustrated by opposing visual examples.

Across technological changes from the slide projector to PowerPoint, this method of comparative visual analysis has remained an essential component of art historical work, both in the classroom and in journal articles and monographs. Although this kind of demonstration has been used to support arguments and methods far afield from Wölfflin’s formalism, these diverse uses all rely on the close reading of a small set of visual artifacts to bolster a text-based argument. In contrast to these tried and true methods of visual analysis and demonstration, digital research methods deliver new means to analyze huge bodies of visual and textual material, and to generate graphical representations of this analysis, more commonly termed “visualization.” If Wölfflin’s revolutionary method of placing two images side by side for comparison helped to establish a unique line of inquiry for art historical scholarship to pursue, what is the potential of visualization?

Lev Manovich and his colleagues at the Software Studies Initiative have developed a suite of tools for art historians to begin to answer precisely this question. As Manovich argues in his feature article for the inaugural issue of the International Journal for Digital Art History, humanities researchers need to understand the principles behind ‘data science’ and how contemporary society thinks through ‘big data’ in order to pursue computationally driven research. Computational methods have the potential to expand the kinds of inquiry and research available to art historians, but it is imperative that art historians first have a grounding in working with data.2

As important as it is to understand how data-driven methods differ from traditional modes of art historical research, I think it can be just as helpful to emphasize the continuities. ImagePlot, one of the tools developed by the Software Studies Initiative, takes individual visual artifacts as data points and arranges them along an X-Y grid according to a variety of characteristics, such as hue, saturation, date of creation, and so on. In one example of ImagePlot in action, a Software Studies Initiative researcher, Jeremy Douglass, demonstrates how the tool can be used to analyze Mark Rothko’s body of work, illustrating how his use of color developed over time, where patterns emerge in his body of work, and which artworks might be outliers to these trends. Both Wölfflin and ImagePlot treat individual artworks as data points; the difference between the two approaches is the scale of analysis. While Wölfflin focuses on two works at a time, ImagePlot enables researchers to scale up to an entire corpus of work.
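To give a concrete sense of what this kind of plotting involves, here is a minimal Python sketch of the general idea rather than of ImagePlot itself (which runs as a macro inside the ImageJ application): each image is reduced to a couple of summary measurements (here, median brightness and median saturation) and placed on an X-Y grid as a thumbnail. The folder name and the choice of measurements are placeholders standing in for whatever corpus and characteristics a researcher cares about.

```python
# Sketch of ImagePlot-style visualization (an approximation, not the actual tool):
# each image becomes a thumbnail positioned by two summary measurements.
from pathlib import Path

import matplotlib.pyplot as plt
import numpy as np
from matplotlib.offsetbox import AnnotationBbox, OffsetImage
from PIL import Image

def brightness_saturation(path):
    """Return (median brightness, median saturation), each on a 0-255 scale."""
    hsv = np.asarray(Image.open(path).convert("HSV"))
    return float(np.median(hsv[..., 2])), float(np.median(hsv[..., 1]))

fig, ax = plt.subplots(figsize=(10, 10))
for path in Path("rothko_images").glob("*.jpg"):  # placeholder image folder
    x, y = brightness_saturation(path)
    thumb = Image.open(path).convert("RGB")
    thumb.thumbnail((40, 40))  # shrink to a small tile for plotting
    ax.add_artist(AnnotationBbox(OffsetImage(np.asarray(thumb)), (x, y), frameon=False))

ax.set_xlim(0, 255)
ax.set_ylim(0, 255)
ax.set_xlabel("median brightness")
ax.set_ylabel("median saturation")
plt.savefig("image_plot.png", dpi=200)
```

Swapping in other characteristics, such as average hue on one axis and date of creation on the other, only requires changing the helper function or the value fed to each axis.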

Although an art historical researcher using a tool like ImagePlot may still be interested in discovering something about an individual artwork, she is not limited to this sole visual artifact (or a handful of them) as her unit of analysis. Computational processes also make it possible for an art historian to treat a huge corpus of artifacts as her unit of analysis, and to investigate that corpus empirically and with scholarly rigor.

An early example of this is Jules Prown’s analysis of John Singleton Copley’s American patronage.3 Working in the 1960s, Prown and his research team amassed data about Copley’s patrons, and then used computational methods to find patterns, such as relationships between political affiliation, religion, size of painting, and gender, many of which would not have been possible to uncover by slowly comparing one painting to another. What is more, these methods add empirical weight to the claims that Prown makes about Copley and his American career. Prown is not just speaking from the position of a scholar who has spent sufficient time studying Copley to be able to make authoritative statements about his body of work; he can also point to concrete methods used to process the data.
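Prown’s tabulations were run on 1960s punch-card equipment, but the underlying logic is easy to reproduce today. The sketch below is purely illustrative: the file name and column names are invented for the example and do not reproduce Prown’s actual dataset or procedure.

```python
# Illustrative cross-tabulation in the spirit of Prown's patron analysis.
# The CSV and its columns are hypothetical placeholders, not Prown's data.
import pandas as pd

patrons = pd.read_csv("copley_patrons.csv")

# How do political affiliation and religion co-occur among Copley's sitters?
print(pd.crosstab(patrons["political_affiliation"], patrons["religion"]))

# Average portrait size, broken down by affiliation and sitter gender.
print(patrons.groupby(["political_affiliation", "sitter_gender"])["portrait_size"].mean())
```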

I do not want to suggest some false division between traditional art historical research methods as ‘subjective’ and computational methods as ‘objective.’ Traditional research methods have scholarly rigor and weight to them, and computational methods are necessarily driven by particular and contingent decisions made by the researcher. Subjectivity and objectivity are rarely helpful evaluative categories, and I don’t think they are that useful in this discussion either. What I would suggest is that computationally driven tools allow art historians to apply long-familiar scholarly methods and theoretical approaches to entire bodies of work, something that was not possible with any real rigor using previously available tools for analysis. Returning to the point I began this post with, these computational tools also generate visual demonstrations of this analysis that compellingly capture and communicate scholarly findings.

ImageQuilt of Kurt Schwitters’ typographical works

Visualization, however, is not just a means of demonstration; it can also itself be a tool for analysis. I found this out through my own experimentation with the ImageQuilts app, which allows the user to arrange dozens of images into a ‘quilt.’ While the tool can simply be used to make a pleasant-looking collage of images, it has analytic applications as well. I made an image quilt of Kurt Schwitters’ typographical works, and used the app to arrange the images into rough categories: (in order from top left to bottom right) pieces that use letters to create figural representations, pieces that heavily layer letters to create dense textural surfaces, designs for magazine covers, and finally more typical collage works that make use of text-based material. As a quick exercise, this kind of visualization can help us make sense of (a small selection of) Kurt Schwitters’ varied career, which in turn might help us formulate a research question. Indeed, the process of actually arranging the images got me thinking about how I might devise research into Schwitters’ career. This visualization could easily be built upon, whether with annotations or with a thorough, critical description of Schwitters’ typographical work. While this visualization is not as sophisticated as ImagePlot, or many of the other tools out there, ImageQuilts is still a great example of an easily accessible tool that art historians can use to analyze and illustrate in new ways.
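For readers who want to try something similar without the ImageQuilts app, the basic operation (tiling a set of images into a grid in whatever order the researcher chooses) can be approximated in a few lines of Python. The folder name, tile size, and column count below are arbitrary choices, and the sorted file order stands in for the manual grouping described above.

```python
# A rough stand-in for an image quilt: paste uniformly sized tiles into a grid.
# Folder name, tile size, and column count are arbitrary placeholder choices.
from pathlib import Path

from PIL import Image

paths = sorted(Path("schwitters_typography").glob("*.jpg"))  # order = your grouping
tile, cols = 200, 6
rows = -(-len(paths) // cols)  # ceiling division

quilt = Image.new("RGB", (cols * tile, rows * tile), "white")
for i, path in enumerate(paths):
    img = Image.open(path).convert("RGB").resize((tile, tile))
    quilt.paste(img, ((i % cols) * tile, (i // cols) * tile))
quilt.save("schwitters_quilt.jpg")
```

Because this sketch resizes every work to a uniform square, it is best treated as a quick sorting aid rather than a faithful presentation of the originals.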

NOTES

[1] Heinrich Wölfflin, “Linear and Painterly,” in Principles of Art History (1932; reprint ed., New York: Dover Books, 1950), pp. 18-72.

[2] Lev Manovich, “Data Science and Digital Art History,” International Journal for Digital Art History, no. 1 (June 2015), pp. 12-35.

[3] Jules Prown, “The Art Historian and the Computer,” in Art as Evidence: Writings on Art and Material Culture (New Haven, CT: Yale University Press, 2001).