Doing Digital Art History: On the history of decorative art, design, and film

Author: JJ Bauer

Folksonomies in the GLAM context


Folksonomies, also known as crowdsourced vocabularies, have proved their usefulness time and again in deepening user experience and accessibility, especially in cultural heritage institutions. Often termed GLAMs (Galleries, Libraries, Archives, and Museums), these institutions can use folksonomies to tap into the knowledge base of their users and make their collections more approachable.

For example, when one user looks at a record for Piet Mondrian’s Composition No. II, with Red and Blue, she may assign the tags colour blocking, linear and geometric. Another user may tag the terms De Stijl, Dutch and neoplasticism. By combining both approaches to the painting, the museum ensures that those without contextual knowledge have just as much access to its collections as those with an art historical background. Linking tags allows users to access and search collections at their needed comfort level or granularity. It also frees up time for the employees handling those collections, since they can depend—at least in part—upon the expertise and observations of their public.
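The mechanics of such linked tags are simple enough to sketch. Here is a toy Python example (the tag lists and object key are invented for illustration) of merging both users’ vocabularies into one index so that a search on either vocabulary finds the same record:

```python
from collections import defaultdict

# Hypothetical user-contributed tags for one museum record.
# A casual and a specialist vocabulary both point at the same object.
catalogue = {
    "composition_no_ii": {
        "user_a": ["colour blocking", "linear", "geometric"],
        "user_b": ["De Stijl", "Dutch", "neoplasticism"],
    },
}

def build_index(catalogue):
    """Map each tag (case-folded) to the set of objects carrying it."""
    index = defaultdict(set)
    for obj, contributions in catalogue.items():
        for tags in contributions.values():
            for tag in tags:
                index[tag.lower()].add(obj)
    return index

def search(index, tag):
    """Case-insensitive lookup; returns the set of matching objects."""
    return index.get(tag.lower(), set())

index = build_index(catalogue)
```

Either vocabulary now resolves to the Mondrian record, which is the whole point of combining lay and expert tags in one index.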

The diversity of tags also increases findability for educational uses of online collections. If an elementary school teacher wants to further her students’ understanding of movements like cubism, she can simply open the catalogue of her local museum and explore the paintings tagged with ‘cubism.’ This matters more and more as field trips become unavailable to public schools and larger class sizes require more prep than ever. With a linked folksonomic vocabulary, the teacher need not fight for the field trip nor dedicate even more time to personally constructing a sampling of the online collection to display.

Crowdsourcing knowledge can also go beyond vocabularies and prove especially useful in an archival context.[1] When limited information is known about a collection or object and those with institutional knowledge are unavailable (a persistent problem plaguing archives), another source of information is needed. The best way to find one is to tap those who have the necessary expertise or experience. For example, Wellesley College’s archive began including digitized copies of unidentified photographs from the archive in the monthly newsletters emailed to alumni, along with a request for any information alums could provide. In this way, the archive has recovered a remarkable amount of knowledge about historical happenings around the college.

But folksonomies and other crowdsourcing projects are only useful if institutions incorporate the generated knowledge into their records. Some gamify the crowdsourcing process in order to engage users, but then fail to follow through on incorporation. Dropping the ball in this way may be due in part to the technical challenges of coordinating user input with the institution’s online platform. It may also stem from the same fear that many educators hold about Wikipedia: what if the information provided is WRONG? Fortunately, both anecdotal and research evidence is proving those fears largely unfounded.[2] Participating with institutions in crowdsourcing is a self-selective process, requiring users to be quite motivated. Those interested in that level of participation are going to take the process seriously, since it requires mental engagement and a sacrifice of time. As for enthusiastic but incorrect contributions, institutions may rest assured that the communities that participate in crowdsourcing efforts are quite willing to self-police their fellows. If something is wrong or misapplied (e.g. Stephen Colbert’s Wikipedia antics), another user with greater expertise will make the necessary alterations, or the institution can step in directly.

GLAMs are experiencing an identity shift with the increased emphasis on outreach and engagement.[3] The traditional identity of a safeguarded knowledge repository no longer stands; GLAMs now represent knowledge networks with which users can engage. If obstacles hinder that engagement, the institution loses its relevance and therefore its justification to both the bursar and the board. Crowdsourced projects can break down those perceived barriers between an institution and its public. Engaging with users at their varying levels, and then actually using their input, shows that the institution envisions itself as a member of that community rather than a looming, inflexible dictator of specific forms of knowledge. Beyond that, though, institutions can use all the help they can get. Like Wellesley, an organization may be lacking in areas of knowledge, or it may simply not have the resources to engage deeply with all of its objects at a cataloguing or transcription level. Crowdsourcing not only engages users but also adds to an institution’s own understanding of its collections. By mining a community’s knowledge, everyone benefits.

From a more data science-y perspective: experimenting with folksonomies in an image-based collection

For a class on the organization of information, we read an article covering an experiment analyzing the implementation of user-generated metadata.[4] For those in image-based institutions looking to build on others’ crowdsourcing attempts, this experiment is a solid place to begin, though some alterations may prove helpful. First, I would recommend a larger pool of participants from a broader age range, at least 25-30 people between the ages of 13 and 75, so that the results may be extrapolated with more certainty across a user population. Second, for the tag scoring conducted in the second half of the experiment, I would use the Library of Congress Subject Headings hierarchy in tandem with the Getty’s Art & Architecture Thesaurus, so as to compare the user-generated data to discipline-specific controlled vocabularies. I would retain the two tasks assigned to the users, including the random assignment of controlled vs. composite indexes and the 5 categories for the images (Places, People-recognizable, People-unrecognizable, Events/Action, and Miscellaneous formats). For data analysis, I would again employ a one-way analysis of variance, since it provides a clear look at the data, even accounting for voluntary search abandonment. By maintaining the original one-way ANOVAs and tag-scoring charts, it would be easy enough to compare your findings with those in the article and see whether there is any significant difference in index efficiency (search time or tagging scores) between general cultural heritage institutions and GLAMs with more image-based collections.
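For readers curious about what the proposed analysis actually computes, here is a minimal one-way ANOVA sketch in plain Python. The search times (seconds to find a target image under each index type) are invented for illustration, not drawn from the article:

```python
# Hypothetical search times (seconds) under three index types.
groups = {
    "controlled": [34, 28, 41],
    "folksonomy": [22, 25, 20],
    "composite":  [18, 16, 17],
}

def one_way_anova(groups):
    """Return the F statistic for a one-way analysis of variance."""
    data = [x for g in groups.values() for x in g]
    grand_mean = sum(data) / len(data)
    # Between-group sum of squares: how far group means sit from the grand mean
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups.values()
    )
    # Within-group sum of squares: spread of observations around their own mean
    ss_within = sum(
        (x - sum(g) / len(g)) ** 2 for g in groups.values() for x in g
    )
    df_between = len(groups) - 1
    df_within = len(data) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

f_stat = one_way_anova(groups)
```

A large F relative to the F(2, 6) distribution would indicate that mean search time genuinely differs between index types; with real replication data, this is the comparison that would be made against the article’s reported results.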

[1]  L. Carletti, G. Giannachi, D. Price, D. McAuley, “Digital Humanities and Crowdsourcing: An Exploration,” in MW2013: Museums and the Web 2013, April 17-20, 2013.

[2]  Brabham, Daren C. “Managing Unexpected Publics Online: The Challenge of Targeting Specific Groups with the Wide-Reaching Tool of the Internet.” International Journal of Communication, 2012.

Brabham, Daren C. “Moving the Crowd at iStockphoto: The Composition of the Crowd and Motivations for Participation in a Crowdsourcing Application.” First Monday, 2008.

Brabham, Daren C. “Moving the Crowd at Threadless: Motivations for Participation in a Crowdsourcing Application.” Information, Communication & Society 13 (2010): 1122–1145. doi:10.1080/13691181003624090.

Brabham, Daren C. “The Myth of Amateur Crowds: A Critical Discourse Analysis of Crowdsourcing Coverage.” Information, Communication & Society 15 (2012): 394–410. doi:10.1080/1369118X.2011.641991.

Lakhani et al. “The Value of Openness in Scientific Problem Solving.” 2007.

Saxton, Oh, and Kishore. “Rules of Crowdsourcing: Models, Issues, and Systems of Control.” Information Systems Management 30 (2013): 2–20. doi:10.1080/10580530.2013.739883.

[3] “Throwing Open the Doors” in Bill Adair, Benjamin Filene, and Laura Koloski, eds. Letting Go?: Sharing Historical Authority in a User-Generated World, 2011, 68-123.

[4] Manzo, Kaufman, Punjasthitkul, and Flanagan. “By the People, For the People: Assessing the Value of Crowdsourced, User-Generated Metadata.” Digital Humanities Quarterly 9, no. 1 (2015).

Further Reading on Crowdsourcing, Social Media & Open Access (drawn from JJ’s class syllabus for 11 April)

Crowdsourcing Projects: Smithsonian Digital Volunteers and the Smithsonian Social Media Policy.
Jeffrey Inscho, “Guest Post: Oh Snap! Experimenting with Open Authority in the Gallery,” Museum 2.0 (March 13, 2013). http://museumtwo.blogspot.com/2013/03/guest-post-oh-snap-experimenting-with.html
Kristin Kelly. Images of Works of Art in Museum Collections: The Experience of Open Access. Mellon Foundation, April 2013. http://msc.mellon.org/research-reports/Open Access Report 04 25 13-Final.pdf/view
Nate Matias, “Crowd Curation: Participatory Archives and the Curarium Project,” https://civic.mit.edu/2013/09/24/crowd-curation-participatory-archives-and-the-curarium-project/
NMC Horizon Project Short List, 2013 Museum Edition, http://www.nmc.org/pdf/2013-horizon-museum-short-list.pdf
Karen Smith-Yoshimura and Cyndi Shein, “Social Metadata for Libraries, Archives, and Museums,” OCLC Research Report, Executive Summary only, http://www.oclc.org/content/dam/research/publications/library/2012/2012-02.pdf
For those interested in Museum 2.0, Nina Simon, The Participatory Museum, http://www.participatorymuseum.org/

Experimenting with 3D Scanning


Autodesk 123D Catch desktop & smart phone app

Last week, we visited the Ackland Museum to use one of their objects for our first foray into 3D scanning.  I chose a squat, spouted object for my first project.

The phone app is a huge help in terms of starting out with lens distance and the number of photos required for a baseline. There is a 70-photo cutoff, so don’t get too enthusiastic if you’re using the app. But if you follow its model (the blue segments), you get the right number of photos with sufficient overlap to construct the 3D render.

This is a sculpture from home, nothing so fancy as an Ackland object, but it lets you see how the app works.

But it’s certainly a lesson in patience. There are several stages of upload that require users to dredge up the patience we used to exercise when dial-up was our only option. JJ supposed that the wait time might be part of the way the site processes the photos—maybe it constructs the model as it uploads each photo. If you have internet connectivity issues (the struggle is real, even on our university network), don’t worry too much—I walked too far from the building while it was ‘Finishing,’ and the app asked me if I wanted to continue on my data plan. Even so, it stayed on that ‘Finishing’ screen for hours. I finally left the phone alone and went to bed. When I opened the app the next day, it offered me the option of publishing the upload, which went speedily. Lesson learned: don’t attempt a project under time pressure.

The Autodesk application is meant as an intro to 3D scanning and potential printing (123D Make), so there are limits to editing the model after it’s been uploaded. Pretty much all you can do is add tags and a description, without any tools to manipulate the model itself. You can, however, download several different file formats. It also allows you to share the project on various social media platforms or embed it (as seen in the first image). If you’re new to 3D scanning, this is definitely the way to start.

Agisoft PhotoScan (& Sketchfab)

PhotoScan, on the other hand, is better either for beginners willing to work through a tutorial or for those with a little more familiarity with image manipulation software. The learning curve isn’t as steep as with Adobe products, however, since the basics of what you need to do (in the order they need doing) show up in the ‘Workflow’ dropdown of the menu. As you complete each step, the next steps are no longer greyed out, and for each task completed, more of the toolbar becomes available. For example, once the photos are aligned, the dense cloud constructed, and the mesh built, you can view the various underlying structures of the model. To help fill in gaps, you can use Tools > Mesh > Close Holes and the software will interpolate what’s missing. The best part, though, is that the default (medium) settings at every stage produce a respectable model that renders non-reflective surfaces and textures beautifully. The model we constructed during class involved a brick wall, and it really did look like an actual brick wall by the time we finished. The only caveat: make sure you’re working on a computer with a good bit of memory behind it—the speed and capability of the processing is limited otherwise.

Once you have the model as you like it, you can use image editing software to alter its aesthetics. DO NOT make any alteration (including cropping) to the photos either while taking them or before aligning them. Once the photos are aligned, you can crop them in the program to home in on the object. To adjust light and sharpness, export the material file and .jpg. Edit the .jpg in your image editing software (it will look like an unrecognizable blob—just go with it) and then feed it back into the model to reapply the texture via Tools > Import > Import Texture.

Once the model is perfected, you have several export options. Did you know that Adobe Reader lets users move around a 3D model? It’s a fun and approachable option. The Wavefront .obj option allows you to save without compression, though the file size is large.

For this model:

The photos that align properly show up with green check marks. I think part of the issue with this model is that I uploaded the photos slightly out of order, thus missing the full scan of the little Frenchman’s beret. That, the uneven lighting, and the poorly contrasted background all contributed to the editing problems. If the model were better, it would be worth spending time with the magic wand or lasso tools to edit closer to the sculpture. For lighting, aim for neutral light that casts minimal shadows; bright or harsh lighting is not recommended. If you’re shooting outdoors, aim for cloudy days.

Sketchfab is a means of publicly sharing projects from 3D rendering programs. If you have a paid Agisoft account, you can upload projects directly to Sketchfab. My account is super boring, since the free trial of PhotoScan won’t allow file export or saving. But RLA Archaeology has a fascinating and active account, so take a look there for a sampling of what Sketchfab has to offer as a public gallery of projects.

3D Printing

The process of creating the scan and watching such a textured image develop was gratifying—that scan being the one we created in class with the brick wall, not the one reproduced here. I’m certain that watching the model go through the 3D printing process would be equally fascinating, but the final product may be less satisfying. Most printers produce those heavily ringed models that require sanding before they look and feel like a whole—rather than composite—object.

What’s really cool about 3D printing, though, is the assortment of materials you can print with. For example, the World’s Advanced Saving Project is building prototypes for affordable housing 3D printed from mud. Some artists are constructing ceramics from 3D models using clay (basically more refined mud). Still others are using metals for their work. The possibilities for art, architecture and material culture are overwhelming.

 

Social Media and the Changing Role of the Curator


Sorry everyone, my blog posts are getting a little out of order the past few weeks. This post refers to Week 12: Public Engagement/Crowdsourcing. Will post entries to make up for the gaps here soon.

This week we are talking not only about the transformation of the word “curator” and its implications in general language, but also about the changing role of the curator in an age of increasing digital projects in the galleries, with social media as a tool for outreach, education, and crowdsourcing projects. Within the past few years we see the word used in pop culture as a kind of trendy stand-in for “editor” or “designer” or, in some examples, simply a “picker-outer.” You curate your iTunes playlists, your garden, your Instagram account. It isn’t exactly that this terminology is wrong, but the popular usage seems to have overtaken the museum/gallery-specific use as of late. I think this actually reflects the fact that the work a “traditional” curator of art does is in a state of flux in response to the rise of social media.

Nancy Proctor’s article “Digital: Museum as Platform, Curator as Champion, in the Age of Social Media”[1] was particularly helpful in hashing out the changing nature of curatorial work, and perceptions of it, in museums today. Proctor serves as Digital Editor of the journal Curator and has a decade’s worth of experience implementing digital tools in the New Media Department at the Smithsonian. The table on the first page, borrowed from Smithsonian curator David Allison, gives us the basic gist: change is “in”; stability and stodginess are “out.” “Curators as experts” gives way to “curators as collaborators and brokers.” Instead of published monographs, telling stories. In place of control, collaboration. And taking advantage of all the Web has to offer in terms of social media is very much “in.” One analogy Proctor cites from Steven Zucker, dean of the School of Graduate Studies at FIT, also captures the shift: “[Zucker] has described it as a transition from Acropolis – that inaccessible treasury on the fortified hill – to Agora, a marketplace of ideas offering space for conversation, a forum for civic engagement and debate, and opportunity for a variety of encounters among audiences and the museum.” So what does this look like in practical terms for museums and the jobs of curators? According to Proctor, it means user-generated content, “crowd curation,” forums for online discussion, and highlighting social media exchanges. In short, curators are facilitating, both in the “analog platform” of the museum and online, the incorporation of many voices in selecting art, performing art, and engaging in discourse.

I think even those of us who have studied art history and museum studies in school for a few years, or have volunteered, interned, or worked in museums, may still have had a certain expectation of what a curator’s work is or isn’t, and that expectation may be challenged and continue to shift as the field does. In my opinion, it is challenged for the better, toward more interesting, relevant, and collaborative work – both interdisciplinary and between the institution and the public. Actually, on a more personal note, part of the reason I went into library science (hopefully to remain in an art museum or university art department, fingers crossed) is that I wanted to continue to do curatorial work (the content knowledge, collection development, and collaboration with curators if I’m in a museum setting) in a capacity that fits my interests and skill set better than the old “stodgy” model of the curator who focuses mostly on gaining a wealth of knowledge in a niche of expertise. My conception of curatorial work changed, and could align better with my motivation to reach outward with a public always in mind, rather than inward, with my research at the forefront of my career. Of course museums are shifting from this model, and I think collaboration with librarians, archivists, and tech-focused employees makes museums all the nimbler in instituting these changes and dealing with them creatively.

A few years ago I happened on the article “The Power of Non-Experts”[2] by Desi Gonzalez for Hyperallergic, one of my favorite art blogs. I re-read it today in light of this batch of readings about crowdsourcing and social metadata, and although the article is only 3 years old, it strikes me as kind of quaint that Gonzalez was hired to stand around in the galleries and gather responses, face-to-face, from museumgoers a few years prior to writing the Hyperallergic post. That much of the public finds art, especially contemporary art, baffling is not a new idea, nor is the resentment about “not getting it” and being condescended to by an “expert.” Part of the point of the article – that museumgoers each bring their unique experience to art viewing and form their own opinions and interpretations of the work, which may or may not be informed by past art knowledge, a curatorial vision for the exhibition, a docent-led tour, or wall labels – is nothing new either. But now, seeking out and valuing this input from visitors is revitalizing engagement and interactivity, and hopefully inclusivity (see any number of critical pieces on the museum as a space of privilege). I think the ongoing challenge will be deciding how to incorporate the valuable expertise (read: content knowledge and original scholarship) of the curator, who can hopefully retain their voice while in conversation with all the others. It is certainly exciting to learn about the types of programming, exhibit design, and web-based interactives being implemented, but I think the impetus behind it all (the above-mentioned education, inclusivity, and respect for the diversity of visitors) reflects a positive change in the role of the curator and in museums as a civic space of inclusion.

As for a GLAMwiki project idea, I have learned through my Kress project that the Ackland has a number of pages of printer’s marks from books published by Northern Renaissance printing houses. While there is of course a Wikipedia page dedicated to explaining what a printer’s mark (or printer’s device) is, it would be interesting to open up a wiki where the Ackland could partner with institutions that have rare book collections (museums, archives, libraries, maybe even dealers of rare books?) to put all of these images in one place, both to compare them and to seek knowledge from book experts, since these pages often end up in art museums without a dedicated curator of incunabula (books printed before 1501) or old books generally. I have to credit Heather Aiken, my Education partner on the Kress Fellowship, for this idea, but there could also be a creative element: a section dedicated to users uploading original designs for their own personal printer’s marks.

 

[1] Proctor, Nancy. “Digital: Museum as Platform, Curator as Champion, in the Age of Social Media.” Curator 53, no. 1 (2010): 35-43.

[2] Gonzalez, Desi. “The Power of Non-Experts.” Hyperallergic. January 3, 2013. http://hyperallergic.com/62985/the-power-of-non-experts/

Ethical Design Decisions in Data Visualization


Shazna Nessa, a journalist who specializes in data visualization, addresses the respect a designer must have for their audience’s level of visual literacy. In her article “Visual Literacy in the Age of Data,”[1] Nessa warns that with all the cool new data visualization tools, journalists working with visualizations are beginning to design for a more specialized audience, when perhaps the audience’s visual sophistication has not been keeping pace.

I once heard that editors at the New York Times aim to bring articles to a 9th-grade reading level. As the New York Times is a respected pioneer in data visualization in journalism, I wonder if there is any formalized understanding of visual literacy that corresponds with text-reading literacy. A set of loose guidelines would certainly help establish best practices in this area. While the journalist’s goal is to inform the public, the scholar engaged in digital humanities would also do well to re-evaluate their use of data visualization. For example, Nessa cites a 1984 study[2] that ranked visual data presentations by how efficiently readers decode them (bar graphs and scatter plots leading the way, heat maps towards the end). I think the digital humanities scholar or digital art historian, when aiming a data-driven argument at a more specialized audience in a dissertation, conference paper, or journal article, must find a way to balance basic design principles and legibility with the sophistication of that argument. Regardless of the scholar’s readership, some of the questions Nessa recommends journalists ask themselves, such as “How many points are you trying to illustrate?” and “Although I’ve edited the data already, is there superfluous data that I can still edit out?”, apply to digital humanities scholarship too.

This week’s readings and class discussion really made me consider the ethics involved in data visualization, and the visual literacy that may be required of a reader/viewer of this information. Data in any form does not speak for itself; it needs a human voice, or ideally many voices, to interpret it and to make a statement. And data visualized runs a huge risk of intentional manipulation and/or misunderstanding. While blatantly spouting misleading information via chart à la Fox News (or anyone with an agenda) is heinous, I think responsible data visualization really begins with a deep understanding of design principles, a respect for one’s audience, and an understanding of the raw data itself.

A quick Google search of “Ethics in Data Visualization” yielded first results of codes of ethics for various software companies that work with data visualization, like Tableau and Visual.ly. These codes basically cover the importance of accuracy in data collection, data analysis, and design choices. In fact, on the Visual.ly blog, the author quotes a “Hippocratic oath for visualization”:

I shall not use visualization to intentionally hide or confuse the truth which it is intended to portray. I will respect the great power visualization has in garnering wisdom and misleading the uninformed. I accept this responsibility willfully and without reservation, and promise to defend this oath against all enemies, both domestic and foreign.[3]

That last part sounds a little tongue-in-cheek to me, but it is encouraging that companies are publishing statements like this. It’s almost a blessing and a curse that data visualizations are coming to the fore through journalism. It’s a positive in that journalism is traditionally associated with fact-checking and transparency as tenets of the profession. But distinguishing real journalism from what merely passes for it is where not only visual literacy but media literacy skills generally are needed.

On a less serious note, my image quilt was so much fun to make! When I searched the Benezit Dictionary of Artists while writing a brief bio of Albrecht Dürer for my Ackland project, I saw that Benezit displays images of artists’ signatures. For an artist so exacting, his signatures, seen close up and separate from their corresponding artworks, were charmingly unique from each other. When compared, some even seemed a bit lopsided and sloppy (though I’m sure this is in part due to inexact copies from woodcuts), so I thought the monogram signature a perfect fit for a quilt:

[1] Shazna Nessa, “Visual Literacy in the Age of Data,” Opennews.org, 13 June 2013. https://source.opennews.org/en-US/learning/visual-literacy-age-data/
[2] William S. Cleveland and Robert McGill, “Graphical Perception: Theory, Experimentation, and Application to the Development of Graphical Methods,” Journal of the American Statistical Association 79, no. 387 (1984).
[3] Jason Moore, qtd. in Drew Skau, “A Code of Ethics for Data Visualization Professionals,” Visually Blog, February 7, 2012. http://blog.visual.ly/a-code-of-ethics-for-data-visualization-professionals/

 

Digital Assignment #3


I initially started charting some timeline points related to a paper I’m writing for my Intro to Archives class about the impact of technology on the collection, accessibility, and preservation of the Archives of American Art. However, upon hearing the recent news of the architect Zaha Hadid’s death and reading a number of obituaries and memorial statements, I (like a lot of people, museums, design publications, and news orgs I follow on social media!) started thinking about the legacy of this groundbreaking and controversial architect. Working with around 20 timeline points, I had plenty of editorial decisions to make in looking at such a full life (which life events, which partnerships, which buildings, which awards, which controversies). I did want to strike some balance between Hadid’s interesting biography and her life before she achieved “starchitect” status, and a timeline is a good way to achieve this. So I decided I would spend the evening with the articles, images, and videos I wanted to look at anyway. Enjoy!

 

Experimenting with Gephi and network visualizations


Over the past several days, I’ve been playing around with Gephi to get a better idea of what network analysis tools can do, and how I might apply Gephi (or a similar tool) to my own research. I don’t have any previous experience with network analysis or visualization, but I’m incredibly interested in the possibilities that these tools offer for a wide variety of research programs.

I began by trying out some sample datasets that Gephi includes on their GitHub page.1 I first tried Gleiser and Danon’s data on social networks of jazz musicians.2 However, when I loaded it into Gephi, I discovered that none of the nodes were actually labeled with the names of the musicians; each node was labeled only by its ID number in the dataset. This resulted in a very intriguing-looking network, as there were over 180 nodes, many with multiple edges connecting them, but it did not produce an intelligible visualization. Investigating the issue did give me experience examining the dataset itself, though. Since the dataset was a .net (Pajek) file, a format I wasn’t familiar with, I wondered if Gephi was having trouble relating names to their respective nodes. I was able to open the file in a text editor and saw that, in the dataset itself, all of the musicians’ names had been replaced by ID numbers. I imagine the researchers have a codebook to help them interpret the raw dataset, but it was not included on the Gephi GitHub.
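The .net format is plain text and easy to inspect by hand, which is how I spotted the missing names. A rough Python sketch of a minimal parser for such a file might look like the following; the three-node sample and the codebook of musician names are invented, standing in for the codebook the researchers presumably kept:

```python
def parse_pajek(text, labels=None):
    """Parse a minimal Pajek (.net) file into (nodes, edges).

    `labels` is an optional codebook mapping node IDs to names,
    used to restore labels a published dataset stripped out.
    """
    nodes, edges, section = {}, [], None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.lower().startswith("*vertices"):
            section = "vertices"
            continue
        if line.lower().startswith(("*edges", "*arcs")):
            section = "edges"
            continue
        if section == "vertices":
            parts = line.split(None, 1)
            node_id = int(parts[0])
            name = parts[1].strip('"') if len(parts) > 1 else str(node_id)
            if labels and node_id in labels:
                name = labels[node_id]  # restore the name from the codebook
            nodes[node_id] = name
        elif section == "edges":
            a, b = line.split()[:2]
            edges.append((int(a), int(b)))
    return nodes, edges

# Toy three-node file where names were replaced by ID numbers.
sample = """*Vertices 3
1 "1"
2 "2"
3 "3"
*Edges
1 2
2 3
"""
codebook = {1: "Armstrong", 2: "Ellington", 3: "Basie"}  # invented names
nodes, edges = parse_pajek(sample, labels=codebook)
```

With a real codebook in hand, the relabeled file could then be re-imported into Gephi to produce an intelligible visualization.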

Next, I tried another interesting-looking dataset: a social network for a class of German students in 1880.3 I was able to find the article the researchers wrote using this data through UNC’s e-journal subscriptions and read it over to better understand the data. As a piece of the history of social network research, the article is well worth looking up: it describes an early mixed-methods study conducted by a German schoolteacher, Johannes Delitsch, who analyzed friendship groups in his 1880 class of boys. The present researchers have repurposed Delitsch’s data to perform new, higher-powered social network analysis and see what his research can reveal today.

While this research project is a great example of the interesting work that can be done with social network data (and a reminder that this data is not limited to the Facebook era), I mostly used this dataset simply to learn a little bit more about how Gephi works. After importing the data, I made a few adjustments to produce a readable, usable visualization of the whole network. As there were only a couple dozen nodes, I could produce a visualization that captures the entire network.

For this visualization, I chose the Fruchterman-Reingold layout, set the nodes to vary in intensity of color based on how many edges connect to them (deeper green means more connections), and turned on labels so I could see the names of the classmates. While this doesn’t tell you how the friendships were formed and sustained (that information is in Delitsch’s original study and provides great social insight), the visualization does show patterns of popularity: who is at the center of the various social groups and who is on the outskirts.
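
Gephi handles this ranking internally, but the underlying computation is simple enough to sketch. Assuming the network is given as an undirected edge list, the degree counting behind that color ramp looks roughly like the following (the names are hypothetical, not Delitsch’s actual pupils):

```python
from collections import Counter

# Hypothetical friendship edges (undirected pairs of classmates)
edges = [("Emil", "Fritz"), ("Emil", "Hans"), ("Emil", "Otto"),
         ("Fritz", "Hans"), ("Otto", "Karl")]

# Each edge contributes one connection to both of its endpoints
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# Map each node's degree to a 0.0-1.0 intensity, as in Gephi's
# ranking: the best-connected node gets the deepest green
max_deg = max(degree.values())
intensity = {node: deg / max_deg for node, deg in degree.items()}

print(degree["Emil"], round(intensity["Karl"], 2))  # 3 0.33
```

The same counts drive both the coloring here and the popularity patterns visible in the layout.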

To dig a little deeper into the data, I then experimented with some of the different filters to produce more fine-grained sub-network visualizations. Fortunately, this dataset also included the direction of edges, indicating whether connections were incoming (such as receiving a gift from a classmate) or outgoing (such as giving a gift). I produced one visualization where I filtered the network to only those nodes with greater than 7 incoming connections, and another where I filtered the network to only those nodes with greater than 5 outgoing connections.

Visualization filtering 7 or more incoming connections.
Visualization filtering 5 or more outgoing connections.

Again, I’m not really in a position to analyze or compare these two visualizations, but for a researcher this kind of filtering could support a lot of different queries into the data.
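
This kind of degree-range filtering is also easy to sketch outside of Gephi. Assuming a directed edge list of (giver, receiver) pairs, the in-degree and out-degree filters reduce to a couple of set comprehensions; the letters are placeholder pupils and the thresholds are lowered to fit the toy data:

```python
from collections import Counter

# Hypothetical directed edges: (giver, receiver) of a gift
arcs = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "A"),
        ("D", "C"), ("E", "C"), ("B", "A")]

out_degree = Counter(giver for giver, _ in arcs)
in_degree = Counter(receiver for _, receiver in arcs)

nodes = set(out_degree) | set(in_degree)

# Analogue of Gephi's in-degree range filter: well-received nodes
popular = {n for n in nodes if in_degree[n] >= 4}
# Analogue of the out-degree filter: generous nodes
generous = {n for n in nodes if out_degree[n] >= 2}

print(sorted(popular), sorted(generous))  # ['C'] ['A', 'B']
```

Sliding the threshold up or down is exactly what Gephi’s range filter does interactively.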

I was able to see from this brief exercise some of the different ways in which Gephi can be used to create visualizations and support network analysis methods. However, network analysis can be put to many other ends as well. Pamela Fletcher and Anne Helmreich’s project mapping the 19th century London art world4 and Stanford’s ORBIS5 are two examples of how network analysis can be paired with geographic information to explore how networks form and manifest effects in both time and space. These examples also illustrate, however, that additional resources and expertise quickly become necessary when projects move beyond free tools like Gephi. Both of these projects had dedicated programmers working together with humanities researchers to produce their unique, interactive network visualizations.

Elijah Meeks and Karl Grossner describe ORBIS as an “interactive scholarly work” (ISW), characterizing this as a new potential scholarly output in addition to more traditional models like the journal article or monograph.6 ORBIS not only represents new and innovative scholarship into the Roman world of antiquity, but also provides an interface for individuals to make their own discoveries and support their own research. Of course, the traditional journal article is also founded on the idea that the information it presents builds upon previous scholarship and serves the development of future scholarship, but something like ORBIS makes this manifest by providing the means for interaction and direct engagement. ORBIS does more than just network analysis and visualization, but these methods clearly play an integral role in the new kinds of scholarly projects that ORBIS demonstrates—those that blur the lines between publication, research tool, and online exhibition.

Across time and history, networks of people, places, and materials have been hugely significant forces; while scholars have long recognized the importance of networks (as Delitsch’s work evidences), digital tools provide ways to interrogate and visualize these complex structures that were not previously possible. These examples illustrate the kinds of exciting projects that can be done with network analysis, as well as the investments of expertise and resources that such projects demand once they grow beyond what free tools can support.

NOTES

[1] https://github.com/gephi/gephi/wiki/Datasets

[2] P. Gleiser and L. Danon, Adv. Complex Syst. 6, 565 (2003).

[3] Heidler, R., Gamper, M., Herz, A., Eßer, F. (2014): Relationship patterns in the 19th century: The friendship network in a German boys’ school class from 1880 to 1881 revisited. Social Networks 13: 1-13.

[4] Pamela Fletcher and Anne Helmreich, with David Israel and Seth Erickson, “Local/Global: Mapping Nineteenth-Century London’s Art Market,” Nineteenth Century Art Worldwide 11:3 (Autumn 2012). http://www.19thc-artworldwide.org/index.php/autumn12/fletcher-helmreich-mapping-the-london-art-market.

[5] http://orbis.stanford.edu/

[6] Elijah Meeks and Karl Grossner, “Modeling Networks and Scholarship with ORBIS,” Journal of Digital Humanities (2012).

 

Telling, counting, and temporal perspectives

Source: Telling, counting, and temporal perspectives

In a thoroughly insightful essay on the relationship between telling (erzählen) and counting (zählen), German media theorist Wolfgang Ernst describes how historical narratives have developed over time.1 The earliest historical narratives were written in the form of annals, with one event following another. This genre persisted in the West from antiquity through the middle ages, but intellectual developments in the Renaissance and the Enlightenment slowly gave form to the discipline we now know as history. The historian not only recounts events in a chronology, but also seeks to explain why one event led to another, and thus how the past serves as the precedent for the present. As Ernst suggests: “chronology may supply order in the temporal arrangement of events, but it does not supply explicit patterning, and that is what separates proper history from chronicles and annals.”2 For Ernst though, the etymological link between telling and counting is suggestive of a deeper connection between these two acts of writing the past. Telling, or re-counting, still involves a kind of adding up of past events, even if these become embedded in a broader narrative structure.

For digital art historians, keeping this connection in mind is important, as digital means for telling historical narratives rely more and more on counting-based techniques. The timeline is a prominent and accessible digital tool, with many free and easy-to-use services available, such as Timeline JS or TimeMapper. These tools operate by drawing upon a database or spreadsheet of events, turning entries from the spreadsheet into interactive and aestheticized end products that can be embedded in blogs, websites, or digital journal articles. In form and structure, the database is perhaps closer to the annals or chronicles of antiquity than it is to more recent historians’ accounts, and yet these database-fueled timelines are used to illustrate histories in the modern sense. Ernst certainly does not set up telling and counting as a hard and fast dichotomy, and we should not feel compelled to pursue one at the expense of the other. In many ways, digital scholars are developing new ways of writing history that integrate modes of telling and counting simultaneously. Still, it can be useful to schematize the ways in which we write history, or scholarship more generally, and try to place these within the broader field of intellectual activity, across both time and space. Ernst is motivated to historicize the act of writing history itself, and this kind of perspective can help us as digital scholars to strengthen and better define our scholarly activities.
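
That “drawing upon a spreadsheet” is concrete enough to sketch. The snippet below converts a CSV of events into the kind of JSON structure Timeline JS consumes; the field names follow my reading of the TimelineJS JSON format and should be treated as an approximation, and the events themselves are placeholders from storage-media history:

```python
import csv, io, json

# Placeholder spreadsheet of events: year, headline, description
rows = io.StringIO(
    "year,headline,text\n"
    "1956,Hard disk drive,IBM ships the RAMAC 305\n"
    "1971,Floppy disk,8-inch floppies reach the market\n"
)

# Each spreadsheet row becomes one event object in the timeline
events = []
for row in csv.DictReader(rows):
    events.append({
        "start_date": {"year": row["year"]},
        "text": {"headline": row["headline"], "text": row["text"]},
    })

timeline = {"events": events}
print(json.dumps(timeline, indent=2)[:60])
```

The counting side of the telling/counting pair is visible here: the narrative the viewer sees is generated row by row from the underlying tabulation.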

For my own timeline, I decided to create a history of digital storage media, starting with pre-computer precedents of data storage and moving to the present day. I got the idea to pursue this project based on research I’m currently engaged with regarding new media art, and building upon the work of Matt Kirschenbaum in the book Mechanisms: New Media and the Forensic Imagination. In this book, Kirschenbaum argues that the analysis of new media art has to look beyond what’s happening on the screen, and also pay attention to the material technologies that the work is using, including hard drives, specific operating systems, etc. Reading this book got me interested in the history of storage media. Although we don’t necessarily think about these things when we are transferring files to an external hard drive or putting material for a presentation onto a USB drive, these storage technologies have a rich history, and have developed in response to market forces, the work of previous researchers, and so on.

Establishing a history for these storage media, then, can greatly help research into new media art, providing a material context within which different works operated over time. Was an artwork originally distributed on a floppy disk? A particular web browser? These questions not only have an impact on how a work might be interpreted, but are also hugely important for the preservation of the work. Indeed, I’m learning more and more that interpretive and contextualizing activities for new media art cannot be separated from preservation activities—with new media art, the work of critic, curator, and archivist all intermingle. At the center of these activities is a necessary attention to the material aspects of new media art. As more and more new media art (hopefully) enters museums, galleries, art history curricula, and textbooks, a strong sense of the history of digital technology will need to develop as a corollary to these practices. In the history of painting, it would be a grave mistake to conflate watercolor and tempera; it would be an equally critical error to confuse a work on a CD-ROM with a work on a floppy disk.

Although this timeline was mostly just an exercise, I do think it demonstrates the potential for this kind of resource on a museum website in conjunction with a digital art collection or as part of a course pack for a class on digital art. I tried to focus on the introduction of different storage media, but I could also envision a similar timeline with a much finer-grained level of detail, outlining the many research advances that contributed to the hard disk drive, for example. Another interesting addition could be to intersperse different digital artworks into this history, demonstrating how innovative artworks responded to and utilized the newest technologies. This could open up a huge variety of potential research questions: how do the form, structure, and content of digital artworks change as storage media grow more capacious? Following Kirschenbaum, the variety and depth of research questions for new media art greatly expands when you are no longer just interested in the look, feel, and functionality of the digital artwork, but also begin to investigate the artists’ and artworks’ intersections with the historical development of technologies.

NOTES

[1] Wolfgang Ernst, “Telling versus Counting,” in Digital Memory and the Archive (Minneapolis: University of Minnesota Press, 2013), 147-157.

[2] Ibid., 152.

Visualization as a research method

Source: Visualization as a research method

From the beginnings of art history as a distinct discipline, art historians have employed means to visually demonstrate their arguments, often in ways that conceptually fit their line of argumentation. A prime example of this is Heinrich Wölfflin’s essay on linear and painterly styles in all manner of art, including painting, sculpture, and architecture.1 While I won’t go into too much detail about this essay, Wölfflin argues for a classification of two dominant modes of art making in early modern Europe, the linear and the painterly, a designation that also generally lines up with the distinction between Renaissance and Baroque styles. Wölfflin’s primary method for making this argument is to compare two works of art, one exhibiting characteristics of the linear and the other characteristics of the painterly. Wölfflin’s argument would not have been as compelling without this visual demonstration, but part of what makes the demonstration so persuasive is the way in which it mirrors and models the structure of his rhetorical and textual argumentation: pairs of traits defined in opposition, illustrated by opposing visual examples.

Across technological changes from the slide projector to PowerPoint, this method of comparative visual analysis has remained an essential component of art historical work, both in the classroom and in journal articles and monographs. Although this kind of demonstration has been used to support arguments and methods far afield from Wölfflin’s formalism, these diverse uses all share the close reading of a small set of visual artifacts to bolster a text-based argument. In contrast to these tried and true methods of visual analysis and demonstration, digital research methods deliver new means to analyze huge bodies of visual and textual material, and to generate graphical representations of this analysis—more commonly termed “visualization.” If Wölfflin’s revolutionary method of placing two images side-by-side for comparison helped to establish a unique line of inquiry for art historical scholarship to pursue, what is the potential for visualization?

Lev Manovich and his colleagues at the Software Studies Initiative have developed a suite of tools for art historians to begin to answer precisely this question. As Manovich argues in his feature article for the inaugural issue of The International Journal for Digital Art History, humanities researchers need to understand the principles behind ‘data science’ and how contemporary society thinks through ‘big data’ in order to pursue computationally driven research. Computational methods have the potential to expand the kinds of inquiry and research available to art historians, but it is imperative that art historians first have a grounding in working with data.2

As important as it is to understand how data-driven methods differ from traditional modes of art historical research, I think it can be just as helpful to emphasize the continuities. ImagePlot, one of the tools developed by the Software Studies Initiative, takes individual visual artifacts as data points and arranges them along an X-Y grid according to a variety of characteristics, such as hue, saturation, date of creation, and so on. In one example of ImagePlot in action, a Software Studies Initiative researcher, Jeremy Douglass, demonstrates how the tool can be used to analyze Mark Rothko’s body of work, illustrating how his use of color developed over time, where patterns emerge in his body of work, and what artworks might be outliers to these trends. Both Wölfflin and ImagePlot treat individual artworks as data points; the difference between these two approaches is the scale for analysis. While Wölfflin focuses in on two works at a time, ImagePlot enables researchers to scale up to an entire corpus of work.
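
The measurements a tool like ImagePlot works from are ordinary image statistics. As a rough illustration (this is not ImagePlot’s actual code), average hue and brightness can be computed from pixel values with Python’s standard library and used as plot coordinates; the “paintings” here are stand-in pixel fields rather than real images:

```python
import colorsys

def mean_hue_brightness(pixels):
    """Average hue and value (brightness) over RGB pixels in 0-255."""
    hues, values = [], []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        hues.append(h)
        values.append(v)
    n = len(pixels)
    return sum(hues) / n, sum(values) / n

# Stand-in 'paintings': a red field and a dark blue field
red_field = [(200, 0, 0)] * 4
blue_field = [(0, 0, 100)] * 4

print(mean_hue_brightness(red_field))  # (0.0, ~0.78)
```

Plotting each artwork at its (hue, brightness) coordinate, with the image thumbnail as the marker, is essentially the scaled-up comparison the paragraph above describes.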

Although an art historical researcher using a tool like ImagePlot may still be interested in discovering something about an individual artwork, she is not limited to this sole visual artifact (or a handful) as her unit of analysis. Computational processes also make it possible for an art historian to treat a huge corpus of artifacts as her unit of analysis—and to be able to empirically investigate this corpus in scholarly rigorous ways.

An early example of this is Jules Prown’s analysis of John Singleton Copley’s American patronage.3 Working in the 1960s, Prown and his research team amassed data about Copley’s patrons, and then used computational methods to find patterns, such as relationships between political affiliations, religion, size of painting, and gender, many of which would not have been possible to uncover by slowly comparing one painting to another. Even more, these methods add an empirical weight to the claims that Prown makes about Copley and his American career. Prown is not just speaking from the position of a scholar who has spent sufficient time studying Copley to be able to make authoritative statements about his body of work; he can also point to concrete methods used to process the data.

I do not want to suggest some false division between traditional art historical research methods as ‘subjective’ and computational methods as ‘objective.’ Traditional research methods have scholarly rigor and weight to them, and computational methods are necessarily driven by particular and contingent decisions made by the researcher. Subjectivity and objectivity are rarely helpful evaluative categories, and I don’t think they are that useful in this discussion either. What I would suggest is that computationally driven tools allow art historians to apply long-familiar scholarly methods and theoretical approaches to entire bodies of work, something that previously available tools for analysis could not support with any real rigor. Returning to the point I began this post with, these computational tools also generate visual demonstrations of this analysis that compellingly capture and communicate scholarly findings.

ImageQuilt of Kurt Schwitters’ typographical works

Visualization, however, is not just a means of demonstration, but can also itself be a tool for analysis. I found this out through my own experimentation with the ImageQuilts app, which allows the user to arrange dozens of images into a ‘quilt.’ While this is a tool that can just be used to make a pleasant looking collage of images, it has analytic applications as well. I made an image quilt of Kurt Schwitters’ typographical works, and used the app to arrange the images into rough categories: (in order from top left to bottom right) pieces that use letters to create figural representations, pieces that heavily layer letters to create dense textural surfaces, designs for magazine covers, and finally more typical collage works that make use of text-based material. As a quick exercise, this kind of visualization can help us to make sense of (a small selection of) Kurt Schwitters’ varied career, which in turn might help us to formulate a research question. Indeed, the process of actually arranging the images got me thinking about how I might devise research into Schwitters’ career. This visualization could easily be built upon, with annotations or accompanied by a thorough, critical description of Schwitters’ typographical work. While this visualization is not as sophisticated as ImagePlot, or many of the other tools out there, ImageQuilts is still a great example of an easily accessible tool that art historians can use to analyze and illustrate in new ways.

NOTES

[1] Heinrich Wölfflin, “Linear and Painterly,” in Principles of Art History (1932; reprint ed., New York: Dover Books, 1950), pp. 18-72.

[2] Lev Manovich, “Data Science and Digital Art History,” International Journal for Digital Art History, Issue #1, June 2015, pp. 12-35

[3] Jules Prown, “The Art Historian and the Computer,” in Art as Evidence: Writings on Art and Material Culture (New Haven, CT: Yale University Press, 2001).

#DAH symposium and Network Visualizations

Source: #DAH symposium and Network Visualizations

This week followed a somewhat different structure, as we had a “Special Day” on Monday, when the Digital Art History conference was being held at Duke. Unfortunately, I missed the only discussion/workshop portion this week, which was held on Wednesday. That is too bad, as I probably could have used some additional discussion time on our readings about data in Digital Art History. Fortunately, I was able to attend the last two presentations of the #DAH conference at Duke. Ingrid Daubechies, a professor of mathematics at Duke, has been working with museum curators and conservators, developing algorithms that go so far as to detect the wood grain of frames that shows up on X-rays, in order to remove that shape and texture for more precise cleaning and conservation. C. Griffith Mann of the Metropolitan Museum of Art closed out the presentations with “Museums in a Digital World: Engaging Audiences in the Collection”, an overview of his work with digital tools for museum visitors at the Cleveland Museum of Art, like an interactive digital wall. It’s nice to see the connection between the behind-the-scenes metadata in the museum database and the end result of a playful, interactive museum-going experience.

Our readings this week were themed by data. I would like to learn more about using textual data and using machine-readable text. I took English professor/Digital Humanist/blogger Ted Underwood’s advice[i] and checked out JSTOR’s Data for Research API, which is a free tool that gives access to 7 million journal articles. The really cool thing about this API is that it includes additional tools beyond search and machine-readable text: information about word frequencies, citations, topic modelling, and visualization tools.
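
Word frequencies of the sort DfR returns are easy to reproduce on any machine-readable text you already have in hand. A minimal stdlib sketch (the sentence is just a placeholder, and the tokenizing is far cruder than what JSTOR actually does):

```python
import re
from collections import Counter

text = ("The archive preserves the prints, and the prints "
        "document the archive's history.")

# Tokenize: lowercase word characters only, a crude stand-in
# for the processing behind DfR's word-frequency lists
tokens = re.findall(r"[a-z']+", text.lower())
freq = Counter(tokens)

print(freq.most_common(2))  # [('the', 4), ('prints', 2)]
```

Frequency tables like this are also the usual starting point for the topic modelling and visualization tools the API layers on top.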

This past Friday I plugged in over 100 relationships (for now sticking with mutual relationships) for my Kress project. I chose 3 relationship types (married to, related to, worked with) and 3 main entity types: Artists (Designers/Engravers), Authors (Humanist writers/Translators/Editors), and Printers (Printing Houses/Publishers/their spouses). I included spouses (read: wives), as these women are often the link between generations of printing houses in Basel. There was intermarrying between the major competitors from the beginning of the printing boom in this city; my aim was to show the use of marriage as a business tool to benefit multiple families. While this intermarrying did not form some kind of formal conglomerate, once I saw just how many women I was listing who were connected as a daughter, wife, or in-law to various printing houses, I came to speculate that this intermarriage at least fostered healthy competition in Basel and strengthened the city’s role as a Northern European center of publishing. It also suggests a potential research area of women’s work in these printing houses: were they supportive housewives, or did they play an active, if unacknowledged, role in the business, in creating partnerships between artists and engravers, in negotiating contracts, proofreading, physically running the presses?

A network visualization is a great way to tell a story, but like other data visualizations we talked about in class, I think there is potential for misleading the viewer if it is not properly described and contextualized. For example, “Martin Luther” as a node is rather small in my Google Fusion table. One could mistakenly read this as an error for such a key figure of the Reformation in the 16th century. However, the story I am telling is of networks in Basel only, and it appears that, as far as working with Basel publishers goes, Luther stuck with only the Curio family, and specifically its patriarch Valentin Curio. Some nodes are also larger due to a high number of connections through marriage and bloodlines, while not reflecting a high production rate or anything like that. Playing with this network visualization, which I will share in its final stage, has been really instructive about the choices we make with data. Even my fairly simple scheme (3 entities, 3 relationship types, only mutual relationships) led me to think critically about just how exhaustive I want to make my chart, or how exhaustive I am able to make it, given that the data available to me is certainly incomplete. It also reinforces the importance of expertise in a specialization of art history. I’m looking forward to running this list of relationships by the art historian I’m working with; while I am working more from “raw data” here than deep historical knowledge, Miranda will certainly be able to brainstorm more publications and partnerships that she has come across in her research.
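
The node sizing that makes “Martin Luther” look small falls directly out of connection counts. Here is a sketch of the underlying tally over the kind of (person, relationship, person) spreadsheet I described; the rows are illustrative inventions, not my actual Kress data, and the marriage link in particular is made up:

```python
from collections import Counter

# Illustrative (person_a, relationship, person_b) rows,
# mutual relationships only
rows = [
    ("Valentin Curio", "worked with", "Martin Luther"),
    ("Valentin Curio", "married to", "Anna Froben"),   # invented link
    ("Anna Froben", "related to", "Johann Froben"),
    ("Johann Froben", "worked with", "Erasmus"),
    ("Johann Froben", "worked with", "Hans Holbein"),
]

# Node size in the visualization is proportional to this count
connections = Counter()
for a, _, b in rows:
    connections[a] += 1
    connections[b] += 1

print(connections["Martin Luther"], connections["Johann Froben"])  # 1 3
```

Even in this toy tally, Luther sits at the edge of the Basel network while a printing-house patriarch accumulates links, which is exactly the reading the visualization needs its caption to support.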

Below is my network visualization as-is from Google Fusion charts. I’m going to play with other visualization apps like Palladio. The way I organized my spreadsheet did not separate by gender, and I want to make that a focus of my chart, maybe just by color-coding.

 

[i] Ted Underwood, “Where to Start with Text Mining,” The Stone and the Shell. http://tedunderwood.com/2012/08/14/where-to-start-with-text-mining/

 

Open data for art history research

Source: Open data for art history research

Although sharing research data has long been a concern for researchers in both the social and hard sciences, the issue came to the fore around 2010, when the National Science Foundation began requiring that all grant proposals include a Data Management Plan covering the standardization, archiving, and sharing of data.1 As the NSF is one of the major funders of scientific research in the United States, this requirement has had huge implications for researchers, who now not only need to worry about collecting and analyzing data, but also have to develop strategies for preserving that data and making it accessible. As data collection is often difficult and resource intensive, the benefits of sharing research data are potentially huge, enabling future researchers to build upon existing datasets in ways the original researchers never imagined. However, as Christine Borgman discusses, the discourse surrounding data sharing is dense:2

[There] are thick layers of complexity about the nature of data, research, innovation, and scholarship, incentives and rewards, economics and intellectual property, and public policy. Sharing research data is thus an intricate and difficult problem—in other words, a conundrum.

An engaging body of literature about the challenges and possibilities of sharing scientific data has begun to grow, especially in the past several years, with contributions from academic librarians, data curators and archivists, and scientists themselves. Some of the key issues from a library science perspective include negotiating the role of librarians and archivists in helping to manage research data3 and understanding how to incorporate good data management practices into researchers’ existing workflows.4 While this rich discourse continues to evolve around data sharing in the sciences, I wonder about the possibility of similar discussions in the humanities, and particularly for (digital) art history research. What venues are there for this discourse in the humanities? What are the benefits of sharing art historical research data, and what are the potential issues? There is clear motivation and a high level of importance for this discourse in the sciences (not least because of the huge amounts of funding money at stake), but is this discourse just as critical for art historians?

As Borgman suggests, scientists have a plethora of concerns about sharing data, such as how to ensure that re-used data is properly attributed, how data sharing might help to contribute to a faculty member’s tenure considerations, or how to guard against potential misuses of data. Art historians would likely have to negotiate many of these issues as well if data sharing were to become widespread across the discipline. However, data sharing in practice would look very different in art history than it does in the sciences, and the unique nature of art history research as it has traditionally been practiced would raise some serious hurdles.

For one, art history is a far more individualized discipline than any of the social or hard sciences. Art history research is often characterized as long and intense contemplation of images, previous scholarship, and other historical documents by a lone scholar, who deliberately writes up her findings. In this traditional research model, there is not a lot of opportunity for sharing data. Art historians are also notorious perfectionists, who dislike sharing anything before it is completely squared away and ready to be published. Another difficulty is that there is very little standardized data in ‘traditional’ art historical research. While many different social scientists would (hopefully) all be able to make sense of another researcher’s dataset from a large survey, any two given art historians may employ radically different and idiosyncratic methods of analyzing images, with perhaps a great deal of that analysis remaining internalized and never explicitly or systematically recorded. Art historians working on living artists or studying current cultural practices may also have incredibly sensitive data, and may not want to share the raw data out of respect for their subjects.

Given these difficulties, many art historians may question whether it’s worthwhile to even think about sharing data. If other art historians will be able to read and build upon the research once it’s published in a journal or as a monograph, then what’s the point? Although the kind of widespread data sharing common in the sciences will perhaps never catch on with ‘traditional’ art historians, the increasing importance of the digital in art history is rendering that kind of ‘traditional’ research more and more outmoded. To take full advantage of the opportunities of digital art history research, art historians will have to get comfortable with sharing their datasets, as well as their tools and methods.

There is already real evidence of this shift in the art history disciplinary culture with the working practices of digital scholars especially. For example, in Thomas Padilla’s interview with digital scholar Matthew Lincoln, Lincoln talks about how he rigorously documents his data in order to facilitate sharing the data and having it be used by a wider set of researchers.5 He modeled his own data habits off of other scholars that he admired in the field, taking up these practices as a kind of de facto standard. While Lincoln’s willingness to share his data—and the steps he takes to make his data usable—is exemplary, these practices were taken up on his own volition, and not learned in a methods course or another educational or professional development setting. Digital art history can develop at an accelerated pace if researchers share their data, but they have to learn best practices for standardizing, cleaning up, and making that data accessible. What are the best venues for art history as a discipline to negotiate and articulate those best practices? How should those best practices be taught to students, as well as scholars already established in the field? These are some of the questions that art history has to tackle.

For my own part, I would also be very excited to see more datasets released by museums, similar to what was recently released by the Tate.6 In addition to cataloging data, I would be interested to see more datasets released containing information on conservation and preservation actions artworks have received over time. As one of my main research interests is contemporary art preservation, it would be quite intriguing to see how conservation practices have evolved over time, as well as how conservation is recorded differently for different kinds of art. What language is used in the conservation records for Renaissance paintings versus contemporary sculpture? It might be interesting to visualize these differences through a word cloud, for instance.
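
The word-cloud comparison could start from nothing more than term counts per record group. A hedged sketch, with invented one-line “records” standing in for real conservation data from a museum dataset:

```python
from collections import Counter

# Invented sample records; real data would come from a museum's
# released conservation documentation
renaissance = "consolidated flaking tempera; retouched losses; revarnished panel"
contemporary = "replaced failing monitor; migrated video file; documented artist intent"

def vocab(record):
    """Crude term counts for one group of conservation records."""
    return Counter(record.replace(";", "").lower().split())

ren, con = vocab(renaissance), vocab(contemporary)

# Terms unique to each group: the raw material for contrasting
# word clouds of Renaissance painting vs. contemporary sculpture
only_renaissance = set(ren) - set(con)
only_contemporary = set(con) - set(ren)

print(sorted(only_renaissance)[:3])
```

Even this toy comparison makes the point: the two conservation vocabularies barely overlap, which is exactly the kind of difference a word cloud would make visible.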

Although I’ve mostly been talking about actions that scholars need to take to share their data, museums clearly need to be a part of this discussion as well. Prominent institutions like the Tate and the Getty are setting the example by releasing more and more information for researchers, but how might these same practices trickle down to smaller institutions? What role should these institutions play in establishing best practices for how data should be shared?

NOTES

[1] National Science Foundation. (2010). NSF data management plans. http://www.nsf.gov/pubs/policydocs/pappguide/nsf11001/gpg_2.jsp#dmp

[2] Christine L. Borgman, “The Conundrum of Sharing Research Data,” Journal of the American Society for Information Science and Technology 63, no. 6 (2012): 1059–1078. doi:10.1002/asi.22634.

[3] Sheila Corrall, “Roles and Responsibilities: Libraries, Librarians and Data,” in Pryor (ed.), Research Data Management (London: Facet, 2012), 105-133.

[4] Jillian C. Wallis, Christine L. Borgman, Matthew S. Mayernik, and Alberto Pepe. “Moving Archival Practices Upstream: An Exploration of the Life Cycle of Ecological Sensing Data in Collaborative Field Research,” International Journal of Digital Curation 3/1 (2008) 114-126. http://www.ijdc.net/index.php/ijdc/article/viewFile/67/46

[5] Thomas Padilla, “Data-Driven Art History: Framing, Adapting, Documenting,” DH+lib (27 October 2015). http://acrl.ala.org/dh/2015/10/27/datapraxisart/

[6] http://research.kraeutli.com/index.php/2013/11/the-tate-collection-on-github/


© 2021 Dressing Valentino
