Dressing Valentino

On the history of decorative art, design, and film. Doing Digital Art History


Pedagogy & Digital Media

Source: Pedagogy & Digital Media

When I heard Jaskot’s talk, I realized that I was missing out on a new and interesting approach to art history. I had previously used technology to record, organize, and even represent my work as part of a larger conventional framework. I had not used technology to help me better understand my work or to help me draw new conclusions.  —Nancy Ross, Dixie State University

This comment from Nancy Ross’s article “Teaching Twentieth-Century Art History with Gender and Data Visualizations” gets at the heart of digital humanities research as we’ve understood it in this class.  For most scholars, digital humanities tools are a means of producing an accompanying visualization.  This neglects how digital humanities tools can actually serve as a new means of interpretation and scholarly output—an expression of research in itself.  Ross goes on to describe how she used a noncanonical textbook paired with a group networking visualization project to help her undergraduates better grasp the implications of the research they were conducting on women artists and their social-professional networks.  The students responded with enthusiasm and noted how the process of constructing the visualization altered and strengthened the conclusions they had begun to draw before seeing their research in a new form.

In a class session on virtual reality models for architectural spaces, JJ commented that many of the scholars working to put a visualization together found that the process of compiling the data and making it work cohesively was actually far more revealing than the finalized model itself.  Inputting the data dramatically altered the scholars’ understanding of how the space worked, while the final product looked as if the space was always meant to be understood in that way.  Process can be a powerful tool in research.  See, for example, the outputs resulting from George Mason University’s new technology requirement for its art history master’s program.  These projects allowed the students to experiment with new media relevant to their professional interests while exploring previously unforeseen connections and research conclusions facilitated by their software.

In terms of pedagogy, digital humanities projects can prove an ideal form for student engagement.  Randy Bass of Georgetown’s Center for New Designs in Learning & Scholarship provides a comprehensive breakdown of learning approaches made possible by assigning digital projects.

Distributive Learning – the combination of growing access to distributed resources and the availability of media tools by which to construct and share interpretation of these resources allows for distribution of classroom responsibility to students.
Authentic Tasks and Complex Inquiry – the availability of large archives of primary resources online makes possible assignments that allow for authentic research and the complex expression of research conclusions.
Dialogic Learning – interactive and telecommunications technologies allow for asynchronous and synchronous learning experiences and provide spaces for conversations and exposure to a wide array of viewpoints and varied positions.
Constructive Learning – the ability to create environments where students can construct usable projects that involve interdisciplinary, intellectual connections through the use of digital media.
Public Accountability – the ease of transmission of digital media makes it easy to share work, raising the stakes of participation due to the possibility of public display.
Reflective and Critical Thinking – in aggregate, learning as experienced within digital media now available to pedagogues contributes to the development of complex reflective and critical thinking that educators wish to instill in their students.

In my own learning experiences, I’ve found that engaging with new media with an assortment of these six learning approaches in mind allows me to conceive of my research in a broader context and for a more diverse audience while still delving deeply into the subject.  Like Nancy Ross’s students, my attention was sustained for much longer and in manifold ways by having to think critically about making the platform or software work for my purposes.

To see how one particular new media platform for visual bookmarking can serve as a means of facilitating research, keep reading.

Conceiving of social media as another teaching platform for students.

I use Pinterest with embarrassing regularity—both as a personal indulgence in planning the minutiae of my theoretical future home and as a platform for more scholarly endeavors that incorporate various media.  Other than the lack of tagging capabilities, the site is beautifully suited to research and reference.

For example, in my Collections Development course, Professor Mary Grace Flaherty assigned a final project in which we developed a collection for a specific population.  I chose to create a core collection for book artists.  Instead of simply compiling a bibliography of resources, I created a Pinterest board to host a more dynamic catalogue.  Why Pinterest, you may ask?  One obvious advantage is that it embeds video directly into the pin (AKA a catalogue entry, in this case).  Of far more importance, however, are Pinterest’s networked search functions.  As mentioned, Pinterest doesn’t allow for tagging of pins to simplify searching by category within a single board.  It does, though, allow for one keyword search function and four different browsing functions across boards and pins.

Let me break those five functions down for you:

A keyword search function that seeks out pins using four limiters: searching across all pins on Pinterest; searching across your own pins; searching for other pinners; or searching for boards.  This search function also suggests terms to allow for greater granularity.
A browsing function that allows users to see other pins related to a particular pin.
A browsing function that allows pinners to see other boards with items related to a particular pin.
A browsing function that allows pinners to see other boards on which a particular pin is pinned.
A browsing function that allows pinners to see what else from a particular site has been pinned on Pinterest.

This sort of searching and browsing turns Pinterest into a highly linked catalogue and thesaurus.  One pin can create an entire network of related items, which turns the items in the Book Artists’ Core Collection into a starting point for practitioners or scholars to conduct more in-depth research into a project.  When I began the research that inspired this collection (for a book arts class, which also has its own reference board), Pinterest allowed me to delve deeply into contemporary and traditional methods for emulating every aspect of a fourteenth-century British book of hours.  It also provided inspiration for how to shape that book of hours into an artist book documenting the history of the book.  By identifying one relevant pin on parchment & papermaking, regional binding methods, or paleography & typography, I could follow those pins to related resources by using the browsing functions, or even just by following the pin back to its source.
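The browsing functions described above effectively turn pins and boards into nodes in a linked graph, where one starting pin opens onto a whole network of related resources.  As a minimal illustrative sketch (this is not Pinterest’s actual API; the pin names and data layout are hypothetical), the hopping-between-pins behavior can be modeled as a breadth-first walk over “related” links:

```python
from collections import deque

# Hypothetical linked catalogue: each pin lists its related pins and the
# boards it appears on (names invented for illustration).
catalogue = {
    "parchment-making": {"related": ["papermaking", "regional-binding"], "boards": ["Book Arts Reference"]},
    "papermaking":      {"related": ["paleography"],                     "boards": ["Materials"]},
    "regional-binding": {"related": [],                                  "boards": ["Bindings"]},
    "paleography":      {"related": ["typography"],                      "boards": ["Scripts"]},
    "typography":       {"related": [],                                  "boards": ["Type"]},
}

def related_network(start_pin):
    """Breadth-first walk over 'related' links, mimicking a researcher
    hopping from one pin to its related pins and onward."""
    seen, queue = {start_pin}, deque([start_pin])
    while queue:
        pin = queue.popleft()
        for nxt in catalogue[pin]["related"]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

network = related_network("parchment-making")
```

Starting from the single parchment pin, the walk surfaces every connected resource in the toy catalogue, which is the sense in which one pin becomes a starting point for deeper research.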

By using Pinterest as the host for my catalogue, I also skirted the limitations inherent in any collection—one can only develop each topic within a collecting area up to a point before the resources fall outside of the collecting scope.  Pinterest lets users control when they’re deep enough into a topic, since the boundaries of the catalogue are so malleable.  For instance, my core collection doesn’t contain an encyclopaedia of saints or of regional medieval plants.  That information is highly relevant to a book artist working on a book of hours, but it’s too granular for a core collection of resources.  For more on the Book Artists’ Core Collection, see this blog post.

Articles on digital pedagogy:

Robert DeCaroli, “New Media and New Scholars,” presentation to Rebuilding the Portfolio, July 17, 2014, http://arthistory2014.doingdh.org/wp-content/uploads/sites/3/2014/07/CLIO.pdf
Kimon Keramidas, “Interactive Development as Pedagogical Process: Digital Media Design in the Classroom as a Method for Recontextualizing the Study of Material Culture,” Museums and the Web 2014: Proceedings, http://mw2014.museumsandtheweb.com/paper/interactive-development-as-pedagogical-process-digital-media-design-in-the-classroom-as-a-method-for-recontextualizing-the-study-of-material-culture/
Digital pedagogy teaching tools: Art History Teaching Resources, Smarthistory, and Art Museum Teaching
Nancy Ross, “Teaching Twentieth-Century Art History with Gender and Data Visualizations,” Journal of Interactive Technology and Pedagogy, no. 4, http://jitp.commons.gc.cuny.edu/teaching-twentieth-century-art-history-with-gender-and-data-visualizations/
Gretchen Kreahling McKay, “Reacting to the Past: Art in Paris, 1888-89,” http://arthistoryteachingresources.org/2016/03/reacting-to-the-past-art-in-paris-1888-89/

Folksonomies in the GLAM context

Source: Folksonomies in the GLAM context

Folksonomies, also known as crowdsourced vocabularies, have proved their usefulness time and again in deepening user experience and accessibility, especially in cultural heritage institutions.  Often termed GLAMs (Galleries, Libraries, Archives, and Museums), these institutions can use folksonomies to tap into the knowledge base of their users to make their collections more approachable.

For example, when one user looks at a record for Piet Mondrian’s Composition No. II, with Red and Blue, she may assign the tags colour blocking, linear, and geometric.  Another user may tag the terms De Stijl, Dutch, and neoplasticism.  By combining both approaches to the painting, the museum ensures that those without contextual knowledge have just as much access to its collections as those with an art historical background.  Linking tags can allow users to access and search collections at their needed comfort level or granularity.  It also frees up time for employees handling those collections, since they can depend—at least in part—upon the expertise and observations of their public.
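Under the hood, this kind of tag linking is typically just an inverted index: each tag maps to the set of records it has been applied to, so layperson and specialist vocabularies end up pointing at the same object.  A minimal sketch (the record ID and function names are invented for illustration, not any particular collection system):

```python
from collections import defaultdict

# Inverted index: normalized tag -> set of record IDs carrying that tag.
tag_index = defaultdict(set)

def tag_record(record_id, tags):
    """Attach user-supplied tags to a record, normalizing case."""
    for tag in tags:
        tag_index[tag.lower()].add(record_id)

# Two visitors tag the same Mondrian record from different knowledge bases.
tag_record("mondrian-comp-ii", ["colour blocking", "linear", "geometric"])
tag_record("mondrian-comp-ii", ["De Stijl", "Dutch", "neoplasticism"])

def search(tag):
    """Look up records by tag at whatever vocabulary level the user knows."""
    return tag_index.get(tag.lower(), set())
```

A search for “geometric” and a search for “neoplasticism” now retrieve the same painting, which is exactly the access-at-any-comfort-level behavior described above.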

The diversity of tags also increases findability for educational uses of online collections.  If an elementary school teacher wants to further her students’ understanding of movements like cubism, she can just open up the catalogue of her local museum to explore the paintings tagged with ‘cubism.’  This becomes ever more important as field trips grow increasingly unavailable to public schools and larger class sizes require more class prep than ever.  With a linked folksonomic vocabulary, the teacher need not fight for the field trip nor dedicate even more time to personally constructing a sampling of the online collection to display.

Crowdsourcing knowledge can also go beyond vocabularies and prove especially useful in an archival context.[1]  When limited information is known about a collection or object and those with institutional knowledge are unavailable (a persistent problem plaguing archives), another source of information is needed.  The best way to find one is to tap those who have the necessary expertise or experience.  For example, Wellesley College’s archive began including digitized copies of unidentified photographs from the archive in the monthly newsletters emailed to alumni.  With those photos, the archive sent a request for any information that alums could provide.  In this way, the archive has recovered a remarkable amount of knowledge about historical happenings around the college.

But folksonomies and other crowdsourcing projects are only useful if institutions incorporate the generated knowledge into their records.  Some gamify the crowdsourcing process in order to engage users, but then fail to follow through on incorporating the results.  Dropping the ball in this way may be due in part to the technical challenges of coordinating user input with the institution’s online platform.  It may also stem from the same fear that many educators hold toward Wikipedia: What if the information provided is WRONG?  Fortunately, both anecdotal and research evidence is proving those fears largely unfounded.[2]  Participating with institutions in crowdsourcing is a self-selective process, requiring users to be quite motivated.  Those interested in that level of participation are going to take the process seriously, since it requires mental engagement and the sacrifice of time.  As for enthusiastic but incorrect contributions, institutions may rest assured that the communities that participate in crowdsourcing efforts are quite willing to self-police their fellows.  If something is wrong or misapplied (e.g., Stephen Colbert’s Wikipedia antics), another user with greater expertise will make the necessary alterations, or the institution can step in directly.

GLAMs are experiencing an identity shift with the increased emphasis on outreach and engagement.[3]  The traditional identity of a safeguarded knowledge repository no longer stands.  GLAMs now represent knowledge networks with which users can engage.  If obstacles hinder that engagement, the institution loses its relevance and therefore its justification to both the bursar and the board.  Crowdsourced projects can break down those perceived barriers between an institution and its public.  Engaging with users at their varying levels and then using that input shows that the institution envisions itself as a member of the community rather than a looming, inflexible dictator of specific forms of knowledge.  Beyond that, though, institutions can use all the help they can get.  As at Wellesley, an organization may be lacking in areas of knowledge, or it may simply not have the resources to engage deeply with all of its objects at a cataloguing or transcription level.  Crowdsourcing not only engages users but also adds to an institution’s own understanding of its collections.  By mining a community’s knowledge, everyone benefits.

From a more data science-y perspective: Experimenting with folksonomies in an image-based collection

For a class on the organization of information, we read an article covering an experiment analyzing the implementation of user-generated metadata.[4]  For those in image-based institutions looking to build on others’ attempts at crowdsourcing, this experiment is a solid place to begin.  Some alterations, however, may prove helpful.  First, I would recommend a larger pool of participants from a broader age range (at least 25-30 people between the ages of 13 and 75), so that the results may be extrapolated with more certainty across a user population.  Second, for the tag scoring conducted in the second half of the experiment, I would use the Library of Congress Subject Headings hierarchy in tandem with the Getty’s Art & Architecture Thesaurus, so as to compare the user-generated data to discipline-specific controlled vocabularies.  I would retain the two tasks assigned to the users, including the random assignment of controlled vs. composite indexes and the five categories for the images (Places, People-recognizable, People-unrecognizable, Events/Action, and Miscellaneous formats).  For data analysis, I would again employ a one-way analysis of variance, since it provides a clear look at the data, even accounting for voluntary search abandonment.  By maintaining these analytical elements (the one-way ANOVAs and the scoring charts), it would be easy enough to compare your findings with those in the article to see if there’s any significant difference in index efficiency (search time or tagging scores) between general cultural heritage institutions and GLAMs with more image-based collections.
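For readers unfamiliar with the statistic, a one-way ANOVA asks whether the mean outcome (here, search time) differs across groups more than within-group noise would explain.  Below is a dependency-free sketch of the F statistic; the three groups are hypothetical search times in seconds for users assigned to different index types, with the numbers invented purely for demonstration:

```python
# Minimal one-way ANOVA F statistic (no external libraries needed).

def f_oneway(*groups):
    """Return the F statistic for a one-way analysis of variance."""
    k = len(groups)                               # number of groups
    n = sum(len(g) for g in groups)               # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: variation of group means around the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: variation of observations around their group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented search times (seconds) under three index conditions.
controlled_vocab = [42.1, 38.5, 45.0, 40.2, 39.8]
folksonomy       = [35.2, 33.8, 37.1, 36.0, 34.5]
composite_index  = [30.1, 29.5, 32.2, 31.0, 28.9]

f_stat = f_oneway(controlled_vocab, folksonomy, composite_index)
```

A large F (here, well above the critical value for 2 and 12 degrees of freedom) would indicate that index type genuinely affects search time; a real analysis would also report the p-value and handle abandoned searches explicitly.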

[1]  L. Carletti, G. Giannachi, D. Price, D. McAuley, “Digital Humanities and Crowdsourcing: An Exploration,” in MW2013: Museums and the Web 2013, April 17-20, 2013.

[2]  Brabham, Daren C. “Managing Unexpected Publics Online: The Challenge of Targeting Specific Groups with the Wide-Reaching Tool of the Internet.” International Journal of Communication, 2012.

Brabham, Daren C. “Moving the Crowd at iStockphoto: The Composition of the Crowd and Motivations for Participation in a Crowdsourcing Application”. First Monday, 2008.

Brabham, Daren C. “Moving the Crowd at Threadless: Motivations for Participation in a Crowdsourcing Application”. Information, Communication & Society 13 (2010): 1122–1145. doi:10.1080/13691181003624090.

Brabham, Daren C. “The Myth of Amateur Crowds: A Critical Discourse Analysis of Crowdsourcing Coverage”. Information, Communication & Society 15 (2012): 394–410. doi:10.1080/1369118X.2011.641991.

Lakhani et al. “The Value of Openness in Scientific Problem Solving.” 2007. PDF.

Saxton, Oh, & Kishore. “Rules of Crowdsourcing: Models, Issues, and Systems of Control.” Information Systems Management 30 (2013): 2–20. doi:10.1080/10580530.2013.739883.

[3] “Throwing Open the Doors” in Bill Adair, Benjamin Filene, and Laura Koloski, eds. Letting Go?: Sharing Historical Authority in a User-Generated World, 2011, 68-123.

[4]  Manzo, Kaufman, Punjasthitkul, and Flanagan. “By the People, For the People: Assessing the Value of Crowdsourced, User-Generated Metadata.” Digital Humanities Quarterly 9, no. 1 (2015).

Further Reading on Crowdsourcing, Social Media & Open Access (drawn from JJ’s class syllabus for 11 April)

Crowdsourcing Projects: Smithsonian Digital Volunteers and the Smithsonian Social Media Policy.
Jeffrey Inscho, “Guest Post: Oh Snap! Experimenting with Open Authority in the Gallery,” Museum 2.0 (March 13, 2013). http://museumtwo.blogspot.com/2013/03/guest-post-oh-snap-experimenting-with.html
Kristin Kelly. Images of Works of Art in Museum Collections: The Experience of Open Access. Mellon Foundation, April 2013. http://msc.mellon.org/research-reports/Open Access Report 04 25 13-Final.pdf/view
Nate Matias, “Crowd Curation: Participatory Archives and the Curarium Project,” https://civic.mit.edu/blog/natematias/crowd-curation-participatory-archives-and-the-curarium-project
NMC Horizon Project Short List, 2013 Museum Edition, http://www.nmc.org/pdf/2013-horizon-museum-short-list.pdf
Karen Smith-Yoshimura and Cyndi Shein, “Social Metadata for Libraries, Archives, and Museums,” OCLC Research Report, Executive Summary only, http://www.oclc.org/content/dam/research/publications/library/2012/2012-02.pdf
For those interested in Museum 2.0, Nina Simon, The Participatory Museum, http://www.participatorymuseum.org/

Experimenting with 3D Scanning

Source: Experimenting with 3D Scanning

Autodesk 123D Catch desktop & smart phone app

Last week, we visited the Ackland Museum to use one of their objects for our first foray into 3D scanning.  I chose a squat, spouted object for my first project.

The phone app is a huge help in terms of starting out with lens distance and the number of photos required for a baseline.  There is a 70-photo cutoff, so don’t get too enthusiastic if you’re using the app.  But if you follow its model (the blue segments), you get the right number of photos with sufficient overlap to construct the 3D render.

This is a sculpture from home, nothing so fancy as an Ackland object, but it lets you see how the app works.

But it’s certainly a lesson in patience.  There are several stages of upload that require users to dredge up the patience we used to exercise when dial-up was our only option.  JJ supposed that the wait time might be part of the way the site processes the photos—maybe it’s constructing the model as it uploads each photo.  If you have internet connectivity issues (the struggle is real, even on our university network), don’t worry too much—I walked too far from the building while the app was ‘Finishing,’ and it asked me if I wanted to continue on my data plan.  Even still, it stayed on that ‘Finishing’ screen for hours.  I finally left the phone alone and went to bed.  When I opened the app the next day, it offered me the option of publishing the upload, which went speedily.  Lesson learned: don’t attempt a project under time pressure.

The Autodesk application is meant as an intro to 3D scanning and potential printing (123D Make), so there are limits on editing the model after it’s been uploaded.  Pretty much all you can do is add tags and a description; there are no tools to manipulate the model itself.  You can download several different file formats, though.  The app also allows you to share the project on various social media platforms or embed it (as seen in the first image).  If you’re new to 3D scanning, this is definitely the way to start.

Agisoft PhotoScan (& Sketchfab)

PhotoScan, on the other hand, is better either for beginners willing to work through a tutorial or for those with a little more familiarity with image manipulation software.  The learning curve isn’t as steep as with Adobe products, however, since the basics of what you need to do (in the order they need doing) show up in the ‘Workflow’ dropdown of the menu.  As you complete each step, the next steps are no longer greyed out, and for each task completed, more of the toolbar becomes available.  For example, once the photos are aligned, the dense cloud constructed, and the mesh built, you can view the various underlying structures of the model.  To help fill in gaps, you can use Tools>Mesh>Close Holes and the software will interpolate what’s missing.  The best part, though, is that the default settings for all of the stages result in a respectable model that renders non-reflective surfaces and textures beautifully.  For example, the model we constructed during class involved a brick wall, and it really did look like an actual brick wall by the time we finished, on just the default (medium) settings.  The only caveat: make sure you’re working on a computer with a good bit of memory behind it—the speed and capability of the processing is limited otherwise.

Once you have the model as you like it, you can use image editing software to alter its aesthetics.  DO NOT make any alteration (including cropping) to the photos while taking them or before aligning them.  Once the photos are aligned, you can crop them in the program to home in on the object.  To adjust the lighting and sharpness, export the material file and the .jpg texture.  Edit the .jpg in your image editing software (it will look like an unrecognizable blob—just go with it) and then feed it back into the model for a reapplication of the texture by going to Tools>Import>Import Texture.

Once the model is perfected, you have several export options.  Did you know that Adobe Reader lets users move around a 3D model?  It’s a fun and approachable option.  The Wavefront .obj option allows you to save without compression, though the file size is large.

For this model:

The photos that align properly show up with green check marks.  I think part of the issue with this model is that I uploaded the photos slightly out of order, thus missing the full scan of the little Frenchman’s beret.  That, the uneven lighting, and the poorly contrasted background all contributed to the editing problems.  If the model were better, it would be worth spending time with the magic wand or lasso tools to edit closer to the sculpture.  For lighting, try for neutral light that casts minimal shadows; bright or harsh lighting is not recommended.  If you’re shooting outdoors, aim for cloudy days.

Sketchfab is a means of publicly sharing projects from 3D rendering programs.  If you have paid for an Agisoft account, you can upload projects directly to Sketchfab.  My account is super boring, since the free trial of PhotoScan won’t allow file export or saving.  But RLA Archaeology has a fascinating and active account, so take a look there for a sampling of what Sketchfab has to offer as a public gallery of projects.

3D Printing

The process of creating the scan and watching such a textured image develop was gratifying (that scan being the one we created in class with the brick wall, not the one reproduced here).  I’m certain that watching the model go through the 3D printing process would be equally fascinating.  But the final product may be less satisfying.  Most printers produce those heavily ringed models that require sanding before they look and feel like a whole—rather than composite—object.

What’s really cool about 3D printing, though, is the assortment of materials you can print with.  For example, the World’s Advanced Saving Project is building prototypes for affordable housing 3D printed from mud.  Some artists are constructing ceramics from 3D models using clay (basically more refined mud).  Still others are using metals for their work.  The possibilities for art, architecture, and material culture are overwhelming.


Social Media and the Changing Role of the Curator

Source: Social Media and the Changing Role of the Curator

Sorry everyone, my blog posts are getting a little out of order the past few weeks. This post refers to Week 12: Public Engagement/Crowdsourcing. Will post entries to make up for the gaps here soon.

This week we are talking not only about the transformation of the word “curator” and its implications in general language, but also about the changing role of the curator in an age of increasing digital projects in the galleries and of social media as a tool for outreach, education, and crowdsourcing projects. Within the past few years we have seen the word used in pop culture as a kind of trendy stand-in for “editor” or “designer” or, in some examples, simply a “picker-outer.” You curate your iTunes playlists, your garden, your Instagram account. It isn’t exactly that this terminology is wrong, but the popular usage seems to have overtaken the museum/gallery-specific use as of late. I think this actually reflects the fact that the work a “traditional” curator of art does is in a state of flux in response to the rise of social media.

Nancy Proctor’s article “Digital: Museum as Platform, Curator as Champion, in the Age of Social Media” [1] was particularly helpful in hashing out the changing nature of curatorial work, and perceptions of it, in museums today. Proctor serves as Digital Editor of the journal Curator and has a decade’s worth of experience implementing digital tools in the New Media Department at the Smithsonian. The table on the first page, borrowed from Smithsonian curator David Allison, gives us the basic gist: change is “in”; stability/stodginess is “out.” “Curators as experts” gives way to “curators as collaborators and brokers.” Instead of published monographs, telling stories. In place of control, collaboration. And taking advantage of all the Web has to offer in terms of social media is very much “in.” One analogy Proctor cites from Steven Zucker, dean of the School of Graduate Studies at FIT, also captures the shift: “[Zucker] has described it as a transition from Acropolis – that inaccessible treasury on the fortified hill – to Agora, a marketplace of ideas offering space for conversation, a forum for civic engagement and debate, and opportunity for a variety of encounters among audiences and the museum.” So what does this look like in practical terms for museums and the jobs of curators? According to Proctor, it means user-generated content, “crowd curation,” forums for online discussion, and highlighting social media exchanges. In short, curators are facilitating, both in the “analog platform” of the museum and online, the incorporation of many voices in selecting art, performing art, and engaging in discourse.

I think even those of us who have studied art history and museum studies in school for a few years, or who have volunteered, interned, or worked in museums, may still have had a certain expectation of what a curator’s work is or isn’t, and that expectation may be challenged and continue to shift as the field does. In my opinion, it is challenged for the better, toward more interesting, relevant, and collaborative work – both interdisciplinary and between the institution and the public. Actually, on a more personal note, part of the reason I went into Library Science (hoping to remain in an art museum or university art department, fingers crossed) is that I wanted to continue to do curatorial work (the content knowledge, collection development, collaboration with curators if I’m in a museum setting) in a capacity that fits my interests and skill set better than the old “stodgy” model of the curator, who focuses mostly on gaining a wealth of knowledge in a niche of expertise. My conception of curatorial work changed, and could align better with my motivation to reach outward with a public always in mind, rather than inward, with my research at the forefront of my career. Of course museums are shifting from this model, and I think collaboration with librarians, archivists, and tech-focused employees makes museums all the nimbler in instituting these changes and dealing with them creatively.

A few years ago I happened on the article “The Power of Non-Experts” [2] by Desi Gonzalez for Hyperallergic, one of my favorite art blogs. I re-read it today in light of this batch of readings about crowdsourcing and social metadata, and although the article is only three years old, it strikes me as kind of quaint that Gonzalez was hired to stand around in the galleries and gather responses, face-to-face, from museumgoers a few years prior to the writing of the Hyperallergic post. The idea that much of the public finds art, especially contemporary art, baffling is not new, nor is the resentment about “not getting it” and being condescended to by an “expert.” Part of the point of the article – that museumgoers each bring their unique experience to art viewing and form their own opinions and interpretations of the work, which may or may not be informed by past art knowledge, a curatorial vision for the exhibition, a docent-led tour, or wall labels – is nothing new either. But now, seeking out and valuing this input from visitors is revitalizing engagement and interactivity, and hopefully inclusivity (see any number of critical pieces on the museum as a space of privilege). I think the ongoing challenge will be deciding how to incorporate the valuable expertise (read: content knowledge and original scholarship) of the curator, who can hopefully retain their voice while in conversation with all the others. It is certainly exciting to learn about the types of programming, exhibit design, and web-based interactives being implemented, but I think the impetus behind it all (the above-mentioned education, inclusivity, and respect for the diversity of visitors) reflects a positive change in the role of the curator and in museums as a civic space of inclusion.

With regards to a GLAMwiki project idea, I have learned through my Kress project that the Ackland has a number of pages of printer’s marks from books published by Northern Renaissance printing houses. While there is of course a Wikipedia page dedicated to explaining what a printer’s mark (or printer’s device) is, it would be interesting to open a Wiki up where the Ackland could partner with institutions that have Rare Book Collections (which encompass museums, archives, libraries, maybe even dealers of rare books?), to put all of these images in one place in order to compare them and also seek knowledge from book experts, as these pages often end up in art museums without a dedicated curator of incunabula (books printed before 1501) or old books generally. I have to give credit to Heather Aiken, my Education partner of the Kress Fellowship, for this idea, but there could be a creative element with a section dedicated to users uploading their original designs of their own personal printer’s mark.


[1] Proctor, Nancy. “Digital: Museum as Platform, Curator as Champion, in the Age of Social Media.” Curator: The Museum Journal 53, no. 1 (2010): 35-43.

[2] Gonzalez, Desi. “The Power of Non-Experts.” Hyperallergic. January 3, 2013. http://hyperallergic.com/62985/the-power-of-non-experts/

DAH Post #11: Crowdsourcing and the sources of power

Source: DAH Post #11: Crowdsourcing and the sources of power

Now that Web 2.0 has become fully integrated into the way we seek, discover, process, and share information, it’s no secret or surprise that the GLAM world continues to refine engagement activities and outreach tools to bring in new and hopefully larger audiences. The increasing deployment of digital platforms to build visitor/user interaction with exhibitions, initiatives, and objects dovetails with the general decline in top-down institutional authority associated with a privileged class of makers and sellers, and with the move away from the reverent focus on the art object in favor of events, process, and interaction. This shifting of priorities and authority, however, is still in tension with the way the art world (and more specifically the art market) has traditionally functioned and continues to function, in which exhibition attention at larger institutions is focused on the names that tend to draw higher prices at auction: still the usual suspects of 20th-century male artists, along with high-earning popular contemporary artists and makers.

In and of itself, I don’t necessarily think this is a bad thing. Though they often receive mixed reactions, exhibitions like MoMA’s mass-appeal shows under Klaus Biesenbach, exploring the likes of Bjork, Yoko Ono, and Marina Abramovic, have the potential to spark in new audiences an interest in art beyond the famous names they came to see. At the same time, such shows can recontextualize a lifetime of work or provide a space to reconsider the scope of a large, tradition-bound art institution.

In “Digital: Museum as Platform, Curator as Champion in the Age of Social Media,” Nancy Proctor, Head of New Media Initiatives at the Smithsonian American Art Museum, explores the reconfiguring of digital media and institutional control amid the proliferation of digital engagement possibilities, as well as the role of the curator in the feedback loop between institution and audience. Proctor is referring specifically to crowdsourcing initiatives that employ user-generated content and collaborative tasks, rather than simply to marketing via social media or mass-appeal exhibitions. In her discussion of the changing role of the curator, Proctor cites the exhibition American Furniture/Googled at the Decorative Arts Gallery in Milwaukee as a model of the way in which the curator is shifting from singular authority to access point for information in the public domain: “Like a node at the center of the distributed network that the museum has become, the curator is the moderator and facilitator of the conversation about objects and topics proposed by the museum, even across platforms not directly controlled by the museum.”1 In another example, Proctor discusses, citing Nicholas Poole, the notion of the “citizen-curator,” whose participation in the interpretation of museums’ collections allows for the building of a rich, complex social history of art.

Once museums open the door of participation to their visitors, however, that door is very hard to shut. In class last week, we discussed the possibilities that relinquishing some control provides for institutions, as well as some examples of movements that take the dispersal of control afforded by social media beyond the point that the institution intended or wanted to allow.

JJ brought up the example of Occupy Museums’ participation in a protest of the Brooklyn Museum for its leasing of space to real estate developers responsible for gentrifying the very neighborhoods that the Brooklyn Museum primarily served. The Brooklyn Museum responded to the protest in a novel way, by adding pieces created by those protesters to its exhibition Agitprop!, which documented artwork geared towards political change. Compare this strategy to the Boston MFA’s dismissal of the initially tongue-in-cheek “Renoir Sucks” movement, which began as an Instagram account and morphed into a group of protesters agitating for the MFA to “take down” its Renoirs on the basis of their alleged suckiness (as well as the comparative overestimation of Renoir in the art market, a serious issue relating to the inextricability of the art market, museums, art historical scholarship, and interpretation of aesthetic value that is arguably the subtext of the Renoir kerfuffle). Museum Director Matthew Teitelbaum’s brief response/non-response marveled, “We live in an era in which authority of the time can be questioned, with many different voices expressed and heard.” Or compare Agitprop! to the Guggenheim’s reaction to a protest/art action by Occupy Museums and the art collective Gulf Ultra Luxury Faction (G.U.L.F.) against its construction of the Guggenheim Abu Dhabi on Saadiyat Island, a development site home to egregious labor violations, according to Human Rights Watch. The art action entailed the protesters’ infiltration of the Guggenheim’s exhibition Italian Futurism, 1909-1944: Reconstructing the Universe during the museum’s crowded pay-what-you-wish evening in order to chant slogans, toss leaflets, and draw attention to the Guggenheim’s ignoring of the plight of migrant workers in choosing to site the new museum on Saadiyat Island. In response to the protest, and seemingly ignoring its basis, Guggenheim Director Richard Armstrong said that construction on the new plant had not yet begun.

These examples demonstrate the possibilities for communication afforded by social media and protest movements (both physical and digital), as well as the ways in which institutions attempt to take back control either by refusing to respond or by developing cooperative patterns of discussion with community members (or inoculating gestures, depending on how you read the Brooklyn Museum’s handling of their protest). Though it is unclear which of the actions above is the most powerful (from the perspective of the museums), it seems pretty clear which option is at least the most successful in the arena of public relations (as well as resonant with the political urgings of some of the artworks displayed inside each museum). It would be quite interesting to study the correlation between the understanding of where curatorial and political power does and should lie as articulated by institutional leaders (and their digital media personnel) and the lines by which their museums strike a balance (or fail to strike a balance) between their various stakeholders.

GLAMwiki project proposal: In the past several posts, I’ve focused on general research in the area of art-and-technology as the basis for data used in various timeline and network visualization applications. Taking the upcoming Art and Feminism Edit-a-thon at Sloan Art Library as inspiration for proposing a GLAMwiki project: there are several notable women artists who contributed in significant ways to the history of art and tech in the latter half of the 20th century whose biographies and impact are only minimally sketched out on Wikipedia, if at all. Artists who experimented with internet art as a specifically feminist form, such as Josephine Starrs, Francesca da Rimini, and Virginia Barratt, former members of the influential 1990s feminist art collective VNS Matrix, would be a good start. Given the lack of representation of and information about women artists on Wikipedia, the underrepresentation of women in the tech sector, and the small percentage of Wikipedians who are women, this proposal seems, to me, particularly on-the-nose.

References

1. ↑ Nancy Proctor, “Digital: Museum as Platform, Curator as Champion, in the Age of Social Media,” Curator: The Museum Journal 53, no. 1 (January 1, 2010), 38.

DAH Post #10: Network Visualization and the Reduction of Data

Source: DAH Post #10: Network Visualization and the Reduction of Data

Scott B. Weingart’s blog post on the basics of network visualization, “Demystifying Networks”, is a concise overview of the pitfalls digital humanists (and data scientists) can fall into when deciding to use networks to visualize their data. In addition to giving some background information on organization theory and the makeup of networks, Weingart places special emphasis on bias in presentation of data, and the various forms it can take.

Weingart is the digital humanities specialist at Carnegie Mellon University and a historian of science. His blogging style is an excellent example of personal/professional writing about research interests and important goings-on in the field of network analysis. Something to strive towards.

In “Demystifying Networks”, Weingart notes that though networks could conceivably be used on any project, that doesn’t necessarily mean they should be. In a similar vein, digital humanists who use a technology for a purpose other than that for which it was originally designed should be able to fully justify that usage. Weingart stresses that humanistic data are typically flexible and open to interpretation, while the node types used in network visualizations are concrete. Attempts to fit humanistic data into network visualizations must acknowledge and contextualize any reduction of data that results.1

It was especially useful to read Weingart’s post in conjunction with taking a second look at “The Global: Goupil & Cie/Boussod, Valadon & Cie and International Networks” section of Pamela Fletcher and Anne Helmreich’s “Local/Global: Mapping Nineteenth-Century London’s Art Market”. Using the stock books of Goupil & Cie/Boussod, Valadon & Cie, held by the Getty Research Institute, Helmreich presents and contextualizes a network visualization of information about London’s role in the internationalization of the nineteenth-century art market. In discussing the research question, Helmreich notes that this type of study is not well suited to traditional art historical methodologies involving close reading of a small dataset, but instead benefits from “distant reading”. Quoting Franco Moretti, Helmreich writes that a different framework allows the scholar to look at “units that are much larger or smaller than the text.” Moretti adds that “if we want to understand the system in its entirety, we must accept losing something,” but justifies this loss by pointing out how distant reading, by allowing a larger corpus than before to be studied, holds the promise of producing analyses “that go against the grain of national historiography.” In this example, Helmreich and Fletcher acknowledge a reduction of data (the “close reading” of artworks and associations) in order to clarify a larger picture of the London art market that is more appropriate and specific to their research question.2

This acknowledgement is also pertinent to Weingart’s discussion of bias in another of his blog posts on network visualization, “#humnets, paper/review”. In that post, Weingart summarizes UCLA’s Networks and Network Analysis for the Humanities conference, describing the reaction at the conference to his talk on bias in network analysis. Noting that everyone present was well aware of the problem bias poses, Weingart asserts, “As long as we’re open and honest about what we do not or cannot know, we can make claims around those gaps, inferring and guessing where we need to, and let the reader decide whether our careful analysis and historical inferences are sufficient to support the conclusions we draw. Honesty is more important than completeness or unshakable proof; indeed, neither of those are yet possible in most of what we study.”3

Moving to this week’s example:

Since I used Google Fusion Tables in my last network visualization test, I decided to give Gephi a try. Rather than focus on the tenuous and arbitrary connections between subfields of neural network research, this network uses data that is a bit more concrete. In keeping with the focus of the TimeMap from my last post, I’ve charted artists and engineers who were involved in LACMA’s Art and Technology Program and Experiments in Art and Technology (E.A.T.), as well as two major E.A.T. projects, 9 Evenings: Theatre and Engineering and the Pepsi Pavilion at the 1970 World Expo in Osaka, Japan. This list of names, and therefore this visualization, is by no means comprehensive: the Pepsi Pavilion alone included contributions from over 75 artists and engineers. In the interests of time, I’ve included only the most active participants in both E.A.T. and the A&T program. As sources, I used:

Maurice Tuchman, Art & Technology; a Report on the Art & Technology Program of the Los Angeles County Museum of Art, 1967-1971. Los Angeles County Museum of Art; distributed by the Viking Press, New York, 1971.
Kathy Battista and Sabine Breitwieser, E.A.T. – Experiments in Art and Technology. Translated by Karl Hoffman. Köln: Verlag der Buchhandlung Walther König, 2015.
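
Gephi can build this kind of bimodal network from two spreadsheets. As a sketch of the data-preparation step, the following writes a nodes table and an edges table of the kind Gephi’s Import Spreadsheet wizard reads; the participant list here is a tiny illustrative excerpt, not the full dataset behind the visualization.

```python
import csv

# Illustrative excerpt of (person, project) pairs; the full list comes
# from the Tuchman report and the Battista/Breitwieser volume.
edges = [
    ("Robert Rauschenberg", "9 Evenings"),
    ("Billy Klüver", "9 Evenings"),
    ("Billy Klüver", "Pepsi Pavilion"),
    ("Robert Whitman", "Pepsi Pavilion"),
]

people = sorted({person for person, _ in edges})
projects = sorted({project for _, project in edges})

# Nodes table: Id, Label (plus a Type column to distinguish the two modes).
with open("nodes.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["Id", "Label", "Type"])
    for name in people:
        w.writerow([name, name, "person"])
    for proj in projects:
        w.writerow([proj, proj, "project"])

# Edges table: Source, Target.
with open("edges.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["Source", "Target"])
    w.writerows(edges)
```

Keeping the data in plain CSV also makes it easy to extend the participant list later without touching the visualization settings.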


The text placement is, as we discussed in class, not ideal. And the other problems with this network are the same ones that Weingart attributes to any beginner in network visualization in his blog post: bimodal networks are difficult to work with, Gephi works best with single-mode networks, and because Gephi measures the centrality of nodes and lets the user adjust node sizes in the visualization, the display can be shaped to fit whatever narrative the user is striving to create. However, this network represents a start toward a more comprehensive project exploring the connections between artists and engineers in the art-and-technology programs of the late 1960s and early 1970s.
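
Weingart’s point about bimodal networks is often addressed by projecting the two-mode network down to a single mode, e.g., linking two people whenever they worked on the same project. A minimal sketch in plain Python, with stand-in data (the names are real E.A.T. figures, but the edge list is an illustrative fragment):

```python
from itertools import combinations
from collections import Counter

# Bimodal edge list: (person, project).
edges = [
    ("Billy Klüver", "9 Evenings"),
    ("Robert Rauschenberg", "9 Evenings"),
    ("Robert Whitman", "9 Evenings"),
    ("Billy Klüver", "Pepsi Pavilion"),
    ("Robert Whitman", "Pepsi Pavilion"),
]

# Group people by project.
members = {}
for person, project in edges:
    members.setdefault(project, []).append(person)

# Connect every pair of people who share a project; the edge weight
# counts how many projects they share.
weights = Counter()
for people in members.values():
    for a, b in combinations(sorted(people), 2):
        weights[(a, b)] += 1

for (a, b), w in sorted(weights.items()):
    print(f"{a} -- {b} (weight {w})")
```

The resulting single-mode, weighted edge list can be imported into Gephi directly, sidestepping the bimodal-network caveats.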

References

1. ↑ Scott B. Weingart, “Demystifying Networks,” www.scottbot.net, accessed April 6, 2016.
2. ↑ Pamela Fletcher and Anne Helmreich, with David Israel and Seth Erickson, “Local/Global: Mapping Nineteenth-Century London’s Art Market,” Nineteenth Century Art Worldwide 11:3 (Autumn 2012).
3. ↑ Scott B. Weingart, “#humnets, paper/review,” www.scottbot.net, accessed April 6, 2016.

Digital Assignment #3: Notable Moments in Art-and-Technology

Source: Digital Assignment #3: Notable Moments in Art-and-Technology

For my third digital assignment, I decided to build on my timeline of notable moments in the history of art-and-technology from last week, this time using TimeMapper to chart locations of exhibitions, events, organizations, and programs. As Edward Shanken and others have noted, the popularity of collaborations between artists and technologists climbed in the 1950s and 1960s, reaching its zenith in the late 60s/early 70s, with Jack Burnham’s influential treatise on the use of technological systems as artistic medium, “Systems Esthetics” (1968), and the exhibition he organized at The Jewish Museum in 1970, Software, Information Technology: Its New Meaning for Art. In that exhibition, Burnham presented experimental artworks side-by-side with industry collaborations, thereby problematizing the distinctions between those two worlds. Throughout the 1970s and 1980s, this sort of presentation slid into the background, as artists and the popular art press shunned any sort of affiliation with the giants of capitalism as inimical to the countercultural ethos. In the late 1990s, however, Burnham’s legacy, and with it the idea of systems aesthetics, was revived with a spate of new publications on art-and-technology collaborations, as well as new considerations of experimental art that deploys digital technology.1 These selections represent some artists’ collaborations with companies and engineers, while others can be classified within cybernetics, systems art, and generative systems practice.
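
TimeMapper pulls its points from a spreadsheet, so assembling the timeline is mostly a matter of writing rows like these. The column names below are my approximation of the kind of headers TimeMapper expects, not its documented schema, and the two events are samples from the timeline:

```python
import csv

# Two sample timeline rows; the full assignment charted many more.
events = [
    {"Title": "“Systems Esthetics” published", "Start": "1968", "End": "",
     "Description": "Jack Burnham’s influential treatise on technological systems as artistic medium.",
     "Place": "New York"},
    {"Title": "Software at The Jewish Museum", "Start": "1970", "End": "1970",
     "Description": "Burnham’s exhibition pairing experimental artworks with industry collaborations.",
     "Place": "New York"},
]

with open("timeline.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["Title", "Start", "End", "Description", "Place"])
    writer.writeheader()
    writer.writerows(events)
```

A sheet like this can then be published and pointed at TimeMapper, which handles the map and timeline rendering.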

References

1. ↑ Edward A. Shanken, “Reprogramming Systems Aesthetics,” in Systems, ed. Edward A. Shanken (Whitechapel Gallery and the MIT Press, 2015), 123-128.

Ethical Design Decisions in Data Visualization

Source: Ethical Design Decisions in Data Visualization

Shazna Nessa, a journalist who specializes in data visualization, addresses the respect a designer must have for their audience’s level of visual literacy. In her article “Visual Literacy in the Age of Data”[1], Nessa warns that, with all the cool new data visualization tools, journalists working with visualizations are beginning to design for a more specialized audience, when perhaps the audience’s visual sophistication has not been keeping pace.

I once heard that editors at the New York Times aim to bring articles to a 9th-grade reading level. As the New York Times is a respected pioneer in data visualization in journalism, I wonder if there is any formalized understanding of visual literacy that corresponds to text-reading literacy. A set of loose guidelines would certainly be helpful in establishing best practices in this area. While the journalist’s goal is to inform the public, the scholar engaging in Digital Humanities would also do well to re-evaluate their use of data visualization. For example, Nessa cites a 1984 study[2] that ranked how efficiently different visual presentations of data are decoded (bar graphs and scatter plots leading the way, heat maps towards the end). I think the digital humanities scholar or digital art historian, when aiming a dissertation, conference paper, or journal article at a more specialized audience, must find a way to balance basic design principles and legibility when working with a sophisticated, data-driven argument. Regardless of who the scholar’s readership is, some of the questions Nessa recommends journalists ask themselves, such as “How many points are you trying to illustrate?” and “Although I’ve edited the data already, is there superfluous data that I can still edit out?”, apply to the Digital Humanities scholarship world too.
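
Cleveland and McGill’s hierarchy can even serve as a quick sanity check when choosing an encoding. The ranking below paraphrases their ordering of elementary perceptual tasks from most to least accurately decoded; the helper function is my own illustration, not from the paper:

```python
# Roughly Cleveland & McGill's (1984) ordering, best-decoded first.
ACCURACY_RANKING = [
    "position along a common scale",   # e.g. bar chart, scatter plot
    "position on nonaligned scales",
    "length",
    "angle / slope",                   # e.g. pie chart
    "area",
    "volume",
    "color saturation / shading",      # e.g. heat map
]

def more_accurate(encoding_a: str, encoding_b: str) -> str:
    """Return whichever encoding readers decode more accurately."""
    ia = ACCURACY_RANKING.index(encoding_a)
    ib = ACCURACY_RANKING.index(encoding_b)
    return encoding_a if ia <= ib else encoding_b

print(more_accurate("position along a common scale", "color saturation / shading"))
```

A lookup like this will not settle a design question by itself, but it makes the trade-off explicit when a flashier encoding is tempting.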

This week’s readings and class discussion really made me consider the ethics involved in data visualization, and the visual literacy that may be required of a reader/viewer of this information. Data in any form does not speak for itself; it needs a human voice, or ideally many voices, to interpret it and to make a statement. And visualized data runs a huge risk of intentional manipulation and/or misunderstanding. While blatantly spouting misleading information via chart, à la Fox News (or anyone with an agenda), is heinous, I think responsible data visualization really begins with a deep understanding of design principles, a respect for one’s audience, and an understanding of the raw data itself.

A quick Google search of “Ethics in Data Visualization” yielded as first results the Codes of Ethics of various software companies that work with data visualization, like Tableau and Visual.ly. These codes basically cover the importance of accuracy in data collection, data analysis, and design choices. In fact, on the Visual.ly blog, the author quotes a “Hippocratic oath for visualization”:

I shall not use visualization to intentionally hide or confuse the truth which it is intended to portray. I will respect the great power visualization has in garnering wisdom and misleading the uninformed. I accept this responsibility willfully and without reservation, and promise to defend this oath against all enemies, both domestic and foreign.[3]

That last part sounds a little tongue-in-cheek to me, but it is encouraging that companies are publishing statements like this. It’s almost a blessing and a curse that data visualizations are coming to the fore through journalism. It’s a positive in that journalism is traditionally associated with fact-checking and transparency as tenets of the profession. But with so much now passing for journalism, not only visual literacy but media literacy skills generally are needed to discern journalistic integrity.

On a less serious note, my image quilt was so much fun to make! When I searched the Benezit Dictionary of Artists while writing a brief bio of Albrecht Dürer for my Ackland project, I saw that Benezit displays images of artists’ signatures. For an artist so exacting, his signatures, seen close-up and separate from their corresponding artworks, were charmingly distinct from one another. When compared, some even seemed a bit lopsided and sloppy (though I’m sure this is in part due to the inexact copies coming from woodcuts), so I thought the monogram signature a perfect fit for a quilt:

[ImageQuilt of Dürer monogram signatures]

[1] Shazna Nessa, “Visual Literacy in the Age of Data,” Source (OpenNews), June 13, 2013. https://source.opennews.org/en-US/learning/visual-literacy-age-data/
[2] William S. Cleveland and Robert McGill, “Graphical Perception: Theory, Experimentation, and Application to the Development of Graphical Methods,” Journal of the American Statistical Association 79, no. 387 (1984).
[3] Jason Moore, qtd. in Drew Skau, “A Code of Ethics for Data Visualization Professionals,” Visually Blog, February 7, 2012. http://blog.visual.ly/a-code-of-ethics-for-data-visualization-professionals/


Digital Assignment #3

Source: Digital Assignment #3

I had initially started charting some timeline points related to a paper I’m writing for my Intro to Archives class about the impact of technology on the collection, accessibility, and preservation of the Archives of American Art. However, upon hearing the recent news of the architect Zaha Hadid’s death and reading a number of obituaries and memorial statements, I (like a lot of people and museums, design publications, news orgs, etc. I follow on social media!) started thinking about the legacy of this groundbreaking and controversial architect. Working with around 20 timeline points, I had plenty of editorial decision-making to do in looking at such a full life (which life events, which partnerships, which buildings, which awards, which controversies). I did want to present some balance between Hadid’s interesting biography and a look at her life before she achieved “starchitect” status, and a timeline is a good way to achieve this. So I decided I would spend the evening with the articles, images, and videos I wanted to look at anyway :) Enjoy!


Experimenting with Gephi and network visualizations

Source: Experimenting with Gephi and network visualizations

Over the past several days, I’ve been playing around with Gephi to get a better idea of what network analysis tools can do, and how I might apply Gephi (or a similar tool) to my own research. I don’t have any previous experience with network analysis or visualization, but I’m incredibly interested in the possibilities that these tools offer for a wide variety of research programs.

I began by trying out some sample datasets from Gephi’s GitHub page.[1] I first tried Gleiser and Danon’s data on social networks of jazz musicians.[2] However, when I loaded it into Gephi, I discovered that none of the nodes were labeled with the names of the musicians; each node was labeled only by its ID number in the dataset. This resulted in a very intriguing-looking network, as there were over 180 nodes, many with multiple edges connecting them, but it did not produce an intelligible visualization. I did get experience examining the dataset itself, though, as I investigated this issue. Since the dataset was a .net file (not a format I was familiar with), I wondered if Gephi was having an issue relating names to their respective nodes. I was able to open the file in a text editor and saw that, in the dataset itself, all of the musicians’ names had been replaced by ID numbers. I imagine that the researchers have a codebook to help them interpret the raw dataset, but that was not included on the Gephi GitHub.
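
Inspecting the *Vertices section of a Pajek-style .net file is one way to confirm the labeling problem directly. This sketch uses an invented three-vertex snippet standing in for the jazz data:

```python
# Invented Pajek-style snippet; in the jazz dataset the labels likewise
# turned out to be the ID numbers themselves.
sample = """*Vertices 3
1 "1"
2 "2"
3 "3"
*Edges
1 2
2 3
"""

def vertex_labels(net_text):
    """Read the *Vertices section of a Pajek-style .net file into {id: label}."""
    labels = {}
    lines = iter(net_text.splitlines())
    for line in lines:              # skip ahead to the *Vertices header
        if line.lower().startswith("*vertices"):
            break
    for line in lines:              # read until the next * section
        if line.startswith("*"):
            break
        vid, label = line.split(maxsplit=1)
        labels[int(vid)] = label.strip().strip('"')
    return labels

labels = vertex_labels(sample)
unlabeled = [v for v, name in labels.items() if name == str(v)]
print(f"{len(unlabeled)} of {len(labels)} vertices carry only ID numbers as labels")
```

Running a check like this before importing into Gephi would have flagged the missing names without any visual trial and error.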

Next, I tried out another interesting-looking dataset: a social network for a class of German students in 1880.[3] I was able to find the article the researchers wrote using this data through UNC’s e-journal subscriptions and read it over to better understand what was going on with the data. For a history of social network research, the article is well worth looking up, as it describes an early mixed-methods study conducted by a German schoolteacher, Johannes Delitsch, who analyzed friendship groups in his 1880 class of boys. The present researchers have repurposed Delitsch’s data to perform new, higher-powered social network analysis and to see what Delitsch’s research can reveal today.

While this research project is a great example of the interesting work that can be done with social network data (and a reminder that this data is not limited to the Facebook era), I mostly used this dataset simply to learn a little bit more about how Gephi works. After importing the data, I made a few adjustments to produce a readable, usable visualization of the whole network. As there were only a couple dozen nodes, I could produce a visualization that captures the entire network.


For this visualization, I chose the Fruchterman-Reingold model, set the color of the nodes to vary in intensity based on how many edges are connected to them (deeper green means more connections), and turned on labels so I could see the names of the classmates. While this doesn’t tell you how the friendships were formed and sustained (this information is in Delitsch’s original study and provides great social insight), the visualization does show patterns of popularity: who is at the center of various social groups and who is on the outskirts.

To dig a little deeper into the data, I then experimented with some of the different filters to produce more fine grained sub-network visualizations. Fortunately, this dataset also included the direction of edges, indicating where connections were incoming (such as receiving a gift from a classmate) or outgoing (or giving a gift). I produced one visualization where I filtered the network to only those nodes with greater than 7 incoming connections and produced another visualization where I filtered the network to only those nodes with greater than 5 outgoing connections.

Visualization filtering 7 or more incoming connections.
Visualization filtering 5 or more outgoing connections.

Again, I’m not really in a position to analyze or compare these two visualizations, but for a researcher this kind of filtering could support a lot of different queries into the data.
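
The degree-threshold filtering described above is easy to reproduce outside Gephi. A sketch with invented gift-giving edges (the names and counts are illustrative, not Delitsch’s data):

```python
from collections import Counter

# Directed edges (giver -> receiver), invented for illustration.
edges = [
    ("Albert", "Emil"), ("Bruno", "Emil"), ("Carl", "Emil"),
    ("Emil", "Albert"), ("Emil", "Bruno"),
    ("Albert", "Carl"),
]

in_degree = Counter(dst for _, dst in edges)    # incoming connections
out_degree = Counter(src for src, _ in edges)   # outgoing connections

# Keep only nodes at or above a threshold, as with Gephi's degree-range filter.
def filter_nodes(degrees, minimum):
    return {node for node, d in degrees.items() if d >= minimum}

popular = filter_nodes(in_degree, 3)    # many incoming connections
generous = filter_nodes(out_degree, 2)  # many outgoing connections
print(popular, generous)
```

Separating the filtering from the drawing also makes it possible to document exactly which thresholds produced which sub-network, which matters for Weingart’s point about honesty in presentation.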

I was able to see from this brief exercise some of the different ways in which Gephi can be used to create visualizations and support network analysis methods. However, network analysis can serve many other ends as well. Pamela Fletcher and Anne Helmreich’s project mapping the 19th-century London art world[4] and Stanford’s ORBIS[5] are two examples of how network analysis can be paired with geographic information to explore how networks form and manifest effects in both time and space. These examples also illustrate, however, that additional resources and expertise quickly become necessary when projects move beyond free tools like Gephi. Both projects had dedicated programmers working together with humanities researchers to produce their unique, interactive network visualizations.

Elijah Meeks and Karl Grossner describe ORBIS as an “interactive scholarly work” (ISW), characterizing this as a new potential scholarly output in addition to more traditional models like the journal article or monograph.[6] ORBIS not only represents new and innovative scholarship into the Roman world of antiquity, but also provides an interface for individuals to make their own discoveries and support their own research. Of course, the traditional journal article is also founded on the idea that the information it presents builds upon previous scholarship and serves the development of future scholarship, but something like ORBIS makes this manifest by providing the means for interaction and direct engagement. ORBIS does more than just network analysis and visualization, but these methods clearly play an integral role in the new kinds of scholarly projects that ORBIS demonstrates: those that blur the lines between publication, research tool, and online exhibition.

Across time and history, networks of people, places, and materials have been hugely significant forces; while the importance of networks has long been recognized by scholars (as evidenced by Delitsch’s work), digital tools provide ways to interrogate and visualize these complex structures that had not previously been possible. The examples above illustrate the kinds of exciting projects that can be done with network analysis.


[1] https://github.com/gephi/gephi/wiki/Datasets

[2] P. Gleiser and L. Danon, Adv. Complex Syst. 6, 565 (2003).

[3] Heidler, R., Gamper, M., Herz, A., and Eßer, F. (2014). “Relationship patterns in the 19th century: The friendship network in a German boys’ school class from 1880 to 1881 revisited.” Social Networks 13: 1-13.

[4] Pamela Fletcher and Anne Helmreich, with David Israel and Seth Erickson, “Local/Global: Mapping Nineteenth-Century London’s Art Market,” Nineteenth Century Art Worldwide 11:3 (Autumn 2012). http://www.19thc-artworldwide.org/index.php/autumn12/fletcher-helmreich-mapping-the-london-art-market.

[5] http://orbis.stanford.edu/

[6] Elijah Meeks and Karl Grossner, “Modeling Networks and Scholarship with ORBIS,” Journal of Digital Humanities (2012).



© 2018 Dressing Valentino
