Doing Digital Art History: On the history of decorative art, design, and film

Author: Erin Dickey

DAH Project: 3D Models


During our 3D model photo session at the Ackland, I had a little trouble with Autodesk 123D Catch: I ended up with a model of the walls, but not the central figure! Instead of uploading that failed model via Sketchfab, this is one I did several months ago, of a sculpture on display at the Black Mountain College Museum + Arts Center exhibition, Convergence/Divergence: Exploring Black Mountain College and Chicago’s New Bauhaus/Institute of Design. I haven’t edited out the background, and zooming in to the actual sculpture (the tiny, dark, metal object) takes a bit of deftness. Eventually, you should be able to zoom to a 3D model of something that looks more or less like the sculpture in the foreground of this image.

Untitled Sculpture by Maria Zoe Greene-Joseph, by erindickey, on Sketchfab


I also experimented with Agisoft Photoscan, using images I took with my DSLR at the Ackland. The object is Robert Graham’s Source Figure Fragment (1992). In my first pass at this, I had the same issue as with 123D Catch: too much background. I used the mask tool to cover much of the background noise, which helped, but this model, represented here via screenshot, still has many gaps; more photos would have helped.


DAH Post #11: Crowdsourcing and the sources of power


Now that Web 2.0 has become fully integrated into the way we seek, discover, process, and share information, it’s no secret or surprise that the GLAM world continues to refine engagement activities and outreach tools to bring in new and hopefully larger audiences. The increasing deployment of digital platforms to build visitor/user interaction with exhibitions, initiatives, and objects dovetails with the general decline in top-down institutional authority associated with a privileged class of makers and sellers, and with the move away from a reverent focus on the art object in favor of events, process, and interaction. This shifting of priorities and authority, however, is still in tension with the way the art world (and more specifically the art market) has traditionally functioned and continues to function: exhibition attention in larger institutions centers on the names that tend to draw higher prices at auction, still the usual suspects of 20th-century male artists, along with high-earning popular contemporary artists and makers.

In and of itself, I don’t necessarily think this is a bad thing. Though they often receive mixed reactions, exhibitions like MoMA’s mass-appeal shows under Klaus Biesenbach, exploring the likes of Björk, Yoko Ono, and Marina Abramović, have the potential to spark in new audiences an interest in art beyond that of the famous names they came to see. At the same time, they can recontextualize a lifetime of work or provide a space to reconsider the scope of a large, tradition-bound art institution.

In “Digital: Museum as Platform, Curator as Champion in the Age of Social Media,” Nancy Proctor, Head of New Media Initiatives at the Smithsonian American Art Museum, explores the reconfiguring of digital media and institutional control amid the proliferation of digital engagement possibilities, as well as the role of the curator in the feedback loop of institution and audience. Proctor is referring specifically to crowdsourcing initiatives that employ user-generated content and collaborative tasks, rather than simply to marketing via social media or mass-appeal exhibitions. In her discussion of the changing role of the curator, Proctor cites the exhibition American Furniture/Googled at the Decorative Arts Gallery in Milwaukee as a model of the way in which the curator is shifting from singular authority to access point for information in the public domain: “Like a node at the center of the distributed network that the museum has become, the curator is the moderator and facilitator of the conversation about objects and topics proposed by the museum, even across platforms not directly controlled by the museum.”[1] In another example, Proctor discusses, citing Nicholas Poole, the notion of the “citizen-curator,” whose participation in the interpretation of museums’ collections allows for the building of a rich, complex social history of art.

Once museums open the door of participation to their visitors, however, the door is very hard to shut. In class last week, we discussed the possibilities that relinquishing some control provides for institutions, as well as some examples of movements that take the acquisition of dispersed control afforded by social media beyond the point that the institution intended or wanted to allow.

JJ brought up the example of Occupy Museums’ participation in a protest of the Brooklyn Museum for its leasing of space to real estate developers responsible for gentrifying the very neighborhoods that the Brooklyn Museum primarily served. The Brooklyn Museum responded to the protest in a novel way, by adding pieces created by those protesters to its exhibition Agitprop!, which documented artwork geared towards political change. Compare this strategy to the Boston MFA’s dismissal of the initially tongue-in-cheek “Renoir Sucks” movement, which began as an Instagram account and morphed into a group of protesters agitating for the MFA to “take down” its Renoirs on the basis of their alleged suckiness (as well as the comparative over-estimation of Renoir in the art market, a serious issue relating to the inextricability of the art market, museums, art historical scholarship, and interpretation of aesthetic value that is arguably the subtext of the Renoir kerfuffle). Museum Director Matthew Teitelbaum’s brief response/non-response marveled, “We live in an era in which authority of the time can be questioned, with many different voices expressed and heard.” Or compare Agitprop! to the Guggenheim’s reaction to a protest/art action by Occupy Museums and the art collective Gulf Ultra Luxury Faction (G.U.L.F.) against its construction of the Guggenheim Abu Dhabi on Saadiyat Island, a development site home to egregious labor violations, according to Human Rights Watch. The art action entailed the protesters’ infiltration of the Guggenheim’s exhibition Italian Futurism, 1909–1944: Reconstructing the Universe during the museum’s crowded pay-what-you-wish evening in order to chant slogans, toss leaflets, and draw attention to the Guggenheim’s ignoring of the plight of migrant workers in choosing to site the new museum on Saadiyat Island. In response to the protest, and seemingly ignoring its basis, Guggenheim Director Richard Armstrong said that construction on the new plant had not yet begun.

These examples demonstrate the possibilities for communication afforded by social media and protest movements (both physical and digital), as well as the ways in which institutions attempt to take back control, either by refusing to respond or by developing cooperative patterns of discussion with community members (or inoculating gestures, depending on how you read the Brooklyn Museum’s handling of their protest). Though it is unclear which of the actions above is the most powerful (from the perspective of the museums), it seems pretty clear which option is at least the most successful in the arena of public relations (as well as the most resonant with the political urgings of some of the artworks displayed inside each museum). It would be quite interesting to study the correlation between how institutional leaders (and their digital media personnel) understand where curatorial and political power does and should lie, and the ways their museums strike a balance (or fail to strike a balance) between their various stakeholders.

GLAMwiki project proposal: In the past several posts, I’ve focused on general research in the area of art-and-technology as the basis for data used in various timeline and network visualization applications. Taking the upcoming Art and Feminism Edit-a-thon at Sloan Art Library as inspiration for proposing a GLAMwiki project, there are several notable women artists who contributed in significant ways to the history of art and tech in the latter half of the 20th century but whose biographies and impact are only minimally sketched out on Wikipedia, if at all. Artists who experimented with internet art as a specifically feminist form, such as Josephine Starrs, Francesca da Rimini, and Virginia Barratt, former members of the influential 1990s feminist art collective VNS Matrix, would be a good start. Given the lack of representation of and information about women artists on Wikipedia, the underrepresentation of women in the tech sector, and the small percentage of women Wikipedians, this proposal seems, to me, particularly apt.

References

1. Nancy Proctor, “Digital: Museum as Platform, Curator as Champion in the Age of Social Media,” Curator: The Museum Journal 53, no. 1 (January 2010): 38.

DAH Post #10: Network Visualization and the Reduction of Data


Scott B. Weingart’s blog post on the basics of network visualization, “Demystifying Networks”, is a concise overview of the pitfalls digital humanists (and data scientists) can fall into when deciding to use networks to visualize their data. In addition to giving some background information on organization theory and the makeup of networks, Weingart places special emphasis on bias in presentation of data, and the various forms it can take.

Weingart is the digital humanities specialist at Carnegie Mellon University and a historian of science. His blogging style is an excellent example of personal/professional writing about research interests and important goings-on in the field of network analysis. Something to strive towards.

In “Demystifying Networks,” Weingart notes that though networks could conceivably be used on any project, that doesn’t necessarily mean they should be. In a similar vein, digital humanists who use a technology for a purpose other than that for which it was originally designed should be able to fully justify that usage. Weingart stresses that humanistic data are typically flexible and open to interpretation, while the node types used in network visualizations are concrete. Attempts to fit humanistic data into network visualizations must acknowledge and contextualize any reduction of data that results.[1]

It was especially useful to read Weingart’s post in conjunction with taking a second look at “The Global: Goupil & Cie/Boussod, Valadon & Cie and International Networks” section of Pamela Fletcher and Anne Helmreich’s “Local/Global: Mapping Nineteenth-Century London’s Art Market.” Using the stock books of Goupil & Cie/Boussod, Valadon & Cie, held by the Getty Research Institute, Helmreich presents and contextualizes a network visualization of information about London’s role in the internationalization of the nineteenth-century art market. In discussing the research question, Helmreich notes that this type of study is not well-suited to traditional art historical methodologies involving close reading of a small dataset, but instead benefits from “distant reading.” Quoting Franco Moretti, Helmreich writes that a different framework allows the scholar to look at “units that are much larger or smaller than the text.” Moretti adds that “if we want to understand the system in its entirety, we must accept losing something,” but justifies this loss by pointing out that distant reading holds the promise, by allowing a larger corpus than before to be studied, of producing analyses “that go against the grain of national historiography.” In this example, Helmreich and Fletcher acknowledge a reduction of data (the “close reading” of artworks and associations) in order to clarify a larger picture of the London art market that is more appropriate and specific to their research question.[2]

This acknowledgement is also pertinent to Weingart’s discussion of bias in another of his blog posts on network visualization, “#humnets, paper/review.” In that post, Weingart summarizes UCLA’s Networks and Network Analysis for the Humanities conference, describing the reaction at the conference to his talk on bias in network analysis. Noting that everyone present was well aware of the problem bias poses, Weingart asserts, “As long as we’re open and honest about what we do not or cannot know, we can make claims around those gaps, inferring and guessing where we need to, and let the reader decide whether our careful analysis and historical inferences are sufficient to support the conclusions we draw. Honesty is more important than completeness or unshakable proof; indeed, neither of those are yet possible in most of what we study.”[3]

Moving to this week’s example:

Since I used Google Fusion Tables in my last network visualization test, I decided to give Gephi a try. Rather than focus on the tenuous and arbitrary connections between subfields of neural network research, this network uses data that is a bit more concrete. In keeping with the focus of the TimeMap from my last post, I’ve charted artists and engineers who were involved in LACMA’s Art and Technology Program and Experiments in Art and Technology (E.A.T.), as well as two major E.A.T. projects, 9 Evenings: Theatre and Engineering and the Pepsi Pavilion at the 1970 World Expo in Osaka, Japan. This list of names, and therefore this visualization, is by no means comprehensive; the Pepsi Pavilion alone included contributions from over 75 artists and engineers. In the interests of time, I’ve included only the most active participants in both E.A.T. and the A&T program. As sources, I used:

Maurice Tuchman, Art & Technology; a Report on the Art & Technology Program of the Los Angeles County Museum of Art, 1967-1971. Los Angeles County Museum of Art; distributed by the Viking Press, New York, 1971.
Kathy Battista and Sabine Breitwieser, E.A.T. – Experiments in Art and Technology. Translated by Karl Hoffman. Köln: Verlag der Buchhandlung Walther König, 2015.

network.pdf

The text placement is, as we discussed in class, not ideal. The other problems with this network are the same ones that Weingart attributes to any beginner in network visualization in his blog post: bimodal networks are difficult to work with, Gephi works best with single-mode networks, it measures the centrality of nodes, and the size of those nodes can be adjusted in the visualization to suit the narrative the user is striving to create. However, this network represents a start to a more comprehensive project exploring the connections between artists and engineers in the art and technology programs of the late 1960s and early 1970s.
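One practical workaround for the bimodal problem described above is to project the two-mode network (people connected to projects) down to a single-mode network (people connected to people through shared projects) before loading it into Gephi. A minimal sketch of that projection in plain Python, using a hypothetical handful of participants in place of the full dataset:

```python
from collections import Counter
from itertools import combinations

# Two-mode data: each project mapped to its participants
# (a hypothetical subset standing in for the full list of names).
projects = {
    "9 Evenings": ["Kluver", "Rauschenberg", "Whitman"],
    "Pepsi Pavilion": ["Kluver", "Whitman"],
    "A&T Program": ["Tuchman"],
}

# Single-mode projection: link two people whenever they share a
# project; the edge weight counts how many projects they share.
edges = Counter()
for members in projects.values():
    for pair in combinations(sorted(members), 2):
        edges[pair] += 1

for (a, b), weight in sorted(edges.items()):
    print(f"{a} -- {b} (weight {weight})")
```

The resulting weighted edge list can be exported as CSV and imported into Gephi as an undirected, single-mode graph, sidestepping the bimodal layout problems entirely.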

References

1. Scott B. Weingart, “Demystifying Networks,” www.scottbot.net, accessed April 6, 2016.
2. Pamela Fletcher and Anne Helmreich, with David Israel and Seth Erickson, “Local/Global: Mapping Nineteenth-Century London’s Art Market,” Nineteenth Century Art Worldwide 11, no. 3 (Autumn 2012).
3. Scott B. Weingart, “#humnets, paper/review,” www.scottbot.net, accessed April 6, 2016.

Digital Assignment #3: Notable Moments in Art-and-Technology


For my third digital assignment, I decided to build on my timeline of notable moments in the history of art-and-technology from last week, this time using TimeMapper to chart the locations of exhibitions, events, organizations, and programs. As Edward Shanken and others have noted, the popularity of collaborations between artists and technologists climbed in the 1950s and 1960s, reaching its zenith in the late 60s/early 70s with Jack Burnham’s influential treatise on the use of the technological system as artistic medium, “Systems Esthetics” (1968), and the exhibition he organized at The Jewish Museum in 1970, Software, Information Technology: Its New Meaning for Art. In that exhibition, Burnham presented experimental artworks side-by-side with industry collaborations, thereby problematizing the distinctions between those two worlds. Throughout the 1970s and 1980s, this sort of presentation slid into the background, as artists and the popular art press shunned any sort of affiliation with the giants of capitalism as inimical to the countercultural ethos. In the late 1990s, however, Burnham’s legacy, and with it the idea of systems aesthetics, was revived with a spate of new publications on art-and-technology collaborations, as well as new considerations of experimental art which deploys digital technology.[1] These selections represent some artists’ collaborations with companies and engineers, while others can be classified within cybernetics, systems art, and generative systems practice.

References

1. Edward A. Shanken, “Reprogramming Systems Aesthetics,” in Systems, ed. Edward A. Shanken (Whitechapel Gallery and the MIT Press, 2015), 123–128.

Testing TimelineJS


In class this week, we looked at two New York Times interactives, Riding the New Silk Road and Forging an Art Market in China, as well as the Metropolitan Museum’s Heilbrunn Timeline of Art History and the BGC course site Media and Materiality. All represent examples of ways to map both space and time.

“Riding the New Silk Road” presents a journey mostly in images and video, with very little text, juxtaposed with a map of Hewlett-Packard’s overland route for transporting electronics from China to European markets. It is simple and effective, though, given the title, I would have liked to see side-by-side comparisons with the historic Silk Road, rather than just a section of the modern route. The use of video in connection with points on the map is especially dynamic.

The Met’s Heilbrunn Timeline of Art History was something of a disappointment (no more timeline! unclear connections between objects!). I’m still in awe, however, of the amount of information available for perusal and connection. It strikes me that the Met aimed for some sort of balance between creating an atmosphere of serendipity for the casual browser and building a powerful resource for the scholar. It may have fallen short, but the tool is still eminently usable by visitors at all levels. Its biggest failing is likely the lack of a visualization (or visualizations) designed to draw the casual or student user in immediately.

“Forging an Art Market in China” has a number of graphics that chart, over time, the appearance on the auction market of works by specific artists. None of them are conventional timelines, which makes them all the more appealing. The final graphic is a video that visualizes the many thousands of Qi Baishi works offered for sale from 2000 to 2013. At 18 seconds long, it quickly and powerfully conveys the implicit thesis of the article: going by numbers alone, it is inevitable that many of the works sold by Chinese auction houses and purportedly created by famous artists must be forgeries.

Of the BGC course visualizations, I was most taken with the typography timeline. Its structure is, however, utterly mystifying. A right-to-left chronological order of dates and corresponding images, descriptions, and quotations would improve this tool tremendously.

In my test of TimelineJS, below, I chose to chart notable moments in the history of art-and-technology in the latter half of the 20th century, from the first cybernetic sculpture to Jack Burnham’s landmark exhibition, Software, at The Jewish Museum. The user is somewhat constrained in the number of design choices TimelineJS allows, but what it lacks in customizability, it makes up for in ease of use. I’ll be adding to this timeline for next week’s post, or adding a map to this data using TimeMapper.

DAH Post #8: Using Google Fusion Tables and ImageQuilt


Using data I described in last week’s blog post, I created a Google Fusion Tables network visualization showing the connections between various subfields of machine intelligence research. I think looking at this visualization probably does far more for me (with the background research I’ve done on specific projects, scholars, and important dates connected with these research fields) than for anyone without specific knowledge of this field. Because users of the visualization can’t click through to see the other metadata I’ve associated with the nodes in the network, the information conveyed is rather superficial. Still, the network visualization helps to give a general sense of the important fields and their relationships to subfields in machine intelligence.
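The click-through limitation is ultimately a matter of data shape: a Fusion-style network is driven by a bare two-column edge list, so any per-node metadata has to live in a separate node table keyed by name, and the visualization never joins the two. A small sketch of that layout (the field names and rows below are hypothetical, for illustration only):

```python
import csv
import io

# Edge list: the only table the network visualization actually reads.
edge_csv = """source,target
machine intelligence,neural networks
machine intelligence,computer vision
neural networks,deep learning
"""

# Node table: per-node metadata keyed by name (hypothetical fields).
node_csv = """name,founded,key_figure
neural networks,1943,McCulloch and Pitts
"""

edges = list(csv.DictReader(io.StringIO(edge_csv)))
nodes = {row["name"]: row for row in csv.DictReader(io.StringIO(node_csv))}

# A richer tool would join the metadata onto each node like this;
# the edge-list-only visualization drops it on the floor.
for e in edges:
    meta = nodes.get(e["target"], {})
    print(e["source"], "->", e["target"], meta.get("key_figure", ""))
```

Keeping the metadata in a parallel node table at least preserves it for a future tool (Gephi, for instance, accepts a separate nodes CSV) even if the Fusion visualization cannot surface it.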

Moving to the art history side of the art-and-technology project I described last week, I’ve also generated an ImageQuilt using the ImageQuilt Chrome plugin. These particular images are all associated with Experiments in Art and Technology (E.A.T.), an initiative of the 1960s and 1970s founded by Billy Klüver, an engineer at Bell Telephone Laboratories; his colleague Fred Waldhauer; and the artists Robert Rauschenberg and Robert Whitman. The group aimed to collaboratively produce art using new technology. E.A.T. was a loose association of artists and technologists; artists including Jean Tinguely, Andy Warhol, Jasper Johns, and Yvonne Rainer were also involved with Klüver’s projects. The idea behind the E.A.T. collaborations was to allow for the creation of works that may not have been possible without the involvement of engineers, who would in turn be inspired by the artists to help shape the future of technology.[1] Many of the images are from a 2015 exhibition, E.A.T. Experiments in Art and Technology, at Salzburg’s Museum der Moderne. It’s an interesting way of bringing together some of the more canonical images of works from this movement, but again, I’m not sure how informative it is other than on a superficial level. I found it useful as part of my broader research efforts, but I wouldn’t incorporate it into a final version of a project.

Image quilt of a Google image search for “E.A.T. Experiments in Art & Technology”

Though neither of these visualizations is what I had in mind as I was plotting out my data last week, and though I’m not sure they’re useful in conveying or clarifying much information to an outside observer, they’ve helped me to organize some segments of my research in these areas. I’ve also learned how to organize my data in Google Fusion Tables as a result of inputting some elements of my research into a spreadsheet.

In the course of looking into network visualizations and how they have been used in digital art history projects, in order to determine what form (in an ideal world, with plenty of time) I’d like my visualization to take, I came across the Performance Artist Database, created by Matthew Miller in conjunction with his MFA thesis at Pratt. Focusing mainly on Fluxus artists, Miller organizes performance artists, events, and interactions through quantitative analysis in order to explore the development of artistic movements and mediums in what is traditionally a difficult art form to preserve. Click “View Entire Network” to see Miller’s network visualization of his research, searchable and with nodes linked to his own metadata (dates, locations, and performances) as well as corresponding DBpedia items. It’s impressive and useful, accomplishes what he set out to do, and could serve as a starting point for further research in this area.

References

1. E.A.T. Experiments in Art and Technology, ed. Sabine Breitwieser (Museum der Moderne Salzburg, 2015).

DAH Post 7: Art History and Machine Learning


The Physics arXiv Blog post “When a Machine Learning Algorithm Studied Fine Art Paintings, It Saw Things Art Historians Had Never Noticed” summarizes the 2014 findings of Rutgers University researchers Babak Saleh, Kanako Abe, Ravneet Singh Arora, and Ahmed Elgammal in their research on using machine learning to classify art. Comparing a set of 1,700 images by 66 artists using descriptions of the visual objects they contain, Saleh et al. experimented with an algorithm to identify metrics that would indicate which artists showed influence on or strong similarity with other artists. (Babak Saleh, Kanako Abe, Ravneet Singh Arora, and Ahmed Elgammal, “Toward Automated Discovery of Artistic Influence,” arXiv:1408.3218, August 14, 2014, accessed March 6, 2016, http://arxiv.org/abs/1408.3218.)

As JJ pointed out in class, the claims made for the findings are somewhat flawed, most obviously in treating the machine’s discovery of similarities between Braque’s Man with a Violin and Picasso’s Still Life: Sun and Shadow as an art historical revelation demonstrating a previously undiscovered relationship (“influence”) between the two painters. As any art historian will tell you, this is no revelation; Braque and Picasso worked on their paintings side by side. Similarly, the post portrays the computer’s identification of compositional similarities between Frederic Bazille’s Studio 9 Rue de la Condamine and Norman Rockwell’s Shuffleton’s Barber Shop as demonstrative of a “clear link” between the two, again demonstrating “influence” (regardless of any sort of context that might verify such an arbitrary observation). Humans are pattern-seeking creatures, so it makes sense that the machines we build would be too. Just because a computer identifies a pattern, however, does not mean that pattern represents something substantial; the computer’s creators are just as capable of falling into that interpretive trap.
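It helps to see how little machinery sits behind such similarity claims: each painting becomes a vector of visual-concept scores, and candidate “influences” are simply pairs of vectors that point in similar directions. A toy sketch (the scores below are invented for illustration; they are not the Rutgers team’s actual features):

```python
import math

# Hypothetical visual-concept scores for three paintings; the numbers
# are invented for illustration, not taken from the study.
paintings = {
    "Braque": [0.9, 0.8, 0.1, 0.0],
    "Picasso": [0.8, 0.9, 0.2, 0.1],
    "Rockwell": [0.1, 0.0, 0.9, 0.8],
}

def cosine(u, v):
    """Cosine similarity: near 1.0 when two vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print("Braque vs Picasso:", round(cosine(paintings["Braque"], paintings["Picasso"]), 2))
print("Braque vs Rockwell:", round(cosine(paintings["Braque"], paintings["Rockwell"]), 2))
```

A high cosine score is pure geometry; turning it into a claim of influence still requires exactly the historical context that the blog post glosses over.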

As this blog post demonstrates, there is an interesting way in which developments at the intersection of machine learning and art history are hailed in the popular press, with the potential for scholarly advances using machine learning treated as actual advances in art history rather than in computer science. Similarly, with regard to data visualization, it’s problematic to consider a digital art history project as an end in itself rather than as the starting point for further research.

For my data visualization project, I’d like to draw on research I’m conducting in another class. I’m working on an art history project related to Google’s 2015 open sourcing of the code for their DeepDream project. In a post on Google’s research blog on June 17, 2015, Google software engineers Alexander Mordvintsev, Christopher Olah, and Mike Tyka revealed outputs from a data visualization tool built on an artificial neural network (ANN), a subcategory of machine learning research.[2] Two weeks later, seemingly in response to interest “from programmers and artists alike” in the images generated by the tool, now called DeepDream, they posted the code and documentation for the software, encouraging readers to make their own neural network images and tag them accordingly on social media, sparking a flurry of media think pieces and #deepdream images, videos, and software modifications.[3]
In a Medium.com blog post released in conjunction with “DeepDream: The art of neural networks”, a February 2016 benefit auction and art exhibition at San Francisco’s Gray Area Foundation for the Arts, Blaise Aguera y Arcas, head of Google’s Machine Intelligence Group in Seattle, situated the critical reaction to DeepDream within a history of anti-technology art historical scholarship in order to argue that DeepDream images created by users are, in fact, artworks.[4]

DeepDream is a computer vision system (a specific type of ANN) that feeds an input image through successive layers of artificial “neurons”, also known as nodes, processing elements, or units. The layers of neurons, governed by adaptive numeric weights set by the developers and calibrated against a large dataset of images, build and enhance certain features of the input image based on the information in the dataset and the algorithms structuring the output visualization. The output image is thus conceived of as a perception and interpretation of the input image, based on the network’s prior training. The research described in the Physics arXiv Blog post mentioned above is also a form of machine learning, but rather than using a network of neurons trained on a large dataset of images to output another image as an “interpretation” of the input, it uses an algorithm to identify similarities between input images from textual descriptive data.
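To make the layered structure described above concrete, here is a deliberately tiny, hand-weighted sketch of a forward pass. This is illustrative code only, not Google's DeepDream implementation; in a real network the weights would be learned from the training dataset rather than fixed by hand:

```python
# Each layer multiplies its input by a matrix of adaptive numeric
# weights and applies a nonlinearity; stacking layers yields the
# network's "interpretation" of the input.
def relu(v):
    return [max(0.0, x) for x in v]

def layer(inputs, weights):
    """One fully connected layer: weighted sums followed by ReLU."""
    return relu([sum(w * x for w, x in zip(row, inputs)) for row in weights])

def forward(x, all_weights):
    for weights in all_weights:
        x = layer(x, weights)
    return x

# Two tiny layers over a 3-value stand-in for an "image".
weights = [
    [[0.5, -0.2, 0.1],
     [0.3, 0.8, -0.5]],  # layer 1: 3 inputs -> 2 neurons
    [[1.0, -1.0],
     [0.5, 0.5]],        # layer 2: 2 inputs -> 2 neurons
]
print(forward([1.0, 2.0, 3.0], weights))
```

DeepDream then runs this process partly in reverse, adjusting the input image to amplify whatever features a chosen layer responds to, which is what produces the hallucinatory output images.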

My deep dream (right), created using Hillary Clinton’s official Secretary of State portrait (left).

The discourse surrounding DeepDream’s release centers on how the visual representation of a picture’s interpretation by computer vision software, whose adaptive weights have been calibrated by a dataset of billions of digital images, is potentially revelatory of how humans think and perceive. Like Google’s more prosaic data visualization tools, however, the output depends upon a dataset determined by Google itself, however large. Advocates (from Google, in popular media, and among users) noted the images’ surreal and hallucinatory aesthetic appeal, while also speculating about the possibilities for using ANNs like DeepDream to model or otherwise explore the perceptual processes of the human brain.[5] Less vocally, critics emphasized the shallowness in both content and meaning of the output images, as well as the role of Google developers in determining the look of the images, a facet obscured by Google’s seemingly democratic open sourcing of the code and the resultant wave of images “interpreted” by computer vision.[6]

Ultimately, I’d like to explore the art historical discourses and significant moments that preceded DeepDream’s development and inform–explicitly and implicitly–how DeepDream has been defined, lauded, and excoriated by its developers, advocates, and critics. I think it might be interesting to examine and interrogate the branding of this particular form of computer vision software as capable of artistic creation, as opposed to its more utilitarian but less resonant function, data visualization. This will entail situating this specific branch of ANN research within the art historical context of systems art–specifically, art employing cybernetic systems, artificial life models (A-life), and art-and-technology (by which I mean the large-scale collaborations between artists and the tech industry from the 1940s through the 1970s).

As I conduct research for this project, I’m discovering the lengthy history of ANN research. Since 1943, engineers and scientists have explored many different aspects of machine learning using ANNs, charting various subfields and research accomplishments. For the data visualization project in this class, and in order to strengthen my working knowledge of this history and its possible correlations with cybernetics art, A-life, and art-and-technology, I’d like to develop a network visualization that traces, defines, and connects research fields and significant projects within the history of ANN research, examined in tandem with a timeline of developments in systems art in the latter half of the 20th century. This network visualization could help to clarify the connections, both technological and discursive, that have emerged in the recent media coverage of DeepDream’s release and usage, and at the very least provide context for a more specific research path.

In order to prepare the data required to visualize this network, I would need to identify significant fields, subfields, and projects within the history of ANNs, determine their dates and connections to other items in the network, and provide brief descriptions of their relevance. Ideally, I would also create a similar network for cybernetics art, A-life, and art-and-technology, and then identify conceptual connections between the two networks (ANN research and art history), though this would likely require much more time, and I would need a way to define a hierarchy of possible connections (i.e., am I connecting two items because they are related by date proximity, by the individuals who worked on them, by art historical context, or by an argument I am positing about the conceptual nature of their connection? The last category could prove very messy). I may only accomplish a demo version of this, but in my ideal visualization, users would be able to view the network as a whole, drill down into sections to view them more closely, and then zoom in on specific items to view title, date, and descriptive information not visible in the network view.

At this point I’m not sure whether I’d use a force-directed layout, a radial tree, or a treemap for this visualization–it will depend on the form and categories of the spreadsheet I create to organize the data, and how precise I want to (or am able to) be at this point. I’m thinking the spreadsheet fields will include a unique item ID (maybe?), field, subfield, project title (if applicable), date, people involved, description, and related items along with descriptions of their relationships. As I mentioned above, I may also want to define the various types of relationships included. At this point, my thinking about this is still very theoretical, so I welcome any advice you have!
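As a rough sketch of what the spreadsheet-to-network step might look like (all field names, items, and relationship labels below are hypothetical placeholders, not settled choices), each row could become a node and each typed relationship a labeled, directed edge:

```python
# Hedged sketch of the proposed data model: item rows (standing in for
# spreadsheet rows) become nodes; typed relationships become edges.
# A tool like Gephi or networkx could then render a force-directed view.
rows = [
    {"id": "mcculloch-pitts", "title": "McCulloch-Pitts neuron model",
     "date": 1943, "field": "ANN research", "related": []},
    {"id": "perceptron", "title": "Perceptron", "date": 1958,
     "field": "ANN research",
     "related": [("mcculloch-pitts", "builds on")]},
    {"id": "cybernetic-serendipity", "title": "Cybernetic Serendipity",
     "date": 1968, "field": "systems art",
     "related": [("perceptron", "conceptual link (my argument)")]},
]

# Nodes keyed by ID, keeping the descriptive attributes for drill-down.
nodes = {r["id"]: {k: r[k] for k in ("title", "date", "field")} for r in rows}

# Edges as (source, target, relationship-type) triples, so the hierarchy
# of connection types discussed above stays explicit in the data.
edges = [(r["id"], target, relation)
         for r in rows for target, relation in r["related"]]

for src, dst, rel in edges:
    print(f"{nodes[src]['title']} ({nodes[src]['date']}) "
          f"--[{rel}]--> {nodes[dst]['title']} ({nodes[dst]['date']})")
```

Keeping the relationship type on the edge itself would make it possible to filter the visualization by connection category (date proximity, shared individuals, posited conceptual links) later on.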

References

1. Babak Saleh, Kanako Abe, Ravneet Singh Arora, and Ahmed Elgammal. “Toward Automated Discovery of Artistic Influence.” arXiv:1408.3218, August 14, 2014. Accessed March 6, 2016. http://arxiv.org/abs/1408.3218.
2. “Inceptionism: Going Deeper into Neural Networks.” Google Research Blog, June 17, 2015. Accessed March 5, 2016. https://web.archive.org/web/20150708233542/http://googleresearch.blogspot.com/2015/06/inceptionism-going-deeper-into-neural.html.
3. “DeepDream – a Code Example for Visualizing Neural Networks.” Google Research Blog, July 8, 2015. Accessed March 5, 2016. http://googleresearch.blogspot.com/2015/07/deepdream-code-example-for-visualizing.html.
4. Aguera y Arcas, Blaise. “Art in the Age of Machine Intelligence.” Medium, February 23, 2016. Accessed March 5, 2016. https://medium.com/artists-and-machine-intelligence/what-is-ami-ccd936394a83.
5. LaFrance, Adrienne. “When Robots Hallucinate.” The Atlantic, September 3, 2015. Accessed March 5, 2016. http://www.theatlantic.com/technology/archive/2015/09/robots-hallucinate-dream/403498/; Campbell-Dollaghan, Kelsey. “This Artist Is Making Haunting Paintings With Google’s Dream Robot.” Gizmodo. Accessed March 5, 2016. http://gizmodo.com/this-human-artist-is-making-hauting-paintings-with-goog-1716597566; Hern, Alex. “Yes, Androids Do Dream of Electric Sheep.” The Guardian, June 18, 2015. Accessed March 5, 2016. http://www.theguardian.com/technology/2015/jun/18/google-image-recognition-neural-network-androids-dream-electric-sheep.
6. Chayka, Kyle. “Why Google’s Deep Dream Is Future Kitsch.” Pacific Standard. Accessed March 5, 2016. http://www.psmag.com/nature-and-technology/googles-deep-dream-is-future-kitsch.

Digital Project #2: Google Maps

Source: Digital Project #2: Google Maps

For my digital project using Google Maps, I’ve chosen to chart [a very small sample size representing] the spread of Italian Futurism inside and outside of Italy by mapping first the initial publication/writing dates and locations of some of the more notable manifestos associated with Italian futurism, and then, in my second layer, mapping the re-publication dates and locations of those same manifestos (or their variations, or, in one instance, a closely associated work of art). In addition to distinguishing between the layers using the varying shape and color of the pins, I’ve also drawn lines between the connected publications in order to make their identities and movements across Europe more explicit. In some cases, I’ve had to shift the pins a bit to make the map more legible (for instance, in Milan, the early locus of Futurist activity).

I’ve included links to some sources in the pins, but all of the manifestos I’ve cited, along with publication details, can be found in:

Futurist Manifestos. Edited by Umbro Apollonio. Translated by Robert Brain, R.W. Flint, J.C. Higgitt, and Carolin Tisdale. Boston: MFA Publication, a Division of the Museum of Fine Arts, Boston (1970).

DAH Post #6: Space is the Place?

Source: DAH Post #6: Space is the Place?

In “The Spatial Turn in Art History”, Jo Guldi, of the History Department at Brown University, implicitly contextualizes the relatively recent spate of digital mapping projects in Art History by tracing how art, art history, and related disciplines have traditionally parsed, represented, and used spatial relationships in creation and scholarship. Through her exploration of the etymology of landscape, Guldi links German and Dutch collective irrigation projects to renderings of English manor estates in landscape paintings. Later in this genealogy, the construct of the landscape is extended and reimagined in the dioramas and spectaculars enjoyed by crowds in large urban centers. Guldi identifies the art historical study of landscape as the locus of philosophical speculation on the connections between visual perception, individual perspective, and Western history, as evidenced by 19th- and 20th-century readings of the invention of linear perspective in the Renaissance. Guldi notes that the study of historical maps changed in the late 20th century, along with that of other objects of study in Art History departments, with scholars “reading against the grain to reveal the prejudices of power.” More recent projects have explored the connections (or disconnections, as the case may be) between individual visual experience and the contemporary urban environment.

A talk by Donal Cooper of the University of Cambridge at today’s symposium at the Nasher Museum of Art at Duke University, “Apps, Maps, and Models: A Symposium on Digital Pedagogy and Research in Art History, Archaeology, & Visual Studies”, highlighted a particular example of ties between the visual experience of an art object and the history of that object’s spatial relationship with its environment, as well as between a contemporary urban environment and its digitally reconstituted past. In “Modeling Architecture and Uncertainty in Renaissance Florence: The Digital Reconstructions of Santa Chiara and San Pier Maggiore”, Cooper described a project to digitally reconstruct, via 3D mapping, the destroyed church of San Pier Maggiore in order to provide a space both ritually and historically fitting in which to “display” Botticini’s Palmieri Altarpiece for the National Gallery London’s exhibition of that work. In the process of research, Cooper and his colleagues discovered physical traces of the destroyed church (pillars, gargoyles, steps to the bell tower) in the existing urban environment. Those physical traces in turn helped to build, and were incorporated within, the resultant 3D model. Cooper shared a YouTube video giving an overview of the process: https://www.youtube.com/watch?v=ZUXa1nDtOB0. In this instance, an effort to provide a more fitting context for the Botticini altarpiece than that of the museum (a repository for decontextualized physical traces) led to a fuller exploration of actual physical traces within, as Cooper put it, “the present urban fabric”. With this digital project, the remaining physical traces of San Pier Maggiore (as photographed and encoded) are finally reunited with their lost altarpiece.

With all this in mind, I’ve been experimenting this week with StoryMapJS. The map below constructs a narrative path using a toner map of the former site of Black Mountain College (1933-1957), a progressive liberal arts school located in Black Mountain, NC, whose artistic and pedagogical impact on 20th-century American art has been wide-ranging. A number of well-known and influential figures studied or taught there in a variety of disciplines, including Josef and Anni Albers, Buckminster Fuller, John Cage, Robert Rauschenberg, and Ruth Asawa, among many others. Like others familiar with the history of the school, I have always found it somewhat fantastic that the rural town of Black Mountain, NC could become a locus–albeit a temporary one–of the avant-garde (and in the middle of the Great Depression, no less). The area is now owned by a private boys’ camp, Camp Rockmont. Using a Black Mountain College site map discovered in 2014 by Camp Rockmont staff, along with photographs of the Black Mountain College campus and students taken by photographer and instructor Hazel Larsen Archer, I’ve attempted to map some of the historic activities of the school onto its current location. Doing so is, in the sense that I’ve referred to above, reconstituting the memories of the place as experienced through physical location, even as the terrain may have changed and some of the structures may no longer stand. The site map is itself an interesting document in that it depicts both the school’s then-current buildings (in green) and future plans for expansion (in orange).

This StoryMap is incomplete for my purposes, both because I lack the images to construct a more detailed narrative and because StoryMapJS has a somewhat limited set of features. For instance, it would be useful for orientation purposes to show the map, the site map, and the aerial photograph of the campus side by side (and, ideally, to identify features within them), but it appears I’m only able to display one photograph at a time next to the map. Additionally, given my stated goal of exploring and spurring meditation on how the setting of the campus might have influenced the art made there, it might have made more sense for me to use a map showing terrain, at the very least. With the current map, it is impossible to know where the structures referred to in the photographs actually stood–a visualization akin to Google Street View might be better suited to a project like this. Initially, I had planned to use the site map as the background image that the path would navigate through, but I determined that my JPEG of the site map was not of a high enough resolution.

Still, just from this experiment, it is easy to see how StoryMapJS can facilitate the construction of certain humanities narratives in which the visualization of and movement through space is imperative. Its capabilities also demonstrate the ways in which we might navigate an image of an art object in the same way as we navigate a map, an impulse that art historians and digital humanists may wish to deploy, exploit, or resist depending on what their scholarly goals may be.

Exploring Thinglink with Archival Materials

Source: Exploring Thinglink with Archival Materials

I’ve had some fun exploring Thinglink.com this week. To explore the capabilities of this tool, I decided to use some recently scanned materials from North Carolina artist Connie Bostic’s collection of clippings from earlier in her career. The image below, and the materials included in my annotations, all relate to the hue and cry that took place in the Asheville, NC community over the supposed indecency of the paintings and their exhibition in a school, as well as the responding uproar over art censorship after the school’s headmaster covered up and removed the paintings.

The paintings, part of Bostic’s “Mark of the Goddess” series, were to be exhibited in the Walker Arts Center of the Asheville School from Aug. 8 to Sept. 10, 1990. Instead, headmaster John Tyrer called for them to be removed almost immediately, saying, “Female genitalia have no place on the walls of a school building.” It’s probably relevant to note here that the works are abstract rather than figural. The works are, in Bostic’s words, “archetypal female symbols that depict the regenerative and nurturing power of women”; the exhibition combined the oil-on-paper paintings with quotations about women from history’s “great men”, such as “A woman takes off her claim to respect along with her garments” (Herodotus) and “Silence gives the proper grace to woman” (Sophocles). The fact that Bostic’s work, which is intended to evoke the concern she feels about the loss of women’s cultural heritage within history, was itself covered over and symbolically silenced (at least for the hot minute before folks caught wind of the censorship and started protesting, which led to even greater exposure for the exhibition) is quite the irony.

In my annotations below, I’ve selected some quotations from news articles surrounding the event, as well as provided images of articles and editorials that argue both sides of the issue (for a magnificent example of the pro-censorship, anti-“gender feminist” side, see #6 below, “Feminist Junk Masquerades as Art”). There are also images of exhibition cards and fliers, as well as a newspaper image of Bostic herself in front of one of her works. For the most part, I have preserved Bostic’s arrangement of clippings in the order of my tags. On the right side of the image (where the white sheet veils the artworks on the wall), I have included images of a selection of the original “Mark of the Goddess” paintings. I will admit to indulging in a sort of subversive delight by digitally “unveiling” these censored images, right on top of a visual symbol of their censorship.

It’s hard to overstate how viscerally this has demonstrated to me the importance of examining an item in context, and an artist’s artwork in the context of her archives. The photograph below is not only a record of one exhibition opening among many, it is a relic of a certain cultural moment for women artists in the South. Furthermore, while physically interacting with the materials certainly gives a rich sense of the various discourses surrounding this specific event, being able to incorporate quotations and images (as well as other media, potentially) into my annotations of this photo creates a sort of narrative shorthand, another way of paging through a scrapbook.

And finally, if any of you are interested and would like to annotate the image through Thinglink, please feel free!

NB:

I have not provided individual citation information for these images and scans–all of these items are from Bostic’s studio materials.
One limitation of Thinglink seems to be that there is no zoom function for the images included within tags. The images within the tags are purposefully low-resolution, but you should be able to zoom within your browser and still read the full newspaper articles. Future iterations of annotations for this specific image could include links to digital versions of these newspaper articles in online databases.
Thinglink requires you to upgrade to the “Pro” version in order to post images within tags. I don’t plan to upgrade after my free 14-day trial period is over, so most of the images in the tags below will not be visible after Friday, Feb. 26.


© 2020 Dressing Valentino
