Source: DAH Post 7: Art History and Machine Learning

The Physics arXiv Blog post, “When a Machine Learning Algorithm Studied Fine Art Paintings, It Saw Things Art Historians Had Never Noticed,” summarizes the 2014 findings of Rutgers University researchers Babak Saleh, Kanako Abe, Ravneet Singh Arora, and Ahmed Elgammal on using machine learning to classify art. Comparing a set of 1700 images by 66 artists using descriptions of the visual objects they contain, Saleh et al. experimented with an algorithm to identify metrics that would indicate which artists showed influence on, or strong similarity with, other artists.[1]
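To make the general approach concrete (and to underline how much depends on the chosen features and metric), here is a minimal sketch in Python of comparing paintings represented as vectors of visual concepts; the concept list, the values, and the cosine similarity metric are my own illustrative choices, not the Rutgers team’s actual features or algorithm.

```python
# A toy sketch of the general idea (not the Rutgers team's actual method):
# represent each painting as a binary vector of visual concepts and compare
# paintings by the cosine similarity of those vectors.
# The concept names and values below are invented for illustration.
import numpy as np

concepts = ["violin", "table", "window", "figure", "fruit"]
paintings = {
    "Braque, Man with a Violin":           np.array([1, 1, 0, 1, 0]),
    "Picasso, Still Life: Sun and Shadow": np.array([0, 1, 1, 0, 1]),
}

def cosine(a, b):
    """Cosine similarity between two concept vectors."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

for name_a, vec_a in paintings.items():
    for name_b, vec_b in paintings.items():
        if name_a < name_b:  # print each pair once
            print(f"{name_a} vs {name_b}: {cosine(vec_a, vec_b):.2f}")
```

Even at this toy scale, the “similarity” score is entirely an artifact of which concepts get annotated and how they are weighted, which is part of what makes the influence claims discussed below so slippery.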

As JJ pointed out in class, the claims made for the findings are clearly somewhat flawed, most obviously in treating the machine’s discovery of similarities between Braque’s Man with a Violin and Picasso’s Still Life: Sun and Shadow as an art historical revelation demonstrating a previously undiscovered relationship (“influence”) between the two painters. As any art historian will tell you, this is no revelation; Braque and Picasso worked on their paintings side by side. Similarly, the post portrays the computer’s identification of compositional similarities between Frederic Bazille’s Studio 9 Rue de la Condamine and Norman Rockwell’s Shuffleton’s Barber Shop as demonstrating a “clear link” between the two, again an “influence,” regardless of any context that might verify such an arbitrary observation. Humans are pattern-seeking creatures, and the machines we build reflect that. The fact that a computer identifies a pattern, however, does not prevent its creators from falling into the familiar trap of treating that pattern as evidence of something more substantial; if anything, it makes the trap easier to fall into.

As this blog post demonstrates, there is something telling about the way developments in machine learning are hailed in the popular press when they touch on art history: the potential for scholarly advances using machine learning is treated as an actual advance in art history rather than in computer science. Similarly, with regard to data visualization, it is problematic to consider a digital art history project an end in itself rather than a starting point for further research.

For my data visualization project, I’d like to draw on research I’m conducting in another class. I’m working on an art history project related to Google’s 2015 open sourcing of code for their DeepDream project, an Artificial Neural Network (ANN). In a post on Google’s research blog on June 17, 2015, Google software engineers Alexander Mordvintsev, Christopher Olah, and Mike Tyka revealed outputs from a data visualization tool built on an ANN, a subcategory of machine learning research.[2] Two weeks later, seemingly in response to interest “from programmers and artists alike” in the images generated by the tool, now called DeepDream, they posted the code and documentation for the software, encouraging readers to make their own neural network images and tag them accordingly on social media, sparking a flurry of media think pieces and #deepdream images, videos, and software modifications.[3] In a Medium.com blog post released in conjunction with “DeepDream: The art of neural networks,” a February 2016 benefit auction and art exhibition at San Francisco’s Gray Area Foundation for the Arts, Blaise Aguera y Arcas, head of Google’s Machine Intelligence Group in Seattle, situated the critical reaction to DeepDream within a history of anti-technology art historical scholarship in order to argue that DeepDream images created by users are, in fact, artworks.[4]

DeepDream is a computer vision system (a specific type of ANN) that feeds an input image through successive layers of artificial “neurons,” also known as nodes, processing elements, or units. The layers of neurons, governed by adaptive numeric weights whose architecture is set by the developers and whose values are calibrated by training on a large dataset of images, build and enhance certain features of the input image based on the information in the dataset and the algorithms structuring the output visualization. The output image is thus conceived of as a perception and interpretation of the input image based on the network’s prior training. The research described in the Physics arXiv Blog post mentioned above is also machine learning, but rather than using a network of neurons trained on a large dataset of images to output another image as an “interpretation” of the input, its algorithm is used to identify similarities between input images using textual descriptive data.
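For readers unfamiliar with the terminology, below is a minimal sketch of the kind of gradient-ascent loop at the heart of this sort of visualization, written against PyTorch/torchvision rather than Google’s original Caffe release; the pretrained network, the choice of layer, the step size, and the iteration count are all illustrative assumptions, not DeepDream’s actual settings.

```python
# A minimal sketch of DeepDream-style "feature amplification" via gradient
# ascent, assuming a recent PyTorch/torchvision install (not Google's
# original Caffe code). Layer, step size, and iteration count are arbitrary
# illustrative choices; input normalization is omitted for brevity.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.googlenet(weights="DEFAULT").eval()

# Record activations from one intermediate layer; different layers emphasize
# different kinds of features (edges, textures, eyes, architecture, ...).
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inp, out: activations.update(target=out))

img = T.Compose([T.Resize(512), T.ToTensor()])(Image.open("input.jpg")).unsqueeze(0)
img.requires_grad_(True)

for _ in range(20):                          # a handful of ascent steps
    model(img)
    loss = activations["target"].norm()      # amplify whatever the layer "sees"
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()

T.ToPILImage()(img.detach().squeeze(0).clamp(0, 1)).save("dream.jpg")
```

The point relevant to my argument is visible even in this sketch: what the “dream” looks like is determined by the pretrained weights and by the developer’s choice of layer and parameters, not by the input image alone.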

My deep dream (right), created using Hillary Clinton’s official Secretary of State portrait (left).

The discourse surrounding DeepDream’s release centers on how the visual representation of a picture’s interpretation by computer vision software, whose adaptive weights have been calibrated by a dataset of billions of digital images, is potentially revelatory of how humans think and perceive. As with Google’s more prosaic data visualization tools, however, the output depends upon a dataset, however large, determined by Google itself. Advocates (from Google, in the popular media, and among users) noted the images’ surreal and hallucinatory aesthetic appeal, while also speculating about the possibilities for using ANNs like DeepDream to model or otherwise explore the perceptual processes of the human brain.[5] Less vocally, critics emphasized the shallowness in both content and meaning of the output images, as well as the role of Google developers in determining the look of the images, a facet obscured by Google’s seemingly democratic open sourcing of the code and the resultant wave of images “interpreted” by computer vision.[6]

Ultimately, I’d like to explore the art historical discourses and significant moments that preceded DeepDream’s development and that inform, explicitly and implicitly, how DeepDream has been defined, lauded, and excoriated by its developers, advocates, and critics. I think it might be interesting to examine and interrogate the branding of this particular form of computer vision software as capable of artistic creation, as opposed to serving an applicable but less resonant function: data visualization. This will entail situating this specific branch of ANN research within the art historical context of systems art, specifically art employing cybernetic systems, artificial life models (A-life), and art-and-technology (by which I mean the large-scale collaborations between artists and the tech industry from the 1940s-1970s).

As I conduct research for this project, I’m discovering the lengthy history of ANN research. Since 1943, engineers and scientists have explored many different aspects of machine learning using ANNs, charting various subfields and research accomplishments. For the data visualization project in this class, and in order to strengthen my working knowledge of this history and its possible correlations with and connections to cybernetics art, A-life, and art-and-technology, I’d like to develop a network visualization that traces, defines, and connects research fields and significant projects within the history of ANN research, so that this history can be examined in tandem with a timeline of developments in systems art in the latter half of the 20th century. This network visualization could help to clarify the connections, both technological and discursive, that have emerged in the recent media coverage of DeepDream’s release and usage, and at the very least provide context for a more specific research path.

In order to prepare the data required to visualize this network, I would need to identify significant fields, subfields, and projects within the history of ANNs, determine their dates and connections to other items in the network, and provide brief descriptions of their relevance. Ideally, I would also create a similar network for cybernetics art, A-life, and art-and-technology, and then identify conceptual connections between the two networks (ANN research and art history), though this would likely require much more time, and I would need a way to define a hierarchy of possible connections (i.e., am I connecting two items because they are related by date proximity, by the individuals who worked on them, by art historical context, or by an argument that I am positing about the conceptual nature of their connection? The latter category could prove very messy). I may only accomplish a demo version of this, but in my ideal visualization, users would be able to view the network as a whole, then drill down into sections of the network to view them more closely, and then zoom in on specific items in order to view title, date, and descriptive information not visible in the network view.
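One way to keep that hierarchy of connections from getting messy would be to encode the relationship types explicitly in the data itself. The Python sketch below shows one possible (entirely hypothetical) controlled vocabulary and edge structure; the type names and example item IDs are placeholders, not settled categories.

```python
# A rough sketch of keeping the "hierarchy of possible connections" explicit:
# a small controlled vocabulary of edge types, so that a link based on date
# proximity is never confused with one I am arguing for conceptually.
# All names and IDs here are hypothetical placeholders.
from dataclasses import dataclass

EDGE_TYPES = {
    "date_proximity",      # items are roughly contemporaneous
    "shared_personnel",    # same researchers/artists involved
    "art_historical",      # documented art historical context
    "posited_conceptual",  # a connection I am arguing for (the messy category)
}

@dataclass
class Edge:
    source: str     # unique item ID of the first node
    target: str     # unique item ID of the second node
    edge_type: str  # one of EDGE_TYPES
    note: str = ""  # brief justification for the connection

    def __post_init__(self):
        if self.edge_type not in EDGE_TYPES:
            raise ValueError(f"unknown edge type: {self.edge_type}")

# Example: linking a hypothetical ANN milestone to an art-and-technology event.
e = Edge("ann_perceptron_1958", "eat_9evenings_1966", "posited_conceptual",
         "both frame the machine as a quasi-perceiving collaborator")
```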

At this point I’m not sure if I’d use a force-directed layout, a radial tree, or a treemap for this visualization–it will depend on the form and categories of the spreadsheet I create to organize the data, and on how precise I want to (or am able to) be at this stage. I’m thinking the spreadsheet fields will include a unique item ID (maybe?), field, subfield, title of project (if applicable), date, people involved, description, and related items along with descriptions of their relationships. As I mentioned above, I may also want to define the various types of relationships included. At this point, my thinking about this is still very theoretical, so I welcome any advice you have!
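For what it’s worth, below is a rough sketch of how such a spreadsheet could become a network file, assuming the data were split into two hypothetical CSVs (nodes.csv and edges.csv) with roughly the columns listed above; networkx can then export GEXF for Gephi or node-link JSON for a d3-force layout, either of which would support the whole-network-to-detail drill-down described earlier.

```python
# A minimal sketch of turning the planned spreadsheet into a network,
# assuming two hypothetical CSVs with the columns noted in the comments.
import json
import networkx as nx
import pandas as pd

# nodes.csv columns: id, field, subfield, title, date, people, description
# edges.csv columns: source, target, edge_type, note
nodes = pd.read_csv("nodes.csv").fillna("")
edges = pd.read_csv("edges.csv").fillna("")

G = nx.Graph()
for _, row in nodes.iterrows():
    G.add_node(row["id"], **row.drop("id").to_dict())
for _, row in edges.iterrows():
    G.add_edge(row["source"], row["target"],
               edge_type=row["edge_type"], note=row.get("note", ""))

nx.write_gexf(G, "ann_art_network.gexf")      # open in Gephi
with open("ann_art_network.json", "w") as f:  # or feed to a d3-force layout
    json.dump(nx.node_link_data(G), f, indent=2)
```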

References

1. Babak Saleh, Kanako Abe, Ravneet Singh Arora, and Ahmed Elgammal. “Toward Automated Discovery of Artistic Difference.” arXiv:1408.3218, August 14, 2014. Accessed March 6, 2016. http://arxiv.org/abs/1408.3218.
2. “Inceptionism: Going Deeper into Neural Networks.” Google Research Blog, June 17, 2015. Accessed March 5, 2016. https://web.archive.org/web/20150708233542/http://googleresearch.blogspot.com/2015/06/inceptionism-going-deeper-into-neural.html.
3. “DeepDream – a Code Example for Visualizing Neural Networks.” Google Research Blog, July 8, 2015. Accessed March 5, 2016. http://googleresearch.blogspot.com/2015/07/deepdream-code-example-for-visualizing.html.
4. Blaise Aguera y Arcas. “Art in the Age of Machine Intelligence.” Medium, February 23, 2016. Accessed March 5, 2016. https://medium.com/artists-and-machine-intelligence/what-is-ami-ccd936394a83.
5. Adrienne LaFrance. “When Robots Hallucinate.” The Atlantic, September 3, 2015. Accessed March 5, 2016. http://www.theatlantic.com/technology/archive/2015/09/robots-hallucinate-dream/403498/; Kelsey Campbell-Dollaghan. “This Artist Is Making Haunting Paintings With Google’s Dream Robot.” Gizmodo. Accessed March 5, 2016. http://gizmodo.com/this-human-artist-is-making-hauting-paintings-with-goog-1716597566; Alex Hern. “Yes, Androids Do Dream of Electric Sheep.” The Guardian, June 18, 2015. Accessed March 5, 2016. http://www.theguardian.com/technology/2015/jun/18/google-image-recognition-neural-network-androids-dream-electric-sheep.
6. Kyle Chayka. “Why Google’s Deep Dream Is Future Kitsch.” Pacific Standard. Accessed March 5, 2016. http://www.psmag.com/nature-and-technology/googles-deep-dream-is-future-kitsch.