This week, we talked about the pedagogy of DH. In their article, “The Living Syllabus: Rethinking the Introductory Course to Art History with Interactive Visualization,” Caroline Bruzelius and Hannah Jacobs describe a shift in the approach to art education that moves away from viewing artworks in isolation, as aesthetic events within a larger linear historical evolution. Instead, students can better engage with the materiality of art objects, tracing their lives from creation to collection through the use of maps. This project really spoke to me, since the project I’ve worked on for this class has been thinking through some of the same ideas and issues. The promise of a deeper understanding of the role of artworks in larger networks and systems of trade, travel, and commodification really appealed to me, and I would have loved to take this class.
By allowing students to create maps based on their individual interests, like the origin of materials used in ancient Egyptian boats, the professors can stimulate higher levels of research and engagement. Students can draw on different kinds of primary materials and propose their own interpretations, fostering critical thinking.
Moreover, the incorporation of visualization and data-collection technologies, such as Omeka and Neatline, not only benefits students in art history but also equips them with skills applicable to other courses and projects. For example, one student applied these tools to model pollution rates in a different class. The passage emphasizes the importance of integrating digital tools into education, highlighting their role in enhancing twenty-first-century literacy.
The passage concludes by noting the irony that immaterial digital technologies are so well-suited to the study of material culture. Despite this irony, technology’s capacity to record, capture, and organize data enables a new level of engagement with art and material culture, fostering a deeper understanding of objects, places, and buildings.
Solmaz Kive, in “Digital Methods for Inquiry into the Eurocentric Structure of Architectural History Surveys,” discusses how Eurocentric bias shows up in architectural surveys and highlights the advantages of using data analysis and visualization techniques to expose such biases. She focuses on two contemporary surveys, “Architecture and Interior Design” and “World Architecture,” to show how these digital methods can reveal patterns and disparities in coverage.
The Eurocentric bias is identified in several areas, including the relative coverage of different regions, the structure of the survey, and the emphasis on certain building types associated with Western values. She emphasizes that the bias often manifests in an overemphasis on religious structures in “other” traditions, reinforcing the narrative of premodernity.
The analysis uses digital visualizations to map the geographical distribution of buildings discussed in these surveys. For “Architecture and Interior Design,” the map reveals a heavy concentration of examples in Europe and the United States, with sparse coverage in other parts of the world. The disparity is visually evident, exposing areas deemed unworthy of mention. In the case of “World Architecture,” the map shows more geographically diverse content but still highlights the concentration on certain regions, such as Asia, while specific regions of South America and Africa receive only selective attention. The map helps depict the relative scale and coverage of different areas, addressing the mismatch between geographical size and attention in the survey. She concludes by emphasizing the versatility of data analysis and visualization methods, which can expose biases and be adapted to different types of surveys.
I think that as technology in the public sphere progresses and people get more used to it, art history and the humanities will have no choice but to adapt and include digital methods. Even things that haven’t traditionally been considered “academic” can be used for teaching. Using Pinterest in class was a great exercise, and categorizing things visually is an obvious and easily graspable way to visualize something that professors usually try to describe. This group activity was a really cool way to introduce discussions and start conversations.
This week, we talked about public engagement. The term “virtual museum” has been controversial for many people, but especially among German scholars who, since the 1990s, have challenged the established definition proposed by the International Council of Museums (ICOM). ICOM says that a museum is a physical space where objects are selected, preserved, interpreted, and exhibited. Critics of the idea of the “virtual museum” argue that “virtual” and “museum” form an oxymoron, and they stress the importance of physical museum space and engagement with real objects, asserting that the virtual cannot substitute for the real encounter with authentic artifacts.
A methodological approach considers representations of museum collections in virtual space and “e-tangibles” as additional forms of communication that neither replace nor compete with physical museums but address different needs. Collections of digital reproductions of museum objects appear as online databases, web portals, and digital exhibitions. Museums systematically digitize their collections, providing open access to object data and digital images through their websites and portals like Europeana and the Deutsche Digitale Bibliothek, and utilizing semantic web technologies and digital standards like LIDO and CIDOC-CRM.
Digital museum features, including augmented and virtual reality, enhance conventional museum space. Augmented reality offers insights into hidden details, brings specimens to life, and provides untold stories behind collections. Virtual museum space integrates tools to augment museal reality, complementing and enhancing conventional museums through interactivity and user experience.
The goal of a virtual museum is to consider object biographies, link data to the uniqueness of objects, and present a multi-layered view while imparting narratives that emotionally engage visitors in both virtual and conventional museum spaces. Achieving this requires the integration of digital and immersive tools and methods to represent conventional museum space in the virtual world, creating an extended museum experience that encourages the joy of discovery. From a scholarly perspective, having links to biographies and supplementary materials is a dream, and I can only imagine how this would enhance the in-person museum experience.
However, implementing a virtual museum is challenging. Tools built on the documentation standards mentioned above, such as WissKI, can help with data harvesting, but modeling and mapping data remain complex and time-consuming tasks. Bridging the gap between theory and practice involves exploring the virtual museum as an interdisciplinary and transdisciplinary field of investigation.
Crowdsourcing provides museums with a unique methodology to establish new relationships with their audiences, transforming the conventional museum-user dynamic by involving the public as curators, experts, and researchers. This approach not only enriches the user’s experience but also enhances the museum’s data and access points. By incorporating the public’s vocabulary and style of description, crowdsourcing expands the museum’s voice, making collections more accessible to a wider audience and fostering mission-driven experiences that encourage engagement with the institution.
In the online landscape today, where the public encounters misinformation, biases in search results, and often invisible artificial intelligence, maintaining a status quo that hinders the mission of museums is counterproductive. Projects like crowdsourcing, seen as an extension of museums’ mission-driven work, offer an opportunity to challenge existing power structures and consciously shape the course of the institution. Acknowledging participation as integral to the institution’s mission allows for the allocation of staff, time, and resources to address contemporary issues affecting the public.
In “Crowdsourcing Metadata in Museums: Expanding Descriptions, Access, Transparency, and Experience,” Jessica BrodeFrank discusses how projects like “Tag Along with Adler” contribute not only to understanding research topics but also serve as real-time case studies evolving through actual work at institutions like the Adler Planetarium. As the public’s online habits evolve, catalogers should shift their focus to both the “of” and the “about” of collections. She advocates for more institutions to engage in crowdsourcing and metadata-generating projects that leverage the enthusiasm and insight of their audiences. These projects provide meaningful opportunities for the public to experience collections and make them more discoverable. Crowdsourcing emerges as an effective strategy to enhance transparency, accessibility, and representation within museum collections, playing a crucial role in museums’ online presence.
I’m really excited about the GLAMWiki project. I suggested to Veronica that we incorporate the Feminist Wiki Edit-a-Thon as an ASGO event, since I think lots of art historians would be super interested in it. I’ve found lots of Wikipedia pages, particularly about women, that are really insufficient and inadequate.
This week, we talked about 3D modeling and visualization. In her article “Digital 3D Modeling for the History of Art,” Amy Jeffs discusses the benefits and uses of 3D modeling as a tool for art historians. She focuses on three projects: the Digital Pilgrim Project, which used 3D modeling for medieval pilgrim souvenirs; Sofia Gans’ study of a medieval brass assemblage; and Robert Hawkins’ application to medieval stone bosses. I thought it was interesting that all three of her examples are medieval art, but when I think about who is most comfortable with 3D projects (other than architectural historians, who make 3D modeling seem old hat), it does seem like classical and medieval scholars are doing proportionally more work than scholars in other areas.
She doesn’t discuss reconstructing buildings, 3D modeling as an independent art form, or 3D printing. The potential of virtual 3D reproduction in generating new research questions and altering perspectives is highlighted. She also provides a basic understanding of 3D modeling, emphasizing photogrammetric methods and the creation of models from photographs. The process involves taking multiple photos of an object, generating a low-density point-cloud through software, converting it into a high-density point-cloud (lines connecting the dots to make a skeleton), and creating a mesh overlaid with texture and color from the photographs (like skin).
One of my favorite examples was the Digital Pilgrim Project, which created 3D models of twelve medieval badges from the British Museum’s collection to shed light on the visual language of medieval Northwestern Europe. These badges, everyday objects bearing various emblems, including those of noble families or bawdy depictions, were designed to be worn and were later sewn into manuscripts. Despite their significance, many were discarded and have degraded over time. The 3D models allow virtual handling and a level of scrutiny that isn’t possible with the originals, simulating the original owners’ experiences of handling these objects. From a museum perspective, 3D modeling can be useful for creating intellectually rich archival records that facilitate a variety of study and teaching methods. The models have been used in museum outreach, dissertations, seminars, lectures, online articles, and social media, garnering thousands of views. This project really shows how high-quality 3D models enhance the study and appreciation of objects that are challenging to display in galleries or are relatively unknown. Accessible reproduction of these artifacts through 3D modeling serves as a crucial first step in bringing them to the attention of scholars.
Jeffs concludes that 3D modeling as a stand-in for works of art significantly enhances the field of art history for both teaching and research. The accessibility of building, viewing, and downloading 3D models with basic equipment gives art historians the opportunity to integrate this technology into their everyday academic activities. As exemplified by the Digital Pilgrim and Hawkins projects, 3D modeling can totally transform access to artworks that resist effective display or conventional photography. The selected subjects, medieval badges and sculpted bosses, benefit from viewers’ freedom to choose multiple viewing angles, simulating a fluid pre-photographic viewing experience.
Sarah Kenderdine, in her article “Embodiment, Entanglement, and Immersion in Digital Cultural Heritage,” discusses the application and use of Interactive Immersive Virtual Environments (IIVE) in the context of digital archives. There has been a big shift in recent years in user interaction with databases, archives, and search engines, from basic access to more creative production, driven by the growth of participatory culture through Web 2.0.
Several projects use alternative methods for exploring data in more immersive environments. The Living Web (2002) by Sommerer and Mignonneau, CloudBrowsing (2008–2009) by Lintermann et al., ECLOUD WW1 (2012) by Kenderdine and Shaw, and mARChive (2014) by Kenderdine all demonstrate innovative approaches to interactive and immersive data exploration.
These projects use diverse strategies, such as physically immersing users in live-streamed Internet data, creating spatial narratives on a panoramic screen, and using 3D projection environments to explore large-scale datasets. The focus is on providing users with an experiential and dynamic way to engage with digital archives, fostering creativity, exploration, and new meanings. Kenderdine also highlights the potential of IIVE to change information retrieval into a more spatial experience, one that encourages visual searching and enhances the display and interpretation of metadata in cultural contexts. I like that Kenderdine considers multiple facets of a space in her discussion of 3D environments, like environmental and cultural influences, and how digital access (rather than physically going) can help disperse knowledge. I would assume most archaeologists aren’t claustrophobic (perhaps I’m projecting memories of Indiana Jones…), but having digital models of spaces that are physically uncomfortable to be in seems like a major benefit to many people, no jumping out of the way of a giant boulder required.
In my 3D project, I wanted to get a scan of an old camera I have. It was such a fun memory finding it for $6 at a thrift store, but it mostly sits in its box as I figure out where to get film, how to shoot it, if it even works… Having a digital model was a way for me to get it out of its box more! Even if I never opened the box again, a version of it exists on my computer, and for someone with a penchant for hoarding sentimental items, I feel more at peace in case my house burns down and I don’t grab it. I was surprised how easy the process was, and it gave me a lot of confidence for larger institutions that have more money and people to digitize their collections. The back of the camera didn’t come through in the modeling, and I couldn’t get it to be quite as sharp in Sketchfab as it was in Agisoft, but overall, I’m really pleased with how it turned out.
This week, we discussed network analysis. Melanie Conroy, in her article “Networks, Maps, and Time: Visualizing Historical Networks Using Palladio,” discusses how a specific technology called Palladio can be used to create networks. Palladio is a tool designed for historians and related disciplines, facilitating the spatial and temporal visualization of data. It was developed by the Humanities + Design Lab to fulfill its vision for the study of social networks in the humanities. Palladio is suitable for qualitative studies, providing visualizations like maps and network graphs that are familiar to people like art historians and anthropologists. It allows for the presentation of multifaceted data, such as network data with date ranges or categories.
Unlike other network graph packages, Palladio doesn’t have advanced network analytics features, but she argues that it excels at presenting historical case studies because it is more of a software solution that makes design decisions for users. Palladio was developed specifically for historians, drawing on best practices, design research methods, peer critiques, and usability testing. The focus is on ease of use, combining diagram types, and quick prototyping. I think it’s really interesting that a technology was developed specifically for the humanities! I don’t typically think of the humanities as being the focus of tech research.
The tool’s simplicity allows historians to rapidly prototype diagrams, filter data, and explore subsets in highly legible diagrams suitable for both print and online use. Palladio’s outputs are also not copyrighted, making them usable in both commercial and non-commercial works. Even I, who have been described as a “boomer” for my technology capabilities, found Palladio shockingly easy to use. Unlike those of other network analysis programs, Palladio’s diagrams are clear and easy to read.
In their article “Perceptual Effects of Hierarchy in Art Historical Social Networks,” Houda Lamgaddam, Inez de Prekel, Koenraad Brosens, and Katrien Verbert discuss the impact of perceived hierarchies in visualizations of historical social networks. Focusing on how such networks are understood, the study finds that human participants tend to have a hierarchical bias when viewing social networks. The hypothesis was that representing social networks hierarchically would have perceptual advantages, and it was confirmed: users reported lower cognitive load, more frequent and deeper insights, and a strong preference for hierarchical representations. While conventional graph layouts encode meaningful topology, the authors emphasize the importance of considering the perceptual benefits of hierarchically structured layouts, particularly in the humanities.
The authors encourage an open and critical discussion of the role of network visualization in humanities research and suggest that this method of structuring layouts should be more commonly accepted as an alternative to conventional graphs. I think the authors are successful in their goal of providing scholars with different perspectives on their data, a contribution to the ongoing dialogue about the impact that structure can have in network visualization.
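To get a feel for what a hierarchically structured layout involves, here is a minimal sketch in Python. The patron-and-workshop network below is invented for illustration (it is not data from the study); the idea is simply to assign each node a level by its breadth-first distance from a chosen root, which is one simple way to impose a hierarchy on a social network before drawing it in layers.

```python
from collections import deque

# A toy patron-workshop network (hypothetical names, not from the study).
edges = [
    ("Patron", "Workshop A"), ("Patron", "Workshop B"),
    ("Workshop A", "Weaver 1"), ("Workshop A", "Weaver 2"),
    ("Workshop B", "Weaver 3"),
]

def hierarchy_levels(edges, root):
    """Assign each node a level equal to its breadth-first distance
    from the root; drawing each level as a row yields a layered,
    hierarchical layout of the network."""
    neighbors = {}
    for a, b in edges:
        neighbors.setdefault(a, []).append(b)
        neighbors.setdefault(b, []).append(a)
    levels = {root: 0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for nxt in neighbors[node]:
            if nxt not in levels:
                levels[nxt] = levels[node] + 1
                queue.append(nxt)
    return levels

print(hierarchy_levels(edges, "Patron"))
```

A force-directed layout would scatter these same six nodes by simulated physics; the layered version makes the patron-to-weaver structure immediately legible, which is essentially the perceptual benefit the authors measured.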
This week, we discussed the challenges and benefits to mapping time. Michael Goodchild in his article, “Combining Space and Time: New Potential for Temporal GIS”, discusses the limitations and challenges associated with traditional paper maps and introduces the evolution of Geographic Information Systems (GIS) as a solution. Static maps have many issues associated with them, like distortion of places and the inability to represent three-dimensional information accurately. The advent of digital technology, exemplified by the Canada Geographic Information System (CGIS), revolutionized the sharing of geographic information. CGIS laid the foundation for global GIS practices by converting map content into digital form, allowing for more efficient and accurate representation.
Layering different themes in GIS emerged as a key aspect, enabling advanced operations on geographic data. Despite the shift to digital technology, the metaphor and concept of physical maps continue to influence GIS design.
Goodchild also highlights some challenges posed by uncertainty in GIS. He emphasizes that, like analog maps, GIS databases are simply approximations that may not perfectly replicate the real world.
In the 1990s, an object-oriented model emerged that addressed the issues of the earlier relational model: rather than organizing information around relationships, it organizes it around defined objects that are sorted into classes. This model was a significant advancement that allowed for a more accurate representation of geographic information. The discrete-object model, representing identifiable and countable objects, became the most popular display mode, allowing GIS databases to store non-mappable information, which better accounts for changes over time and other issues presented by previous models, like blurred borders.
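The discrete-object idea is easy to sketch in code. In this hypothetical Python example (the `Feature` class and the town’s history are my own invention, not from Goodchild), each geographic feature is a distinct, countable object whose geometry is paired with a time-stamped attribute history, exactly the kind of non-mappable, temporal information a static paper map cannot carry:

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    """A discrete geographic object: geometry plus a (year, event) history."""
    name: str
    lon: float
    lat: float
    history: list = field(default_factory=list)  # list of (year, event) pairs

    def state_in(self, year):
        """Return the most recent recorded event on or before the given year."""
        past = [e for e in self.history if e[0] <= year]
        return max(past)[1] if past else None

# Invented example: a town whose regional affiliation changes over time.
border_town = Feature("Border Town", -3.7, 40.4, history=[
    (1900, "part of Region A"),
    (1950, "part of Region B"),
])
print(border_town.state_in(1930))
```

Querying the same object for 1930 and 1960 returns different states, which is the temporal-GIS capability the object model unlocks.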
Modern GIS databases can handle dynamism and other complexities that are challenging for static paper maps, which capture only a very specific moment in time and space. However, Goodchild also addresses a concept that has been a through line of the course: the resistance to adopting new practices despite vast improvements. I really enjoyed his discussion of how GIS specialists benefit from incorporating historians, anthropologists, and specialists from other fields to strengthen the accuracy of the maps created. Maps are often considered supplementary material, but supplementing the supplement can only bolster the strength and fidelity of the maps. Once again, encouraging an interdisciplinary approach seems to be a strategy that benefits everyone!
Goodchild acknowledges the existing challenges of the field, like the limited availability of tools for dynamic data analysis in GIS and the lack of dynamic, three-dimensional data, especially in historical periods. However, the future of GIS, particularly for historians, is extremely promising!
Considering Goodchild’s article, Suzanne Churchill, Linda Kinnahan, and Susan Rosenbaum put many of these digital mapping strategies to use in their DH project about Mina Loy. In their article, “‘Mina Loy: Navigating the Avant-Garde’: A Case Study of Collaborative DH Design,” they discuss the on-the-ground challenges of creating a DH mapping project. Although at some points they seem to be making a case for why we shouldn’t do DH projects, citing difficulties like securing funding or a global pandemic, the Mina Loy project is a promising example of what DH projects are capable of.
In my map, I wanted to capture how time and objects interact with each other and why a single view of time and space is not an accurate representation of an object’s story.
This week, our reading focused on building digital collections. Some of the popular platforms for building these collections are Omeka and Scalar. Although they accomplish similar goals, Omeka and Scalar have very different capabilities.
Omeka is primarily designed for creating and managing digital collections, archives, and exhibits. It is often used by cultural heritage institutions, libraries, and museums to showcase and organize digital assets, such as images, documents, and metadata. It also seems more focused on the curation and visual presentation of digital assets, making it well-suited for showcasing historical documents, photographs, and other archival materials. However, it’s pretty static in its delivery. It has a more straightforward and catalog-esque interface that makes it easy to organize and present digital collections. While Omeka has a lot of structured, clear digital collections options, it definitely has some restrictions when it comes to creating very complex narratives or interactive content.
Scalar is very different. It allows users to integrate more media content into a more narrative structure. It would be really great for projects that want to provide a more interactive and visually interesting reading experience. It offers a more flexible and dynamic interface that enables more storytelling as well as a blend of media elements within the text. Although Scalar is more visually interesting, I found that Omeka was easier to work with. Certainly, if I had more time to dedicate to learning Scalar, I would have chosen to use it for my project. But for a week-long project, I found the back end of Omeka to be a little more user-friendly. Callie helped me quite a bit though (thank you!). I think this is ultimately a fair trade-off: more work on the back end results in a friendlier user interface on the front, and vice versa.
Idea.org’s article, “Mapping the world of cultural metadata standards,” talks about the way that metadata and language are connected. Metadata is necessary for making information useful and searchable. It is the data about other data, like titles, locations, or descriptions of things, and it allows for better organization and searchability. To make metadata universal, rather than specific to a person or institution, a variety of standards have been established to ensure that a common language is used. This practice makes it easier to share, manage, and search through a site’s information.
Jenn Riley created a map of 105 prominent metadata standards that are used in the cultural heritage/humanities fields. These standards help codify the way that data is labeled and structured. A common tongue prevents confusion when different systems are used, basically creating a universal way to categorize and classify information. Search engines can quickly find documents based on keywords, but taxonomies (the specific jargon/organizational language of a specific field) can offer a more nuanced, structured approach, which provides alternative pathways and associated ideas.
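To make the idea of a shared “common tongue” concrete, here is a toy Python sketch of a metadata crosswalk. Dublin Core is a real standard from Riley’s map, and its simple element names (title, creator, date, identifier) are genuine, but the local record, field names, and mapping below are invented for illustration:

```python
# Hypothetical crosswalk: a museum's local field names -> Dublin Core elements.
crosswalk = {
    "object_title": "title",
    "maker": "creator",
    "date_made": "date",
    "accession_number": "identifier",
}

# Hypothetical local catalog record.
local_record = {
    "object_title": "Medieval pilgrim badge",
    "maker": "Unknown",
    "date_made": "c. 1400",
    "accession_number": "X.1234",
}

def to_dublin_core(record, crosswalk):
    """Translate a local record into Dublin Core element names so other
    systems (e.g., aggregators like Europeana) can interpret it."""
    return {crosswalk[k]: v for k, v in record.items() if k in crosswalk}

print(to_dublin_core(local_record, crosswalk))
```

Every institution can keep its own internal vocabulary, but exporting through a crosswalk like this is what lets a search across many collections treat “maker” and “creator” as the same thing.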
In libraries, cultural heritage, and the humanities sectors, clear metadata and taxonomies are crucial. But, the vastness and extensiveness of the metadata standards can be overwhelming. For this reason, Jenn Riley’s map is such a gift for people like myself who are just getting started in this field. Like buying a translation dictionary when you start learning a language, the map is a helpful jumping off point for people that aren’t familiar with industry practices. It sheds light on these standards and how they relate to one another, ultimately helping to make sense of the complex landscape of metadata in the digital age.
Metadata is a crucial part of creating a user-friendly site, but it’s a small part of the overall creation. Paige Morgan’s blogpost, “How To Get A Digital Humanities Project Off the Ground”, details her two DH projects that she finished while she was working on her PhD. (Phew!) I think she does a really fantastic job of laying out the issues she encountered, while not sugarcoating things or making the reader think, “why would I ever do a project like this?”. I understand that it can be difficult to convince academics to include DH as part of their research or in lieu of a book or article. In the world of “Publish or Perish”, I think that Morgan makes a compelling argument for why upcoming academics can view DH as an alternative way to present your research. Her article isn’t instructional, but I can imagine that anyone starting on a DH project would read it and get good information and some encouragement. It seems like this is a cool trend in the DH space. Lots of articles and projects we’ve looked at have included a little (or a large) retrospective piece about the troubles they encountered and how they solved them. It seems like despite the competitive nature of DH (based on the short lifespans of digital projects because technology moves so fast), many people are committed to including elements of transparency in their projects.
This week, we talked about data. Matthew Battles and Michael Maizels give a review of how art history has grown and changed as a discipline in their article, “Collections and/of Data: Art History and the Art Museum in the DH Mode”. Although photography has been understood as something art historians study, the establishment of art history as an empirical field was deeply influenced by the development of photography. Before photography, attempts to compare the development of art were difficult without having both works in front of you. Early (and infamous) art historians like Johann Winckelmann and Gotthold Lessing in the 18th century attempted to analyze art stylistically and historically, but they didn’t have the ability to compare and juxtapose different artworks.
Photography changed this landscape by allowing scholars to directly compare and contrast artworks. The lantern-slide lecture, a photographic performance, played a pivotal role in art history’s evolution. Battles and Maizels call the lantern-slide lecture a “ritualized performance,” a statement I couldn’t agree with more. Figures like Aby Warburg, Heinrich Wölfflin, and Jacob Burckhardt used these lectures to demonstrate stylistic attributes and art’s evolution. This approach was foundational to various art historical movements that aimed to explain the development of the arts. The slide lecture is still very prevalent in art history education and remains the foundation of how art history is taught across the country.
This seems like a bizarre pedagogical method of the past, a pre-WWII way of studying art. However, my advisors and professors (even ones in their forties) remember a time when gathering physical slides for a professor, or having a book that contained all the art for the class as a study guide, was part of their educational training. Professors adopting PowerPoints with images is a technological revolution that happened in my lifetime. I watched the switch from VCR tapes to DVDs to Netflix, books to Kindles, radios to iPods to Spotify, and despite all of this, the thought of a professor bringing a flash drive instead of physical slides for the first time makes me feel sort of… proud? Nostalgic? An awareness of standing on the shoulders of giants in the field I love? I’m not really sure, but this reading really made me realize that a field that in many ways seems so static and uninterrupted has actually had its own growth spurts.
However, the slide format has limitations, such as cost and fragility. Comprehensive image collections, which existed as surrogates in various forms, became instrumental in early 20th-century art history research. Walter Benjamin in particular emphasizes photography’s role in freeing art from context and ceremony, which led to its widespread use as a surrogate for viewing the actual thing.
The development of art history runs parallel to the appearance of the art museum, which originally was closer to an expression of nationhood/patriotism and a desire for orderly knowledge than to the impartial institution dedicated to accessible knowledge (whether museums actually fulfill that goal is a discussion for a different post. #MuseumsAreNotNeutral). Photographic archives, like encyclopedic museums, would ultimately become essential to art history because they provide a nucleus for surveying geographically and temporally dispersed artifacts. I’m sure that Warburg, Wölfflin, and others wouldn’t even recognize what art history has become. The other day, I read Erin Benay’s book Italy by Way of India, where she talks about how trade routes between Italy and India greatly affected the art produced. This kind of study, and many others, would be completely inconceivable without comparative studies (and the internet. That helps too. Clearly!).
André Malraux expands on this concept, suggesting that photographic archives could fulfill the museum’s mission by turning it inside out. The museum not only preserves artistic heritage but also enhances the concept of “art” itself. By putting ulterior motives, personal interests, and original context on the back burner, these objects can be situated within a more (though not totally) neutral lens. Thus, a more “global” art history is born.
Ted Underwood’s blog post, “Where to Start with Text Mining”, shows the possibilities of using visualization tools for research. I love these visualizations and I think it would make pulling thematic concepts out of readings really easy. In his example, I can imagine that seeing words like “solitary”, “tranquility”, or “meadow” be popular in his corpus would send me down an interesting research path. Are other similar authors using these words and themes? Is the author looking at pastoral themes?
He argues that text mining serves two goals in literary analysis. First, it can uncover patterns and themes that serve as evidence to complement, complicate, or support literary-historical arguments. The examples he chooses really demonstrate the success of visualization in this regard. Second, text mining can be an exploratory technique, revealing clues or trends that call for further study using more traditional methods. In many cases, though, the lines between the two objectives blur. Finding patterns and holes in your text is a researcher’s dream, and text mining can be an important tool for doing so.
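As a toy illustration of the exploratory counting behind such visualizations (this is my own sketch, not Underwood’s code, and the sample passage is invented), Python’s standard library is enough to surface the most frequent words in a text:

```python
import re
from collections import Counter

def word_frequencies(text, top_n=5):
    """Count word occurrences, ignoring case and punctuation."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top_n)

# Invented sample standing in for a literary corpus
sample = "The solitary reaper sang in the meadow. Solitary, she sang of tranquility."
print(word_frequencies(sample, top_n=3))
```

A real project would run this over thousands of texts and filter out common function words (“the,” “of”) before looking for thematic clusters, but the core move is the same: turn prose into counts you can compare and chart.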
This week, we continued our discussion about data visualization. Nathan Yau in his article, “Representing Data,” provides a comprehensive text about the variety of ways to display data, as well as their strengths and weaknesses. Data visualization involves a variety of steps and choices. Yau compares data visualization to cooking: just as a chef picks ingredients, prepares them, and combines them to create a meal, data visualizers make specific, intentional decisions about how to encode data, choose the best graph, and pick appropriate colors and fonts. Software can play a big part in many of these decisions, but there are still many ways the information can be tailored to make personalized choices that best represent the data. Understanding the many, many aspects of visualization and how they can be combined and changed allows you to use the software in the most effective way, rather than letting it take the reins on the process.
Like cooking, visualization depends on the skill of the creator. Knowledge is power in this situation. Those who are knowledgeable about the process and ingredients and invest time and energy into learning will produce better results. Alternatively, Yau notes that someone who relies too heavily on the computer to figure it out may end up with a less coherent or successful outcome.
Data visualization incorporates four main components: the data itself, visual cues, scale, and context. Data is the foundation of the project, but the other components relate to how the data is interpreted and understood. A visual cue is the way data is mapped out, ensuring that the core of the data is not lost in translation from numbers to visuals. The choice of visual cues relates to the purpose of the visualization and how the reader will perceive shapes, sizes, and shades.
It’s easy to lead people astray with data, and Yau describes the pitfalls of mapping data along with examples of how to present it clearly. For example, I really liked his section on color blindness and how many graphs are less accessible or difficult to read for people who struggle to differentiate between colors. Yau’s article would be really helpful for anyone who hasn’t had a lot of experience with graphs, which, frankly, is most art historians. It’s not quite a recipe, but Yau’s article clearly spells out the ingredients you can use to create a successful dish.
The Software Studies Initiative was an interesting read, and the way they highlighted different data visualization projects showed all the ways these techniques are being used. I’m not sure I fully understand the academic potential of these tools, but I’m sure someone is using ImagePlot for something! They seem more geared toward the museum sphere, and I can see curators and archivists using them as another way to show art or see connections. Having a program that could organize saturation or hue by year would be an interesting way to look at artists’ phases. Just as Picasso had his Blue Period, it would be easy for scholars to see similar trends or inclinations in an artist’s corpus. Without this technology, you would have to create your own graph, but a computer can do this with greater efficiency and without the flaws of human eyes. Maybe there was a short green phase before the blue phase that scholars haven’t recognized because those works have been seen as “blue enough” by human eyes.
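To make the hue-by-period idea concrete, here is a minimal sketch of my own (not ImagePlot itself; the pixel values are invented stand-ins for colors sampled from paintings) that averages hue from raw RGB values using only the Python standard library:

```python
import colorsys
from statistics import mean

def average_hue(pixels):
    """Mean hue (0.0-1.0) for a list of (r, g, b) tuples with 0-255 channels."""
    hues = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[0] for r, g, b in pixels]
    return mean(hues)

# Invented samples: one blue-leaning painting, one green-leaning painting
blue_ish = [(30, 60, 200), (20, 40, 180)]
green_ish = [(40, 190, 60), (30, 170, 50)]
print(round(average_hue(blue_ish), 2), round(average_hue(green_ish), 2))
```

Run over a dated corpus (with a library like Pillow to extract real pixels), plotting these averages by year is exactly the kind of chart that might expose a short green phase hiding before the blue one.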
When I was creating my WordCloud, I used Chip Colwell’s article, “Curating Secrets: Repatriation, Knowledge Flows, and Museum Power Structures.” I thought about many of the elements that Yau brings up in his article. I chose the shield form because it wasn’t a distracting motif, and I also felt it related to the themes of cultural heritage protection the article discusses. I also used a blue gradient for the text, since blue seemed like a safe color that could be easily differentiated, rather than some of the other default options. I couldn’t figure out how to add a title, which is one of the suggestions Yau says can add clarity for the reader. I also chose three different fonts, a serif, a sans serif, and a monospace. I hope this adds clarity through different textures.
Pamela Fletcher and Anne Helmreich discuss the impact of transportation, communication, and financial networks in late nineteenth-century England, especially London, on the art market. The growth of these networks worked in parallel with the mobility of goods, which bolstered the international art market. The authors use data sets and visualizations to consider the geography of the London art market and the sales data of prominent firms such as Goupil & Cie (later Boussod, Valadon & Cie).
The density of the London art market, and the various pathways through which artworks, artists, and buyers interacted, changed over time. Fletcher and Helmreich discuss the implications of these changes and make a strong argument for why visualizing them results in a greater understanding of the art scene. The art market is dynamic, to say the least, and the mercurial nature of which galleries are open at any given moment is clearly reflected in the mapping.
Fletcher and Helmreich also consider the evolution of the art market in the nineteenth century, with a major shift from state-sponsored patronage to private dealers and galleries. They mention previous research on the British art market and the role of factors such as industrialization, Protestantism, and social circles in shaping it. I’m not sure I see these more nuanced changes reflected in the digital map, but I think the authors’ explanation of why things were changing so drastically provides a deeper and more nuanced reading of the map.
Basically, the text supports the map, and the map supports the text. This model best illustrates the way art history can benefit from digital humanities. I’m not sure how you could achieve a similar goal with analog methods, and I doubt it would have the same effect, particularly given the interactivity of the map. I’m very impressed by what the authors could achieve with the map and how much stronger it made their argument look.
I began my reading responses with a consideration of how art historians consider ourselves vaguely “interdisciplinary,” but when push comes to shove, we are uninterested in exploring other fields. I think Béatrice Joyeux-Prunel, in “Digital Humanities for a Spatial, Global, and Social History of Art,” does a really good job of capturing this dilemma once again. Geography and mapping are, simply put, not art history. Yet in many cases art historians look at and rely on maps. Fletcher and Helmreich make a strong case for the capabilities of mapping within “traditional” art history. Joyeux-Prunel begins by mentioning how, in the past, the notion of “school” and “academy” in art was heavily influenced by national criteria and was even used to justify political theories like nationalism. Some scholars used artistic geography to support land claims. In the 1990s, when European and North American art historians moved away from nationalism and became more interested in aesthetics and forms, they neglected social and geographical aspects. However, there has been a recent revival of interest in artistic geography, with some art historians advocating for a more thoughtful and critical approach to maps.
Art geographers have emphasized the importance of rethinking traditional concepts like Eurocentrism and financial capital in art history, particularly through a postcolonial lens. This shift has paralleled the “global turn” in art history and the study of artistic globalization.
While art geographers definitely borrowed concepts from geographers, they didn’t initially consider themselves “mappers.” However, the introduction of digital methodologies in art history has led to a burgeoning interest in mapping, especially among art historians interested in sociology, economics, and trend analysis. The digital geographical approach has raised new questions for art historians and encouraged us, once again, to step outside the fence of “Art History” and engage in a meaningful way with the rest of academia. Digital mapping has enabled researchers to analyze and visualize information in ways that would have seemed impossible fifteen years ago.
My map shows the movement of the objects I chose for my Omeka exhibit, Afterlives. I hope the map provides clarity about the way objects move around. I also included some famous museums as landmarks so viewers can orient themselves. Some famous artworks are very close to famous museums; some are very far away. This is just a sampling of the way objects move, but I hope it provides insight.
In retrospect, I think I would have color-coded each artwork, rather than using the color coding to signify a more temporal quality.
In many ways, the term “oral history” is as vague as it sounds. It can be many things, like rehearsed stories by tradition-bearers, informal conversations with family, print collections of stories, and recorded interviews with individuals seen as having important stories to share. People have learned about the past through spoken word accounts and many have worked to preserve firsthand accounts, especially when historical actors were about to pass away.
For example, shortly after Abraham Lincoln’s death in 1865, John G. Nicolay and William Herndon gathered recollections and interviews about the sixteenth president. The author didn’t mention this, but the person whose job it was to record Lincoln’s last words had his pencil break. In the scramble to find a new pencil, the recorder missed his last words (source: my sixth grade social studies teacher). It makes sense that Nicolay and Herndon sought verbal accounts of the late president, especially since his death was so unexpected.
One significant early effort to collect oral accounts was the Federal Writers’ Project (FWP) of the late 1930s and early 1940s. Although valuable, early attempts at recording firsthand accounts faced challenges due to the absence of modern recording technology, reliance on human note-takers, and idiosyncratic methods. From this, it’s interesting to consider how far our recording technology has come since the era of handwritten transcription.
Historians generally trace the formal practice of oral history to Allan Nevins at Columbia University in the 1940s. Nevins initiated a systematic effort to record, preserve, and make available historically significant recollections. His work began as a supplement to written records, as he found limited personal records for his biography of President Grover Cleveland. This marked the start of contemporary oral history, leading to the creation of the Columbia Oral History Research Office and the broader oral history movement.
The interviews that Linda Shopes provides were really interesting, and I would be curious to know more about transcription standards. I understand that recording things as they’re said is important, including mispronunciations like “Sabior” instead of “Savior.” I’m not sure what I would do in this scenario, but I appreciate the transcriber’s commitment to accuracy. If I were interviewing a British person, writing statements like, “Margaret Thatcha wahs a terrible leadah, guvnah” seems like it would lead the reader to make certain assumptions or confuse people. However, I understand that including euphemisms, idioms, and phrases is part of the interview and contributes to overall accuracy. This really seems like a gray area, and I’m not sure there is a correct answer. I think having an open dialogue with your interviewee so they understand how you plan to record them is a good place to start.
Doug Boyd’s article, “Informed Accessioning: Questions to Ask After the Interview”, discusses good practices for interviewing subjects. Asking questions after the interview like, “is any of this confidential” or “would you like to remain anonymous” are ways to practice ethical interviewing strategies.
My digital project includes a picture from my master’s research. Click on the links and learn more about cassoni, a rich tradition that was extremely popular in Florence during the 15th century.