Claiming a Place

Women Architects and World Fairs


Week 4

This week, our reading focused on building digital collections. Two of the popular platforms for building these collections are Omeka and Scalar. Although they accomplish similar goals, Omeka and Scalar have very different capabilities. Omeka is primarily designed for creating and managing digital collections, archives, and exhibits. It is often used by cultural heritage institutions, libraries, and museums to showcase and organize digital assets, such as images, documents, and metadata. It is focused on the curation and visual presentation of digital assets, making it well suited for showcasing historical documents, photographs, and other archival materials. However, it’s pretty static in its delivery. Its straightforward, catalog-esque interface makes it easy to organize and present digital collections, but it definitely has restrictions when it comes to creating very complex narratives or interactive content. Scalar is very different. It allows users to integrate more media content in a more narrative structure, and it would be really great for projects that want to provide a more interactive and visually interesting reading experience. It offers a more flexible and dynamic interface that supports storytelling as well as a blend of media elements within the text.

Although Scalar is more visually interesting, I found that Omeka was easier to work with. If I had more time to dedicate to learning Scalar, I certainly would have chosen it for my project. But for a week-long project, I found the back end of Omeka to be a little more user friendly. Callie helped me quite a bit though (thank you!). I think this is ultimately a fair trade-off: more work on the back end results in a friendlier user interface on the front, and vice versa.

Idea.org’s article, “Mapping the world of cultural metadata standards,” talks about the way that metadata and language are connected. Metadata is necessary for making information useful and searchable. It is the data about other data, like titles, locations, or descriptions of things, and it allows for better organization and searchability. To make sure metadata is universal, rather than specific to a person or institution, a variety of standards have been established so that a common language is used. This practice makes it easier to share, manage, and search through a site’s information. Jenn Riley created a map of 105 prominent metadata standards used in the cultural heritage and humanities fields. These standards help codify the way that data is labeled and structured. A common tongue prevents confusion when different systems are used, basically creating a universal way to categorize and classify information. Search engines can quickly find documents based on keywords, but taxonomies (the specific jargon and organizational language of a particular field) can offer a more nuanced, structured approach, which provides alternative pathways and associated ideas. In libraries, cultural heritage, and the humanities, clear metadata and taxonomies are crucial. But the vastness and extensiveness of the metadata standards can be overwhelming. For this reason, Jenn Riley’s map is such a gift for people like myself who are just getting started in this field.
Like buying a translation dictionary when you start learning a language, the map is a helpful jumping-off point for people who aren’t familiar with industry practices. It sheds light on these standards and how they relate to one another, ultimately helping to make sense of the complex landscape of metadata in the digital age.

Metadata is a crucial part of creating a user-friendly site, but it’s a small part of the overall creation. Paige Morgan’s blog post, “How To Get A Digital Humanities Project Off the Ground,” details the two DH projects she finished while she was working on her PhD. (Phew!) I think she does a really fantastic job of laying out the issues she encountered, while not sugarcoating things or making the reader think, “why would I ever do a project like this?” I understand that it can be difficult to convince academics to include DH as part of their research or in lieu of a book or article. In the world of “publish or perish,” I think that Morgan makes a compelling argument for why upcoming academics can view DH as an alternative way to present their research. Her article isn’t instructional, but I can imagine that anyone starting a DH project would read it and come away with good information and some encouragement. This seems like a cool trend in the DH space: lots of articles and projects we’ve looked at have included a little (or a large) retrospective piece about the troubles they encountered and how they solved them. Despite the competitive nature of DH (and the short lifespans of digital projects, because technology moves so fast), many people are committed to including elements of transparency in their projects.
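To make “data about data” a little more concrete, here is a minimal sketch (with invented field values) of what a Dublin Core-style record, the standard Omeka itself uses for item description, might look like if you wrote it out as a Python dictionary:

```python
# A minimal, invented Dublin Core-style record: the kind of metadata
# Omeka attaches to each item in a digital collection.
record = {
    "dc:title": "Woman's Building, World's Columbian Exposition",
    "dc:creator": "Sophia Hayden",            # architect
    "dc:date": "1893",
    "dc:type": "Image",
    "dc:format": "image/jpeg",
    "dc:coverage": "Chicago, Illinois",       # spatial coverage
    "dc:description": "Photograph of the exterior of the Woman's Building.",
}

# Because every record uses the same field names, a search across many
# collections can rely on a shared vocabulary:
matches = [r for r in [record] if "Chicago" in r.get("dc:coverage", "")]
print(matches[0]["dc:title"])
```

The point of the standard is the shared field names: any system that knows what “dc:coverage” means can search or merge records it has never seen before.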

Week 7

This week, we talked about data. Matthew Battles and Michael Maizels review how art history has grown and changed as a discipline in their article, “Collections and/of Data: Art History and the Art Museum in the DH Mode.” Although photography has been understood as something art historians study, the establishment of art history as an empirical field was deeply influenced by the development of photography. Before photography, attempts to compare the development of art were difficult without having both works in front of you. Early (and infamous) art historians like Johann Winckelmann and Gotthold Lessing in the 18th century attempted to analyze art stylistically and historically, but they didn’t have the ability to compare and juxtapose different artworks. Photography changed this landscape by allowing scholars to directly compare and contrast artworks. The lantern-slide lecture, a photographic performance, played a pivotal role in art history’s evolution. Battles and Maizels call the lantern-slide lecture a “ritualized performance,” a statement I couldn’t agree with more. Figures like Aby Warburg, Heinrich Wölfflin, and Jacob Burckhardt used these lectures to demonstrate stylistic attributes and art’s evolution. This approach was foundational to various art historical movements that aimed to explain the development of the arts. The slide lecture is still very prevalent in art history education and is the foundation of how art history is taught across the country.

This seems like a bizarre pedagogical method of the past, a pre-WWII way of studying art. However, my advisors and professors (even ones in their forties) remember a time when gathering physical slides for a professor, or having a book that contained all the art for the class as a study guide, was part of their educational training. Professors adopting PowerPoints with images is a technological revolution that happened in my lifetime. I watched the switch from VHS tapes to DVDs to Netflix, books to Kindles, radios to iPods to Spotify, and despite all of this, the thought of a professor bringing a flash drive instead of physical slides for the first time makes me feel sort of… proud? Nostalgic? An awareness of standing on the shoulders of giants in the field I love? I’m not really sure, but this reading made me realize that a field that in many ways seems so static and uninterrupted has actually had its own growth spurts.

However, the slide format has limitations, such as cost and fragility. Comprehensive image collections, which existed as surrogates in various forms, became instrumental in early 20th-century art history research. Walter Benjamin in particular emphasizes photography’s role in freeing art from context and ceremony, which led to its widespread use as a surrogate for viewing the actual thing. The development of art history runs parallel to the appearance of the art museum, which originally was closer to an expression of nationhood/patriotism and a desire for orderly knowledge than to the impartial institution dedicated to accessible knowledge (whether museums actually fulfill that goal is a discussion for a different post. #MuseumsAreNotNeutral). Photographic archives, like encyclopedic museums, would ultimately become essential to art history because they provide a nucleus for surveying geographically and temporally dispersed artifacts. I’m sure that Warburg, Wölfflin, and the others wouldn’t even recognize what art history has become.
The other day, I read Erin Benay’s book Italy by Way of India, in which she talks about how trade routes between Italy and India greatly affected the art produced. This kind of study, and many others, would be completely inconceivable without comparative studies (and the internet. That helps too. Clearly!). André Malraux expands on this concept, suggesting that photographic archives could fulfill the museum’s mission by turning it inside out. The museum not only preserves artistic heritage but also enhances the concept of “art” itself. By putting ulterior motives, personal interests, and original context on the back burner, these objects can be better situated within a more (though not totally) neutral lens. Thus, a more “global” art history is born.

Ted Underwood’s blog post, “Where to Start with Text Mining,” shows the possibilities of using visualization tools for research. I love these visualizations, and I think they would make pulling thematic concepts out of readings really easy. In his example, I can imagine that seeing words like “solitary,” “tranquility,” or “meadow” be popular in his corpus would send me down an interesting research path. Are other similar authors using these words and themes? Is the author looking at pastoral themes? He argues that text mining in literary analysis serves two goals. First, it can uncover patterns and themes that could be used as evidence to complement, complicate, or support literary-historical arguments. The examples he chooses really demonstrate the success of visualization in this regard. Second, text mining can be an exploratory technique: it can reveal clues or trends that might need further elaboration or study using more traditional methods. In many cases, though, the lines between the two objectives are blurred. Finding patterns and holes in your text is a researcher’s dream, and text mining can be an important tool for doing this.
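For anyone wanting to try the very first step themselves, here is a minimal sketch of the most basic text-mining move, counting word frequencies across a folder of plain-text files. The folder name and stopword list are placeholders, not anything from Underwood’s post:

```python
# Minimal word-frequency pass over a small corpus: the usual first step
# in text mining. File paths and the stopword list are placeholders.
from collections import Counter
from pathlib import Path
import re

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it", "that"}

counts = Counter()
for path in Path("corpus").glob("*.txt"):
    words = re.findall(r"[a-z']+", path.read_text(encoding="utf-8").lower())
    counts.update(w for w in words if w not in STOPWORDS)

# Words like "solitary" or "meadow" rising to the top would be exactly
# the kind of clue that suggests a pastoral theme worth chasing.
for word, n in counts.most_common(20):
    print(f"{word:>12}  {n}")
```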

Week 8

This week, we continued our discussion about data visualization. Nathan Yau, in his article “Representing Data,” provides a comprehensive text about the variety of ways to display data, as well as their strengths and weaknesses. Data visualization involves a variety of steps and choices. Yau compares data visualization to cooking: like a chef who picks ingredients, prepares them, and combines them to create a meal, data visualizers make specific and intentional decisions about how to encode data, choose the best graph, and pick appropriate colors and fonts. Software can play a big part in many of these decisions, but there are still many ways that the information can be tailored to make personalized choices that best represent the data. Understanding the many, many aspects of visualization and how they can be combined and changed allows you to use the software in the most effective way, rather than letting it take the reins. Like cooking, visualization depends on the skill set of the creator. Knowledge is power in this situation: those who are knowledgeable about the process and ingredients, and invest time and energy into learning, will produce better results. Alternatively, Yau mentions that someone who relies too heavily on the computer to figure it out might end up with a less coherent or successful outcome.

Data visualization incorporates four main components: the data itself, visual cues, scale, and context. Data is the foundation of the project, but the other aspects relate to how the data is interpreted and understood. A visual cue is the way that data is mapped out, making sure that the core of the data is not lost in the translation from numbers to visuals. The choice of visual cues relates to the purpose of the visualization and how the reader will perceive shapes, sizes, and shades. It’s easy to lead people astray with data; Yau points out many ways this can happen and provides examples of how to show your data in a clear way. For example, I really liked his section on color-blindness and how many graphs are less accessible or difficult to read for people who struggle to differentiate between colors. Yau’s article would be really helpful for anyone who hasn’t had a lot of experience with graphs, which, frankly, is most art historians. It’s not quite a recipe, but Yau clearly spells out the ingredients you can use to create a successful dish.

The Software Studies Initiative was an interesting read, and it was cool to see how they highlighted different data visualization projects and all the ways this technology is being used. I’m not sure I fully understand the academic potential of these, but I’m sure someone is using ImagePlot for something! The tools seem more geared toward the museum sphere, and I can see curators and archivists using them as another way to show art or see connections. Having a program that could organize saturation or hue by year would be an interesting way to look at artists’ phases. Just as Picasso had a Blue Period, it would be really easy for scholars to see similar trends or inclinations in an artist’s corpus. Without this technology, you would have to create your own graph, but a computer could do this with greater efficiency and without the flaws of human eyes. Maybe there was a short green phase before the blue phase that scholars haven’t recognized because those works have been seen as “blue enough” by human eyes.
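Yau’s color-blindness point is also one that’s easy to act on in code. Here is a minimal matplotlib sketch, with invented numbers, that uses the library’s built-in colorblind-safe “tableau-colorblind10” style and gives each line a distinct marker, so the series differ by more than color alone:

```python
# Plotting with a colorblind-friendly palette, per Yau's accessibility
# point. The data values here are invented for illustration.
import matplotlib.pyplot as plt

plt.style.use("tableau-colorblind10")  # built-in colorblind-safe color cycle

years = [2019, 2020, 2021, 2022, 2023]
exhibits = [4, 6, 5, 8, 9]
loans = [2, 3, 7, 6, 10]

plt.plot(years, exhibits, marker="o", label="Exhibits")
plt.plot(years, loans, marker="s", label="Loans")  # distinct marker, not just color
plt.xlabel("Year")
plt.ylabel("Count")
plt.title("Museum activity by year")  # a title adds the clarity Yau asks for
plt.legend()
plt.show()
```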
When I was creating my word cloud, I used Chip Colwell’s article, “Curating Secrets: Repatriation, Knowledge Flows, and Museum Power Structures,” and I thought about many of the elements that Yau brings up in his article. I chose the shield form because it wasn’t a distracting motif, but also because I felt it related to the themes of cultural heritage protection that the article discusses. I used a blue gradient for the text, since blue seemed like a safe color that could be easily differentiated, rather than some of the other default options. I couldn’t figure out how to add a title, which is one of the suggestions that Yau says can add clarity for the reader. I also chose three different fonts, a serif, a sans serif, and a monospace; I hope this adds clarity through different textures. https://imgdlvr.com/img/6OWKU1bZ4dok2D5GYmnCHA/wordclouds.com/20231029-9525/public
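For comparison, here is a rough sketch of how a similar cloud could be built with the Python wordcloud library, where adding the title Yau recommends is straightforward. The input filename is a placeholder standing in for the article text:

```python
# Roughly reproducing the wordclouds.com result in Python, where adding
# a title is easy. The input file path is a placeholder.
import matplotlib.pyplot as plt
from wordcloud import WordCloud  # pip install wordcloud

text = open("colwell_curating_secrets.txt", encoding="utf-8").read()

wc = WordCloud(width=800, height=400,
               background_color="white",
               colormap="Blues")  # a blue gradient, like the one I chose
wc.generate(text)

plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.title("Curating Secrets: most frequent terms")  # the title Yau recommends
plt.show()
```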

Mapping Time

I was really impressed with the Mina Loy project we read about this week. One of my biggest takeaways was the amount of money and labor it takes to do a DH project justice. The Mina Loy project received a $75,000 grant from the NEH, had teams of undergraduate students across multiple universities working on the site as part of a class, and had several funded professor and graduate student positions dedicated to the project. Earlier in the semester we looked at several projects that had great bones but seemed to lack funding for long-term maintenance, because we ran into broken links, issues with old embedded Flash features, etc. Though maybe it’s not a fair comparison, as the Mina Loy project launched in 2020 and isn’t that old yet; maybe it will be a different story if we check back in another three or four years. As far as the actual website goes, I found it to be a really successful layout: it felt like the user experience was considered at every turn. I really liked having multiple site navigation options, one for those who like to scroll and a fixed bar at the top of the page for those who like to jump right to specific content. I was also impressed by the layers of work behind each click as I went deeper into the website, such as how all of the carousel photos click out to their own story map with more detail about the image. This site sticks out to me as a kind of ideal model, and it would be an example I’d show to anyone wanting to see the full potential of a completely fleshed-out digital humanities project.

Data Visualization: word cloud experiments

This is a word cloud generated from a PDF discussing the book The Art of Collaboration: Poets, Artists, Books. I chose to represent this word cloud with the icon of a book for, well, obvious reasons. I also wanted to experiment with how the word cloud would look in a circular shape, so I tried that with the earth icon. The colors were chosen fairly arbitrarily, because I struggled to think of meaningful reasoning behind color choice for this particular word cloud, though I do see how color could be useful or more important for other word clouds using different data sources. Overall, I think this is a somewhat successful word cloud, but in a really narrow way: I was interested in this book’s title and wanted to use a word cloud to see if this 12-page synopsis of the book would reveal how closely aligned the book is to my own research. For that very specific purpose, the word cloud was a quick and easy way for me to decide that I would like to grab this book from the library and take the time to read it!
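Here is a sketch of how that PDF-to-frequency check could be scripted with pypdf, skipping the cloud entirely and just printing the top terms. The filename is a placeholder for the synopsis PDF:

```python
# Pulling the text out of a synopsis PDF to answer the same question the
# word cloud did: does this book overlap with my research vocabulary?
from collections import Counter
import re

from pypdf import PdfReader  # pip install pypdf

reader = PdfReader("art_of_collaboration_synopsis.pdf")  # placeholder name
text = " ".join(page.extract_text() or "" for page in reader.pages)

# Skip very short words so the list surfaces substantive vocabulary.
words = re.findall(r"[a-z']+", text.lower())
print(Counter(w for w in words if len(w) > 4).most_common(15))
```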

Problems with digital surrogates / text mining curiosity

For this week’s batch of readings, I was most drawn to “Collections and/of Data: Art History and the Art Museum in the DH Mode” by Matthew Battles and Michael Maizels, specifically the discussion of the S.M.S. Nos. 1–6 project. This project is a digital version of William Copley’s editioned, snail-mail multimedia project S.M.S., which started in 1968. Subscribers to the project received a batch of music, poetry books, and other art objects created by artists like John Cage, Dick Higgins, and La Monte Young. The project was meant to move art out of the art gallery and out of the hands of wealthy collectors, and to provide more access to contemporary art outside of the traditional art market. In that same spirit of accessibility, S.M.S. Nos. 1–6 aims to exist as a “digital translation” or “digital avatar” of the original items found within the S.M.S. packages. When discussing the user experience, Battles and Maizels write, “Users are now able to interact with digital avatars of each S.M.S. object: flipping it over, turning its pages, listening to its audio, or activating its intended motion.” While I can appreciate the dedication to creating a haptic experience that mimics handling the actual objects, and I really love the entire spirit of both Copley’s original S.M.S. and this digital translation, it still feels off. In class I discussed all the different experiences and nuances that are lost when someone is only able to access a digital surrogate; this is definitely the visual materials/special collections archivist in me coming forward. I’m someone who absolutely values and champions digitization of materials, but as a means of more equitable access, not as a replacement or a true surrogate, which I don’t think is ever fully possible.

As for data I’d be interested in working with, and how I might apply it: I was really interested in text mining and the work being done over on Mining the Dispatch. It was really interesting to think about how I might incorporate that into my research and work for Rhiannon, which heavily relies on language, text, keywords, etc. The Mining the Dispatch project could work as a kind of model for this: instead of mining Civil War-era Richmond Daily Dispatch issues, I could tailor it to Rhiannon’s interests by mining North Carolina newspapers (the same era would work, as it falls into her years of focus) for mentions of immigrants, race relations, music, railroads, etc. I imagine I could basically plug in the list of search terms I’ve amassed while researching for her to see what kinds of hits I’d get, what trends show up over time, and in what specific areas of the state, all of which could be of interest to her.
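A toy sketch of what “plugging in my search terms” could look like (much simpler than the topic modeling Mining the Dispatch actually uses): keyword counts per year over a hypothetical folder of plain-text newspaper issues named by date, a layout I’ve made up for illustration.

```python
# Toy keyword-trend pass over a newspaper corpus. Assumes a folder of
# plain-text issues named like "1870-03-12.txt" (an invented layout).
from collections import defaultdict
from pathlib import Path

TERMS = ["immigrant", "railroad", "fiddle"]  # placeholder search terms

hits = defaultdict(lambda: defaultdict(int))  # year -> term -> count
for path in Path("nc_newspapers").glob("*.txt"):
    year = path.stem[:4]  # filename starts with the year
    text = path.read_text(encoding="utf-8").lower()
    for term in TERMS:
        hits[year][term] += text.count(term)

# Rising or falling counts across years hint at trends worth reading into.
for year in sorted(hits):
    print(year, dict(hits[year]))
```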

data visualization

The difference between Lev Manovich’s “Data Science and Digital Art History” and Nathan Yau’s “Representing Data” provides an interesting point of comparison. Yau’s chapter gives broad insights into best practices for visualizing all types of data, while Manovich explores concepts of data science that are specifically relevant to digital art history. Both authors deal with data science, yet only Manovich’s work is specifically tailored to art historical work. Manovich emphasizes that concepts used in data science, such as objects, features, data, feature space, and dimension reduction, are independently important and critical for most people, in most fields, to understand: “Anybody who wants to understand how our society ‘thinks with data’ needs to understand these concepts” (Manovich 13). Yet some of these concepts and ideas are more relevant or useful for digital humanities scholarship than others. Yau includes an analysis of many types of visual cues: position, length, angle, direction, shapes, area, volume, color saturation, and color hue. He references a 1985 study at AT&T Bell Laboratories in which William Cleveland and Robert McGill ranked how accurately people perceive visual cues, from most to least accurate. The ranking runs roughly in the order of the list above, from position (perceived most accurately) down to color hue (perceived least accurately).
I find it interesting that hue is the least accurately perceived visual cue. Yau warns against blindly following this ranking system; he brings it up to emphasize the need to understand how a specific visual cue or visualization technique might be perceived by audiences. I think the perception of hue is an interesting concept to examine in the context of digital art history, given the extreme relevance of color, hue, and saturation in analyzing visual artworks. Hue or saturation can be used as a tool for sorting large datasets of visual artworks. For example, the Software Studies Initiative and Gravity Lab jointly developed the “287 megapixel HIPerSpace super visualization system.” With this software, one can explore a set of paintings using cultural analytics techniques, turning paintings into sets of data that can be graphed and turning those graphs back into collections of paintings. The video (linked below) demonstrating this software uses the works of the painter Mark Rothko. At 1:38 in the video, they take the group of Rothko’s paintings, organized from left to right by year created and displayed overlapping one another on a loose chronological timeline, and add a transparency layer to the visual display. This creates a blurring effect that shows a general impression of hues across the screen. Through this software and this type of sorting, we are able to see color trends over the course of Rothko’s career. This example demonstrates the importance and practicality of using hue as a visual cue for understanding sets of visual art historical data.
287 megapixel HIPerSpace supervisualization system, 2009. The software was developed jointly by Gravity Lab and the Software Studies Initiative: http://lab.softwarestudies.com/p/cultural-analytics.html
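Out of curiosity, here is a bare-bones sketch of the measurement underneath that kind of hue sorting. This is not the actual ImagePlot/HIPerSpace software, just the core idea: computing each painting’s average hue with Pillow and sorting by it. The folder and filenames are placeholders.

```python
# Sorting a set of painting images by average hue, a bare-bones version
# of the ImagePlot-style analysis. Image filenames are placeholders.
from pathlib import Path

import numpy as np
from PIL import Image  # pip install pillow

def mean_hue(path):
    # Convert to HSV; in Pillow, channel 0 is hue on a 0-255 scale.
    hsv = np.asarray(Image.open(path).convert("RGB").convert("HSV"), dtype=float)
    return hsv[..., 0].mean()

paintings = sorted(Path("rothko").glob("*.jpg"), key=mean_hue)
for p in paintings:
    print(f"{p.name}: mean hue {mean_hue(p):.1f}")
```

A short run of unexpected hue values clustered in one span of years is exactly the kind of “green phase before the blue phase” a computer could surface and a human eye might miss.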
I am also interested in exploring Manovich’s discussion of the crucial decisions in making representations. The first decision involves creating a boundary, such as a time period, for the phenomenon being studied (other boundaries could be countries, artists, or groups of artworks). It is within this first decision that we are already feeding software information defined by human terms. The idea that a certain art historical movement is bounded on either end by two specific dates (or even decades) is a construction created by historians. I would argue that we have come to accept some of these “boundaries” as truths within the art historical world despite their human construction. Movements such as the Renaissance, Impressionism, or modernism have become widely accepted movements within history with (relatively) defined time periods in which they exist. Although we largely accept these boundaries as facts, rather than as a specific perspective through which to view history, there is still room for interpretation, critique, and alternative modes of understanding movements, time periods, and human-created boundaries. I argue that these human-defined boundaries become additionally obscured and hidden once mediated through technologies. Although it can be difficult, it is possible to recognize how an art historian’s writing is clearly specific to their viewpoint, knowledge, and construction. Once a human-created boundary is input into software, it detaches from its human construction and becomes a truth on which the computer relies to produce new data, graphs, or information. I have concerns about the ability of technologies to obfuscate and conceal these human-created boundaries within digital art historical scholarship.

Presenting data

One of the most important questions I ask myself when analyzing data is how to present it to the audience. Data visualization means using forms like bar graphs, dot plots, and line charts, and the process allows researchers to understand their data better and represent it. In this essay, I discuss different techniques that can be used to present data.

Data visualization means combining color, scale, and visual cues. For example, using dark and light colors shows the difference between the represented data. Visualization comprises different components, such as visual cues, coordinate systems, scale, and context, and there are various visualization methods that shape the choice of color, shape, and size used to represent the data. Visualization also depends on the data’s size: a small dataset offers little substance to visualize, while data with a high number of dimensions offers more substance and more visualization choices, though many of those options will be poor ones. To filter out the bad and find the worthwhile options, to get to visualization that means something, you must know your data in order to present it well.

Today, computers easily enable researchers to present their data, but sometimes it is crucial to think more about how you need to present your data to the public. As a public historian, I share my results with the public, which is more complicated than presenting data to scholars, and it took me longer to figure out the best way to visualize data. For example, public historians prefer to use maps to tell the story of historical sites, and using colors that help the audience understand the map is essential. However, some research requires the use of graphs and points; in that case, I believe using color and shape can engage the public and bring their attention to the study. It depends on the data you are presenting. For example, I have used graphs and pie charts to present my analysis to the public, to show them how public outreach projects enhance the understanding of history; of course, I spent some time thinking about colors that would allow audiences to understand my analysis clearly.
This week’s reading was essential because I learned the best ways to represent my data. The reading also helped me understand how to present data according to its size. In addition, I learned how to present a map, which is frequently used in my research.
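To make the chart-type decision above concrete, here is a tiny matplotlib sketch, with invented survey numbers, showing the same data as a pie chart and as a bar chart side by side:

```python
# The same (invented) survey numbers as a pie chart and a bar chart,
# illustrating the chart-type choice discussed above.
import matplotlib.pyplot as plt

labels = ["Attended event", "Visited site", "No contact"]
values = [42, 31, 27]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
ax1.pie(values, labels=labels, autopct="%1.0f%%")
ax1.set_title("Pie: parts of a whole at a glance")
ax2.bar(labels, values)
ax2.set_title("Bar: easier to compare exact values")
plt.tight_layout()
plt.show()
```

The pie reads well for a quick public-facing impression of a few parts of a whole; the bar makes precise comparison easier, which matches the Cleveland and McGill point that length and position are perceived more accurately than angle.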

Data

The use of computers in art history dates back to the mid-1960s, when the art historian Jules Prown used Yale University’s computer lab to understand the relationship between social and economic factors and sitters’ preferences in John Singleton Copley’s portraiture. Within a few years, computers and digital tools became part of art history research. This essay addresses the use of computer tools to analyze big data in art history. Computers now allow art historians to explore large datasets and share their analyses with the public in museums. This extraordinary change wouldn’t have happened without the effort of Prown, the first art historian to use computers and statistics in his analysis. He analyzed 240 of Copley’s sitters, comparing different elements such as age, gender, wealth, religion, and the size of the canvas[1]. Many scholars were conservative about using computers in art history research, including Prown’s department chair at Yale, who advised him to delete the computer analysis from his book because it might affect his tenure promotion.
https://www.collegeart.org/programs/conference/scholars/julesprown
I found this article very interesting, as the use of computers at that time was very limited; Prown nevertheless decided to use them to save time and offer accurate results in his analysis. This chapter allowed me to compare the use of computer analysis between the mid-1960s and now, and I could see how computer tools have developed over the last 60 years. I understood that the computer would make incredible changes in the art history field, changes we are witnessing today.

On the other hand, Matthew Battles and Michael Maizels’s article shows how the digital humanities enable art historians to analyze big data[2]. One of the most significant challenges historians and art historians faced in the past was how to work with big data; museum curators were mainly focused on presenting and preserving it. These tools allowed art historians to understand the relationships between different museum collections. They also allowed museums to present numerous collections of digital images, which visitors can now view and compare. For example, the Harvard Art Museums created the Lightbox Gallery, an exhibition offering an interface where museum visitors can use a screen to interact with the collection. Technology allowed museum visitors to engage more with the collections and to explore them for themselves: these tools enable visitors to locate objects on maps, explore the metadata associated with the objects, and see how art objects are labeled. This reinforces the idea of the museum as a place of learning and inspiration. Many teachers now depend on museums as a tool for learning, and students are more engaged with the information they get from museums. The analysis of big data helps audiences gain a better understanding of museum collections.
https://dhdebates.gc.cuny.edu/read/untitled/section/7cdd40a6-9ef4-4aca-8f53-bfee4cd9ed0e#ch27
This week’s reading allowed me to understand data analysis in art history and the different tools art historians use to analyze and study data. Of course, data analysis is one of the topics I was worried about early in the semester because it seemed complicated, but thanks to the reading, I was able to understand the topic and how I can apply what I have learned in my future digital history projects. I was also inspired by Jules Prown thinking outside the box and using computers; now the art history field depends heavily on digital tools. This is a lesson for me as a scholar: to think for myself, listen to my own voice, and do what I believe is right.
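Out of curiosity, here is a toy pandas sketch of the kind of cross-tabulation Prown ran on Copley’s sitters. The rows are invented stand-ins, not his actual data:

```python
# Invented stand-in for Prown's Copley analysis: cross-tabulating sitter
# attributes against canvas size. These rows are NOT his actual data.
import pandas as pd

sitters = pd.DataFrame({
    "gender": ["M", "F", "M", "F", "M", "F"],
    "wealth": ["merchant", "gentry", "merchant", "gentry", "artisan", "merchant"],
    "canvas": ["large", "large", "small", "large", "small", "small"],
})

# Does canvas size track the sitter's social position?
print(pd.crosstab(sitters["wealth"], sitters["canvas"]))
```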
[1] Jules Prown, “The Art Historian and the Computer,” in Art as Evidence: Writings on Art and Material Culture (New Haven, CT: Yale University Press, 2001).
[2] Matthew Battles and Michael Maizels, “Collections and/of Data: Art History and the Art Museum in the DH Mode,” in Debates in the Digital Humanities 2016, eds. Matthew K. Gold and Lauren F. Klein.

Spatial History: Mapping Places

Humanities research should include maps to help the reader understand where a project originated. With new technology, mapping software and digital maps now help researchers answer complicated questions that couldn’t be answered before. One example is GIS software, which allows scholars to pursue spatial analysis questions. This essay reflects on the critical role maps play in the humanities and how they make information accessible to researchers and the public.

One of the most significant mapping projects is Mapping Nineteenth-Century London’s Art Market by Pamela Fletcher and Anne Helmreich. The primary purpose of this project was to explore the national and international art market in London. The project’s methods included mapping and marking the commercial art galleries operating between 1850 and 1914, work conducted by Pamela Fletcher and David Israel. In addition, Anne Helmreich and Seth Erickson collected and analyzed sales data from the stock books of Goupil & Cie and Boussod, Valadon & Cie. These analyses included trades at several branches in Europe and the United States, such as Paris, London, Berlin, Brussels, and New York, from 1846 to 1919.[1] These maps and visual analyses assisted in exploring and understanding the art market in practical ways. Moreover, this method revealed information and interpretations that weren’t available in the past. I find this article very useful, as it shows how mapping helped uncover information that wouldn’t have been possible to discover without this tool. This article allowed me to understand how the art market has changed, and the audience can now get a good overview of the market in the past. I do wish they had included more information about each image in the photos, though.
I like how they give credit to everyone who participated in and contributed to this project, as many other scholars don’t do that. You can see each contributor’s name included in the image below.
This project showed how maps can help examine and analyze large amounts of data; maps now allow historians and art historians to address various spatial questions. As a public historian, I have included ArcGIS in my own research. I was working in a cemetery in Dakhlah Oasis with 700 graves on the site, and it wasn’t easy to understand without a map. The GIS tool allowed me to understand the spatial relationships between graves, and the maps also allowed me to see the distribution of burials by age and gender (see Maps 1 and 2). Even though GIS was influential in my research, it was complicated at times: I had to redo the maps many times, and cleaning the data took forever to finish. I wish scholars would also include the limitations and difficulties they encountered while working on mapping projects.
(Map 1) (Map 2)

Overall, this week’s reading allowed me to understand how maps have become critical to humanities projects. Maps enable researchers to expand their research; a map can now tell a site’s story and offer the audience information and visual data that weren’t available before.
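For anyone curious, here is a stripped-down sketch of the kind of distribution map described above, drawn with plain matplotlib over randomly generated stand-in coordinates (my actual maps were made in ArcGIS with the real survey data):

```python
# Stand-in for the Dakhlah cemetery maps: grave locations colored by
# gender. The real analysis used ArcGIS; these 700 points are random.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
n = 700
x, y = rng.uniform(0, 100, n), rng.uniform(0, 60, n)
gender = rng.choice(["male", "female"], n)

for g, color in [("male", "tab:blue"), ("female", "tab:orange")]:
    mask = gender == g
    plt.scatter(x[mask], y[mask], s=8, c=color, label=g)

plt.title("Grave distribution by gender (simulated data)")
plt.legend()
plt.gca().set_aspect("equal")  # keep ground distances undistorted
plt.show()
```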
[1] Pamela Fletcher and Anne Helmreich, with David Israel and Seth Erickson, “Local/Global: Mapping Nineteenth-Century London’s Art Market,” Nineteenth-Century Art Worldwide 11, no. 3 (Autumn 2012).