Doing Digital Art History: on the history of decorative art, design, and film

Author: JJ Bauer

Exploring 3D Modeling


First I want to discuss the process I went through creating my own 3D model; then I will look at some of the broader applications of the technique. The Ackland Art Museum graciously allowed our class to practice 3D modeling with objects from their collection. I worked with this Seated Male Figure, Nayarit Culture, 200 BCE-300 CE (link to Ackland catalog record). I used the 123D Catch app on my phone to take my pictures. Unlike the typical process of moving the camera around a stationary object, my object was placed in a black shadow box and, with the help of a curator, rotated while my camera remained stationary. I did allow the app to process the photos, although the resulting model was a bit of a mess: somehow he ended up with two faces and no base at all. I wanted more control over the images and the resulting model, particularly since many of my shots featured the curator’s hand, so I ended up using the Agisoft PhotoScan software to process the model instead.

After cropping and aligning the images carefully, I ended up with a somewhat more satisfactory model (uploaded as a PDF document at the end of this post). The figure still has some errors, particularly at the base and the very top of the head. The head of the figure features a small, oval hole, and the PhotoScan software seemed to have a difficult time determining whether the hole was part of the object or an artifact of the model, and it was difficult to manipulate in the mesh stage. The issues with the base came more from how I photographed it; I found it was difficult to get accurate angles when we changed the position of the object so drastically (lifting it to the center of the shadow box, held upside down). Nevertheless, I really enjoyed the process of building the model and was very surprised at how relatively simple the process was for such an intriguing end product.
When looking at possible implementations of 3D modeling, I want to briefly explore its use in the museum setting, as discussed by Wachowiak and Karas in “3D Scanning and Replication for Museum and Cultural Heritage Applications.” In this essay, published (in English, French, Spanish, and Portuguese) by the Smithsonian Institution, the authors discuss how the SI’s Museum Conservation Institute has incorporated 3D scanning into its conservation workflow. The authors argue that while the technique does not replace photography or other modeling tools, it does have a place in the work of conservation and preservation. Most importantly, I thought, they also note that while many cultural heritage institutions cannot yet afford the necessary equipment (although that might have since changed with the availability of tools like 123D Catch), conservators and museum professionals should still stay up to date and informed on the latest innovations in the technique. This applies to all the tools and techniques we have looked at throughout the semester, but it is a good reminder. Museums are also going to have to start considering topics such as unauthorized use of 3D modeling (the Head of Nefertiti) and providing access to their own collections. For example, the Smithsonian has an initiative to provide free access and download for a large selection of 3D models. For the most part it sounds like an exciting chance for an unprecedented level of interaction with their collection, but some of the items made me pause and wonder whether that kind of access was culturally appropriate. We will all have to keep up with this rapidly changing landscape, but as Wachowiak and Karas note, it should not be much of a burden when so very many exciting innovations are providing solutions and possibilities with the potential to change everything.

moen_photoscan (2) (2)

Mapping Time


It’s a challenge to find a good timeline application.  In part, this is because timelines are so unwieldy to begin with.  But this is also due to the diversity of purposes for timelines.  Some folks want to track the life of one notable person (see Kelsey or Lauren’s post).  Others are looking to track the highlights of a period or topic (see Erin’s post).  Others still use timelines to track the development of a particular object or media (see Colin’s post).  I chose none of the above, and instead am tracking family history.  Since this is one of the main activities that engages the average person in an archival setting or research context, I figured that modeling various timeline options for family history might be most widely applicable.  So, welcome to the wacky world of the Grabs as they guide us through this timeline investigation.

Tiki-Toki

To see the Tiki-Toki timeline, click this link:

http://www.tiki-toki.com/timeline/entry/623027/The-Grab-Slamin-Family/

Inputting data into this timeline wasn’t too bad, but the subsequent visuals just weren’t doing what I was aiming for.

First of all, the data input isn’t into a spreadsheet.  Instead you’re creating entries based on a type: span or story.  Though consistency is harder to maintain due to the lack of a database (and input takes far longer), the instructions provided by the interface are helpful and clear, so you can personalize each entry as you prefer.  Spans are a new function, so perhaps that’s why they don’t work as well.  I tried creating spans for the lifespans of some Grabs.  As far as I can tell, there’s no way to clarify where one span ends if multiple spans overlap.  The stories are easy enough, though.  Stories are marked by little lightning bug-looking things on the slider bar and appear like chat boxes on the timeline itself.  These lightning bug indicators, as charming as they are, would be problematic if you’re looking for something with more textual clues as to what’s going on from the slider bar.  N.B. You have to actually click “More” at the bottom of a chat box if you want to read an entire entry—you can’t just click anywhere in the chat box, which is what my intuition called for.

Secondly, the photos from Flickr just would not work, so I ended up linking to the copies on Facebook (linking is meant to save you and them storage space, so there’s no download option).  Then once (if) the links work, the span images aren’t adjustable, so you have to hope that your photo is composed with the primary area of interest concentrated in the exact center.  Mine were not, so you’re getting a lot of midsection and no faces on people.  The linking goes for all media, as well.  I wanted to include an mp3 from my desktop, but needed to upload it to SoundCloud in order for Tiki-Toki to acknowledge it.

The really cool feature of Tiki-Toki is the visualization.  It has a 3D option: instead of scrolling right to follow the timeline, you’re traveling down a road and the events slide past you.  It reminds me of a cheesy ’80s time travel movie.  So, not great for academic settings, but a cool toy for at-home use.  Also handy: once you’ve input your data, you can print it, or it can be downloaded as a PDF, CSV, or JSON file.  There’s a super handy little embed tab, too, but that’s only accessible with a paid account rather than a free one, which is why I’ve only linked to this timeline, not embedded it.  Tiki-Toki also has a group edit function, so others can contribute if you’d like.

Timeline JS3

Timeline JS provides a Google spreadsheet template into which you can enter your data.  Be careful not to alter or delete any of the column headings; otherwise it won’t translate back into Timeline JS properly.  The data entry part is pretty self-explanatory—it’s just like any other spreadsheet.  This format of data entry is nice, too, since it enforces consistency.  Tiki-Toki does, though, allow more play with the visual aspect of the stories and spans.

The website walks you through the process of opening the template, filling it in, linking it back to the website and then producing an embed code and preview without mishap.  I do like that you no longer need to have an account with Timeline JS, since it’s really just providing an interface for your outside spreadsheet.  Plus it’s one less password to remember.  Since it’s based on a Google spreadsheet, it would also be compatible with group contributions.
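Behind the scenes, Timeline JS converts that spreadsheet into its own JSON format, and if you ever outgrow the spreadsheet you can hand the library JSON directly. As a hedged illustration (field names follow the published TimelineJS3 JSON format — `title`, `events`, `start_date`, `text` — but check the official docs before relying on this), here is what one of my Grab entries might look like built in Python:

```python
import json

# A minimal TimelineJS3-style payload: a title slide plus one dated event.
# Dates and headlines are hypothetical stand-ins for my real family data.
timeline = {
    "title": {
        "text": {"headline": "The Grab-Slamin Family", "text": "A family history."}
    },
    "events": [
        {
            "start_date": {"year": "1923", "month": "6", "day": "14"},
            "text": {
                "headline": "A Grab wedding",
                "text": "Notes, links, and media captions go here.",
            },
        }
    ],
}

# Serialize for a hosted .json file or the library's config option.
payload = json.dumps(timeline, indent=2)
print(payload)
```

The spreadsheet route is still easier for group contributions, but the JSON route shows why the column headings matter: each one maps onto a key in this structure.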

The appearance is quite clean and it travels linearly in an intuitive way. I like how each span and event are in their own space on the slider bar, unlike the overlap in Tiki-Toki.

This timeline also handles media via links, not direct uploads.  But it doesn’t appear to struggle with Flickr the way Tiki-Toki does.

The one really major drawback of Timeline JS is that the slider bar covers a good portion of the timeline.  It’s not as much of an issue for me, but if you have a lot of text, there’s no apparent way to minimize the slider bar to allow a full screen view.

TimeMapper

TimeMapper is another spreadsheet based timeline application.  Like Timeline JS, TimeMapper uses a Google spreadsheet template, but you can create an account with the website and it does indicate the TimeMapper account to which the timeline belongs (though you can create timemaps anonymously).  I found the template for Timeline JS to be more intuitive, especially because acquiring the link to plug into the website requires one less step in Timeline JS (for TimeMapper’s spreadsheet, make sure you ‘publish to web,’ but then get the shareable link, NOT the link that the ‘publish to web’ window offers).

Like Tiki-Toki, TimeMapper accommodates different data views.  This application provides three: Timemap, Timeline, and Map.  If you’re looking to map something over time, the timemap option provides ideal (and unique) functionality—as far as I’m aware, not many other mapping or timeline applications allow you to travel across a map chronologically.  If you do want a map, pay close attention to the data entry instructions in the template provided.  Because my data set doesn’t incorporate any GIS information, I’ve stuck with the traditional timeline view.

I did try to use the latitude, longitude field for the last slide here, but either I entered the numbers in a way that their system didn’t recognize or it doesn’t produce a map.  That will take some experimenting to make work.
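I never pinned down exactly what format TimeMapper wanted, but most spreadsheet-driven mapping tools expect plain decimal degrees. A quick sanity check like this (a hypothetical helper of my own, not part of TimeMapper) would at least have caught entries that no tool could parse:

```python
def normalize_latlong(raw: str) -> str:
    """Parse a 'lat, long' string and return it in plain decimal degrees.

    Raises ValueError for anything a mapping tool would likely reject:
    non-numeric parts, out-of-range values, or a missing component.
    """
    parts = [p.strip() for p in raw.replace(";", ",").split(",")]
    if len(parts) != 2:
        raise ValueError(f"expected 'lat, long', got {raw!r}")
    lat, lon = (float(p) for p in parts)
    if not (-90 <= lat <= 90) or not (-180 <= lon <= 180):
        raise ValueError(f"coordinates out of range: {lat}, {lon}")
    return f"{lat},{lon}"

# Chapel Hill, NC — stray spaces and semicolons get cleaned up:
print(normalize_latlong(" 35.9132 ;  -79.0558"))
```

Whether TimeMapper also wants a specific column name or a single combined field is exactly the part that still needs experimenting.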

The clean lines of this timeline are much like Timeline JS, but more of the text is visible.  This, I think, is the best of the four timeline options I sampled for this post.

Dipity

Dipity is the social media version of a timeline application.  It’s meant to be more commercial and clickbait-y.  By its own account, Dipity means to interactively “organize the web’s content by date and time” to “increase traffic and user engagement on your website.”  That mission might work if the website would actually allow me to create a timeline.  I’ve gone to create one three times, and every time it kicks me back to the homepage after I’ve entered the timeline’s title information (how do I know I’ve tried three times?  That’s the number of timelines allowed on a free account).  Even more frustratingly, when I try to go to ‘My Profile,’ the website generates a page-not-found message, despite showing me as logged in.  Basically, it looks cheap and it doesn’t work.  Give the other three timelines a chance before trying this one out, if you can make it work.

Another option

Neatline

I haven’t tried this timeline application out, so I can’t attest to its functionality.  But Neatline looks super fun to use.  It’s only available as a plugin for Omeka.org (not Omeka.net), which means a paywall, unless your institution can offer you free access to its account, should it have one.  Neatline, like TimeMapper, allows for a timemap.  Check out some of the demos to see what it can do.

Visualizing Native Art


When considering what data I wanted to visualize, my first thought was to return to the “Mapping Native Art” data that I had previously worked with. In my post I discussed how dissatisfied I felt with the result of that project, but I still felt there was something important there – so I turned to visualization. I used the Google Fusion Tables function, which is admittedly much less elegant and sophisticated than what the Gephi tool offers, but the data was already in a Fusion Table, so it was relatively simple to re-configure. I went through each of the websites I had listed. Remember that this website was listing sites that sold “Genuine Native Art” and seemed directly aimed at the tourist market – exactly the dimension of the market I am interested in. As I visited each site I took a quick glance at the most prominent image on the front page – the first thing to catch my eye – and recorded what type of “art” it was. I was surprised to find that jewelry was by far the most popular, followed by painting, although I suppose I should not have been.

I was then interested in what types of art were sold where, so I connected the physical locations of the stores with the type of art. Jewelry was sold everywhere, but there were trends, like sculpture/carving being more prevalent in the Northwest and silver jewelry being most prominent in the Southwest. And of course there were outliers, such as places selling Zuni “fetishes” (and stuffed animals??). I want to gain a better understanding of this market – who is buying, who is selling, how it intersects with the “fine art” market, how it impacts tribes both economically and culturally. I do feel that visualization will be an important tool in this endeavor – the very nature of a market is that it is about movement and fluidity, an element hard to grasp in text. I want the ability to “zoom out” with my focus, to view the market through “a macroscope,” as Graham and Milligan put it.
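The Fusion Table did the counting for me, but the tally itself is simple enough to sketch in plain Python. The rows below are made-up stand-ins for my real site data, just to show the shape of the cross-count (type overall, and type by state):

```python
from collections import Counter, defaultdict

# Hypothetical stand-ins for the scraped data: (state, most prominent
# art type on the storefront's front page).
sites = [
    ("NM", "silver jewelry"), ("AZ", "silver jewelry"), ("NM", "painting"),
    ("WA", "carving"), ("OR", "carving"), ("NM", "silver jewelry"),
    ("OK", "painting"), ("AK", "carving"), ("AZ", "jewelry"),
]

# Overall popularity of each type across all sites.
by_type = Counter(art for _, art in sites)

# Which types cluster in which states.
by_state = defaultdict(Counter)
for state, art in sites:
    by_state[state][art] += 1

print(by_type.most_common())
print(dict(by_state["NM"]))
```

Even this toy version surfaces the same kind of pattern I saw – one type dominating overall, with regional clusters visible once you group by location.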
In the same essay, these authors note that “any visualization we create is imbued with the narrative and purpose we give it, whether or not we realize we have done so.” As a scholar who has not spent much time working with quantitative data, I am always aware of the constructs I am working within when I write, but when I work with data it is easier to slip into the false notion that these numbers, these figures, are objective. With this project, I was perhaps not aware that the subjectivity of the questions I am asking generates equally subjective data. This is an insight I will need to continually remind myself of. That is not to say that I am not excited about the power of visualization and the possibilities these tools offer. Native art, Native craft, and Native identity are enormously complex topics, and to my knowledge there have not been any large-scale analyses of the Native art market. For example, in 1990 the Indian Arts and Crafts Act was passed, requiring that any artists labeling their work as “Indian” be able to prove their enrollment in a recognized tribe. What impact did that have on the market? What about the fact that someone who is 1/168th Choctaw can market their art as “Indian” while someone who is 1/4th Chippewa Cree cannot, due to the complexities of blood quantum laws?
For now this is only the most rudimentary of visualizations, but I am looking forward to continuing the project. I will need to find more sources of data, approach the information from both close reading and big picture perspectives, and deal with the learning curve of Gephi – with the likely result still being rudimentary and preliminary. However, as I have stated earlier, I feel there is valuable information here, a narrative worth exploring and presenting.

Digital Humanities vs Humanities?


This week our readings covered the digital humanities (and humanities generally) debate on instrumentalism vs criticism.  This idea that digital humanities is solely product-oriented, neglecting the traditional humanities’ concern with criticism is a divide with which I struggle.  Since she does display a critical approach, perhaps this is an artefact of JJ’s take on digital humanities (and of a mother who demands critical behaviour according to the principle: “Would you jump off a bridge if your friends did?”).  But I’m inclined to think that this divide is an artificial wall we’ve constructed, rather than anything inherent in either DH or humanities.

Digital humanities, as I understand it, is really just an extension of traditional humanities. Without humanities, digital humanities wouldn’t exist.  Digital humanities largely represents a new humanist method that helps the discipline contribute to cross-disciplinary conversation and public relevance by meeting the audience on its native information ground.  Without the critical aspect, the digital humanities wouldn’t be able to perform that work.  From the projects we have examined in JJ’s course, it seems to me that both internal and outward-looking criticism are built in to DH. Consider, for example, the GIS project Transatlantic Encounters, conducted by Beth Shook on the presence of Latin American artists contributing to and interacting with the Parisian art scene in the interwar period. Shook used the tools provided by DH to critically examine the canon of history and art history, with its focus on Western Europeans and white Americans. Or Digitizing “Chinese Englishmen” from Adeline Koh, who also used DH’s production and criticism features to take steps toward “decolonizing the archive.” Both of these projects tie instrumentality to the critical foundation of the humanities.  As does Medieval People of Color, or Barnard’s Reacting to the Past game https://www.uncpress.org/book/9781469641263/modernism-versus-traditionalism/, or the University of Sydney’s Digital Harlem, or Dan Cohen’s Searching for the Victorians (for more projects that display the interplay inherent in critical product for critical scholarship, see Alexis Lothian and Amanda Phillips’s article in the Journal of e-Media Studies).  A rough list alone could fulfill the required word count for this post.

Perhaps these projects are atypical in the way they internally critique the humanities or pedagogy (and many are also contending with outward critique regarding under-acknowledged areas of scholarship).  But just looking at the syllabus JJ created and the resources she compiled reveals a pretty broad practice of criticism already built into the instrumentalist nature of DH.

When I asked broadly about the division between product and criticism in class, we landed on the comparison between a welder and a philosopher, thanks to Marco Rubio’s fictitious claim that welders make more than philosophers.  To which another reference was made about how philosophy helps welders operate ethically in the market economy (those of you versed in the conversation around the presidential debates can supply the exact reference in the comments?).  I would take a slightly different tack, though, one sparked by a comment in Scott Samuelson’s Atlantic article:

“…those in the lower classes are assessed exclusively on how well they meet various prescribed outcomes, those in the upper class must know how to evaluate outcomes and consider them against a horizon of values.”

Historically, this is true.  But isn’t the point of modern education in the United States to ensure that no matter one’s profession—plumber or scholar—each individual can think critically, can think for oneself?  Samuelson starts to tease out this idea, but remains on a loftier level.  My inclination is to examine the minute practicalities.  For others who also revel in home improvement shows like This Old House, you’ll immediately grasp why critical thinking is so essential to any manual labor, at both the minute and holistic levels.  Skilled workers have to respond to the demands and quirks of each particular environment, checking that they are using the right work-arounds to ensure a project’s long-term success and that those actions won’t interfere with other, unrelated projects (e.g., plumbing and electric).  Otherwise, next week, two years, or ten years down the line, a homeowner ends up with a massive plumbing, roofing, or other nightmare that has negatively impacted other parts of the house.  Thus all that original, uncritical work proves not just useless, but damaging.

If the driving force behind the humanities is criticism, then isn’t it equally important for those receiving a technical education to learn independent thought as for those with a liberal education?  It’s this foundational assumption that makes it so challenging for me to understand how criticism could possibly be divided from DH, making it pure product creation.  If not even a plumber or welder’s everyday actions can be divided from criticism, then how can a direct derivation of the humanities be, at its core, uncritical?

This week’s readings:

Wendy Chun, “The Dark Side of the Digital Humanities – Part 1,” http://www.c21uwm.com/2013/01/09/the-dark-side-of-the-digital-humanities-part-1/ and Richard Grusin, “The Dark Side of the Digital Humanities—Part 2,” http://www.c21uwm.com/2013/01/09/dark-side-of-the-digital-humanities-part-2/
Alan Liu, “Where is Cultural Criticism in the Digital Humanities?” Debates in the Digital Humanities, ed. Matthew K. Gold (University of Minnesota Press, 2012), http://dhdebates.gc.cuny.edu/debates/text/20
Beth Nowviskie, “toward a new deal,” http://nowviskie.org/2013/new-deal/ and “Ten rules for humanities scholars new to project management,” http://nowviskie.org/handouts/DH/10rules.pdf

If you’re at all interested in developing (digital) humanities projects or writing grants, check out these resources, as well:

Haley di Pressi et al., “A Student Collaborator’s Bill of Rights,” http://www.cdh.ucla.edu/news-events/a-student-collaborators-bill-of-rights/
Sharon Leon, “Project Management for Humanists,” #alt-academy, http://mediacommons.futureofthebook.org/alt-ac/pieces/project-management-humanists
Some resources for grant proposals (from the WebWise 2013 conference): Environmental Scan, Identifying Appropriate Funding Sources, Scoping and Scheduling Work, Guide to Writing a Short Project Proposal

Digital Assignment #4


Unfortunately, the phone camera I used to take over 100 photos of the pre-Columbian “Figure from Top of a Burial Urn” from the Ackland’s collection died before I got around to saving the photos and processing them through 123D Catch. One thing I enjoyed about the process of taking the photos was needing to spend the time to do such close looking at the object. Not since I got some practice doing condition reports at my last museum job have I spent so much focused time with an artwork. With my make-up object I did not have the benefit of even lighting (or a white-gloved curator to handle the object for me!) like at the Ackland. However, I chose an object in my home, a brown clay teapot, that had some visual interest but was not too large and was matte rather than shiny.

Using 123D Catch on my phone took a little longer than I made room for, unfortunately. I took significantly fewer pictures of this object, as I didn’t want it to take any longer to process than it had to, instead trying to make sure the table I used as a base was clean, well lit, and without too many shadows.

I am not as satisfied as I would be with the original object, due to photo quality and time constraints, but I still found the assignment, especially coming on the heels of our workshop day in the VRL using PhotoScan, to be a good learning exercise and a fun tool. 3D image coming soon… (123D Catch tells me it is “Thinking some more”).

Open Access: Increasing participation in scholarship


I have spent the majority of the past week discussing the value of education vs. degrees and the barriers a significant portion of the population faces in obtaining the credentials and associations required for respected participation in scholarship.

In areas where primary and secondary education provision remains troublingly weak, the higher ed options available to students emerging from those systems are limited.  Unfortunately, this means that any scholarship addressing those populations is written either by outside observers or by the limited number of in-group folks who made their way into academia.  This leaves out the valuable perspectives of a massive section of our population.

Thanks to the growth of independent and self-published avenues and of online, semi-formal scholarly platforms, however, participation barriers for a portion of that excluded population (and for others not generally included in academia) are diminishing.  Of course, these avenues still face ridicule from a vocal core of academics and administrators.  But the shuttering of university presses and the standardization of open access journals have cleared the way for a rethinking of publication options.

Open access (OA) journals are one hammer whacking away at the rigidity of academic publishing.  Open access literature is online, free of access charges, and free of most copyright or licensing restrictions.  Despite the insistence of many OA skeptics to the contrary, all major scientific or scholarly OA journals insist on peer review. The primary difference between OA and traditional publishing lies in the pay structure.  OA literature is free to read.  Producing OA journals is not without cost, though, even if it is much cheaper than traditional publication.  As with some traditional journals, accepted authors pay a publication fee (often waived in instances of hardship), while editors and referees donate their labor.  This model ensures that readers don’t face a paywall, and thus ensures the widest possible online access.

Questions of reliability aren’t entirely unjustified, though.  Unscrupulous individuals do take advantage of this new publishing system.  In my library science reference class with Stephanie Brown, we went over the pitfalls of OA publishing.  Here’s a checklist Stephanie created on things to keep in mind when assessing or submitting to an OA journal:

Is the journal listed in the Directory of Open Access Journals?  If so, then it’s likely reliable.
Does the journal provide a clear statement on article processing charges (APCs) and other fees?  If the fees are unreasonable, stop and find another journal.
Receiving continual article solicitations from a journal via email?  File it under spam and find a different journal.
Does the journal make unlikely promises (e.g. brief peer review turnaround)? Stop & find another journal.
Download an article and read it.  Would you want your work to appear next to it?  If not, find a different journal.

Traditional publishing does have the potential to facilitate inclusion if modeled to do so, however.  In the Wikipedia summary of Kathleen Fitzpatrick’s Planned Obsolescence: Publishing, Technology, and the Future of the Academy, the author notes that the current university press model treats the press as a separate institution from the university, one that’s meant to at least partly support itself financially.  But if the university incorporates the press more fully into itself, then the press “has a future as the knowledge-disseminating organ of the university.”  In order for this to happen, institutions of higher learning must first reconceive of themselves as “a center of communication, rather than principally as a credential-bestowing organization.”  Tabling the issue of overemphasis on credentialization in the job market, ensuring that a press reflects the learning of an institution’s constituents is both a way to provide professors and students an opportunity to publish and a means of holding the university accountable as an institute of learning rather than a degree churn.  Many schools’ student groups publish a law or business review composed of student contributions, but few schools encourage students to publish for a wider audience through their own presses.

Until university presses are revamped, we have OA publishing and peer-to-peer online platforms.  Peer-to-peer, like OA, provides a different publication model, but one focused on dialogue between participants for a broader conception of peer review.  MediaCommons, from the Institute for the Future of the Book, provides an ideal example of this new approach.  It focuses on exploring new forms of publishing for media studies through blog posts that others can comment upon in the same capacity as peer reviewers.  These posts are tied to profiles that link to the participants’ projects and works, which yields a networked approach to publishing, both through the interpersonal networks displayed in post commentary and through links to related scholarship.

These online networks become increasingly important as the volume of publication submissions increases.  Peer-reviewed journals (the form of journal required by tenure committees) require a sufficient pool of referees from which to draw so that no individual is overburdened with requests for reviews.  As Maxine Clarke points out in her blog post on “Reducing the peer reviewer’s burden,” if more scholars with subject expertise are findable, then the pool of referees available to participate in peer review deepens.  And given participation on communal review platforms like MediaCommons, those scholars will be more prepared to perform the duty of jurors, even if their universities did not formally prepare them.  This has the added benefit not just of relieving the pressure on the current pool of peer reviewers, but also of reducing the influence of a few on many.  A reader’s personal experience of the world and focus within the subject color her or his perspectives, and thus her or his edits and comments.  More readers means more diversity in editing perspectives.

Other publishing avenues to keep in mind for monographs are self-publishing, print-on-demand, and independent publishing.  Print-on-demand is a form of self-publishing (an option still derided by the academy) that allows you to publish your monograph to an online platform from which visitors can download or order a printed copy of the book.  Lulu.com is a particularly popular print-on-demand self-publishing site.  Independent publishers are generally smaller presses and can also be print-on-demand.  For instance, Library Partners Press at Wake Forest University is one of many independent presses that operates on a digital platform while allowing for the option of printing.

Folks have information to share with one another, and so many scholars (whether members of the academy or not) have expertise to tap.  The current Big Publishing business doesn’t fully acknowledge or use those people—it’s only natural that legitimate alternatives would pop up to fill the gaps left by that operating procedure.

From JJ’s syllabus for this week:

HASTAC Scholars and its Art History Group.
CAA and SAH, “Guidelines for Evaluating Digital Scholarship in Art and Architectural History,” (January 2016). http://www.collegeart.org/pdf/evaluating-digital-scholarship-in-art-and-architectural-history.pdf
Edward Ayers, “Does Digital Scholarship Have a Future?” Educause Review (August 5, 2013). http://www.educause.edu/ero/article/does-digital-scholarship-have-future
Ryan Cordell, “Creating and Maintaining a Professional Presence Online: A Roundup and Reflection,” ProfHacker, 3 October 2012, http://chronicle.com/blogs/profhacker/creating-and-maintaining-a-professional-presence-online-a-roundup-and-reflection/43030
Sydni Dunn, “Digital Humanists: If You Want Tenure, Do Double the Work,” Chronicle of Higher Education, 5 January 2014, https://chroniclevitae.com/news/249-digital-humanists-if-you-want-tenure-do-double-the-work
Wikipedia summary of Planned Obsolescence: https://www.wikiwand.com/en/Planned_Obsolescence:_Publishing,_Technology,_and_the_Future_of_the_Academy
Kathleen Fitzpatrick, Planned Obsolescence: Publishing, Technology, and the Future of the Academy (New York: NYU Press, 2011). http://nyupress.org/books/9780814727881/
PressForward initiative (there is a WordPress plugin, if you are interested in curating web content on your site) and CommentPress (also a WordPress plugin) and the OSCI Toolkit.
Alexis Lothian and Amanda Phillips, “Can Digital Humanities Mean Transformative Critique?” Journal of e-Media Studies, Volume 3 Issue 1 (2013), http://journals.dartmouth.edu/cgi-bin/WebObjects/Journals.woa/1/xmlpage/4/article/425
Joan Fragaszy Troyano, “Discovering Scholarship on the Open Web: Communities and Methods,” April 1, 2013, http://pressforward.org/discovering-scholarship-on-the-open-web-communities-and-methods/

Building digital literacy in the classroom

Source: Building digital literacy in the classroom

One of the most prevalent myths about undergraduate students currently matriculating is that they are born digital—so-called ‘digital natives.’ These students, the story goes, have been using computers for as long as they can remember. They have grown up using digital applications for everything from completing homework assignments to ordering take-out. They cannot remember a time before the Web (or even before Facebook) because the Web has been a fact of life since the start of their lives. The ‘digital native’ truism becomes more pronounced with each new incoming class of college freshmen, and presumably will until all of the ‘digital aliens’ (i.e., anyone born before 1994) are retired or phased out of the academy. The apparent technological acuity of incoming students can be intimidating for digitally inclined faculty, who may want to design digital assignments or utilize digital resources but feel that, in the digital realm, they have more to learn from their students than vice versa. How should faculty make use of digital resources in classes where the students are already thoroughly tech savvy?

While younger students may know their way around a computer, this does not mean that they are fully digitally literate. As with every myth, the ‘digital native’ narrative has some basis in fact, but is mostly perpetuated by a deeper set of social forces. Perhaps ‘digital aliens’ have spun the story of ‘digital natives’ out of an anxiety induced by the overwhelming wave of technology development witnessed over the past 30-plus years. Every week brings another hyped platform or tool, and younger students seem to readily adopt each new thing before faculty are even aware of the previous one. Digital adaptability, however, should not be confused with digital literacy. Students may have a great aptitude for parsing Twitter (if they haven’t already abandoned Twitter for Snapchat), but this does not mean they know how to search an online database, build a 3D model, or georectify an image.

Digital skills are increasingly essential for virtually all professions, whether formally part of the ‘information economy’ or in more service-oriented industries. We might broadly refer to the development of this skill set as ‘digital literacy,’ and although what it entails is certainly subject to frequent and dynamic change, a college education should equip students with digital literacy alongside more traditional skills like critical thinking and effective communication. Even though I’m framing ‘digital literacy’ as a distinct body of skills, I would also suggest that these skills, today, cannot be separated from things like critical thinking and effective communication. The ability to create or read a visualization of a data set, for instance, necessarily combines all three of these skills. The overall point is that educators need to be thinking of ways to incorporate digital skill-building into their courses to prepare students for careers and lives in a networked world—and that these digital skills are developed just as any other critical skill is: not through osmosis gained by immersion in the digital realm, but through carefully considered exercises, readings, and discussions.

As suggested, students and faculty alike already use digital tools and resources every day for classwork, lectures, and discussions. Students create PowerPoint presentations and access readings and research materials through online databases. To truly develop digital literacy skills, however, requires assignments and activities that encourage critical reflection on the tools being used. For example, students in a medieval art history course might spend time in class using the resource Mapping Gothic France1 to research the history of Gothic architecture in Western Europe. An assignment for this class might have students create their own versions of a similar map of medieval built structures. As part of this assignment, students might be asked to critically reflect on the choices they made in creating their own digital maps: what did they choose to include or exclude? what metadata did they need to create when making the map? how has creating the map influenced their understanding of Gothic architecture? A class lecture and discussion around this assignment could also focus on the history of map making, drawing connections between medieval map making practices, satellite imaging, and GIS data. Working together, this assignment, critical reflection, lecture, and discussion could help students deepen their understanding of the course material while also learning crucial skills for manipulating geographic information. Key to this, though, is that students are not only learning the how-to behind digital tools, but also learning to think critically about the role of the technology itself in the production and exchange of knowledge.

Digital literacy means not just being able to use digital tools, but also understanding how they work, what underlying mechanisms drive them, and how this influences the input/output of data and the dissemination of information via the tool. This does not mean that everyone has to be a computer scientist or develop the ability to create and understand the code that drives these tools (although some familiarity with code and technological infrastructure doesn’t hurt), but it does mean that everyone should have some understanding of the relationship between a given digital tool and what’s going on beneath the screen.

Although digital humanities is firmly established as a discipline, the field is very much under construction. Debate is ongoing as to how digital projects should be defined, what constitutes authorship in an increasingly collaboration-driven environment, and what skills current students should be developing to prepare for future careers.2 These debates will be staged at conferences, in journal publications, and through blog posts and social media—but they will also be worked out in the classroom. Digital pedagogy is a huge part of this debate, and educators can contribute by developing, discussing, and sharing the lesson plans and digital assignments they are using in their classes. Only a small portion of undergraduate students will go on to a career in the academy, as digital humanists or otherwise, but all of these students will go on to live and work in a world where digital literacy is essential.

NOTES
[1] http://mappinggothic.org/

[2] See for instance this white paper released by PressForward and the Roy Rosenzweig Center for History and New Media: Joan Fragaszy Troyano, “Discovering Scholarship on the Open Web: Communities and Methods,” April 1, 2013, http://pressforward.org/discovering-scholarship-on-the-open-web-communities-and-methods/

Pedagogy & Digital Media

Source: Pedagogy & Digital Media

When I heard Jaskot’s talk, I realized that I was missing out on a new and interesting approach to art history. I had previously used technology to record, organize, and even represent my work as part of a larger conventional framework. I had not used technology to help me better understand my work or to help me draw new conclusions.  —Nancy Ross, Dixie State University

This comment from Nancy Ross’ article “Teaching Twentieth-Century Art History with Gender and Data Visualizations” gets at the heart of digital humanities research as we’ve understood it in this class.  For most scholars, digital humanities tools are a means of producing an accompanying visualization.  This neglects how digital humanities tools can actually serve as a new means of interpretation and scholarly output—an expression of research in itself.  Ross goes on to describe how she used a noncanonical textbook paired with a group networking visualization project to help her undergraduates better grasp the implications of the research they were conducting on women artists and their social-professional networks.  The students responded with enthusiasm and noted how the process of constructing the visualization altered and strengthened the conclusions they had begun to draw before seeing their research in a new form.

In a class session on virtual reality models for architectural spaces, JJ commented that many of the scholars working to put the visualization together found the process of compiling the data and making it work cohesively was actually far more revealing than the finalized model itself.  Inputting the data dramatically altered the scholars’ understanding of how the space worked, while the final product looked as if the space was always meant to be understood in that way.  Process can be a powerful tool in research.  See, for example, the outputs resulting from George Mason University’s new technology requirement for its art history master’s program.  These projects allowed the students to experiment with new media relevant to their professional interests while exploring previously unforeseen connections and research conclusions facilitated by their software.

In terms of pedagogy, digital humanities projects really can prove the ideal form for student engagement.  Randy Bass of Georgetown’s Center for New Designs in Learning & Scholarship provides a comprehensive breakdown of learning approaches made possible by assigning digital projects.

Distributive Learning – the combination of growing access to distributed resources and the availability of media tools by which to construct and share interpretation of these resources allows for distribution of classroom responsibility to students.
Authentic Tasks and Complex Inquiry – the availability of large archives of primary resources online makes possible assignments that allow for authentic research and the complex expression of research conclusions.
Dialogic Learning – interactive and telecommunications technologies allow for asynchronous and synchronous learning experiences and provide spaces for conversations and exposure to a wide array of viewpoints and varied positions.
Constructive Learning – the ability to create environments where students can construct projects that forge interdisciplinary, intellectual connections through the use of usable, shareable digital media.
Public Accountability – the ease of transmission of digital media makes it easy to share work, raising the stakes of participation due to the possibility of public display.
Reflective and Critical Thinking – in aggregate, learning as experienced within digital media now available to pedagogues contributes to the development of complex reflective and critical thinking that educators wish to instill in their students.

In my own learning experiences, I’ve found that engaging with new media with an assortment of these 6 learning approaches in mind allows me to conceive of my research in a broader context and with a more diverse audience while still delving deeply into the subject.  Like Nancy Ross’ students, my attention was sustained for much longer and in manifold ways by having to think critically about making the platform or software work for my purposes.

To see how one particular new media platform for visual bookmarking can serve as a means of facilitating research, keep reading.

Conceiving of social media as another teaching platform for students.

I use Pinterest with embarrassing regularity—both as a personal indulgence in planning the minutiae of my theoretical future home, as well as a platform for more scholarly endeavors that incorporate various media.  Other than the lack of tagging capabilities, the site is beautifully suited for research and reference.

For example, in my Collections Development course, Professor Mary Grace Flaherty assigned a final project in which we developed a collection for a specific population.  I chose to create a core collection for book artists.  Instead of simply compiling a bibliography of resources, I created a Pinterest board to host a more dynamic catalogue.  Why Pinterest, you may ask?  One obvious advantage is that it embeds video directly into the pin (a.k.a. a catalogue entry, in this case).  Of far more importance, however, are Pinterest’s networked search functions.  As mentioned, Pinterest doesn’t allow for tagging of pins to simplify searching by category within a single board.  It does allow for one keyword search function and four different browsing functions across boards and pins, though.

Let me break those five functions down for you:

A keyword search function that seeks out pins using four limiters: searching across all pins on Pinterest; searching across your pins; searching for other pinners; or searching for boards.  This search function also suggests terms to allow for greater granularity.
A browsing function that allows users to see other pins related to a particular pin.
A browsing function that allows pinners to see other boards with items related to a particular pin.
A browsing function that allows pinners to see other boards on which a particular pin is pinned.
A browsing function that allows pinners to see what else from a particular site has been pinned on Pinterest.

This sort of searching and browsing turns Pinterest into a highly linked catalogue and thesaurus.  One pin can create an entire network of related items, which turns the items in the Book Artists’ Core Collection into a starting point for practitioners or scholars to conduct more in-depth research into a project.  When I began the research that inspired this collection (for a book arts class, which also has its own reference board), Pinterest allowed me to delve deeply into contemporary and traditional methods for emulating every aspect of a British 14th-century book of hours.  It also provided inspiration for how to shape that book of hours into an artist book documenting the history of the book.  By identifying one relevant pin on parchment & papermaking or regional binding methods or paleography & typography, I could follow those pins to related resources by using the browsing functions, or even just by following the pin back to its source.
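The fan-out behavior described above can be sketched as a toy data model. To be clear, this is my own illustration, not Pinterest’s actual schema or API: the pin identifiers, board names, and source sites below are hypothetical, and the functions simply mimic the browsing behaviors listed above.

```python
from collections import defaultdict

# Hypothetical pin/board network (not Pinterest's real data model).
boards_by_pin = defaultdict(set)   # pin id -> boards it appears on
pins_by_board = defaultdict(set)   # board id -> pins it holds
source_of_pin = {}                 # pin id -> originating website

def add_pin(pin, board, source):
    """Record a pin on a board, noting the site it was pinned from."""
    boards_by_pin[pin].add(board)
    pins_by_board[board].add(pin)
    source_of_pin[pin] = source

def related_pins(pin):
    """Pins that share a board with this pin (the related-items browse)."""
    return {p for b in boards_by_pin[pin] for p in pins_by_board[b]} - {pin}

def boards_with_pin(pin):
    """Boards on which this particular pin has been pinned."""
    return set(boards_by_pin[pin])

def pins_from_same_site(pin):
    """Other pins drawn from the same source site."""
    return {p for p, s in source_of_pin.items()
            if s == source_of_pin[pin]} - {pin}

# One pin fans out into a network of related items (names are invented):
add_pin("parchment-guide", "book-arts", "travelingscriptorium.example")
add_pin("gothic-binding", "book-arts", "medievalbooks.example")
add_pin("paleography-intro", "manuscripts", "travelingscriptorium.example")
```

Following `related_pins` from the parchment pin surfaces the binding pin on the same board, while `pins_from_same_site` leads back to other material from the original source, which is exactly the linked-catalogue behavior described above.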

By using Pinterest as the host for my catalogue, I also skirted around the limitations inherent in any collection—one can only develop each topic within a collecting area up to a point before the resources are outside of the collecting scope.  Pinterest lets users control when they’re deep enough into a topic, since the boundaries of the catalogue are so malleable.  For instance, my core collection doesn’t contain an encyclopaedia on the saints or regional medieval plants.  This information is highly relevant to a book artist working on a book of hours, but it’s too granular for a core collection of resources.  For more on the Book Artists’ Core Collection, see this blog post.

Articles on digital pedagogy:

Robert DeCaroli, “New Media and New Scholars,” presentation to Rebuilding the Portfolio, July 17, 2014 http://arthistory2014.doingdh.org/wp-content/uploads/sites/3/2014/07/CLIO.pdf
Kimon Keramidas, “Interactive Development as Pedagogical Process: Digital Media Design in the Classroom as a Method for Recontextualizing the Study of Material Culture” Museums and the Web 2014: Proceedings http://mw2014.museumsandtheweb.com/paper/interactive-development-as-pedagogical-process-digital-media-design-in-the-classroom-as-a-method-for-recontextualizing-the-study-of-material-culture/
Digital pedagogy teaching tools: Art History Teaching Resources, Smarthistory, and Art Museum Teaching
Nancy Ross, “Teaching Twentieth-Century Art History with Gender and Data Visualizations,” Journal of Interactive Technology and Pedagogy, (Issue 4) http://jitp.commons.gc.cuny.edu/teaching-twentieth-century-art-history-with-gender-and-data-visualizations/
Gretchen Kreahling McKay, “Reacting to the Past: Art in Paris, 1888-89,” http://arthistoryteachingresources.org/2016/03/reacting-to-the-past-art-in-paris-1888-89/

Folksonomies in the GLAM context

Source: Folksonomies in the GLAM context

Folksonomies, also known as crowdsourced vocabularies, have proved their usefulness time and again in deepening user experience and accessibility, especially in cultural heritage institutions.  Often termed GLAMs (galleries, libraries, archives, and museums), these institutions can use folksonomies to tap into the knowledge base of their users and make their collections more approachable.

For example, when one user looks at a record for Piet Mondrian’s Composition No. II, with Red and Blue, she may assign tags for colour blocking, linear, and geometric.  Another user may tag the terms De Stijl, Dutch, and neoplasticism.  By combining both approaches to the painting, the museum ensures that those without contextual knowledge have just as much access to its collections as those with an art historical background.  Linking tags can allow users to access and search collections at their needed comfort level or granularity.  It also frees up time for the employees handling those collections, since they can depend—at least in part—upon the expertise and observations of their public.

The diversity of tags also increases findability for educational uses of online collections.  If an elementary school teacher wants to further her students’ understanding of movements like cubism, she can just open up the catalogue of her local museum to explore the paintings tagged with ‘cubism.’  This becomes increasingly important as field trips become less available to public schools and larger class sizes require more class prep than ever.  With a linked folksonomic vocabulary, the teacher need not fight for the field trip nor dedicate even more time to personally constructing a sampling of the online collection to display.
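The tag-merging and tag-based search described above can be sketched as a small index. This is a minimal illustration under my own assumptions; the object identifiers, normalization choices, and class design are hypothetical, not any particular museum’s cataloguing system.

```python
from collections import defaultdict

class TagIndex:
    """Minimal folksonomy index: merges tags from many users per object."""

    def __init__(self):
        self.tags_by_object = defaultdict(set)   # object id -> its tags
        self.objects_by_tag = defaultdict(set)   # tag -> object ids

    def tag(self, object_id, user_tags):
        """Add one user's tags to an object, normalizing case/whitespace."""
        for t in (t.strip().lower() for t in user_tags):
            self.tags_by_object[object_id].add(t)
            self.objects_by_tag[t].add(object_id)

    def search(self, tag):
        """Return all object ids carrying this tag, in a stable order."""
        return sorted(self.objects_by_tag.get(tag.strip().lower(), set()))

index = TagIndex()
# Two users tag the same Mondrian record with different vocabularies:
index.tag("mondrian-comp-ii", ["colour blocking", "linear", "geometric"])
index.tag("mondrian-comp-ii", ["De Stijl", "Dutch", "neoplasticism"])
index.tag("picasso-demoiselles", ["cubism", "geometric"])

# Searches at either comfort level reach the record:
print(index.search("geometric"))   # layperson's visual vocabulary
print(index.search("de stijl"))    # specialist's art-historical vocabulary
```

Because every user’s tags land in the same index, the teacher’s ‘cubism’ search and the specialist’s ‘neoplasticism’ search both resolve against the full merged vocabulary.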

Crowdsourcing knowledge can also go beyond vocabularies and prove especially useful in an archival context.[1]  When limited information is known about a collection or object and those with institutional knowledge are unavailable (a persistent problem plaguing archives), another source of information is needed.  The best way to find one is to tap those who would have the necessary expertise or experience.  For example, Wellesley College’s archive began including digitized copies of unknown photographs from the archive in the monthly newsletters emailed to alumni.  With those photos, the archive sent a request for any information that alums could provide.  In this way, the archive has recovered a remarkable amount of knowledge about historical happenings around the college.

But folksonomies and other crowdsourcing projects are only useful if institutions incorporate the generated knowledge into their records.  Some gamify the crowdsourcing process in order to engage users, but then fail to follow through on incorporating the results.  Dropping the ball in this way may be due in part to the technical challenges of coordinating user input with the institution’s online platform.  It may also stem from the same fear that many educators hold for Wikipedia: What if the information provided is WRONG? Fortunately, both anecdotal and research evidence suggests those fears are largely unfounded.[2]  Participating with institutions in crowdsourcing is a self-selective process, requiring users to be quite motivated.  Those interested in that level of participation are going to take the process seriously, since it requires mental engagement and a sacrifice of time.  As for enthusiastic but incorrect contributions, institutions may rest assured that the communities that participate in crowdsourcing efforts are quite willing to self-police their fellows.  If something is wrong or misapplied (e.g., Stephen Colbert’s Wikipedia antics), another user with greater expertise will make the necessary alterations, or the institution can step in directly.

GLAMs are experiencing an identity shift with the increased emphasis on outreach and engagement.[3]  The traditional identity of a safeguarded knowledge repository no longer stands.  GLAMs now represent knowledge networks with which users can engage.  If obstacles exist to hinder that engagement, the institution loses its relevance and therefore its justification to both the bursar and the board.  These open, crowdsourced projects can break down the perceived barriers between an institution and its public.  Engaging with users at their varying levels and then using that input shows that the institution envisions itself as a member of that community rather than a looming, inflexible dictator of specific forms of knowledge.  Beyond that, though, institutions can use all the help they can get.  Like Wellesley, an organization may be lacking in areas of knowledge, or it may simply not have the resources to engage deeply with all of its objects at a cataloguing or transcription level.  Crowdsourcing not only engages users, but also adds to an institution’s own understanding of its collections.  By mining a community’s knowledge, everyone benefits.

From a more data science-y perspective: Experimenting with folksonomies in an image based collection

For a class on the organization of information, we read an article covering an experiment analyzing the implementation of user-generated metadata.[4]  For those in image-based institutions looking to build on the attempts of others in the field of crowdsourcing, this experiment is a solid place to begin.  Some alterations, however, may prove helpful.  Firstly, I would recommend a larger pool of participants from a broader age range (at least 25-30 people between the ages of 13 and 75), so that the results may be extrapolated with more certainty across a user population.  Secondly, for the tag scoring conducted in the second half of the experiment, I would use the Library of Congress Subject Headings hierarchy in tandem with the Getty’s Art & Architecture Thesaurus, so as to compare the user-generated data to discipline-specific controlled vocabularies.  Retain the two tasks assigned to the users, including the random assignment of controlled vs. composite indexes and the 5 categories for the images (Places, People-recognizable, People-unrecognizable, Events/Action, and Miscellaneous formats).  In terms of data analysis, employ a one-way analysis of variance (ANOVA), since it provides a clear look at the data, even accounting for voluntary search abandonment.  By maintaining these analytical elements (the one-way ANOVAs and the scoring charts), it would be easy enough to compare your findings with those in the article to see if there is any significant difference in representations of index efficiency (search time or tagging scores) between general cultural heritage institutions and GLAMs with more image-based collections.
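For readers who want to replicate the analysis, the one-way ANOVA F statistic can be computed directly from the grouped observations. The sketch below is a minimal, from-scratch implementation with invented search-time data, not the article’s actual figures or code; in practice a statistics package would also supply the p-value.

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA.

    groups: a list of lists, e.g. search times (seconds) recorded
    under each index condition (controlled vs. composite).
    """
    k = len(groups)                          # number of conditions
    n = sum(len(g) for g in groups)          # total observations
    grand_mean = sum(x for g in groups for x in g) / n
    group_means = [sum(g) / len(g) for g in groups]
    # Variation of the condition means around the grand mean,
    # weighted by group size (between-group sum of squares).
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    # Variation of observations around their own condition mean
    # (within-group sum of squares).
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, group_means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical search times for two index conditions:
controlled = [12.0, 9.5, 11.0, 10.5]
composite = [8.0, 7.5, 9.0, 8.5]
f = one_way_anova_f([controlled, composite])
# Compare f against the critical F value for (1, 6) degrees of
# freedom to judge whether the index conditions differ significantly.
```

A large F means the between-condition variation dwarfs the within-condition noise, which is exactly the comparison of index efficiency the experiment relies on.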

[1]  L. Carletti, G. Giannachi, D. Price, D. McAuley, “Digital Humanities and Crowdsourcing: An Exploration,” in MW2013: Museums and the Web 2013, April 17-20, 2013.

[2]  Brabham, Daren C. “Managing Unexpected Publics Online: The Challenge of Targeting Specific Groups with the Wide-Reaching Tool of the Internet.” International Journal of Communication, 2012.

Brabham, Daren C. “Moving the Crowd at iStockphoto: The Composition of the Crowd and Motivations for Participation in a Crowdsourcing Application.” First Monday, 2008.

Brabham, Daren C. “Moving the Crowd at Threadless: Motivations for Participation in a Crowdsourcing Application.” Information, Communication & Society 13 (2010): 1122–1145. doi:10.1080/13691181003624090.

Brabham, Daren C. “The Myth of Amateur Crowds: A Critical Discourse Analysis of Crowdsourcing Coverage.” Information, Communication & Society 15 (2012): 394–410. doi:10.1080/1369118X.2011.641991.

Lakhani et al. “The Value of Openness in Scientific Problem Solving.” 2007. PDF.

Saxton, Oh, and Kishore. “Rules of Crowdsourcing: Models, Issues, and Systems of Control.” Information Systems Management 30 (2013): 2–20. doi:10.1080/10580530.2013.739883.

[3] “Throwing Open the Doors” in Bill Adair, Benjamin Filene, and Laura Koloski, eds. Letting Go?: Sharing Historical Authority in a User-Generated World, 2011, 68-123.

[4]  Manzo, Kaufman, Punjasthitkul, and Flanagan. “By the People, For the People: Assessing the Value of Crowdsourced, User-Generated Metadata.” Digital Humanities Quarterly 9, no. 1 (2015).

Further Reading on Crowdsourcing, Social Media & Open Access (drawn from JJ’s class syllabus for 11 April)

Crowdsourcing Projects: Smithsonian Digital Volunteers and the Smithsonian Social Media Policy.
Jeffrey Inscho, “Guest Post: Oh Snap! Experimenting with Open Authority in the Gallery,” Museum 2.0 (March 13, 2013). http://museumtwo.blogspot.com/2013/03/guest-post-oh-snap-experimenting-with.html
Kristin Kelly, Images of Works of Art in Museum Collections: The Experience of Open Access. Mellon Foundation, April 2013. http://msc.mellon.org/research-reports/Open Access Report 04 25 13-Final.pdf/view
Nate Matias, “Crowd Curation: Participatory Archives and the Curarium Project,” https://civic.mit.edu/blog/natematias/crowd-curation-participatory-archives-and-the-curarium-project
NMC Horizon Project Short List, 2013 Museum Edition, http://www.nmc.org/pdf/2013-horizon-museum-short-list.pdf
Karen Smith-Yoshimura and Cyndi Shein, “Social Metadata for Libraries, Archives, and Museums,” OCLC Research Report, Executive Summary only, http://www.oclc.org/content/dam/research/publications/library/2012/2012-02.pdf
For those interested in Museum 2.0, Nina Simon, The Participatory Museum, http://www.participatorymuseum.org/


© 2020 Dressing Valentino
