President’s Day 2011: Technology and the Teaching Learning Process

What better occasion to return to the blog than a spring semester President’s Day devoted to “Technology and the Teaching Learning Process”?  Below are a few links that I’ll discuss bright and early tomorrow in the Lally Forum with my colleague Michael Brannigan.

The New York Times on Digital Humanities: “Digital Keys for Unlocking Humanities’ Riches”

The Pew Research Center’s Internet and American Life Project: “Teens, Video Games and Civics”

The It Gets Better Project on YouTube, and on its own site

And while I won’t get a chance to talk about these, they’re also great examples of smart people thinking in sophisticated ways about the learning potential of new media technologies:

USC Annenberg School for Communication and Journalism: Project New Media Literacies

HASTAC: Humanities, Arts, Science and Technology Advanced Collaboratory

MacArthur Foundation Spotlight: Digital Media and Learning

The Handwriting’s on the Wall

According to the NCTE Inbox for last week, Jan. 23rd was National Handwriting Day.  Who knew such a day existed?  Better yet: who thought such a thing was necessary?

Anyone with whom I have a writing relationship (my students, multiple family members, friends, cat sitter, etc.) can tell you that my handwriting is a horror.  That’s not hyperbole; it’s truly wretched.  It’s not effective, as virtually no one can decipher every word in a given document (this sometimes includes a check); it’s not aesthetically pleasing, as letters are not of uniform size and are often not fully formed to begin with.  It would work wonderfully as a code that only I could read, but I have found myself returning to books and notes able to recognize that I wrote something, but not what I wrote.  As long as I can remember, I’ve blamed this on being left-handed; surely having to invert all graphical instruction (i.e., “do this, only backward”) has a dire effect on the quality of handwriting!  As an adult, however, I’m sure that there are plenty of lefties out there who write right.

For all of these reasons, the increased move toward writing in a digital space has been a godsend.  I tend to embed comments on drafts using Word, my film notes are usable a year later because I typed them while viewing, I send grocery lists to my spouse that he can actually read.  (True story: “Why did you buy coconut milk?”  “It’s on the list.”  “Where?”  “Right here!”  “That says ‘chicken broth’…”)

All of that, you’d think, would add up to a unilateral love for writing=typing.  Effective, efficient, decodable—what more can you ask for?  Add to that this quote from NCTE’s 21st Century Literacies Report: “digital technology enhances writing and interaction in several ways. K–12 students who write with computers produce compositions of greater length and higher quality and are more engaged with and motivated toward writing than their peers.”  Hot diggety!

And yet.  There’s something to be said for the tactile experience of pen on paper.  The sense of the ink flowing out behind your hand as you move across the paper; the ability to cross out–rather than delete–text; the indentation of the nib.  These have given literary scholars some of their best metaphors ever:  Cixous’s white ink; Heidegger’s Being; Derrida’s palimpsest.  What happens at the moment when these experiences of writing become only metaphors, and no longer associated with the process of writing itself?  And what new metaphors await us in the age of new media composition?

DMAC, Day Two

I’m currently at the end of my second day at the fount of information and innovation in digital media and composition that is DMAC at The Ohio State University. What happened to DMAC Day One, you ask? Lost to the sands of time (or to the exhaustion of my brain, more likely). In the spirit of sticking to a schedule, I’ll leave off any attempt to reconstruct Day One.  In fact, I’ll abandon the task of reconstructing Day Two as well, even though it’s fertile territory for recording here: we experimented a bit with audio editing and watched parts of two documentaries (Living Proof: HIV and the Pursuit of Happiness and Errol Morris’s Fast, Cheap & Out of Control) on our way to capturing and dumping digital video footage.

Despite the centrality of those tasks to our work today, I find my brain returning to a minute comment made by Cindy Selfe this morning in our discussion of texts about multimodality. Cindy moves from the practical (what do multimodal texts look like and how would they be published?) to the theoretical (in what ways would Laclau suggest strategies with which to address the schism between composition and literature?) with aplomb. In one of these seamless moves, she mentioned the way in which Heidegger might have come at a listener’s attunement to a sonic text. I think Cindy was on her way to another point, but I couldn’t help but linger with Heidegger for a moment. Once upon a graduate school time, I read a lot of Martin H.—Being and Time, Introduction to Metaphysics, etc., etc. I must have spent three semesters working with his essay “The Question Concerning Technology,” but hadn’t, until this morning, ever considered the ways that Heidegger might bring some interesting insights to the definitions of technology that we’re currently wading through. [In truth, I’m fudging the significant differences between technology and media here.  Cut me some slack.]

If memory serves, Heidegger makes a couple of important moves in his essay. The first is to remind the reader of the etymology of the word “technology,” which springs from the Greek techne. As one of many points he extrapolates, Heidegger reads techne as linked to the artistic practice of bringing something into its true being, or revealing its nature. [The level of oversimplification here is quite stunning; MH is surely spinning in his grave.] The root of technology, then, is in art and philosophy—a far cry from the means to an end that we tend to classify it as today. The second important move, then, is to examine what happens when we recognize something, anything, as solely a means to an end. Here, “technology” becomes a kind of reductive, instrumental thinking—yet another way to complicate our everyday vernacular expression of the term. We might phrase this to resonate with a recurring question here at DMAC: what is lost when we see technology as only technology? Heidegger uses the example of a river, which, under a logic of technology (means/end thinking), is nothing but a continuous source of power when paired with the appropriate machinery. When we see the river as nothing but a power source, we lose everything else that makes a river a river: its aesthetic qualities, its role in an ecological system, its metaphoric value, etc. In essence, we’ve lost the true nature of the river, over and above its productive function.

So, if the logic of technology has hidden the true nature of technology—its potential for artistic and philosophical value—then can we use manifestations of that same technology (computers, video, audio, programs, etc.) to reveal it? And what would that look like?

And to rescue myself from ending with a question I can’t answer, I give you a multimodal approach to Heidegger’s essay itself, via John Zuern at the University of Hawaii.

Form and Function?

Via the brilliant resource jill/txt comes this video from anthropology professor Michael Wesch:

Now, for the record, I COULD have just posted a link to Jill’s website, but then I wouldn’t have had any motivation to experiment with embedding video in the blog, right? So, there it is.

Jill’s post calls this video “(almost) everything I teach in a three minute video.” I’d take on that phrase with a slight modification: it’s everything I’d like to be teaching! Wesch’s video, it seems to me, encapsulates a number of important ideas, from the most mundane (what’s the difference between HTML and XML?) to the most idealistic (what will the onset of digital text mean?). In so doing, it represents one of the significant challenges of new media pedagogy: the need to teach both the entry-level practices of web 2.0 as well as (and in order to get to) the philosophical and social possibilities and implications brought about by such technologies. In this way, the video does what a great montage should do: inspire you by representing the hours of difficult labor needed to pull off a great feat (see Rocky, Vision Quest, Real Genius, and the I-can’t-believe-I’m-saying-this-all-right-fine-I’m-going-to-have-to-own-it archetypal South Park film).
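For readers who haven’t hit that “mundane” question yet, here’s a minimal sketch of the HTML/XML distinction the video gestures at (the snippet and its tag names are my own illustration, not taken from Wesch): HTML markup concerns how text looks on the page, while XML markup names what the data is, which is what lets a machine traffic in it.

```python
# A hypothetical illustration: the same fact, marked up two ways.
import xml.etree.ElementTree as ET

# HTML tags describe how the text should LOOK on the page (bold, italic).
html_fragment = "<b>Jack Kirby</b> co-created <i>Captain America</i>."

# XML tags describe WHAT each piece of data IS, so a program can ask for it.
xml_fragment = (
    "<comic>"
    "<creator>Jack Kirby</creator>"
    "<title>Captain America</title>"
    "</comic>"
)

root = ET.fromstring(xml_fragment)
print(root.find("creator").text)  # prints: Jack Kirby
```

Nothing in the bolded HTML string tells a program which words name the creator; the XML version makes that question answerable in one line.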

From a film studies perspective (oooh, going all old school!), Wesch’s video also employs the point-of-view technique to great effect. In essence, we see the web and its tools from the perspective of the filmmaker, as if we were him. Thus, the video also models a version of web interaction that shows us how someone invested in the medium navigates it. In some ways, this is what I find most fascinating: how do different people navigate/loll around on the internet? [Strangely, I just went from the pedagogical perspective to the voyeuristic one. From teacher to stalker in one short step!] What could we learn from watching each other? This might be akin to the experience of observing a tech person remotely operating your computer—a new possibility for IT at Saint Rose. It feels a bit like your computer is possessed, even as you see new ways of accessing information that you’ve never tried before.

So, all in all, well-played, Michael Wesch. Many reasons to view and re-view your video.

***Note: while browsing for this video, I happened to use “web 2.0” as a search term on YouTube. It will surprise no one but me, I suppose, that there are several instructional videos expressly designed for teachers. Who knew?

Location, Location, Location

Once the particular province of real estate agents, location is becoming one of the more troubled terms in technology.  The Chronicle of Higher Ed just posted a short “pros and cons” piece about mobile social networking, but it doesn’t quite do justice to the lived experience of this phenomenon.

I’ve just returned from a weekend trip to New York City (brrrr!), where I was privy to any number of conversations on the street.  I got an earful of tantalizing snippets, including: “but your hard drive is just, storage, man”;  “like, you said I could have it when I needed it”;  and “do you remember who we said created Captain America?”  (thankfully, this was a father to a son–and the answer is Jack Kirby, for those of you playing the at-home game).  The repeated refrain in the heteroglossia of city discourses, however, was “where are you?”—a refrain most often yelled into a cell phone.  From Union Square (a brief and non-debt-inducing trip to the Strand) to the Jewish Museum (85 blocks or so from the Strand: I walked every one), the consistent theme was that of locating your interlocutor.

I’m not much of a cell phone user, and Albany is hardly the big city, so the ubiquity of instantaneous location is a bit of a new phenomenon to me.  My friend Ginna, a 7-year denizen of NYC, informs me that this is standard practice.  “If a restaurant is full, we relocate and need to tell people.  I get held up at work and tell J. to wait for me somewhere.  It happens all the time.”  Okay.  So in a city like Manhattan, in order to meet up with your friends who live and work all over the island and the boroughs, you have to be locatable and reachable.  This makes sense to me: it’s not like you’re going to take the train back to Brooklyn and check the messages on your apartment answering machine to find out where to meet.  Better yet, when you get a call, you can change directions in transit.  Headed downtown on the subway?  We’re actually going to go uptown instead.  You get out at the next stop and change platforms: voilà!  Conservation of motion.

Just as I thought I was really beginning to understand how this all worked, Ginna handed me one more tip: “actually, I don’t even bother to call people anymore.  I just text them a location.”  [In the spirit of full disclosure, she told me this over the phone when I called to return her message.  She was shocked to speak with me.  Clearly, I was supposed to have texted her back.  Given the level of noise in the city, calling was the best chance I had of understanding what she was saying, anyway.]

It’s difficult for me to fully articulate what a different way of life this is from my own bumpkin ways.  It seems to me that at any given time in my day-to-day life, I’m infinitely locatable.  I’m at home, I’m at my office.  Occasionally, I’m out running errands, but then I’m back at home or office.  It’s a fairly staid existence, really [I can hear you snoring, you know!].  For city dwellers, however, life is increasingly mobile, as are the multiple connections within a city dweller’s life.  My intuition tells me that the experience of current education is similarly mobile: you’re somewhere on campus, you’re on your way to class, you’re at work, you’re at the gym, you’re out with friends, etc.

It’s a happy coincidence, then, that “location” and “interlocution” share a root.  If my experience this weekend is any indication, then where you are is becoming a significant part of your discussion with someone.  It’s the first question you and your interlocutor must answer, and it determines the course of the conversation from there.  In GPS terms, it indicates how long it might be before you can make human contact with another, and whether it’s even worth the effort to get to each other.  In more social geographical terms, it delimits the field of appropriate conversation (consider the difference between “I’m at Carnegie Hall” and “I’m at Grand Central Station”), who else might be privy to the chat, and its approximate length.  If Ginna’s move to the text tells us anything, it’s that location is taking on a larger and larger signifying function, such that one need only tell the other where she is, and all will be understood.

Does this raise necessary considerations of privacy, as mentioned in the Chronicle article?  Clearly.  Bugeja’s question about the “addictive” nature of technologies that place such an emphasis on location, however, seems beside the point.  I’d rather hear us asking what the cultural implications of the primacy of location are, and what other systems of meaning are now taking a backseat to “where are you?”

Reading Atavism

Earlier this evening, I was engaging in a nightly ritual/indulgence—watching E! News Daily (you say tomato, I say tomahto). One of the stories tonight covered the baby boom in Hollywood and the fact that toy companies are beginning to offer free products to stars who might give them good press. It’s only a matter of time, it seems, before the room of swag at the Sundance Film Festival places a Barbie table next to the Bulgari display. The E! story focused particularly on the LeapFrog company, which specializes in toys and game systems designed around learning objectives for infants, toddlers, elementary school kids, etc. Their representative spoke a good deal about the game system that “brings books alive.” My immediate reaction to this was a bit histrionic: “What’s going to happen when these kids grow up? How can ‘regular books’ compete when these play songs and have video and are interactive? Why would kids want to read print novels?”

These are the moments that I can’t help dwelling on. If someone had read me this LeapFrog tagline where they purport to “combine research-based curriculum with multisensory technology to advance student achievement” I would have immediately jumped on board. Yes! Isn’t this what we’re after in education, activating multiple senses to increase student interaction with content? It’s clear from my knee-jerk reaction above, however, that there remain in me some significant fears about the future of the book.  I’m certainly not alone in this, and the good people over at if:book (see sidebar) are doing yeoman’s work thinking about how narrative and books as we know them will look in the future.  I’m at least as interested in the psychology that subtends these contradictory impulses.  Trying to strip away my own professional investment in the continuation of paper novels—which, to be honest, is a significant part of my first reaction—I find myself returning to some of my most positive moments as a reader: discovering an idea that I wanted to explore in a text for the first time; understanding and articulating stylistic components that did or did not appeal to me; re-reading novels to re-live not just the plot, but the mood they could create.  The first two of these are certainly activities that can be replicated by other types of media, particularly games.  Steven Johnson makes a compelling argument in his Everything Bad Is Good for You that exploration of a digital world is a large part of the appeal of a game.

It’s the last, however, that English types might spend a bit more time thinking about as we engage and enthuse about new media and simultaneously mourn the possible death of the book.  Where is the space in interactive media for contemplation, for re-living?  We might explore the pleasures that accrue from repetition, as these seem distinctly underexplored in new media.  What makes the process invigorating, and when is it just boring?  What kinds of texts reward a second visit, and which frustrate it?

As we bang the drum for interactivity, for multisensory experience, for collaboration and forecasting and all of the new means of learning and thinking that new media offer us, where does contemplation fit?


Who’d have thunk it?  At the end of April, you can find me playing with the big kids over here.  This is a new area for me, so it’s both exciting and a bit nerve-wracking to think about sharing ideas with the very distinguished company at Media in Transition at MIT.  Happily, I understand that we’ll be posting full papers in advance of the conference, so the participants will have a sense of what’s going to go on there.

I suppose I need to go and do some research now, right?