Reading a book in one hour

In the midst of reading some very thought-provoking essays published in the new digital humanities volume Hacking the Academy, I found one of the most practical and useful pieces I’ve read in graduate school. Larry Cebula’s “How to Read a Book in One Hour” should be recommended reading for every student entering graduate school in the humanities, not because Cebula encourages laziness or inattentiveness by advising students to skim academic monographs methodically, but because his pragmatic method quietly reinforces the pedagogical goals of a graduate education in history.

When I started my graduate history program at George Mason, I found it hard to communicate to family and friends exactly what this course of study was going to teach me. I think the general public believes that a post-secondary education in history involves intense memorization of facts and figures: training to recall famous names, dates, places, and speeches, and to speak to the obvious greatness of the same. Few outside the profession know that historians in fact view their job as interrogating evidence and formulating arguments to debate amongst themselves.

Several brilliant professors I’ve had the opportunity to learn from at Mason have stressed the importance of understanding historiography, and I’m much the better for it professionally. I realize now that it’s nearly impossible to make a worthwhile argument about a scholarly monograph without understanding where the author is coming from – who s/he’s read, is responding to, agreeing with, or challenging. In an age when historians had faith that their treatment of a subject could be the complete, objective telling of a story, a reading of previous authors’ works might only serve to confirm facts or to demonstrate one’s own mastery of the subject to others’ detriment. Post-modern historians, however, recognize that theirs is but the latest in a revolving and sometimes cyclical series of arguments, and that their role is to explore new facets and neglected sources with novel methodologies to reinterpret and come to a greater understanding of a topic or concept. This work is necessarily predicated on an understanding of historical work in context.

All this is to say that I’ve changed the way that I read given what I look for now in a scholarly work, and Cebula’s article was a justification of my evolving methodology and philosophy on the subject. As he wrote, “plodding through a book one page at a time is not the best way to understand a book in graduate school.” Instead, he advises, students should spend their time reading the introduction, conclusion, and table of contents, and then skimming the body of the text for useful arguments and interesting methods and sources. Most importantly, he suggests taking good notes and reading two scholarly reviews of any work, essentially creating an annotated bibliographic entry and comparing one’s thoughts to those of important scholars in the field. These steps make the exercise worthwhile beyond the single class period for which the harried student might be cramming; the notes and the check on one’s analysis become a way of flexing one’s academic muscles and preparing for comprehensive exams and future papers.

As much as I agree with this method and its simple effectiveness, I’ve found that my persistent problem in employing it is my inherent interest in the material. More often than not, I take Cebula’s advice to read the introduction thoroughly but then forget to take the next step of skipping to the conclusion; I get bogged down in compelling anecdotes and details and lose sight of the bigger picture of the author’s thesis and methods. Well, as this is my last regular response post for Clio Wired, I’m going to consider this a New Semester’s Resolution. I’m going to follow Cebula’s method, take good notes, and then let myself get lost in the details as soon as I know what I want to talk about in class!

Response: The Shallows: What the Internet Is Doing to Our Brains

I feel bad about my reaction to The Shallows: What the Internet Is Doing to Our Brains. I think that Nicholas Carr’s book is well-researched, well-written, and interesting. Still, I have the worst kind of complaint to lodge with the author: I think that the entire premise is founded on anecdotal evidence, and I’m not entirely convinced. This is despite the dozens of peer-reviewed studies the author cites, which have found that exposure to and use of the internet hurt our brains’ abilities to concentrate, process information, and use it to interact with the world. I don’t doubt those studies’ findings, and I find Carr’s argument about the plasticity of our brains to be compelling and irrefutable. I even loved his logical and historical musings about the foundation of our civilization resting on individuals’ ability to lose themselves in a book – to read and write without distraction. That all sounds pretty solid.

Still, I can’t shake the feeling that I’m listening to a frazzled quasi-Luddite complain about the way that age and technology have hurt his personal attention span and capacity for processing knowledge. This is despite the fact that the author explicitly refutes that point early in the book. Others feel this way too, he says. Studies have confirmed his personal findings. If you think it through rationally, of course our brains have had to adapt to the new media and technology environment, and the changes will not all be good. And, as a “born digital” millennial, as the media likes to call us, I can’t claim to have known anything other than the way I interact with a wireless world.

So I don’t have anything real with which to challenge his argument, except my own anecdotal evidence that I can function in this digital world of myriad distractions, pings, and ringtones. Anecdotally, I guess I’m as bad as he is.

Edit: Phew! After an enlightening conversation in Michael O’Malley’s Clio Wired class last night, I finally understand what was bugging me. Carr goes to great lengths to explain how the wiring of our brains is always changing – adapting to new challenges and maximizing efficiency to deal with repeated and vital tasks. This, he says, is the problem: our brains have been rewired by the constant distractions of the Internet and hypertext to make us unable to focus in the way we once did, concentrating deeply on a printed text.

Essentially, Carr, having just told us that our remarkable brains can adapt to any task we ask of them, then argues that our brains are in existential danger. It’s like a father showing his son how to cross his eyes and make funny faces, then immediately warning him not to, or his face will stick that way.

I find that his dire warning about the irrevocable changes the era of information overload has wrought on our brains fundamentally undermines his better argument about the plasticity of our brains – organs which, he proves, adapt to culturally derived needs. If humans could only dive into books once they were safe and well-fed enough, in the nurturing embrace of medieval European abbeys, to ignore cacophonous external stimuli, that was because that time and place allowed and required them to do that kind of reading. Maybe our modern world has introduced different types of reading, driven us to distraction, and divided our attention. But at the same time, we aren’t monks in Lombardy poring over arguments about how many angels can fit on the head of a pin. We’re globally connected, democratic information consumers – our intellectual grasp of current events, information about our surroundings, and professional expertise are vital social currency that we can only accumulate by reading a great amount and a great variety of the content swirling all around us. Aren’t our brains adapting to that reality? And isn’t that going to be good in the end?