Conference Presentation Feedback, Part 3: “But we need numbers!”

(See this post for an introduction to this blog series.)

The second most-common idea I found in the minute paper responses can be summed up in the following quotation:

We are going through accreditation, and the committee wants more than anecdotal – how do you assess to get the “numbers”?

I also got asked virtually the same question in the Q & A following my presentation at Brick & Click, and it was all I could do not to just goggle at the person asking the question and say, “oh my, you’re doing it wrong.”1

Okay, I’m going to put this in bold and italics to emphasize it because it’s important: assessment can be either qualitative or quantitative.  If, on your campus, “assessment” is synonymous with “quantitative,” well, I’m very sorry that you’re stuck in that predicament. And unfortunately, your predicament may have a lot to do with the distrust of assessment, especially among faculty (and humanities faculty in particular), that I described in the previous post.

But rest assured, assessment can be qualitative, and my sense is that the regional accrediting agencies all recognize this.2 Deb Gilchrist, who writes and teaches extensively on assessment, is fond of saying that what you’re assessing doesn’t have to be measurable; it just has to be judgable.  That’s an important distinction.

But knowing that doesn’t help you solve your problem; chances are, you can’t wave your magic instruction librarian’s wand and change your campus’s policies regarding quantitative assessment.  What can you do with that magic wand?  Here are some thoughts:

  1. Do whatever you need to do to get “the numbers,” but also do qualitative assessment when and where you are able to fit it in.  (Nothing says you can’t tack a quick “what’s one thing you still have questions about” question onto an otherwise Likert-filled evaluation form.3) Then, use the qualitative data to tell a compelling story anyway, and see who listens. (Bonus points if you can sneakily, but explicitly, say that the story you’re telling is informed by qualitative assessment data.)4
  2. Also, keep in mind that you don’t have to assess (qualitatively or quantitatively) everything, every time.  Gradually build up a collection of anecdotes, and over time, they become data.  (More on this later; see also the little sketch just after this list.)
  3. The primary thing to keep in mind, though, is that while this kind of assessment work can be submitted to accreditors, etc., it doesn’t have to be.  It’s just as helpful to you as a teacher if, like my own minute papers, it never leaves your office.
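
To make item 2 a little more concrete: here’s a toy sketch in Python (purely illustrative; the responses and theme keywords below are invented, and this isn’t my actual workflow) of how a semester’s worth of minute-paper responses can be tallied into rough theme counts, so that a pile of anecdotes starts to look like data:

    # Purely illustrative: tally minute-paper responses into rough theme counts.
    # Both the responses and the theme keywords are invented for this example.
    from collections import Counter

    responses = [
        "I still don't understand how to narrow my topic",
        "how do I tell if an article is peer reviewed?",
        "still confused about narrowing a topic",
        "what's the difference between the catalog and a database?",
    ]

    themes = {
        "narrowing a topic": ["narrow", "narrowing"],
        "peer review": ["peer review", "peer reviewed"],
        "catalog vs. database": ["catalog", "database"],
    }

    counts = Counter()
    for response in responses:
        text = response.lower()
        for theme, keywords in themes.items():
            if any(keyword in text for keyword in keywords):
                counts[theme] += 1

    for theme, n in counts.most_common():
        print(theme, n)
    # narrowing a topic 2
    # peer review 1
    # catalog vs. database 1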

 


  1. Please note that I fully recognize that that would have been a spectacularly inappropriate response, not least because a) it’s rude and unprofessional, and b) it almost certainly wasn’t that poor librarian’s decision to require quantitative assessment data.
  2. I don’t actually have any evidence for this claim, but I can’t imagine that people like Linda Suskie and Deb Gilchrist could be blithely advising campuses on qualitative assessment methods that don’t meet the requirements of one or more regional accrediting agencies.  But, if you’re reading this and have knowledge of a regional accrediting agency that doesn’t accept qualitative assessment data, would you please let me know, either in the comments below or via email — catherine dot pellegrino at Google’s email service — so that I can update my statements above? Because I’d be pretty mortified to be making statements like that that are just flat-out wrong.
  3. Just, for heaven’s sake, don’t put it at the end of the form. Students will never bother filling it out if you do. To give yourself the best chance of actually getting responses, put it right smack at the beginning.
  4. Note: I am not a PR/marketing consultant and have never tried this maneuver.  But it seems like it should work.

Conference Presentation Feedback, Part 2: Getting Buy-In

(See this post for an introduction to this blog series.)

The single most common response I got on my conference presentation minute papers can be accurately represented by this quote:

How on earth to institute this kind of reflective practice among other librarians!!

First off, this is a problem of leadership and organizational culture. Assessment, when it’s done right, and regardless of what you call it, helps you determine what’s working well in your classroom, and what isn’t.  It helps you know what your students are learning, and what they aren’t.  Fundamentally, it helps you do your job better.

If you have a bunch of librarians who don’t want to do their jobs better, you have a big problem.

Now, it’s entirely possible that librarians (and, ahem, teaching faculty) don’t realize that that’s what assessment — again, when it’s done well — is for.  They hear “assessment” and immediately jump to “multiple-choice exams standardized tests administrative oversight Spellings Commission OMG ACADEMIC FREEDOM.”  That’s an unfortunate, but also understandable, chain of associations for academics to make.

But assuming that this isn’t the case, or isn’t entirely what’s going on, what can you, the lowly instruction librarian, or instruction coordinator, do to convince your colleagues to adopt classroom assessment techniques in place of “course evaluation” type instruments?  Here are some thoughts:

  1. Lead by example.  Do it yourself, either instead of, or in addition to, standard evaluation instruments.  Presumably nobody has said that you can’t do this.
  2. Talk it up. Talk about what you’ve learned from your classroom assessments, and how it’s helped you do things differently in the classroom. Show your colleagues how assessment is helping you do your job better.
  3. Get their ideas: what don’t they like about the current evaluation system (if there is one at your library)? What would they like to know about their teaching and/or the students’ learning that they’re not currently finding out? How would they go about finding that information?  Get them to come up with the idea themselves and you’ll have buy-in.1
  4. Send them to LOEX or Immersion.2

The other piece of the responses I got on this topic was a consistent thread running through them: how can I get my coworkers on board without them feeling threatened by assessment?  Part of this is addressed above when I talk about assessment as a tool for doing your job better, but there are other ways to address it as well:

  1. You don’t have to call it assessment.  Call it a minute paper (or whatever technique you choose; there are many) and leave it at that.  If assessment is such a charged word on your campus (and if, as I hear from other campuses, “assessment” by definition means “quantitative data”) that it sends people scurrying for the hills, just don’t call it that.3
  2. And these librarians don’t feel threatened by teaching evaluations, which students often fill out capriciously and without thought for their effects on instructors’ careers?  Evaluations measure the teacher. Assessment measures learning. I feel much more threatened by the former than the latter.

So, there are my thoughts on getting buy-in.  Next up: the vicious spectre of quantitative data. (Cue the spooky music!)


  1. Note: I am not a management consultant and I have never actually tried this maneuver. But I have been a union organizer, and I can say from personal experience that this approach works in the context of labor organizing. YMMV.
  2. Okay, I’m partly kidding here. If someone is really resistant to assessment or changing their own pedagogical practice in order to improve student learning, sending them to a conference against their will isn’t going to help; it’s only going to make them more resistant. But if they’re open to the idea, but hesitant about implementing it, the experience of hearing others be excited about innovation — especially in the supportive atmosphere of Immersion — can be a powerful experience.
  3. I did an instruction session for a faculty member on my campus who is extremely vocal about his opposition to assessment practices (which haven’t even been implemented yet), and did a minute paper at the end of the class, and shared the results with him afterwards.  I was sorely tempted to add in a little zinger at the end of my email message to him, to the effect of, “did you realize that this is assessment? Doesn’t seem so awful, does it?” But I didn’t.

Conference Presentation Feedback, Part 1: Introduction

So, about a month ago, I gave two conference presentations at two different conferences in the space of two weeks. It was a little nuts, and I probably won’t be trying that crazy trick again any time soon, but I did get to go to two completely different, and extraordinarily valuable, conferences and give presentations that, by all accounts, were positively received.  For my first time(s) on the conference-presentation circuit, that seems like a pretty good outcome to me.

The first presentation was “But what did they learn? What classroom assessment can tell you about student learning,” which I gave at the ARL Library Assessment Conference in Baltimore, MD on October 25.  Being an ARL conference, this was kind of a big deal for a librarian from decidedly-not-an-ARL-library.

The second presentation was “But what did they learn? What classroom assessment can tell you about student learning,” which I gave at the 10th annual Brick & Click Academic Library Symposium at Northwest Missouri State University, which entirely lived up to its reputation as a terrific small conference for academic librarians, and which I am recommending to all of my coworkers for their future professional development.

Astute readers will deduce that I gave precisely the same presentation at both conferences. I’ll address that issue momentarily.

Because my presentations were about minute papers, a form of classroom assessment, I decided to “eat my own dogfood” and close the presentation by actually doing a minute paper: I passed around pieces of scrap paper and asked the attendees to write down two things:  one useful thing they’d learned, and one thing they still had questions about.1

I finally had a chance to look through the minute papers from the two presentations last week, and I noticed that a lot of recurring themes came up in them, especially in the “one thing I still have questions about” part.  This is good, and it’s precisely the kind of information that informs pedagogical practice when classroom assessment is done right.  Unfortunately, unlike a credit-bearing course, where the faculty member can come back the next day and clarify things that the students didn’t understand; and unlike a one-shot instruction session, where you may never see those students again but at least you can do it better for the next set of students, I probably won’t ever get a chance to clarify these issues.  The best I can do is respond in a public forum and hope the Long Tail (or … something … ) fills in the gaps.

So, over the next week or so, I’ll be responding to some of the recurring themes that I saw in the minute papers.  I’ll collect all the posts under the category of “Conference Talkback 2010” so you can view them all there once they’re finished.

Giving the same presentation twice?

So, yeah, I gave pretty much exactly the same presentation twice at two different conferences.  I had a hard time deciding whether that was right or not, and I’m still not sure whether it was.  I wound up justifying it as follows:

  1. I wasn’t being paid for either presentation. In fact, I paid full registration for the ARL conference, and a moderately discounted presenters’ registration rate for Brick & Click.  (I was paid in what Dorothea Salo [borrowing the term from Cory Doctorow] has described as the “whuffie” currency of adding a line to my CV, which, when you’re on the tenure track, is frankly far more valuable than cash.)
  2. I judged it extremely unlikely that there would be any overlap between participants at the two conferences. As it turned out, there was one person at Brick & Click who identified herself as having been at the ARL conference, but she didn’t say she’d actually been to my presentation at the previous conference.
  3. The time slot at the ARL conference was 20 minutes, while at Brick & Click it was 50 minutes, so I expanded the talk a bit for the latter conference, included more examples, and we had time for a more extended Q & A afterwards.
  4. I consulted with my boss about the question, we talked over the issues involved, and decided it was probably okay to go ahead and do both.

As I said, I’m still not 100% certain I did the right thing, so if you feel that I didn’t, I’d appreciate hearing your (respectfully expressed) thoughts on the matter.  And, heck, if you think I did do the right thing, I’d appreciate hearing that, too.

And, as they used to say before advertising got all viral and whatnot, “watch this space!”


  1. There are many variants of the minute paper; this is the one that I use most often in one-shot library instruction classes.

Reference Book Petting Zoo

It’s been pretty quiet in Blogville these days: not much happening over the summer, and then BAM the semester starts and suddenly I’m teaching eight different library sessions (two of them built completely from scratch) in nine work days.  But the fall-semester rush is dying down, and even though apparently blogs are dead (who knew?) I’ve still got stuff to say.

A colleague at another university recently asked for suggestions for teaching reference books, and I emailed her my thoughts on an activity I often do in classes that I call the Reference Book Petting Zoo.  I had to restrain myself from getting totally carried away in the email, so I thought I’d elaborate on it a bit and post it here:

The basic idea of the reference book petting zoo is for the students to actually touch the books.  Open them, flip through them, get a feel for what they contain and the idea that subject encyclopedias are different from the Britannica.

First, I pre-select a bunch of reference books that I hope will be relevant to their topics (this is easier or harder, depending on the subject of the class and how much information I have about their assignment).  The more glossy, colorful, and/or sexy the reference book, the better.  (If at all possible, I try to work in the Encyclopedia of Sex and Gender, which is definitely NOT G-rated; and/or the Encyclopedia of Body Modification, which has color photos and always elicits a resounding “eewwwwwww!” and/or the St. James Encyclopedia of Popular Culture.)  I also aim for what my favorite library-school professor called highly “generative” books: books that contain articles on subjects that you wouldn’t expect to find in that particular reference work, but that are related, somehow.  The Encyclopedia of Community, the Encyclopedia of Children and Childhood in History and Society, and the Encyclopedia of Sex and Gender are all fabulous for this.

Then I break the class into groups of 3-5.  I give each group a reference book, or sometimes a pair of related books (we have two one-volume works on TV that I often group together).  Each group has 5 minutes to examine the books and then prepare to report back to the whole class the answers to three questions:

  1. How is it organized?
  2. What (or even better, who) would it be useful for?
  3. Tell us one interesting or weird thing that you found in the book.

I emphasize that “alphabetical” is NOT a sufficient answer for #1.  They need to tell the class about indices, cross-references, lists of articles at the beginning of the book, etc.  In short, access points.  And especially bibliographies.  I really try to hammer home two concepts: use the index (not the alphabetical listing of entries) and use the bibliographies.

The second question is good if, as a class, they know what each other’s topics are, and can say, “hey John, there’s an article about your guy in here,” or whatever.  They sometimes bail on the third question, which is fine, but if they don’t, it’s often an eye-opening experience for me to see what they find interesting and/or weird.

Then as they’re reporting back to the class, I make sure to fill in any gaps they may have missed.  This is especially important for #2, as they often don’t think very far outside the title of the book for what kinds of topics it might be useful for.  I find that I really need to lead them by the nose in thinking broadly about potential sources of information on their topics.  As they’re going around, I’ll also ask, “who has a topic that might be covered in this book?” and get them to say a little bit about their topic.  Or I might ask a specific person, “what’s your topic?” and then get them to think aloud through which books might be useful to them.  The best is when I ask, “who’s got a topic that you think isn’t covered in any of these?” and then I take their topics and talk about how two or three different sources might be useful.

I’ve never had this activity go really badly, even with the more recalcitrant classes, and with an engaged class it is fabulous.  The downside to this is that it really does not translate well to online reference sources, like Gale Virtual Reference Library or Credo Reference.  Luckily, we don’t have any of those at the moment!

Real life information literacy

I recently saw a very interesting case study of information literacy in a blog I follow, but before I tell that story, I need to provide a little background first.

About a month ago I attended two information literacy events at Purdue University: the first was a day-long workshop by Ross Todd, from the library school at Rutgers, titled “New Foundations: Building An Inquiry-Based Information Literacy Agenda.” One of the goals of Todd’s workshop was to get us to change our vision of what the end product of information literacy instruction ought to be.  The goal is not to produce “information literate students.”  It’s certainly not to produce mini-librarians. Rather, the goal is to produce adults who can use information to solve problems.  He emphasized that all the rhetoric about information literacy being essential to “survival” in the current age is just that: rhetoric.  People can survive just fine (most of the time; see below re: health literacy) while lacking IL skills.  But they won’t be able to solve the world’s problems.

The second event was the May meeting of the National Forum on Information Literacy, which is an organization I had not previously heard of, but whose mission is to promote information literacy through partnerships with community organizations.1 This meeting was interesting in other respects; for one thing, I learned a little bit about this organization that you would think I would have encountered before now.  We heard reports from people working on information literacy in agriculture, and people working on health literacy,2 both of which are areas where lots of people who have no experience with higher education really need information literacy skills.

So anyway, all this got me thinking about what I think about when I imagine an “information literate” adult, and keep in mind that this is only one image, but it’s a particularly vivid one for me, and one that I return to frequently as a kind of a touchstone for why I do what I do:

a pregnant woman.

No, really, hear me out: middle- and upper-middle-class pregnant women (and new moms) are bombarded by so freakin’ much information, so much of it contradictory, so much of it loaded with agendas – it was nearly overwhelming for me, and I consider myself to be a pretty darned information literate person, thank you very much.

Breast or bottle? Give birth in a hospital or a birth center or at home? OB/GYN or midwife? Eat peanuts or avoid them? Vaccinate, or not, or delay?3 Cloth diapers or disposables? Crib or co-sleeper?  “Cry it out” or not?  It’s endless – and very little of the information out there is actually evidence-based, and even less is from truly disinterested parties.

So with that in mind, I can get back to the little case study I mentioned at the beginning of this post.

The Environmental Working Group recently released a report evaluating the safety of the active ingredients in a wide variety of sunscreens, and raising concerns that some ingredients (oxybenzone and retinyl palmitate, primarily) may increase the risk of cancer.  Magda Pecsenye (nom de blog Moxie) mentioned the report in a blog post, and the ensuing discussion in her comments was a fascinating study in information literacy.

Now, I should mention that Moxie has assembled a remarkable community of compassionate, respectful, intelligent, and good-humored commenters.  Trolls and flame wars don’t happen at Moxie’s blog, which as you probably know is unusual for a longstanding blog.

Anyway, the comments started out in a predictable trajectory:  the usual panicky, “oh my god the sunscreen we’ve been using for years is horrible” and “but all the recommended products are exorbitantly expensive and not available in my rural community” comments.  But then something interesting started happening: people started calming down, and then started questioning the EWG’s study, getting down to interrogating their methodology and some of the underlying assumptions behind their research.

First we have this comment, which advises proceeding cautiously with the results of the study, and in the excerpt I’m quoting below, astutely identifies the attention-grabbing headline/soundbite the EWG used:

On this data, I’m not sure I’m as 100% persuaded as folks here seem to be that some sunscreens are this big, bad, baddie worth freaking out about. I also don’t think I buy the “some sunscreens cause cancer!” line anymore than I would pay heed to a headline that screamed “news flash: the sun causes cancer!” I don’t mean to be glib. Sun protection is a valid concern, but let’s not go overboard with the worry.

Then there’s this comment that followed shortly afterwards, which I’m going to quote almost in its entirety because if a student of mine ever said this I would fall down on my knees and thank a Higher Power:

If you’re really worried, I would check out some of the science on PubMed. This is just my personal opinion, but EWG appears to be the kind of environmental group that tends to overstate risks and does not present a balanced picture – according to them, it seems like everything is toxic and every toxin is highly dangerous. I have my doubts about whether the scientific literature they cited is an accurate representation of the current knowlegde [sic].4

And then immediately following, we have this commenter, who in addition to criticizing the EWG for fear-mongering, also did a little legwork and figured out something about the EWG’s methodology — which, as it’s described here, seems a bit dubious:

[I]t is important to note that they only did their rankings based on the ingredients, they did not do actual tests of the products. There is no accounting for the amounts of each ingredient in the product, and with any ingredient, the risk is in the *dose*. … In summary, I think that their ratings are half-baked, not scientific, but more or less a resource for ingredients in sunscreen…. The best form of sun protection is the one that you use, or that you can get on your kids.

Then a commenter who’s an actual scientist weighs in with some really heavy-duty evaluation of their methodology:

[M]y quick skim of their methods section doesn’t really tell me the gory details, like whether they included every single study they could find in their meta-analysis, and if they didn’t, what their exclusion criteria were. Also, how do they compare results across studies with different methodologies? I know that there are methods for doing that, but I can’t tell what they did.

And she continues with:  “I’d feel a lot better about their conclusions if they would write them up in a scientific paper and submit them for peer review [emphasis added].”  Whoa.

The conversation continues, with commenters sharing what sunscreens they use and how well they work for them.  And at the very end, someone posts an unsourced “article” from a dermatologist who appears to be representing the American Academy of Dermatology, which questions the report’s findings (and points out that their study was not peer-reviewed).  But my point is, here are a bunch of people on the internet, using their very best information literacy skills to make informed health decisions for themselves and their families. (And doing a pretty darned good job, I have to say.)  This is what we are trying to accomplish, right?


  1. At least, that’s what the presenter said their mission was. On examining their website, I can’t actually find a current mission statement.
  2. I have to admit, I’m a little dubious about the validity of “health literacy” as a concept. The presenter, citing the Institute of Medicine, defined it as the ability to “obtain, process, and understand health information and services needed to make appropriate health decisions.”  Notice that there’s no “evaluate” in there, so supposedly all “health information” is valid, reliable health information? And how, exactly, is this different from information literacy, except as it relates to that subset of “information” that is “health information”? And depending on how you parse that sentence, the ability to “obtain…health…services” could be part of health literacy, so someone who’s whip-smart and knows exactly what she needs, but lacks health insurance, can’t be considered “health literate”?  Something about this doesn’t really make sense for me.

    Update: Rachel Waldman, in the comments, recommends a much better resource, from the National Network of Libraries of Medicine, for understanding the concept of health literacy.  Among other things, this explanation clarifies that “health information” generally consists of things like prescription instructions, patient care information sheets, and the like — so, reliable health information.  That’s good.

  3. See this cartoon for an excellent summary of the recent Andrew Wakefield affair.
  4. This comment leaves aside the question of whether a non-university-affiliated person would have access to any or all of the journal articles cited by the EWG, which is a whole separate issue. -ed.

Uphill, both ways, in the snow

I am old enough to have used the Readers’ Guide to Periodical Literature, in print.

There. I said it.  As a matter of fact, I used print indices of various sorts right through my undergraduate degree and my first graduate program.  (Ah, those print volumes of RILM, eventually supplanted by the CD-ROM version that ran on the DOS-only computer. Good times, good times.)

But I do have a point here:  I’m actually very, very glad that I am this old, and that I have this experience under my belt.  Because now, whenever I use a database to search for articles, I have a very clear mental model for what I’m doing: what exactly is contained within the database, how it’s searching, what it’s finding.  And quite frankly, I don’t think you can use a print index and not have a very clear mental model for the process of indexing the periodical literature.

My friend and former colleague Kim Duckett talks and teaches a lot about the two processes of scholarly research: discovery and access.  Discovery is what happens when you’re searching an index: you’re discovering what has been written about the topic. Access is getting your hands on the full text of whatever it is that you’ve found.  The advent of full-text content in online bibliographic databases has elided the distinction between the two processes somewhat; sometimes so much so that students are unwilling to pursue articles that aren’t in full text in the database they’re searching (or don’t realize that they can pursue those articles).  And I’m not sure that’s an entirely good thing.

When you’re working with a print index (this is the part of the post to which the title refers), the research process is pretty straightforward:

  1. Identify articles/items of interest.  Collect them into a list.
  2. Identify which of those items you have immediate access to, and go get them.
  3. Of the remaining items, prioritize which ones you want to pursue.  This generally involves a cost/benefit analysis: is it worth waiting for Interlibrary Loan? (how fast is your ILL? is there a cost to the user? etc.)  Is it worth going to another library to get it? (how far is that other library? will you be going there anyway? etc.)

It’s that third step that’s important here: you’re constantly thinking critically about your articles/items and their relative value to your overall research project, constantly re-evaluating your priorities, etc.  And this, I think, is a step that gets largely lost in the process when it’s facilitated by full-text access.  The tendency to grab the first full-text items you can find (what I think of as a “pillage and plunder” approach to building a bibliography) seems to be much, much more overwhelming in this environment.1

And this, finally (FINALLY) brings me to my point — or, part of my point — which is that recently I re-discovered a fascinating article by Martin Gordon called “Article Access — Too Easy?” published in a book entitled Serials Librarianship in Transition: Issues and Developments in 1986.  Yes, you read that correctly: nineteen eighty-six. Nearly a quarter-century ago.

And Gordon makes essentially the same point, though some of his language and ideas are charmingly quaint when viewed from this perspective (due primarily to the fact that online searching at that time was largely mediated by librarians).  His concern is largely that the tremendous leap in accessibility of material via online searching will lead to research papers becoming “an exercise in seeing how many citations they can append to their essay in the hope that quantity will either add to its substance or hide the lack thereof” (170).

And lo, it came to pass.

His other concern is that the greater accessibility of the journal literature will lead to undergraduates over-using it, in situations where books might be more appropriate:

Unlike monographic sources that tend at the undergraduate level to provide overall views of a topic, periodical articles are apt to be as pieces in a landscape of possible sources that require careful selection and placement in order to be of value. … How well they mesh with one another as well as their ability to update or expand the monographic choices are of primary importance in selecting them. (171)

Which leads me to my other point (FINALLY), which is that his concerns have, at least to some degree, come to pass.  And this is not entirely a bad thing: undergraduates should work with the primary research literature in their field.  But so often, the primary research literature is written at a level that’s far beyond their comprehension, especially in their first couple of years, and/or it’s exactly as Gordon describes above: one very narrow slice of a much, much larger picture.  And that’s not always the best kind of information for what undergraduates need.

Now, thinking about the kinds of information that undergraduates need got me thinking about Barbara Fister and her colleagues’ recent study of the contents of aggregated multidisciplinary databases (PDF), and librarians’ assessments of the value of those databases.  They found, unsurprisingly, that aggregated databases tend to pad their offerings with journals of dubious quality, and/or highly specialized or technical journals whose value to undergraduates is questionable.  They also found, however, that librarians were generally satisfied with these databases and didn’t want to see them restricting or reducing their contents to better meet undergraduates’ needs.

I’m not precisely certain, but I think I may have taken the survey that Barbara and her colleagues administered as part of the study.  In any case, I probably would have sided with most of the librarians in the study for one simple reason: I don’t want vendors making the decisions about what to include and what to exclude; I want librarians making those decisions.  And this is the wonderful bit of history that I learned from Barbara’s article: at one time, we did.  The periodicals to be indexed in general indices2 like the Humanities Index and the Social Sciences Index from Wilson were voted on by the subscribing libraries (275).

Partly this was for practicality’s sake: in an age when interlibrary loan was much more cumbersome, libraries wanted their own holdings to be foremost in an index’s contents, and before online catalogs, it was much simpler for librarians to report this information than for the vendor to assemble it themselves.  But there was also a pedagogical/collection development component at work here: librarians understood which journals were more appropriate for their undergraduate students’ work, and prioritized those journals for inclusion in the indices.

Maybe this makes me hopelessly old-school and out of touch (see above re: “used a print index”), but I’m not sure that such a thing is such a bad idea after all.  Maybe instead of an “undergrad” checkbox, we need a whole separate “undergrad” database?


  1. Digression: my background of having used print indices is perhaps one reason why I feel less strongly than many of my librarian colleagues that library tools (online catalogs, databases, etc.) that require instruction to use to their fullest extent are the devil’s work.
  2. One of the things I love about Gordon’s article is his use of the historically-correct plural of “index.”

Changing my game plan, slightly

Something like half of the one-shot instruction sessions I do follow the same pattern: the faculty member wants me to teach the students “how to find (scholarly) journal articles.” During the first couple of semesters I was in this position, I gradually worked out a lesson plan that works pretty well for this:

I start with an exercise on taking a topic phrase like “reducing juvenile delinquency through after-school sports” and translating that into a database-friendly keyword search like “juvenile delinquency AND (sports OR athletics).” Then I have the students go through the same process with their own topics on a structured worksheet, working in pairs, and we put one or two examples on the board to discuss them. No computers — mine or the students’ — are used in this portion of the class.1
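
(Just to make that translation concrete, here’s a toy sketch in Python, purely illustrative and not something I show in class: each concept becomes an OR-group of synonyms, and the groups are joined with AND.)

    # Purely illustrative: build a database-style Boolean search string from a
    # list of concepts, where each concept is a list of interchangeable synonyms.
    def boolean_search(concepts):
        groups = []
        for synonyms in concepts:
            if len(synonyms) == 1:
                groups.append(synonyms[0])
            else:
                groups.append("(" + " OR ".join(synonyms) + ")")
        return " AND ".join(groups)

    # "reducing juvenile delinquency through after-school sports" becomes:
    print(boolean_search([["juvenile delinquency"], ["sports", "athletics"]]))
    # juvenile delinquency AND (sports OR athletics)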

Then I turn on the instructor’s computer and projector, and do a short demo of whatever the relevant database is for the class. 99% of the time it’s an EBSCO database, due to the intersection of the classes that tend to do instruction (Psych, Education, and Communication are our biggest customers) and the particulars of our database subscriptions.  Lately I’ve been having a student “drive” the computer while I stand in front of the screen and point and talk.

Then I turn them loose, either individually or in pairs/small groups, to search on their own for the remainder of the class (usually between 5 and 15 minutes), while I circulate and try to solve problems in the room.

This has worked relatively well for the past year or so, but I’ve become increasingly dissatisfied with the database demo portion of the class: I feel like it’s too lecture-y and I’d like to get away from it.  Here’s a chronology of my thinking on this:

  • I started being involved in some online conversations with librarians I respect and trust, who were talking about how much they’d moved away from the more mechanistic, “click here, type here, now use this menu option…” aspects of demonstrating databases, and how much they had shifted over to letting the students figure out the mechanics of the database interface themselves.  I wanted to move in this direction, but just wasn’t quite sure that our students were up to the challenge.
  • A colleague of mine gave a presentation on using blogs and wikis with her classes — her extremely humanities-based classes2 — and explained that she really doesn’t teach the students how to use the blog and wiki tools’ interfaces, and the students are generally able to figure them out just fine.
  • I heard a presentation at LOEX just a few weeks ago, where Emily Mazure talked about asking students who were about to attend one-shot library instruction sessions to view a couple of tutorials ahead of time, so that certain basic concepts wouldn’t need to be addressed in the session itself.3
  • Also at LOEX, I heard a fantastic keynote address by Brian Coppola that, among many other fabulous things, introduced me to research from cognitive psychology that suggests that people learn stuff more deeply and thoroughly if they think they’re going to have to explain it to someone else (regardless of whether they ever actually do so) than if they’re just learning it for themselves.

Whew. That’s a lot of influences and relatively random, unconnected conversations and presentations.  But what it all convinced me of was this: students can probably figure out the basics of the EBSCO search interface on their own, especially if they get a little bit of pre-preparation via a tutorial of some sort; and having them figure it out on their own, and then report back to their classmates, is probably much more effective than me talking at them and pointing at the screen.  So here’s what I’m planning to try this fall:

  • Before the session, ask the faculty member to ask/require the students to watch NCSU’s “Article Databases in 5 Minutes,” and/or NCSU’s “Peer Review in 5 Minutes,” and/or a tutorial (to be determined, possibly by EBSCO themselves, but definitely not created in-house, so as not to reinvent the wheel) on the basics of the EBSCO search interface.
  • Keep the Boolean/keywords section as is, with lecture/discussion followed by working in pairs, followed by group discussion of 1-2 search strategies.
  • Then put them in groups and give them 5-10 minutes to find appropriate/peer-reviewed articles on one of the topics we’ve just discussed in class, not on their own topics.4 I’ll give them directions for how to get to the database, but nothing beyond that.  And I’ll tell them they should be prepared to teach the rest of the class what they learned through this process.
  • After the 5-10 minutes, call 1-2 groups up to the podium to teach what they’ve learned.  Make sure to fill in any essential gaps that they’ve left out (e.g., if nobody mentions the “peer-reviewed” checkbox, make sure to mention that).

I have no idea if this will work better than what I was doing before. But, it’s different, and it’s less me-focused, which can only be a good thing.  I’ll try to remember to post about how it’s working sometime in the fall.


  1. Also, the word “Boolean” is never uttered in class.
  2. A major part of the mythology of her department is “I’m majoring in Humanistic Studies because I hate/fear/suck at computers.” Needless to say, there is considerable angst when the students discover they’re going to have to use them for more than just Microsoft Word in her classes.
  3. Now, the concepts she chose to address through tutorials aren’t the same concepts that I would have chosen, and there was some complicated stuff with pre- and post-tests that I think was mostly there to measure the effectiveness of the tutorials, but the basic concept was sound.
  4. This is somewhat counterintuitive — why not have them search on one of their own topics? — but necessary, I think. First, because otherwise the groups would spend a good deal of time deciding whose topics to use; and second, because even after the worksheet exercise, they still manage to come up with lousy search terms. If they’re using search strings that we’ve already discussed, I’ll be able to head off wildly non-useful search strings at the pass and they’ll have a better chance of getting good results earlier in the process.

Technology and Learning Outcomes

This is not the blog post you think it’s going to be.

Walt Crawford has a great post up on his blog about the choices he makes about technology for his own use.  It’s a great post not just because his specific choices mirror mine in many ways, or because his decision tree about whether to adopt a new technology or upgrade an existing one largely mirrors mine as well, but because he so clearly explains that decision tree.  Walt doesn’t ask “what can this new tool do?” but rather, “what do I want to do, and which tools will help me do that in the most efficient way possible?”

But the reason I’m posting about this is because in the shower this morning (really) I realized that his decision process about technology is nearly identical to the decision process we use to determine the learning outcomes for a class, course, or program.  Instead of asking, “what do we need to cover?” we ask, “what should the students be able to do at the end of this session/course/program?”

So for example, instead of saying “hey, I need a smartphone!” I ask, “what do I need in a phone?”  When the answer is “the ability to call for help and be reached in an emergency,” the choice is clear: my $8/month cheapie phone from Virgin Mobile is perfect.  But when I say, “hey, I’ve got a toddler, and he does cute things, and I want to capture that on video,” I go out and discover that (relatively) inexpensive and (very) easy to use video cameras are available.1  So, purchasing a video camera is a logical response to what I want to do.

Likewise, instead of saying, “we need to cover reference books, the catalog, at least three databases, interlibrary loan, and explain about plagiarism in this session,” we ask, “what do students need to be able to do for this assignment?”2 And then we have them practice doing just that.

So I’m not really sure where I’m going with this, but I thought I’d put it out here anyway.


  1. This can sometimes lead to a chain reaction, whereby I discover that my 5+ year old computer isn’t really capable of handling the files produced by that camera, so I have to consider upgrading a computer that has previously done just fine for me. And then I discover that the manufacturer is so caught up with its latest toy that they haven’t bothered to update the product line that I’m interested in….but that’s a whole ‘nother blog post.
  2. Often, the answer to this question is, “find books, find articles, request stuff from ILL, use reference books, and not plagiarize” in which case the fallback position is: “we can’t do all that in an hour.” This is a problem.

SafeAssign vs. Google for plagiarism detection

I’m gearing up for a conversation/presentation with faculty on our campus about SafeAssign, the “plagiarism” “detection” tool (more on those quotes in a moment) that’s integrated into Blackboard, so I’ve been doing some testing to see how it compares with Google for finding and sourcing suspicious passages.

But first, some definitions:  I put both “plagiarism” and “detection” in quotation marks in the previous paragraph because I don’t think SafeAssign does either one.  First of all, it’s not smart enough to detect when a student is properly citing a source, so any quoted material will be flagged as suspect.  This is fine as long as you’re reading the report carefully, but it could very easily be confusing to a student.  Second, calling what SafeAssign does “detection” can be misleading, since (as I’ll show below) SafeAssign doesn’t detect all plagiarized material and can also flag common turns of phrase as plagiarized when in fact they’re just that: common.

So what I did was this: I put together a document that consisted entirely of plagiarized passages from resources I had at my disposal.  I tried to get as many different kinds of sources as I could, stopping short of patronizing online “term paper mills,” since I wasn’t about to spend any money on this project.  I used passages from Wikipedia, the open web, full-text articles from EBSCO’s Academic Search Premier, JSTOR, Project Muse, Biography Resource Center, and Oxford Reference Online, as well as a book from Google Books and a print (gasp!) reference work.  For the online resources, I tried to get a mix of HTML and PDF texts.  (This required re-typing some text from a JSTOR PDF!)  Most sources were quoted word-for-word, but I did some bad paraphrasing as well.1

(If you’re interested in the document I came up with, it’s here (Word doc). Fair warning: it makes no sense whatsoever, but that wasn’t really the point.)

Then I ran the paper through SafeAssign, and also checked representative sentences/phrases with Google to see what it could find.

The results were very interesting:  SafeAssign indicated that 66% of the paper was “suspect” and identified 7 sources that matched the text (of the 15 separate passages, it identified 10).  Google found 8 of the 15 passages.2  What was interesting was the patterns in what SafeAssign and Google could, and couldn’t, find:

  1. On the whole, SafeAssign was much better at identifying paraphrased passages than Google, though an expert Googler could probably identify more than just the 8 passages that my method found.
  2. Google did much better than SafeAssign on content from JSTOR and Project Muse (SafeAssign didn’t find any of this content).
  3. Oddly enough, all the content that SafeAssign detected was attributed to web sources (e.g., answers.com, university web sites), even when I had originally found that content in licensed databases.3  This is interesting to me in that one of the main selling points for subscription-based plagiarism detection tools (like TurnItIn.com, which has leaned heavily on this in their marketing) is that they can compare student papers against sources (like licensed databases) that are not on the open web.
  4. Finally, I noticed at the very bottom of the SafeAssign report that there’s a little logo that says “powered by Windows Live Search.” Make of that what you will.

So that’s what I learned.  I’ve always had serious concerns about the ethics of using SafeAssign, TurnItIn.com, and other such tools, and I’ve had suspicions about their effectiveness, but now I have some (not very hard) (actually pretty squishy, but evocative) data on that question.
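
(For the curious, here’s a toy sketch in Python of the kind of word-for-word matching that my Google method approximates: normalize the case and whitespace of a suspect phrase, then see whether it appears verbatim in a set of candidate source texts. The file names and text snippets below are invented, and this is emphatically not how SafeAssign itself works.)

    # Toy verbatim-overlap check; the phrase and source texts are invented.
    import re

    def normalize(text):
        # Lowercase and collapse every run of whitespace into a single space.
        return re.sub(r"\s+", " ", text.lower()).strip()

    def find_verbatim_matches(phrase, sources):
        # Return the names of the sources that contain the phrase word-for-word.
        needle = normalize(phrase)
        return [name for name, text in sources.items() if needle in normalize(text)]

    sources = {
        "open_web_page.txt": "... any quoted material will be flagged as suspect ...",
        "licensed_database_article.txt": "... an entirely different passage of text ...",
    }
    print(find_verbatim_matches("any quoted material will be FLAGGED as suspect", sources))
    # ['open_web_page.txt']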


  1. As an aside, this was all kinds of fun!  Trying to construct bad paraphrases was particularly challenging and weirdly satisfying.
  2. My method for Google searching was to take a single sentence or longish phrase and search for it in Google with quotation marks around the whole phrase. If Google found the passage in the first page of results, I counted it.
  3. In at least one case, I used text from a scholarly article, which article then was quoted (properly!) in a senior thesis, which was online at the student’s college, and SafeAssign attributed my text to that senior thesis!

Movers and Shakers 2010: Congrats, folks!

It’s that time of year again: Library Journal has just released its list of “Movers and Shakers” for 2010.

This year, the Library Society of the World (better link here) was well represented, with Movers & Shakers Maurice Coleman, Matt Hamilton, Jason Puckett, and Andy Woodworth.  But most notably for the LSW, our very own Steve Lawson and Josh “Sheriff” Neff were honored for . . .  well, founding doesn’t seem like quite the right term for an organization as disorganized as the LSW.  Perhaps catalyzing is a better term.  Anyway, they’re two of the driving forces behind the LSW, and the award couldn’t have gone to a better duo.  Congratulations, carping nerdboys!

LSW aside, I also want to draw people’s attention to another Mover & Shaker this year, Bonnie Tijerina.  Bonnie and I were Fellows at the NCSU Libraries together (this brings my list of one-degree-of-separation M&S up to six) and I remember when she was starting to organize the first Electronic Resources and Libraries conference.  I know a number of people who attended this year’s conference in Austin and had a terrific experience there, but I had no idea that was her conference, and that it had grown so much.  I’m not surprised, though, and I’m so happy that she’s being noticed by her colleagues and by LJ for this essential work for the library community!