Friday Early-Reader Blogging

It’s Friday, and more than that: it’s the last day of the semester! This particular semester has been a bit of a slog, and I’m not sure why. It’s possibly because I’m becoming more involved in more initiatives, both here on campus and in the wider librarianship community, that pull me in disparate directions. These are all good directions, of course, and things that are important and making a difference on campus and in the larger world — but they do leave me feeling like I’m squeezing my instruction into the cracks between them, rather than instruction being the core focus of my work.

But summer is upon us, and I have grand plans:

  • May will be the month when I finally write up the results of the research I did last fall and submit it for publication.
  • June will be devoted to preparing my pre-tenure review portfolio (see “publication” above!).
  • In July, I’ll be preparing to transform my presentations on assessment versus evaluation into a webcast for ACRL, scheduled for July 19.  Stay tuned for additional blatantly self-serving announcements; if I can’t pimp my webcast on my very own blog, where else can I? 🙂
  • And then August will be getting back into the swing of things and getting ready for the new semester.

So, with that in mind, here’s a gratuitous photo of an early reader:

Have a good weekend, everybody!

New guest post at ACRLog

This is just a quick post to note that I have a guest post up this week at ACRLog, entitled “Context Matters.”  Mostly I’m musing on issues of local campus and classroom contexts, and how they affect what works (and doesn’t work) in a library instruction classroom, building on my not-very-successful experiment with no-demonstration classes.

Go check it out!

Changing domain registrars, possible interruptions

Just a quick note to let you know that I’ll be transferring this domain, and the other domains that I own or manage, to a different registrar in the next few days.  I don’t know what this will mean for DNS stuff, so the blog may disappear for a day or so and then reappear.

The No-Demonstration Class, Or, Not

Wow, it’s been pretty quiet around here.  I’m sorry about that; the semester got kind of busy.

So what have I been working on this semester?  Well, to begin with, I made a real effort to move toward fewer database demonstrations in my instruction sessions. A lot of the one-shot instruction sessions that I do, after I consult with the faculty member about what their students need in order to succeed on their assignments, end up with the same basic pattern:

  1. Practice turning a topic phrase (e.g., “reducing juvenile delinquency through after-school sports programs”) into a Boolean1 search string (e.g., “juvenile delinquency AND (sports OR athletics)”).
  2. Basics of the user interface for whatever the relevant database is for the class. Because of the way our database subscriptions are configured, 9.5 times out of ten, it’s an EBSCO database.
  3. Locating the full text of an article, whether online, in print, or via Interlibrary Loan. Our link resolver, she does not have the most intuitive UI design in the world.
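The keyword transformation in step 1 can be sketched in a few lines of code. This is purely an illustration of the pattern, not anything I use in class; the function name and the idea of passing one list of synonyms per concept are my own invention here.

```python
def boolean_string(*concept_groups):
    """Join synonyms within a concept with OR, and concepts with AND.

    Each argument is a list of synonyms for one concept in the topic phrase.
    """
    clauses = []
    for group in concept_groups:
        if len(group) == 1:
            clauses.append(group[0])
        else:
            # Parenthesize OR-groups so the AND binds correctly
            clauses.append("(" + " OR ".join(group) + ")")
    return " AND ".join(clauses)

# The example from step 1: "reducing juvenile delinquency
# through after-school sports programs"
query = boolean_string(["juvenile delinquency"], ["sports", "athletics"])
print(query)  # juvenile delinquency AND (sports OR athletics)
```

The point of the exercise in class is exactly what the function makes explicit: students have to decide which words belong to the same concept (OR) and which represent separate concepts (AND) before they ever touch the database.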

Now, in the corners of Information Literacy Instruction Land where I hang out, the conventional wisdom is:

  • Students learn better by doing than by being told.
  • Today’s 18-22-year-old college student is pretty darned good at figuring out online interfaces.
  • Therefore, they can, and should, do better by figuring out how to search a database on their own.

And thus was born the Holy Grail of database instruction: The No-Demonstration Class. No more standing in front of the class and futilely waving your hands at the projector screen! No more “click here, type here, and don’t forget this radio button!”  Student-directed, active learning!  Hands-on practical practice!  Social learning, because they do this in groups!

Different librarian instructors handle this differently: some turn the students loose from the get-go and gradually move them from their inevitable initial Googling into more and more specialized resources, while others give them a head start by pointing them to the library databases to start with. Some have the students present what they’ve found to the class after a certain amount of time spent fumbling around, and have the rest of the class critique the search strategies that are presented, moving towards more and more sophisticated strategies.  Some provide more structure; some less; and some none at all.  All of them report dynamic, engaged, actively-learning classrooms, instead of classrooms full of snoozing, texting, Facebooking students.

Sounds great, right?  Of course it does! So I had to try it out for myself, and since so many of my classes seem to devolve into “how to search EBSCO databases,” and since I’m relatively comfortable with completely unstructured chaos in the classroom, I figured, what better opportunity to try it out?

Now, I should start by saying that I didn’t exactly jump in with both feet.  We still started the class with a brief discussion and exercise (in pairs) on constructing keyword searches.  I have enough experience with my students to know that, if we didn’t cover this ground first, I’d be turning students loose who would just do this to the poor unsuspecting database:

[Screenshot: using an entire phrase as a keyword search]

Or this:

[Screenshot: a single-keyword search with a far too general keyword]

Or, heaven help them, this:

[Screenshot: what would this search retrieve, exactly? I don’t even know.]

So we went through the Boolean part of the lesson plan, and then I got them into pairs or groups, sent them to Academic Search Premier, and turned them loose with one of the two search strings we worked on in class — which were derived from their own topics, not “canned” topics that I had brought into class. And…

…it kind of bombed.

What did they do? Well, they dutifully went to the database, input the search more or less as one would expect, got some results, and were promptly done.

What did I expect them to do, based on what I’d heard from other librarians was happening in their classrooms? Well, for one thing, I’d expect them to look at their search results a little more carefully than just long enough to say, “oh, there’s an article that looks good.”  Maybe notice that a lot of their results were from, say, the New York Times and think twice about why that might have happened.  I’d hope that they’d explore the interface a little, poke around, try some options, discover the limiters (limit to peer-reviewed articles, limit by date, etc.).  Heck, even making a gonzo mistake like searching by subject heading right off the bat would be good, if only because it would make a useful example to talk about with the rest of the class.

But instead, they did what they were told: no less, certainly, but also no more.

Why didn’t it work as well for me as it clearly does for many other librarians?  I really don’t know.  It’s absolutely possible — even probable — that it’s something in the way I’m presenting the task that constrains them.  Either I need to give them more direction, or less, or different directions.  It’s convenient (and plausible) to blame the existing classroom dynamics that have developed between the faculty member and the students over the course of the semester.2  It’s also convenient to blame a campus culture that is not especially hands-on, experimental, techy, or DIY (our campus is kind of the opposite of MIT). Some of the campuses where, anecdotally, I hear about the No-Demonstration Class working very well, have more of those hands-on qualities as part of their campus cultures.

So what can I do to improve on the situation?  I can’t really change the existing classroom dynamic between students and the teaching faculty member, nor can I change the campus culture.  The only thing I can change is how I approach the lesson: how I frame the activity, what I tell the students as I turn them loose, what specific prompts and questions I direct them to answer (or not).  That’s clearly where I’m falling down, and where I could use some help.  Ideally, I’d go visit a bunch of campuses where librarians are teaching this way and see how they do it, but that’s clearly not practical (hellooooo, sabbatical project!).

In the absence of a grand cross-continent tour, then, I’ll have to ask for suggestions: How do you make this work in your classroom?  What has worked for you — and not worked?  How do I get my charmingly dutiful students to break out of their constraints and experiment a little bit?

  1. I never use the word “Boolean” in class. Ever.
  2. We know that whatever dynamic and form of instruction the faculty member has established for a class will be the form of instruction that, when the students arrive in the library, they are most comfortable with. If the faculty member lectures, they’ll be most comfortable with a librarian lecturing at them, and uncomfortable with active group work.  And vice versa.

Friday Preschooler Blogging

Hi folks! I’ve got a real post in the pipeline for Monday, but it’s not quite fully germinated yet.  So in the mean time, get a load of this objectively adorable three-year-old:

[Photo: Grand-Christmas: Planets books]

Stay tuned Monday for thrills, chills, excitement, and lesson plans: I promise!

Conference Presentation Feedback, Part 8: Getting All Meta

(See this post for an introduction to this blog series.)

Well, this is the final post in the series, and I thought I would wrap up with one last thought, which is to point out that this entire series has been enabled by the minute paper assessments that I got back at the end of my two presentations.

Now, some conferences (including the leading national conference on information literacy instruction) ask attendees to complete course-evaluation-type instruments, either at the end of each session or at the end of the conference as a whole.  And in the context of a professional conference, there certainly is value, both to the conference organizers and to the presenters, to getting answers to questions like “did the presenter talk too fast?” and “were the objectives for the session explained clearly?” (Never mind “could you read the slides?”)

But consider whether that kind of evaluative instrument could have gotten at the kinds of questions that the attendees raised in my minute papers, and which I have responded to in this series of blog posts.  Would I have been able to continue the discussion and learning from the presentation without “one thing you still have questions about”?

So it’s not only in classroom settings — where “what the students learned” is as important as, or more important than, “were they satisfied” — that assessment can be preferable to evaluation. In professional development and dialogue, too, “what are you still confused about” can be a useful question, and (and this is critical) can better enable continued professional dialogue and learning.

And isn’t that what we’re all about?

Conference Presentation Feedback, Part 7: But will it get me tenure?

(See this post for an introduction to this blog series.)

One question that got asked over and over again was essentially, “how can I use this information in my annual performance review and/or my tenure portfolio?”  And that’s an excellent question!  It’s common practice for teaching faculty members to include course evaluations as part of their tenure portfolio, presumably to speak to the quality of their teaching, so how could/should this information be included?

Well, librarians at my institution are tenure-track, and I don’t have tenure yet, so here’s what I plan to do:

  • I’m going to include the transcribed minute papers from a few classes early in my career here, before I made certain changes to some aspects of how I teach certain concepts.1
  • Then I’ll include some transcribed minute papers from some classes from later in my career, after I had implemented those changes.2
  • In the write-up, I’ll describe how these sets of examples tell a story: I used to do this in the classroom, and I learned this from the assessment data, so I changed that, and now students say this about what they’ve learned.  In other words, I’ll demonstrate how I’m implementing the assessment cycle.

My hope is that by providing this information, I’ll be able to demonstrate not only that I’m a good teacher, but also that I am improving.  Evaluations can only provide the former (and are somewhat suspect as a measure of that anyway), while assessment data can demonstrate both the former and the latter — and I would argue that the latter may actually be more important than the former.

Next up: the final installment in our series, in which I get all meta about this project.

  1. This would be the flowchart that I used to use to describe how our link resolver works.
  2. This would be when I switched from using the flowchart to using a screencast.

Conference Presentation Feedback, Part 6: Self-reporting vs. “actual” assessment

(See this post for an introduction to this blog series.)

I got several comments on my presentation that were roughly analogous to this one, which I am quoting exactly (the emphasis is in the original):

So, to me, they’re still evals of what students think they’re learning, not whether or not they’re actually learning. So, it seems assessment would still need to be done…

There are several issues being conflated here, so I’m going to try to unpack them one-by-one:

  1. The commenter’s use of “assessment” at the end of the quote suggests that s/he doesn’t think that minute papers are “actual” assessment. Or, to put it another way, minute papers are not a test or exam, and therefore cannot measure learning, and therefore aren’t assessment.  I would counter that both of those premises are incorrect: things that are not tests can measure learning (as anyone who has graded an essay can attest), and therefore can be considered part of assessment.
  2. It is true that minute papers, and other informal assessment tools, do rely on students to self-report their learning.  And I do, in fact, have a lot of problems with self-reporting as a research methodology.1  If I had a nickel for every study that’s been published in a peer-reviewed library science journal that essentially relies on “students’ self-reported confidence with X” or “students’ self-reported mastery of Y,” well … I wouldn’t be as rich as if I had a nickel for every “librarians’ perceptions of X” study, but I’d still have a good chunk of change. BUT, at the same time, I also have a lot of problems with assuming that tests accurately measure learning, and especially that tests accurately measure students’ behavior in real-world situations.  Students may very well choose the correct answer on a test that asks them how to evaluate information they find on the web, but then turn right around and cite a completely bogus web site in a paper they have to write for another class.2  (Put another way: every single college student in this country can tell you what the legal drinking age is.)
  3. It’s important to note that the context in which I’m using the minute paper is formative, rather than summative.  Formative assessment is ongoing, and is used to improve instruction (and, presumably, learning) on a day-to-day basis. Summative assessment happens at the end (of the unit, course, program, etc.) and is used to determine if the students have mastered the specified learning outcomes for the unit, course, program, etc. Nobody is asking seniors, at the end of their college careers, “what’s one useful thing you learned, and one thing you still have questions about?”

Next up: But will it get me tenure?

  1. This is part of the reason I decided to remove “one thing you already knew” from my minute paper assessments: I was getting a lot of responses along the lines of “I already knew how to use encyclopedias,” to which my reaction was “yes, but do you ever use them? Even when they would be helpful? Judging by the lack of crowds in the reference stacks I have to conclude that no, you do not.”
  2. This was one of the most remarkable findings of an ethnographic research study (PDF) presented by Andrew Asher and Lynda Duke of the ERIAL Project at the Library Assessment Conference: students’ actual research behaviors differed markedly, and not in a good way, from their performance on a test that was designed to measure whether they knew how to evaluate sources of information.

Conference Presentation Feedback, Part 5: Print journals?

(See this post for an introduction to this blog series.)

So the first example I used in my presentation was a collection of excerpts from minute papers that eventually convinced me that a large portion of our students, given a journal article citation, don’t know how to find it in a collection of bound print journals, even when those journals are shelved alphabetically by title, not by call number.

One of the “one thing I’m still confused about” comments read: “I’m still confused on why you’re teaching print articles.”

Now, there are two ways of understanding this comment, and I’m not sure which one is correct:

  1. Why are you spending time teaching a very rudimentary skill that sufficiently motivated students will learn on their own, when you could be spending time teaching Big Important Information Literacy Concepts like how information gets created and disseminated? That’s a very good question. I’m not sure how to answer it, except to say that it’s clear that our students don’t, in fact, know how to do this, and I worry that they are not actually sufficiently motivated to figure it out on their own.  Some of them will, of course — they tend to be the students we don’t see all that often, because they do figure stuff out on their own.  But others won’t, and my librarianly DNA cringes at the thought of students giving up when the “perfect” source is available right in our library, just because the organizational system for print journal articles — which nobody has ever taught them — is too intimidating for them to navigate on their own.  But the comment did make me think hard about whether it’s worth spending precious class time on what is, really, a pretty rudimentary skill.  I’ve long had fantasies of developing a separate learning object, probably a video, on this topic, so that I can offload some of that work from in-person class to offline learning.  Now I’m even more motivated to do just that.
  2. Why are you spending time teaching students how to find print articles? Haven’t you converted all your print subscriptions to online by now? No. No, in fact, we haven’t converted all of our print subscriptions to online. Actually, we are just beginning to convert some of our current subscriptions to online format.  And we certainly haven’t converted our extensive backfiles of bound periodicals to online.  So, as I say over and over to our students, we still have quite a large number of articles and journals that are available in print, and only in print.  So, yes, they do still need to know how to do this.

Next up: self-reporting vs. “actual” assessment, or, formative vs. summative!

Conference Presentation Feedback, Part 4: Scalability and Sustainability

(See this post for an introduction to this blog series.)

As the sole instruction librarian at my library (among a staff of seven), scalability and sustainability are really important to me.  So when I saw comments to the effect of:

well, it’s lovely that you’re able to do this at your itty bitty liberal arts college, but we teach 500 sections of intro comp every semester, with 75 students in each section. How on earth are we supposed to transcribe all eleventy-bajillion of those minute papers, never mind actually analyze and think about them?1

I sit up and take notice.  Yes, it’s certainly true that I’m able to do things in a program of this scale that people managing a program of a much larger scale simply can’t even contemplate.  (But also, vice versa.)  But that doesn’t mean you have to write off any kind of assessment or evaluation that doesn’t rely on a scantron form.

Here are some thoughts on how to manage minute papers, and other classroom assessment techniques, in a much larger program:

  1. You don’t have to administer minute papers on paper. An online form, either hosted at your library’s or university’s web site, or hosted elsewhere (SurveyMonkey, a Google Docs form, etc.) can accomplish exactly the same thing, with no transcription required.  (Google Docs is a particularly inviting option, because of the ability to export your results in a variety of formats.)
  2. Even more importantly, you don’t have to assess every thing every time.  I said this in the previous post in this series, but it’s even more pertinent here.  A representative sample is just fine, and even a non-representative sample works well, as long as you’re continually gathering information over time.  So instead of assessing all 500 of those sections, only assess 100 of them.  Assess a different 100 next semester, and a still-different 100 the semester after that.  Gradually, you build up a comprehensive picture of what’s happening in your program, what’s working and what isn’t, and how you can change things to improve learning.
  3. Tools like Wordle and more sophisticated content-analysis tools are your friends here.  When you get enough qualitative data all in one place, you can do some really interesting things with content analysis.
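The rotating-sample idea in point 2 and the word-frequency analysis in point 3 are both simple enough to sketch in code. This is a minimal illustration, not a real assessment pipeline: the function names, the tiny stopword list, and the punctuation handling are all my own simplifications.

```python
import random
from collections import Counter

def rotating_sample(all_sections, already_assessed, k=100, seed=None):
    """Pick k sections that haven't been assessed in previous semesters;
    once the rotation has covered everything, start it over."""
    rng = random.Random(seed)
    remaining = [s for s in all_sections if s not in already_assessed]
    if len(remaining) < k:  # rotation complete; reset the pool
        remaining = list(all_sections)
    return rng.sample(remaining, k)

def top_terms(responses, n=10):
    """Crude Wordle-style content analysis: the most frequent words
    across a pile of minute-paper responses."""
    stopwords = {"the", "a", "an", "to", "of", "and", "i", "how", "in", "me"}
    words = (w.strip(".,!?\u201c\u201d").lower()
             for r in responses for w in r.split())
    return Counter(w for w in words if w and w not in stopwords).most_common(n)
```

If you collect the responses through a Google Docs form, as suggested in point 1, the exported spreadsheet column drops straight into `top_terms` as a list of strings, and the per-semester section lists drive `rotating_sample`.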

Next up in our series: those blasted print journals!

  1. I’m exaggerating for comic effect here; this paraphrase is much, much snarkier than any of the comments I actually received.