Critical thinking and knowledge bases

A couple of weeks ago our campus hosted a guest speaker who gave a presentation on assessing critical thinking skills, especially in the context of general education.  (This was part of an ongoing project on our campus to reform our general education curriculum and move more in the direction of assessment of learning outcomes.)

The presentation was engaging and thought-provoking, but one piece stood out to me as particularly relevant to information literacy instruction.  The presenter cited research¹ that had been done to try to get at what constitutes generalized critical thinking skills.  This research had focused on expert chess players, because the researchers figured that there were few better models of generalized, disciplinary-context-free thinking skills than chess. So they investigated how expert chess players think, what drives their decision-making, how expert chess players’ thinking differs from novice chess players’ thinking, etc.

What they found, however, was that instead of anything that could be described as generalized thinking skills, what the expert players were drawing on was a vast knowledge base:  of patterns of chess moves, of strategies, of famous games, of players’ personalities and their likelihood of making certain strategic decisions, etc.

[Image: search results for “affirmative action” in Academic Search Premier (click to enlarge)]

And that got me thinking about information literacy, and specifically, the process of sifting through vast reams of search results to pull out the relevant and reliable ones.  I’ve recently become more and more aware of the incredibly dense and complex filters that I apply to a set of search results.  Try it yourself with the search results for “affirmative action” in Academic Search Premier that are linked in the image at the right:  how quickly did you pull out the popular magazine article and the editorial?  How quickly did you sort the other, more scholarly, treatments into their disciplinary pigeonholes?

It’s astounding to me how quickly, and on how little evidence, I’m able to make these kinds of decisions.  (Admittedly, I sometimes get them wrong!)  And I always have to remind myself that 18-year-olds simply do not have the knowledge base yet to do this as quickly as I do.  Even a hypothetical college senior who had achieved all of the ACRL Information Literacy Competency Standards for Higher Education couldn’t do it, at least not as quickly as I — or any of you — could.  It’s not a question of the student’s information literacy skills; it’s a question of her knowledge base, or lack thereof.

And that’s my question:  how do we impart that knowledge base to our students?  Can we?  (Should we, is another question, I suppose.)  Or is the best we can do to throw up our hands and say, “go out and live mindfully in the world for twenty years and then come back and ask that question again”?

  1. I’m not sure precisely what the citation is for the research; it was probably the following article, which I have not actually read, because our library doesn’t have it and I’m not about to tax our already-overburdened ILL system just to verify a citation for a blog post. Ahem. Where was I?  Oh yes, the citation:

    Perkins, D. N., and Gavriel Salomon. “Are cognitive skills context-bound?” Educational Researcher 18, no. 1 (1989): 16–25.


  1. Posted November 25, 2009 at 10:53 am

    I work in a business school library. Some of our students can’t even tell the difference between a book and a periodical. I sometimes get asked whether a document with an obviously (to me, anyway) book-like title, such as “Handbook of financial econometrics,” is a magazine or not. So, identifying different types of periodicals? It’s a long shot. We do our best at the ref desk and during library instruction sessions, but the question still pops up every now and then.

  2. Posted November 25, 2009 at 2:02 pm

    Such a good point, and something I think about a lot, because the older I get the more I realize how much my purported intelligence has to do with having amassed a good deal of information — through reading books, through having academic parents, through school to some extent, and through random life experience. You can’t design a curriculum to replicate that knowledge. I suppose I try to impart what I know as I can, and I hope those around me will do the same, because God knows there’s still plenty I don’t know.

  3. Posted November 30, 2009 at 4:43 pm

    The term “knowledge base” is troubling to me because I think it’s missing a crucial distinction. Chess players may have amassed a multi-modal collection of data banks (consisting of outcomes, personalities, moves, etc., which they can access from within their own heads), but we have all that information as well. In theory, at least, I could construct an artificial knowledge base and then compete with grand masters of chess, so long as I was granted unlimited amounts of time between moves! So I guess I’m not seeing the knowledge base as the problem; it’s the element of time. In your example above you mention distinguishing between popular and academic publishers. To me that type of filter implies a functional knowledge of network behavior. Magazines may reference the latest research in a superficial way, but academic journals will give you access to that research plus a selected body of research to support it as well. You could build a curriculum around understanding how these networks coexist and influence each other, or you could build tools to help researchers understand how the networks interact with each other. (Or both.) Here’s a good example: an online chess game which visualizes the computer’s analysis of the network of available moves. Try a couple of moves, it’s interesting. 🙂

One Trackback

  1. […] responses, and it depends on having a set of criteria you can use, and it even depends on having some previous knowledge.  I can’t teach all of that to a class of fifth graders in a one-shot session. I doubt you […]