Assessing Information Literacy

Failing an Information Literacy assessment recently--here's my post about it--raises two questions: "what is information literacy?" and "how is it best assessed?" The former is largely answered by the Digital Information Model on this site. That aside, various proponents cast a wide net around a number of independent skills and dispositions on which information literacy depends, and this causes confusion when attempting to measure it.

Information Literacy is not completely dependent on language literacy

Start by imagining you need to search in a language you don't understand. Pick one you don't know: Finnish, Mandarin, Arabic. A native English speaker is at a disadvantage using a search tool in an unfamiliar language. Suddenly you are illiterate, but not completely lost. It's still possible to recognize sites in all three languages as search engines: they resemble Google, Yahoo, Bing, DuckDuckGo, etc., just without familiar words. A first instinct is to look for the translate button, which all these sites have, in order to continue in English. The search box and search button are easily recognizable, given previous experience using a search engine. When results are displayed, it's easy to figure out they are responses to a query.

Digital information literacy helps make sense of an otherwise indecipherable page:
  • Hyperlinks--knowing that clicking one will take you to other information.
  • Abstracts and relevance--knowing that search results are merely portions of information and that those at the top of the list are stronger matches for a query (even if you can't understand them).
  • Icons--universal symbols for filters, images, video, volume, and so on.
  • Site logo--clicking the logo image in the upper left navigates to the home page.
  • Browser tools--regardless of the site, the browser software (Firefox, Chrome, etc.) still works and helps with tasks like going back, translating, and reading the URL.

Poor language skills do not sufficiently explain Information Literacy failures

While the exercise above demonstrates how information literacy does not completely depend on linguistic literacy, there are many ways in which it does. Knowing good keywords to use in queries, finding better keywords in search results, and judging whether search results are credible are some of the ways information literacy depends on understanding the language used. But a failure to use good keywords is not necessarily a linguistic failure. The problem could be laziness or not caring. Not knowing that better keywords could be used, and not knowing how to find them, are two distinct information literacy failures. Those understandings, both of which can be measured, apply even to completely unfamiliar languages (although acting on them does depend on knowing the language).

Another key understanding that doesn't necessarily depend on language is knowing that hovering over and clicking a hyperlink leads to additional information. Many browsers display the URL of a hyperlink when the cursor hovers over it; when a hyperlink is clicked, it takes the user to a new URL. Language ability comes into play when one needs to discern whether the resulting information is helpful or not (unless the information is pictographic). A related competency, evaluation, requires language proficiency to distinguish a factual result from a fabrication. But knowing that evaluation is necessary, and knowing techniques to test the veracity of information, depend as much on knowing and using an effective search process as on the ability to comprehend the language being used.

Forging a 'clean' information literacy assessment

A true assessment of information literacy should be careful not to confuse the ability to read and write with the ability to apply search strategies and techniques. Digital research skills are independent of language ability in a number of ways. The problem occurs when an attempt to measure information literacy unintentionally measures other literacies: the ability to make sense of words, images, and even the specialized subject matter of the assessment tasks.

    For example: "What steps should an unemployed person take who needs money to make ends meet? Click all the options that would be good for paying monthly bills. Here are the choices: talk to a credit counselor, purchase a lottery ticket, find a paying job, take out a loan, don't pay the bills."

Should an item like this be part of an information literacy assessment? All five choices clearly measure financial literacy. I do like the use of a scenario to provide context for a search assessment, but I don't believe such an item is relevant as an indicator of digital information literacy. An item like this appeared on the assessment I failed. When I intentionally submitted answers that would fail, the final report disclosed the purpose of the assessment item: to measure a student's ability to 'define a problem, formulate a question, or identify a decision that needs to be made.' None of the choices pertain to digital searching.

Testing search box strategy, not other literacies

What task shows unambiguously whether a student can define a problem, formulate a question, or identify a decision that needs to be made? A search question is typically a good place to start. Using a different financial scenario, any number of questions could serve as prompts:

"Using the Internet..."
  • "how may Robin find the best price for a fingerboard? (almost any item will work)
  • "what was the highest value of Bitcoin in the last 12 months?
  • "who is currently the richest person in the world?

Enter a query in the search engine box provided.

In the financial example about paying bills, the student was not allowed to define a problem or formulate a question--it was provided by the assessment. In any of the "Using the Internet" examples, the literacy challenge is to convert the question into a query. This task involves knowing how to formulate a query, which is a step beyond formulating a question or defining the problem. Many outcomes are possible, from copying and pasting the question word-for-word, to selecting keywords from the question, to substituting other keywords. It's even possible to use keywords that aren't suggested by the challenge question. What's harder about this more authentic information literacy assessment is that care needs to be exercised in scoring the student's query. Here is a suggested scoring guide, with feedback for the student.

"Passing Responses"

  1. WORD-FOR-WORD QUERY: shows a basic understanding of how to use a search engine
  2. EXISTING KEYWORD QUERY: shows a deeper understanding how search engines work
  3. APPROPRIATE KEYWORD SUBSTITUION QUERY: demonstrates information fluency using search box strategy
"Failing Responses"

  1. INAPPROPRIATE KEYWORD SUBSTITUTION QUERY: shows a lack of understanding of the search task
  2. NO QUERY

Feedback for the Student, depending on which type of query was provided.

  1. You have a basic understanding of how to use a search engine. Copying and pasting questions does not demonstrate you can define a research problem.
  2. You demonstrate an intermediate understanding of how to turn a question into a query, which indicates you can define a research problem. This ability may be improved by finding better keywords in the search results to use in follow-up queries.
  3. You demonstrate fluency in turning a question into a query by using keywords that aren't found in the question but that may be more effective in finding the desired information.
  4. The search terms you entered demonstrate you may not have understood the question.
  5. By not answering the question, you did not demonstrate whether you can turn a question into an effective query.
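
For anyone automating this kind of scoring, here is a minimal sketch in Python of how a student's query might be sorted into the categories above. It only automates the mechanical distinctions (no query, word-for-word, existing keywords, keyword substitution); deciding whether a substitution is appropriate or inappropriate still takes a human scorer. The function name and labels are illustrative assumptions, not part of any existing assessment platform.

    def classify_query(question: str, query: str) -> str:
        """Roughly sort a student's query into one of the rubric categories."""
        question_words = question.lower().replace("?", "").split()
        query_words = query.lower().replace("?", "").split()

        if not query_words:
            return "NO QUERY"
        if query_words == question_words:
            return "WORD-FOR-WORD QUERY"
        if all(word in question_words for word in query_words):
            return "EXISTING KEYWORD QUERY"
        # Some terms do not appear in the question; a person still needs to
        # judge whether the substitution is appropriate or inappropriate.
        return "KEYWORD SUBSTITUTION QUERY (needs human review)"

    # Example, using the Bitcoin prompt from the list above:
    question = "What was the highest value of Bitcoin in the last 12 months?"
    print(classify_query(question, "highest bitcoin price past year"))
    # prints: KEYWORD SUBSTITUTION QUERY (needs human review)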

Hopefully this example encourages more careful crafting of assessment tasks. There are many more digital information fluencies to measure. For more on this and a discussion of how to administer an assessment, see Curriculum Matters in this issue of Full Circle.

Information Literacy is not something you have to get right the first time

Finally, one of my concerns about the assessment I "failed"--and the same thing happened the second time I took it--was that the "right" answer only counted on the first attempt. I could continue to click and explore the assessment sample page, but apparently only my first attempt was logged. The truth is, information literacy, or its more robust sibling, information fluency, does not depend on getting it right the first or even the second time. What matters is that you "get it right" as the result of a process. Most searches are like this: you don't get the answer or information you need on the first attempt. Assessments should allow for this.
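
To make that concrete, here is one way a process-friendly item could log and score every attempt, sketched in Python. The class, its fields, and the scoring rule are assumptions for illustration, not a description of the assessment I took.

    from dataclasses import dataclass, field

    @dataclass
    class AttemptLog:
        """Records every attempt a student makes on an assessment item."""
        attempts: list = field(default_factory=list)

        def record(self, attempt: str) -> None:
            self.attempts.append(attempt)

        def score(self, correct: str) -> dict:
            """Credit getting it right as the result of a process, not only on try one."""
            return {
                "solved": correct in self.attempts,
                "attempts_used": len(self.attempts),
                # First-try success is diagnostic detail, not the pass/fail criterion.
                "first_try": bool(self.attempts) and self.attempts[0] == correct,
            }

    # Example: the student revises the query twice before finding the answer.
    log = AttemptLog()
    for attempt in ["bitcoin price", "highest bitcoin price", "highest bitcoin value last 12 months"]:
        log.record(attempt)
    print(log.score("highest bitcoin value last 12 months"))
    # prints: {'solved': True, 'attempts_used': 3, 'first_try': False}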

Curriculum Matters: Setting up and Managing Assessments

Identifying Information Fluency competencies to measure, with suggestions for administering assessments.

ActionZone: What was the highest value for Bitcoin in the past 12 months?

See how your students answer the question and how they go about it.

Assessment: Search Box Strategy

A sample question to measure your Information Fluency.
