In discussions of the ability to locate, evaluate and use information, information literacy is the term that dominates the literature. Information fluency appears less frequently in print, and while the two terms certainly overlap, are they significantly different?
There may be some qualitative differences, but quantitatively it is challenging to draw a line between literacy and fluency. From a common-experience perspective, it could be argued that a fluent person, as opposed to one who is merely literate, possesses greater linguistic power: they know and use more words, they understand nuance, and they do so with greater ease. A fluent speaker sounds different from a literate one. Given such a linguistic distinction, when it comes to task performance involving digital information, does a fluent searcher behave differently from a literate one? If so, how?
To gain some ground on this matter, consider a related performance question: is there a minimum level of digital information literacy, and if so, what is it? A percentage continuum may be helpful. For present purposes, imagine the percentages represent an individual's ability to successfully execute a variety of search tasks (including evaluation and citation) of intermediate difficulty.
Literacy must start somewhere above 50%; otherwise failures at best equal successes. A sustained success rate of 100% is unrealistic: there are too many variables beyond one's control to maintain flawless performance. The best searchers sometimes fail. Assuming similarities exist between the constructs of linguistic literacy/fluency and information literacy/fluency, literacy becomes fluency somewhere between 50% and 100% as one's task performance improves.
The tipping point between literacy and fluency is not known or universally agreed upon. The same is true of illiteracy and literacy. Knowing where one stops and the other starts may be unimportant, as long as students succeed at least as much as they fail. If grades for information literacy are ever assigned, however, knowing where to draw the line between success and failure becomes more important.
Let's agree on one thing: while fluency may be preferred, literacy is the primary goal to be achieved. Numerous studies have shown that students lack information literacy and that it does not develop without intervention.
The Center for Talent Development at Northwestern University partnered with 21st Century Information Fluency (21CIF) to assess the search, evaluation and citation performance of students participating in its middle school and high school summer enrichment programs. The previously unpublished results support the claim that the majority of students considered “above average” academically are illiterate in terms of digital information retrieval, if that mark is set above a 50% success rate. In a representative data set of over 400 high school students, 70% were unsuccessful on half or more of 10 search tasks.
These results underscore both the need for and effectiveness of training in digital information retrieval and evaluation. Knowing that these students represent the top 1-5% of their class in terms of academic achievement, the percentages for “average” students on the same tasks could well be lower.
If the goal of instruction in information literacy is to succeed more often than one fails, then a score of 60% may be a good place to draw the line. Evidence from the chart below suggests that task performance tips at 60% before and after instruction. There is a marked drop in the number of successful cases from 50 to 60 percent before intervention. Following instruction, there is a corresponding rise in task success at 60%.
This task performance data shows the shift before and after instruction.
Finding a tipping point between literacy and fluency is less clear. If the cutoff between literacy and fluency is set at 8 tasks (80% success), about 27% of these students would be considered fluent on the Posttest. Prior to instruction, only 5% of students were successful at this level. Whether a threshold of 80% should be considered the mark of fluency is open for discussion. The 80% number has been used by 21CIF for over a decade as a hypothetical threshold of fluency. The only reason for picking 80% is that it coincides with successfully completing search assignments most of the time, leaving room for error due to an inability to guess what keywords an authority might use, searching the wrong database, encountering content that is intentionally misleading, and so on.
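The two thresholds discussed above, 60% for literacy and 21CIF's hypothetical 80% for fluency, can be sketched as a simple classification. This is an illustrative sketch only: the function name, labels, and sample scores are assumptions, not part of 21CIF's assessment instrument.

```python
def classify_searcher(successes, total_tasks=10):
    """Label a searcher by success rate on a set of search tasks.

    Uses the cutoffs discussed in the text: 60% or better counts as
    literate, and 80% or better (21CIF's hypothetical threshold)
    counts as fluent. Labels and cutoffs are illustrative.
    """
    rate = successes / total_tasks
    if rate >= 0.8:
        return "fluent"
    elif rate >= 0.6:
        return "literate"
    else:
        return "not yet literate"

# 8 of 10 tasks meets the 80% cutoff; 6 of 10 meets only the 60% cutoff;
# 5 of 10 falls below the literacy line drawn in the text.
print(classify_searcher(8))  # fluent
print(classify_searcher(6))  # literate
print(classify_searcher(5))  # not yet literate
```

Note that a student succeeding on exactly 8 of 10 tasks lands on the fluency side of the line, matching the text's description of the posttest cutoff.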
More work could be done to analyze the types of tasks the 60% segment performs better than the 50% segment, and likewise for the differences between the 70% and 60% segments and between the 80% and 70% segments. Findings could shed light on what separates literate from illiterate searchers, as well as on differences between fluent and literate searchers, although the latter is not a priority. The driving goal is achieving information literacy, since the majority of students fail at moderately challenging search tasks typical of research assignments in school.
Returning to the question of what constitutes literate task performance, there is evidence of a tipping point between 50 and 60 percent both before and after instruction. Providers of information literacy instruction may want to consider this as a measure of an effective intervention.
Distinguishing between literacy and fluency, on the other hand, is more elusive and less important than helping students first attain literacy. There is no evidence yet that fluency is necessary for normal schoolwork, although that could become clearer with more testing and examination of assignments that require digital research. Perhaps fluency, like linguistic skills, takes time to develop and is only really needed for more demanding tasks such as graduate-level or professional-grade research. Whether searching becomes easier or harder as search engines evolve and the digital landscape increases in complexity could impact the need for fluency.
If you would like to test your students’ search abilities, consider giving them this pretest (available with a site subscription). Try a sample question for free.
For more on the differences between literacy and fluency, read Competent, Literate, Fluent: The What and Why of Digital Initiatives by Mo Pelzel.