Article Review 2 – Lori Sowa
When adequately facilitated, asynchronous discourse can be an effective learning tool that is unique to the blended or online classroom. In my brief experience with both taking and co-teaching online courses, I've encountered a few variations on how the discourse can be structured. Writing prompts have been a helpful means to focus the discussion, and specifying the number of required responses signals to participants how much collaboration is expected. Intrigued by a comment from the recently reviewed meta-analysis regarding the inherent advantage of this type of discourse, I decided to dig further into the topic of how best to structure and facilitate online discussion groups.
In Tagging Thinking Types in Asynchronous Discussion Groups: Effects on Critical Thinking, Schellens et al. (2009) studied the effect of requiring students to use specific scripts when posting to a discussion group to describe their underlying thought process. The study included a very small sample size: 35 students from a junior-level undergraduate class on instructional strategies. The class was randomly divided into 6 groups (4 experimental and 2 control) and required to participate in a discussion group debating different perspectives, possibilities, and limitations of e-learning. Participation in the discussion group was a formal part of the students' grade, and they were required to post at least five messages over a two-week period. The assignment was identical for each group, except that the experimental groups were required to tag each post using "thinking hats" adapted from those developed by De Bono (1991). As an example, the description of one of the six hats in Schellens et al.'s (2009) article reads:
The blue hat is the color of the sky high above us. This hat stands for a reflective perspective to see whether the right topic is addressed. What is relevant? Defining what to think about and deciding what is to be reached. (p. 81)
At the conclusion of the two-week period, the messages were coded using Newman et al.'s (1995) scheme, which identifies ten critical thinking categories. The authors found evidence of critical thinking in both groups, but significantly more positive indicators (and fewer negative indicators) of critical thinking in the experimental groups using the thinking hats.
Overall, the experimental design was rigorous and grounded in a sound theoretical framework. Coding of the individual posts was performed by two raters, with interrater reliability tested and found to be reasonable. The sample size was quite small, however, so repeated experiments involving more students over longer discussion time frames would provide more representative results. It would also be interesting to survey the participants about their perceptions of the value of the discussion group, and to perform longitudinal studies to see whether this method improves these students' critical thinking in future discussions.
It is difficult to infer from the text of the article the exact context and wording of the assignment, or what specific guidance the class received regarding expectations for the content of the posts. But simply by providing the descriptions of the thinking hats, along with instructions to use the full range of hats, the experimental groups received additional instruction and guidance compared to the control groups, steering them toward aspects of critical thinking that would then be counted by the coding model. Requiring a tag on every post likely reduced or eliminated irrelevant posts in the experimental groups. This is a good thing overall for meeting the goal of the assignment, but it also likely skewed the outcome in favor of the experimental groups. Perhaps a better measure of the effect of this specific tagging scheme would have been to discuss the idea of critical thinking in general with the entire class, but then to require only the experimental groups to use the thinking hat tags in their posts.
Any pedagogical tool used in any classroom must take the audience and the intended learning outcomes into account during the course design phase. The level of direction from the instructor to the students is one of these considerations, and finding the balance between being overly specific and being too vague in assignments can be tricky. The results from this study are promising in terms of providing a system that effectively scaffolds students to be intentionally critical in their thinking when posting to online discussion groups. I can see using this or a similar method, particularly for students new to discussion groups, but even for more advanced students.
De Bono, E. (1991). Six thinking hats for schools: Resource book for adult educators. Logan, IA: Perfection Learning.
Newman, D. R., Webb, B., & Cochrane, C. (1995). A content analysis method to measure critical thinking in face-to-face and computer supported group learning. Interpersonal Computing and Technology, 3(2), 56-77.
Schellens, T., Van Keer, H., De Wever, B., & Valcke, M. (2009). Tagging thinking types in asynchronous discussion groups: Effects on critical thinking. Interactive Learning Environments, 17(1), 77-94.