
Article Review 5: Student satisfaction

Article: Digital Learning Impact Factors: Student Satisfaction and Performance in Online Courses

Authors: Chitkushev L, Vodenska I, Zlateva T

Journal: International Journal of Information and Education Technology, Vol 4, No. 4 (2014)

Paper Overview:

This paper examined student evaluations of 93 online courses to gauge the interdependence between student satisfaction and performance in an online course and students' satisfaction with the instructor, the facilitator, and the grade distribution.

The study analyzed surveys completed by a total of 4,920 students enrolled in 93 online courses over a period of 18 semesters. Students were surveyed on their satisfaction with the course with respect to three variables: the instructor, the facilitator, and the grade distribution.

The results of this study were interesting:

  • The correlation between students' satisfaction with a course and their satisfaction with its instructor is so high that it is statistically improbable for students to be satisfied with a course whose professor they were dissatisfied with, and vice versa.
  • Students who were satisfied with their facilitator were likely to be more satisfied with their instructor, and vice versa.
  • Students are more likely to be satisfied with a course if they are satisfied with its facilitator.
  • If students are satisfied with a course, their grades in it are statistically likely to be higher.

Personal reflection:

This paper was well written and comprehensive for the reader. The dataset analyzed was large and seemed to me an accurate representation of the population.

This paper raised a basic question for me: when we design and run an online course, how do we know whether it was effective? Student learning outcomes are one way of assessing the overall effectiveness of a class.

However, good outcomes do not mean that students were necessarily satisfied with the course. This paper highlighted the key points for an instructor to keep in mind when it comes to student satisfaction: above all, students need to be satisfied with the instructor if they are to be satisfied with the course. Of all the results, I thought this correlation was the most important.

In a traditional classroom setting, the onus has always been on the instructor to create an effective classroom and a course that students are satisfied with. This becomes harder to do in online pedagogy. (That seems like a lot of pressure for the professor.)

Another aspect that stood out to me was that students who were satisfied with a course were likely to earn higher grades in it. This makes sense to me: a student who is satisfied with the course and engaged with the learning material is likely to work harder at the assignments and put in more effort, thus achieving a higher grade.

These outcomes are worth keeping in mind for a teacher creating lesson plans and online classes. It is important that a professor conducting an online class understands how best to cater to the students; otherwise, if the students are dissatisfied with the class, the professor has essentially failed in the task of teaching and facilitating learning.

Article Review 4: Assistive Technology

Article: Assistive technology in the classroom

Authors: Netherton D. and Deal W.

Journal: The Technology Teacher 66.1 (2006)

Paper Overview:

The authors of this paper discuss the need to provide information about the availability of assistive technology, advances in improving its accessibility and functionality, and appropriate methods to secure and utilize it, in order to maximize the independence and participation of individuals with disabilities in society.

The authors draw on real-life case studies of individuals with disabilities who were able to avail of affordable high technology to maximize their participation in society. They make the case that, while very expensive high-technology assistive gadgets exist on the market, advances in technology now allow people with disabilities to avail of comparable tools at more affordable prices.

The authors also provide a list of little-known services that offer help with assistive technology.

Personal Reflections:

I was expecting a different topic when I found this paper and decided to review it. From the title alone, I had assumed that the paper was about technologies that could be used as assistive teaching tools in the classroom.

But when I began reading the paper I realized it was about assistive technologies that help people with disabilities maximize their independence and participation in day to day life. I was intrigued by the article.

It was very well written, and comprehensive in a way that made the authors seem like they really wanted to get their message out. The paper used three case studies of real people with disabilities, showing how each was able to increase their participation in society simply by learning about affordable assistive technologies they had not known of before.

The authors made a compelling case, grounding their premise in the fact that, thanks to developments and advancements in technology, better and more affordable assistive technologies now exist that would greatly benefit people.

The onus is not on the school system to provide students with the best assistive technologies and hardware on the market, as these tend to be expensive. As a result, students may lose out on opportunities for lack of access to the technologies they need to function independently in day-to-day work.

However, with the development of new technology, alternatives have been created specifically as more affordable variations of the expensive assistive technologies on the market. The authors argue that many such resources are now available and should be advertised more widely so that more people can know about them and avail of their services.

I liked the authors' writing style and enjoyed reading their paper. It was a topic I hadn't expected to read or write about, and so it was refreshing, though I don't know that it will help me in my ultimate final project.


Article Review 3: Student engagement in an online chatroom

Article: Engaging diverse student cohorts: Did someone say completely online?

Authors: C. Moore and L. Signor

Journal: International Journal of Information and Education Technology, Vol 4, No. 4 (2014)

Paper overview:

This paper explored the question of engagement of a diverse student cohort in a completely online classroom. The authors analyzed the effectiveness of chatrooms as the tool for synchronous meetings and discussions among peers and the instructors.

The instructors relied heavily on Wright and Shoop's constructivist model of learning, Student Centered Discussion (SCD), in which students engage in productive, positive discussion through which they construct new understandings of the topics under discussion.

The researchers studied the engagement of students in an entirely online, otherwise asynchronous class during synchronous meetings in chatrooms. For these chats, a set of rules was laid out for students, following the SCD model, to promote mutual respect during the discussion. The instructor had to play a very large role in facilitating the discussion and steering it in the direction prearranged in the agenda.

The case study found that developing an SCD-based model among the students produced greater engagement in a class that was otherwise asynchronous.

Personal Reflections:

Personally, I thought this paper was not very well written. While it had numerous citations, the researchers rarely stated the information they were drawing on as part of their narrative, merely citing the paper it came from. As a result, the paper was often difficult to understand, and it felt incomplete and empty to a reader unfamiliar with the cited work.

The authors cited a lack of funding for other tools as their reason for studying chatrooms: a chatroom was the sole method of synchronous collaboration they could set up for their students.

However, this paper was published in 2014. I found it hard to believe that, in today's world, the sole method of synchronous meeting for an online class was a chatroom. Student collaboration can easily take place through many media, and a chatroom seemed like an archaic tool to be using in an online class.

In a chatroom, only one person can "speak" at a time. Sitting through long, teacher-facilitated written discussions seemed unlikely to draw much participation from students; engagement, while present in comparison to a completely asynchronous class, would dwindle and seem ineffective compared to a class whose synchronous sessions are conducted through an audio or visual medium such as Blackboard or a webcam.

Article Review 2

Article: ‘Gaming Research for Technology Education’

Authors: Aaron Clark, Jeremy Ernst

Journal: Journal of STEM Education: Innovations in Research 10.1/2 (2009)

Overview of paper:

The authors' intent for this paper was to determine attitudes about gaming, its use in education, and the need to utilize gaming to integrate STEM subject matter into the classroom. The authors began their research in this field when they noticed the popularity of gaming, and wanted to see whether they could harness its power and popularity to help struggling students get through their STEM coursework in school. For this purpose, the study was developed to evaluate the effects of gaming on the classroom and the attitudes that students, teachers, parents, and administrators have about using this technology as a pedagogical tool.

The authors surveyed 258 participants from varying backgrounds and fields of study, asking three sets of questions to determine the general outlook toward gaming and its use in education. The survey findings indicated:

  • A majority of people (74%) agree or strongly agree that gaming is a valuable resource and learning tool for students.
  • The majority (72%) also agreed that homework assignments that included computer-video gaming could be a useful tool for student learning.
  • 48% of people indicated that they were interested in developing such games.
  • 71% of people indicated that gaming could be a great tool in science and mathematics instruction.
  • 89% responded that gaming has a future in education.

The survey data revealed to the authors that gaming could be a useful tool for gaining and maintaining student interest in all areas of STEM education.

The paper did not examine existing gaming tools or people's attitudes toward them. As a result, it feels incomplete.

Personal Reflections:

This research was interesting to read. It was conducted in 2009, and the researchers were attempting to find out exactly how people viewed gaming as an educational tool. The results were very interesting to me: I knew that gaming was popular and would be looked upon favorably by most, but I didn't realize that so many people thought of it favorably as a teaching tool.

I attended a conference a while back on educational gaming, and one of the speakers there answered a question about why educational games were not more popular. His response: "Because they're boring!" This really made sense to me. Most educational games are designed less to be games and more to be educational. As a result, they are not as much fun to play and are not as popular.

I guess my takeaway from this article is that while people may look upon gaming as a potentially valuable tool in education, the research does not indicate whether any such STEM-based games have actually been effective teaching tools, or how the actual users of such games feel about them.


Article Review 1: Collaboration in a virtual classroom

The 21st Century Skills framework[i] developed by the Partnership for 21st Century Skills includes collaboration as an important aspect of the skills all students must develop to flourish. According to this framework, to collaborate effectively, one must:

  • Demonstrate the ability to work effectively and respectfully with diverse teams
  • Exercise flexibility and willingness to be helpful in making necessary compromises to accomplish a common goal
  • Assume shared responsibility for collaborative work, and value the individual contributions made by each team member

Collaboration allows students access to viewpoints on various topics that they wouldn't normally encounter in a traditional classroom, where the teacher stands at the front of the class and teaches out of a textbook. Collaboration thus contributes to a higher order of learning, with a more in-depth view of the subject.

The very nature of knowledge itself is impacted by technology. It shapes what counts as knowledge, how knowledge is produced, how people are involved in its production, and how knowledge is valued (Gumport and Chun, 2000)[ii]. Students who generally wouldn't participate in a face-to-face class now come forward with their opinions through the anonymity that technology provides.

As a result, technology has the potential to bring out even better collaboration, and thus a deeper understanding of a subject. However, effective collaboration through technology is difficult to create, owing to the disconnect felt between participants who never actually see each other face to face. When designing an online course or creating a virtual classroom, it becomes especially important to develop a sense of community among the participants by introducing constant collaboration.

The instructor needs to constantly try to bring the "human" element into the virtual classroom through constant collaboration among peers, to make the class an effective learning environment. Collaboration intrinsically requires a mediating tool to foster meaning-making through joint activity, and technology is a medium that supports collaborative work.

For this assignment, I reviewed the article "Collaborative Learning Using Integrated Groupware: A Case Study in Higher Education"[iii]. The study was undertaken by four researchers who set out to conduct collaborative learning using information and communication technology for their college students, and to evaluate its use in learning. They designed a platform to support individual profiling, classroom interaction, and group activities, built on the Microsoft SharePoint platform with cloud computing and real-time online editing of work.

The students participated in various group activities, working on and editing projects simultaneously. They could see other students' work and comments, and the instructor could monitor this in real time as well. This seemed to promote students' interactivity in the class, as they were able to make changes and edits even after interactions had taken place.

Through the study, the researchers concluded that with the use of a carefully planned and designed groupware, a common college seminar can be transformed into a highly interactive and collaborative environment.

My personal take-away:

It is difficult to engage participants in a virtual classroom because of the distance, both physical and virtual, between them. The instructor has to work hard to create that sense of community among the students.

Technology often puts people at arm's length from one another. This can be seen in the nasty comments people make online. These commenters are not necessarily bad people, but the internet gives them a certain anonymity, and that arm's-length distance keeps them from realizing their nasty comments are reaching real, live people. So while anonymity does prompt interactions from students who wouldn't normally speak in class, it does not necessarily promote healthy, productive interactions.

This can be tough to combat, but it is the instructor's job to create that sense of community through collaboration so that effective learning takes place. With more positive interaction between classmates, there is more of a feeling of "knowing" each other, and as a result, students will be more inclined to learn in the class.

One thing I feel is very important is virtual meetings. In a class with few or no definite meetings (even virtual ones), it is extremely difficult to engage students and create any kind of collaboration.

[i] Framework for 21st Century Learning, Partnership for 21st Century Skills, Washington, DC, 2011


[ii] Gumport P., Chun M., Technology and Higher Education: Opportunities and Challenges in the New Era, National Center for Postsecondary Improvement, 2000

[iii] Iinuma M., Matsuhashi T., Nakamura T., Chiyokura H., Collaborative Learning Using Integrated Groupware: A Case Study in a Higher Education Setting, International Journal of Information and Education Technology, Vol. 4, No. 4, 2014

Article Review #5: Team Learning and Performance Goals

Nahrgang, J. D., Jundt, D. K., DeRue, S. D., Ilgen, D. R., Hollenbeck, J. R., & Spitzmuller, M. (2013). Goal setting in teams: The impact of learning and performance goals on process and performance. Organizational Behavior and Human Decision Processes, 122, 12-21.

Since we've been reading about writing learning goals and objectives, I wanted to relate the current topic to the class I am teaching this semester. We do a "group" section of the basic communication class, where speech students learn small-group theory and must do activities and a speech as a group. I am interested in how to write goals that require collaboration to reach. This article gives a framework for testing the effect of different types of group goals, but it didn't include explicit examples of the wording used to set goals for the different group conditions. I would have loved an appendix that spelled those out.

Nahrgang et al. (2013) studied 80 teams of undergraduates as they completed a military-style strategy simulation. Students were assigned randomly to teams of four. The researchers had several hypotheses about differences in how teams would orient to learning goals versus performance goals; task complexity and the specificity of the learning goal were also varied. In sum, the authors state that "we theorize that the combination of a learning focus and high specificity will cue team members to adopt a more narrow focus on learning specific aspects of the task and therefore impair team coordination. We also expect task complexity to be an important boundary condition of the goal-performance relationship in teams" (p. 12). Also, "We propose that specific learning goals are less effective in team contexts than general 'do your best' learning goals and specific performance goals, and that these differential effects operate through the process of team coordination" (p. 13).

All teams were offered a $40 prize as an incentive for reaching the top level of their goal condition. Teams were not told what the different goal conditions or hypotheses were. Each team of four met separately from the other teams, in a room where each team member sat at a separate computer. Each team member controlled a different "asset" in a scenario where they strategized how to move friendly military forces into a certain geographic area while keeping enemy forces out. I thought this was a good design for ensuring that teams were not influenced by other participants in the study: with no two teams in the same room, there was no chance a team would overhear the goals assigned to other teams or be influenced by their decisions.

To illustrate the difference between "learning" goals and "performance" goals, the authors explained that "participants in our specific learning goal condition were told that their team should focus on learning specific strategic aspects of the task," including how to "execute successful attacks" and "understand speed and accuracy trade-offs" (p. 15). Performance goals were "specific offensive and defensive goals" based on a points system. As I was reading, this part was a bit confusing. I'm not sure students would automatically be able to separate, in their thinking, what they're supposed to be learning from what they're supposed to be doing.

It seems to me there would be a gray area, tough to define or measure, where students are moving from learning to performing (and back, if they are reflecting on successes and failures). The authors did note in their results that groups reacted differently to performance versus learning goals. In their discussion they note that "the best team performance may occur when teams are given learning goals in order to learn tasks or develop strategies for the task, and then are switched to a performance goal after they have mastered the task." So I think the authors would perhaps agree that students need assistance separating the two types of goals and knowing when to focus on each. I've been doing this somewhat in my classes when I have groups go over the chapter and identify what they think are the takeaways, what would make good test questions, and how the concepts might apply to their own lives. After we brainstorm best practices for application, they move to actually practicing the new information in an outline or speech.

As a supplement, if folks are interested, here are the verbatim hypotheses, compiled along with whether or not each was supported. From pp. 16-17:

  • In Hypothesis 1 we predicted that teams with specific learning goals would have lower performance relative to teams with general "do your best" learning goals.
  • Hypothesis 2 predicted that the negative effect of specific learning goals on team performance relative to general "do your best" learning goals would be mediated by lower team coordination.
  • In Hypothesis 3, we predicted that teams with specific learning goals would perform worse than teams with specific performance goals.
  • Hypothesis 4 predicted that the negative effect of specific learning goals on team performance, relative to specific performance goals, would be mediated by lower team coordination.
  • Hypothesis 5 predicted that the negative effect of specific learning goals on team coordination relative to general "do your best" learning goals would be stronger for teams operating in a complex task.
  • Hypothesis 6 predicted that the negative effect of specific learning goals on team coordination, relative to specific performance goals, would be stronger for teams performing a complex task.

Hypotheses 1 and 3 were supported. Hypotheses 2 and 5 were supported "in that complexity moderated the negative effect of specific learning goals on team coordination, relative to general 'do your best' learning goals, and team coordination mediated the moderated negative effect of specific learning goals on team performance, relative to general 'do your best' learning goals." Hypotheses 4 and 6 were supported "in that complexity moderated the negative effect of specific learning goals versus specific performance goals on team coordination, and team coordination mediated the moderated negative effect of specific learning goals on team performance versus specific performance goals on team performance."

Increasing Engineering Knowledge Through a Photo-Journal Project

Article Review 5 – Lori Sowa

Purdue University is one of the few schools in the country with a PhD-granting School of Engineering Education, with faculty research focusing on graduate, undergraduate, and P-12 engineering education. The Institute for P-12 Engineering Research and Learning (INSPIRE) was established at Purdue in 2006, in part to promote engineering learning in the elementary classroom. Each summer, INSPIRE hosts engineering academies for local elementary teachers.

Duncan et al. (2011) used Bloom's revised taxonomy as a theoretical framework to evaluate teachers' ability to recognize and understand engineering in the world around them through a photo-journal project. Cameras were sent to elementary teacher participants before the start of the summer INSPIRE academy, with instructions to take ten photographs related to engineering. For each photograph, teachers were to record the date, time, and location, and to explain how the scene related to engineering. Then, after the first day of the five-day workshop, the teachers were instructed to take ten additional photographs and record the same information. These additional ten photographs and journal entries were to be completed by the end of the workshop.

The photographs and associated journal entries were collected and categorized as either pre- or post-workshop. Each journal entry was coded using Bloom's revised taxonomy to determine its cognitive level. Since Bloom's taxonomy is hierarchical, the levels were given a numeric indicator, with 1 being "Remember" and 6 being "Create". Due to the nature of the exercise, none of the journal entries showed evidence of "Create", so the highest level achieved was "Evaluate". The authors used statistical methods to determine whether the teachers demonstrated an increase in cognitive level post-workshop, which they did (equivalent to one cognitive level).

This study was rigorous and well conducted. Extensive measures were taken to establish inter-rater reliability, including preliminary analysis of journal entries that would not be included in the study, refinement of methods, and then analysis of the study group (K through fourth grade teachers). Limitations noted by the authors included: a small sample size (n=40) from one geographic area; a timeframe for the photographs that was not actually pre- and post-workshop, but rather pre-workshop and during the workshop; and the fact that applying ordinal values to Bloom's taxonomy for analysis typical of Likert-type scales was a novel, untested approach. In addition, the retention of this knowledge of engineering, and whether the understanding transfers to students in the teachers' classrooms, has not been studied.

The underlying assumption in the hierarchical nature of Bloom's taxonomy is that a higher level equals better learning, perhaps because learning through this model is considered cumulative. I find the assignment of numerical values to artifacts coded for each cognitive level a useful technique that could be applied in many situations to facilitate quantitative analysis of the data. I'm curious to dig into this technique and see if and how other researchers are using it, since the authors state it is a novel approach.
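To make the coding technique concrete for myself, here is a minimal sketch in Python of the idea: map Bloom's revised levels to ordinal values, then compare pre- and post-workshop codings. The journal-entry labels below are made up for illustration, not the study's data, and the mean-shift comparison is a simplification of whatever statistical test the authors actually ran.

```python
from statistics import mean

# Bloom's revised taxonomy treated as an ordinal scale,
# 1 = "Remember" through 6 = "Create", as described in the study.
BLOOM_LEVELS = {
    "remember": 1, "understand": 2, "apply": 3,
    "analyze": 4, "evaluate": 5, "create": 6,
}

def code_entries(labels):
    """Convert coded journal-entry labels to their ordinal values."""
    return [BLOOM_LEVELS[label.lower()] for label in labels]

# Hypothetical pre- and post-workshop codings (illustrative only).
pre = code_entries(["remember", "understand", "remember", "apply"])
post = code_entries(["apply", "analyze", "understand", "evaluate"])

# A simple summary: the shift in mean cognitive level.
shift = mean(post) - mean(pre)
print(f"Mean cognitive-level shift: {shift:+.2f}")
```

With these toy numbers the shift comes out to about +1.75 levels; the study itself reported an increase equivalent to one cognitive level, reached via proper statistical methods rather than a raw mean difference.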

I like the integration of journaling as a learning activity because it brings the important writing aspect into a STEM activity. The learning objective targeted by this activity was to "convey a broad perspective of the nature and practice of engineering". This type of activity could easily be facilitated (and perhaps enhanced) in an online environment through posting to a blog. Looking forward to my Engineering for Educators course, this activity could be adapted to help achieve my third learning objective (to understand the engineer's role in society, and to inspire in students a desire to use engineering to solve problems that matter to people).

This article was valuable to me for a number of reasons, not the least of which is the reminder that I need to brush up and expand on my readily available statistics knowledge. I've taken a number of statistics courses over the years, but none recently, and it shows as I try to follow the statistical methods used in this and other recently reviewed papers. Google was my link to definitions and explanations.

This article provides insight into the effectiveness of a summer academy in increasing teachers' ability to recognize and understand engineering in the world around us – an important step in preparing teachers who will teach, or even just discuss, engineering in their classrooms. However, it is important to bear in mind that the teachers in this study received a substantial amount of professional development centered on engineering (Monday through Friday, 7:30 AM to 5:00 PM, with additional homework including reading and assignments). In Secondary Level Engineering Professional Development: Content, Pedagogy, and Challenges, Daugherty and Custer (2012) describe a number of barriers to successful implementation of engineering in K-12 classrooms, including teachers' lack of the mathematical skills needed to implement engineering activities, lack of background in engineering to maintain the fidelity of the curriculum, the amount of time required for lesson planning, student mathematical background and motivation, resources, and institutional barriers. Certainly these barriers are not insurmountable in all cases, but they must be considered, and they point to the multifaceted and nontrivial nature of this issue.

Daugherty, J. L., & Custer, R. L. (2012). Secondary level engineering professional development: Content, pedagogy, and challenges. International Journal of Technology and Design Education, 22(1), 51-64.

Duncan, D., Diefes-Dux, H., & Gentry, M. (2011). Professional development through engineering academies: An examination of elementary teachers' recognition and understanding of engineering. Journal of Engineering Education, 100(3), 520-539.

Providing case-based online science courses for gifted students.

I chose the article "Describing Learning in an Advanced Online Case-Based Course in Environmental Science" because it addresses differentiation for gifted students and problem-based online activities in the context of a science classroom (Missett, Reed, Scot, Callahan, & Slade, 2010). In my own classes I spend quite a bit of time trying to improve differentiation for students who find material or concepts too challenging; however, I could greatly improve at offering differentiation for students who need additional challenges. The opportunities for differentiation and adaptive learning in online classrooms appear to be endless.

This article was based on a study that "examined the learning outcomes of an online environmental sciences course using a case-based and problem-based model designed for academically advanced learners." The project, titled "Project LOGgED ON" (the Project), was developed by the University of Virginia Curry School of Education and the Department of Environmental Science. Its proposed purpose was to address the problem of "access to highly challenging science curricula for economically disadvantaged, rural, or otherwise underserved gifted and academically advanced learners" (Missett et al., 2010). The study was designed to offer an alternative to the AP examinations for students who did not have access to those programs. The goals of the Project were to:

(a) prepare students for advanced science studies by increasing knowledge and skill acquisition, (b) provide students with opportunities to communicate with peers, (c) write about advanced science topics, (d) work as independent learners, and (e) provide authentic experiences in studying science online.

Course designers used a case-based approach to teaching content, developing 16 cases and assigning students roles within genuine scientific organizations. The cases were intended to give students a “perspective on the environmental problem at hand, to enable them to participate as one who endeavors to solve environmental problems, and to expose them to the role of an actual scientist grappling with environmental issues and problems” (Missett, Reed, Scot, Callahan, and Slade 2010). Content was presented in a variety of ways (access to an expert video library, primary source references, and the use of open-ended questions), and “students were required to apply new knowledge to evaluate the issue presented, to explain why it presented a problem, and to use their scientific understanding to defend and support a proposed solution to the problem” (Missett, Reed, Scot, Callahan, and Slade 2010).

The sample population included 138 self-identified students, from 14 states, ages 12 to 17. However, only half came from rural school districts, from school districts comprised predominantly of minority students, and/or from school districts with a significant population of students receiving free and reduced lunch. Of these, 60% were female and 40% were male. Students were encouraged to take the AP exam at the end of the course, free of charge, to serve as a comparison. Only 30% of those who chose to take the exam received a 3 or higher. Not surprisingly, the study reported that “students who were independent learners with strong time management skills and were more active on the discussion boards had the most success with the course,” while students who were weak in these skills were most likely to drop the course (Missett, Reed, Scot, Callahan, and Slade 2010).

The study concluded it was a success because students who participated in the Project’s environmental science course “experienced learning, engagement, and challenge.”

Course work promoted “inductive thinking and the use of problem-solving skills as it called upon students to interpret data, analyze case studies, and solve complex real-world science problems.” While these are noteworthy and desirable outcomes, the Project did not serve its ultimate purpose of studying the effect of access for rural and minority populations, because the authors did not confine the study to those specific students, nor did they collect the demographic data that would make it possible to compare these populations.

I believe that this article serves as an excellent starting point for further inquiry. The need for alternative challenging coursework for advanced learners is an area that deserves attention. The curriculum design of this course used best practices from both science education and the National Association for Gifted Children, and it appears that it had successful learning outcomes for the participants in the study. The AP exam data did not indicate that the course could be used as a direct substitute for an AP course with positive outcomes on the exam, but this was not the intention of the researchers. I think one of the most interesting quotes in this paper was found in the conclusion, which stated that “an interesting impression derived from this study is that the instructors played little, if any, role in the overall success or failure of the students. That is, learning and engagement resulted principally from student-to-student interactions, and without significant instructor facilitation” (Missett, Reed, Scot, Callahan, and Slade 2010). If learning success is primarily dependent on student-to-student interactions, perhaps it is the facilitation of interaction among academically advanced students in remote, rural, and underrepresented populations that should be the focus of further research.

Works Cited

Missett, T. C., Reed, C. B., Scot, T. P., Callahan, C. M., & Slade, M. (2010). Describing learning in an advanced online case-based course in environmental science. Journal of Advanced Academics, 22(1), 10-50.

Measuring College Students’ Readiness for Online Learning

In the article Learner Readiness for Online Learning: Scale Development and Student Perceptions, Hung, Chou, Chen, and Own (2010) attempt to identify which factors are good indicators of college students’ readiness for online learning. They develop and validate an instrument for measuring this readiness, the Online Learning Readiness Scale (OLRS). Gender is also explored as a potential factor affecting college students’ readiness for online learning. Hung, Chou, Chen, and Own (2010) cite research that Tsai and Lin (2004) conducted on “…636 high school students and found that females were more likely than males to perceive the Internet as pragmatic and that males’ enjoyment of the Internet was greater than females’ corresponding enjoyment” (p. 1083). Clearly males and females approach the Internet differently, but that does not prove that gender is an indicator of readiness for online learning. The research questions explored by Hung, Chou, Chen, and Own (2010) are below.

  • Could an OLRS model be constructed and validated through CFA (confirmatory factor analysis)?
  • What is college students’ readiness for online learning?
  • Does the gender of college students make any difference in their readiness for online learning?
  • Does the grade (i.e., level of accumulated academic credits) of college students make any difference in their readiness for online learning?

The study conducted by Hung, Chou, Chen, and Own (2010) included 1,051 college students enrolled in three universities in Taiwan who answered a questionnaire using a 5-point Likert-type scale. The questionnaire had five parts: self-directed learning, learner control, motivation for learning, computer/Internet self-efficacy, and online communication self-efficacy. The response rate was high at 87.6%, and the respondents’ characteristics were reasonably varied. The makeup of the participant population included 589 females, 462 males, 648 seniors, 321 juniors, and 82 freshmen and sophomores. The college students were enrolled in a variety of asynchronous online courses, including life chemistry (658 students), calculus (169 students), statistics (80 students), Taiwan ecology (79 students), and introduction to environmental protection (65 students) (Hung, Chou, Chen, & Own 2010).
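Since the OLRS groups Likert-type items into five dimensions and the paper reports results as per-dimension comparisons, it may help to picture how such a scale is scored. The sketch below is my own illustration, not the authors’ procedure: the dimension names come from the paper, but the item-to-dimension grouping, the number of items, and the averaging scheme are assumptions.

```python
from statistics import mean

# Hypothetical OLRS responses for one student: each dimension maps to a
# list of 5-point Likert item scores (1 = strongly disagree, 5 = strongly
# agree). Dimension names are from the paper; item counts and values are
# invented for illustration.
responses = {
    "self-directed learning": [4, 5, 3],
    "learner control": [3, 4, 4],
    "motivation for learning": [5, 5, 4],
    "computer/Internet self-efficacy": [5, 4, 5],
    "online communication self-efficacy": [3, 3, 4],
}

def dimension_scores(resp):
    """Average the item scores within each dimension."""
    return {dim: mean(items) for dim, items in resp.items()}

for dim, score in dimension_scores(responses).items():
    print(f"{dim}: {score:.2f}")
```

Averaging within dimensions like this is what would let researchers compare, say, juniors and seniors against freshmen and sophomores on self-directed learning while treating computer/Internet self-efficacy separately.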

What was surprising about the results of the study is that gender was not found to be an indicator of, or influence on, readiness for online learning. The male and female college students responded to the questionnaire’s five dimensions similarly enough that there was no significant difference between the two (Hung, Chou, Chen, & Own 2010). What is interesting, though, is that women still seem more drawn to the online learning environment than men. There might be a learning preference difference between the genders that does not affect readiness but does affect the populations of online learning environments. The higher number of female respondents might also be attributed to the fact that females are more likely to respond to a questionnaire than males. This is something else to look into and consider in future research.

On the other hand, what did affect learner readiness for online learning was grade level. Hung, Chou, Chen, and Own (2010) found that juniors and seniors exhibited more readiness for online learning than freshmen and sophomores. This is attributed to the older students having more maturity and experience than the younger students. The juniors and seniors were better at self-directed learning, learner control, motivation for learning, and online communication self-efficacy. This means the juniors and seniors could manage and organize their time, were more motivated (perhaps because they were closer to graduation), and were more comfortable communicating online with their peers and instructors. The only factor that was equal was computer/Internet self-efficacy, which is not surprising in this day and age. All the students, regardless of grade level, were comfortable with their computer and Internet skills and knowledge (Hung, Chou, Chen, & Own 2010).

What is curious about these findings is that no ages of the students are mentioned; all that is known is that they were enrolled in Taiwanese universities. Hung, Chou, Chen, and Own (2010) assume that the juniors and seniors are older than the freshmen and sophomores. Does older mean mid-to-upper twenties for juniors and seniors? Maybe in Taiwan it is less typical to have students older than twenty-something, but from my own experience in the United States it is common, especially in online classes, to have students above that age. I think it is less about age and more about years of experience with the online learning environment. It would be interesting to know how many online classes the students had taken before being surveyed for this research; this seems to be a missing factor in the study. Prior experience should always be considered, especially when creating a tool to measure students’ readiness for online learning.

Self-directed learning seems to be a key factor in determining readiness for online learning, especially among freshmen and sophomores (Hung, Chou, Chen, & Own 2010). Even though not even a tenth of the participants were freshmen and sophomores, it stands to reason that they would need more guidance from teachers on how to manage their learning and develop self-discipline. Hung, Chou, Chen, and Own (2010) state that it is difficult for freshmen “…to adjust their high school learning patterns to college ones, and even tougher to make the adjustment from their high school classrooms to virtual college classrooms” (p. 1088). If incoming college students do not know how to manage their study time and lack self-discipline, online learning may not be the best choice until these basic skills are developed with the help of the traditional, face-to-face education system. Online learning requires more internal, individual motivation than face-to-face classes, where social and peer pressures are more visible and help motivate the student.

After all is said and done, the OLRS (Online Learning Readiness Scale) was designed and tested to help further research into determining student readiness for online learning. As Hung, Chou, Chen, and Own (2010) point out, their research includes a decent sample size, but more studies need to be conducted across a wider variety of online classes and subjects. More also needs to be done to examine how grade level, age, maturity, and prior experience play a role in readiness for online learning. The research also highlights that freshmen may be coming to college without the online learning experience they need to be successful and ready for such a learning environment. K-12 schools need to consider incorporating more characteristics of the online learning environment into the classroom because it is becoming a bigger and bigger part of the college learning experience. Maybe high schools need to offer an introductory course to online learning, something short and sweet. Many colleges require a similar course, but it might be beneficial to start at the high school level; it could even become a new requirement for graduation. All in all, these are interesting ideas about how to gauge students’ readiness for online learning, but more needs to be done to improve the OLRS before it can be reliably used.


Hung, M., Chou, C., Chen, C., & Own, Z. (2010). Learner readiness for online learning: Scale development and student perceptions. Computers & Education, 55(3), 1080-1090.