Category Archives: Article Review

The Remote Proctor and Online Academic Honesty

The online courses I have participated in have not required formal exams, but many subjects, like math and science, do require formal testing. As distance learning morphs more and more into online learning, administering formal exams is becoming a concern. Previously, students would report to a testing center or have an assigned proctor, but with the convenience of online learning there is a demand that testing be just as convenient. The appeal of asynchronous online courses is that the student can complete the work anywhere and anytime. So why should a student not be able to take the required formal exams anywhere and anytime? Offering the convenience of taking a test anywhere and anytime, however, raises concerns about academic honesty and student privacy. Convenience usually comes at a price, and that price might be what some would consider an invasion of privacy. Making sure students are not cheating without a human proctor present requires, as Dunn, Meine, and McCarley (2010) point out, a technological innovation. This means that the solution to cheating on a test introduces “…what could easily be called the academic version of ‘Big Brother’ into the online course environment” because the proctor is now a camera (p. 4).

In the articles by Dunn, Meine, and McCarley (2010) and Robinson (2013), the authors discuss the implications of remote proctors and academic honesty. Robinson (2013) argues that academic dishonesty is more common in the online learning environment because of the distance between student and instructor. This distance leads to feelings of isolation and inaccessibility, which in turn lead students not to seek out the assistance they need to succeed. Due to stress, fear of failure, and feelings of isolation, students are more prone to be tempted to commit academic dishonesty in the online learning environment (Robinson, 2013). This is what is prompting new technologies designed to prevent academic dishonesty when it comes to testing. Troy University, in partnership with The Securexam® Corporation, developed the Remote Proctor to help eliminate cheating on tests. Students purchase the remote proctor device, which has a camera, microphone, and biometric scanner. It is plugged into the computer via USB port and locks down the hard drive and Internet so the student cannot access information on the computer. The camera and microphone record the testing session, and any suspicious behavior is flagged to be reviewed later (Dunn, Meine, & McCarley, 2010). There are other ways to prevent cheating that are not as extreme as having a mini robot watch you. Instructors put time constraints on exams, display only one exam question at a time, prevent students from going back to earlier questions, and require students to install lockdown browsers to prevent unwanted Internet browsing. Webassessor uses the built-in webcam in laptops to conduct facial recognition and monitor the student visually (Dunn, Meine, & McCarley, 2010). Then there is John Fontaine’s work. He is the “…senior director of technology evangelism for Blackboard Learning Management Systems [and] is currently developing technologies that create document fingerprints” in which a student’s writing is analyzed for patterns to develop a writing-style fingerprint (Dunn, Meine, & McCarley, 2010, p. 192).
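To make the idea of a writing-style fingerprint concrete, here is a minimal sketch of the general approach such tools take: extract simple stylistic features from a writing sample and compare samples numerically. This is only an illustration of the concept, not Fontaine’s or Blackboard’s actual method; the feature set and the similarity measure are my own assumptions.

```python
# Toy stylometry sketch (illustrative only, not Blackboard's method):
# build a small feature vector from a writing sample and compare two
# samples with cosine similarity.
import math
import re
from collections import Counter

FUNCTION_WORDS = ["the", "and", "of", "to", "in", "that", "is", "was", "it", "for"]

def fingerprint(text):
    """Return a small feature vector describing a writing sample."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    counts = Counter(words)
    total = max(len(words), 1)
    features = [
        len(words) / max(len(sentences), 1),     # average sentence length
        sum(len(w) for w in words) / total,      # average word length
        len(set(words)) / total,                 # vocabulary richness (type-token ratio)
    ]
    # relative frequency of common function words, which tend to be habitual
    features += [counts[w] / total for w in FUNCTION_WORDS]
    return features

def similarity(a, b):
    """Cosine similarity between two fingerprints (1.0 = identical profile)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Usage: compare an exam essay against a student's earlier coursework.
known = fingerprint("Text the student wrote earlier in the course...")
exam = fingerprint("Text submitted during the proctored exam...")
print(f"style similarity: {similarity(known, exam):.2f}")
```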

This all seems reasonable with the increase in online learning, but is remote proctoring as convenient as it sounds? Consider the environmental requirements for using the Securexam® remote proctor device: a student is basically supposed to be in a noiseless, bare-walled, overhead-lighted room. It sounds more like the student needs to invest in a cubicle. Granted, when taking a test it is good to have no distractions, but based on these environmental requirements you could not take it on the couch in the comfort of your living room, relaxed in your bedroom, or in any room that has a poster with writing on the wall.

The economic cost of the Securexam® remote proctor device is supposed to be equivalent to a textbook, and the device can be resold by the student after use. I am not convinced that students need to purchase a device in order to ensure test-taking honesty. It seems like an economic ploy and a way for colleges to get accreditation for their online courses more easily. I understand the need for accreditation, but I do not think the cost should fall on the shoulders of the students.

Dunn, Meine, and McCarley (2010) also stated that students did not think the remote proctor by Securexam® was an invasion of privacy, but I think the biometrics might be going too far. The surveillance from the camera and microphone is uncomfortable to me. I do not understand how students did not make more of an uproar about the invasion of privacy. It is one thing to have the instructor watching me as I am taking a test, but to have strangers watch me and analyze the video for cheating on the instructor’s behalf makes me uncomfortable. Also, when others review the video, it comes at an additional cost beyond the device itself. This could cause online learning network charges to go up for students. Again, I do not think the students should have to pay for the remote proctor. The convenience just comes at too high a cost to the student. I personally would rather find a human proctor or go somewhere to take the test.

Also, I am not convinced that remote proctors are necessary to ensure academic honesty during online test taking. If the instructor has a policy regarding what is acceptable during a test, students will probably be less tempted to cheat. Robinson (2013) found that students had very different perceptions of what constituted cheating, especially in the gray areas, but what impacted their perceptions the most was whether the instructor had a policy regarding academic honesty. All students seem to be aware of blatant cheating, like having someone else take the test. Robinson (2013) states that students:

…believed that it was appropriate to use a book, reference sources, and class notes during an exam as long as the professor did not have an explicit policy stating otherwise. The same students, however, acknowledged that having another person take the exam, securing a copy of a test prior to the exam period, and text messaging to send and/or receive answers from another student was inappropriate irrespective of the presence or absence of a written policy. (p. 191)

As long as there is an explicit policy about what is and is not allowed during test taking, students seem willing to abide by the rules. Of course, there will always be someone who breaks the rules. Whether a human, a machine, or nothing at all proctors online test taking, there is always the chance a student will find a way to cheat and push the boundaries of academic honesty. What seems like a solution to the problem of online cheating may only create more problems. There are economic and privacy considerations that need to be explored further. The anytime, anywhere convenience of asynchronous online learning may not carry over so easily to testing. This raises the question: is anytime, anywhere really as convenient as we think?

References

Dunn, T. P., Meine, M. F., & McCarley, J. (2010). The remote proctor: An innovative technological solution for online course integrity. International Journal of Technology, Knowledge & Society, 6(1), 1-7.

Robinson, C. V. (2013). Academic dishonesty: A guide for digital instructors. In M. S. Plakhotnik & S. M. Nielsen (Eds.), Proceedings of the 12th Annual South Florida Education Research Conference (pp. 189-194).

Article Review 4: Teaching Presentation Skills Online

Kenkel, C. S. (2011). Teaching presentation skills in online business communication courses. MERLOT Journal of Online Learning and Teaching, 7(3), 412-418. Retrieved from http://jolt.merlot.org/vol7no3/kenkel_0911.htm

This article discusses the importance of finding ways for students to practice presentation skills in courses that are primarily online. Many business courses drop the communication skills aspect when they are moved to the virtual realm, and as Kenkel (2011) notes, this is problematic: “Given the rapid growth in online programs, this deficiency could have serious consequences for business graduates.” Good interpersonal skills are something that employers and managers rate highly, but often students do not get much training beyond a single basic speech course, if one is even required.

The author proposes that to successfully integrate presentation skill-building into an online course, an instructor needs to provide a detailed rubric and direct students to use feedback from an initial presentation to improve subsequent presentations. Part of that feedback should be peer assessment that can “verify and reinforce” faculty feedback. Rubrics, reflection, and peer-editing exercises have long been recommended for face-to-face classes as well, so this is nothing new.

As a case study, results were presented from the development of online options for students in management courses at a large Midwestern campus. The online students had to post three videos of presentations for instructor and peer feedback. Students were given the option of using a private campus-hosted service or public video hosting sites like Facebook and YouTube. Students had to share feedback with each other on a discussion board. A rubric was given so that students could rate themselves and each other, and they received feedback from the instructor based on the same rubric. I was surprised that the students seemed willing to provide detailed feedback when it is all on the record and not anonymous. However, the author explained that only the instructor feedback counts toward the students’ grades. The peer feedback just gives them more of an idea of how an “audience” reacted to their speech, and it often helps to reinforce the instructor’s grade (if several people mentioned lack of eye contact, for example).

What I really appreciated about this article is that it didn’t just stop at mentioning some potential challenges or areas for future research. The author actually includes a list of helpful tips/lessons learned. For example, some of the challenges faced in developing a course with online options for presentations included finding tech support, getting students access, and making expectations clear from the beginning. The author notes how common problems were addressed, such as describing the equipment needed in the syllabus with information on library checkout options; having students identify a friend who can assist them as a camera person; and sharing successful videos from previous semesters so students have a better idea of what their own recording should look like.

My major concern is with the student feedback example that the author uses to deem the class structure a success. From an evaluation: “I learned the same, if not more than I would have learned in a traditional classroom setting and I think it made it easier to do speeches not having 20 eyes staring at you while you prepared and gave your speech.” I have taught speech courses for seven years, and in my opinion, the student has not learned as much as, or more than, they would in an FTF class if they have not worked through anxiety regarding audience gaze. The different types of communication apprehension, and ways to address them, are something I spend the first full day (3 hours) discussing in my FTF classes. If you aren’t aware of public speaking’s status as something more feared than death, please watch this funny but informative video from ABC News: http://abcnews.go.com/WNT/video/facing-fear-public-speaking-improv-city-21565437

So, how would I address the potential shortcoming of not practicing eye contact and controlling gaze-induced anxiety? I would do the same thing one of my instructors at UAF did when I took a fully online Business Training class. You record a video of yourself speaking, but there must be at least 5 other people in the room serving as the audience at the time. That also deters students from doing 50 takes until they get their speech perfect. You never know when your boss will call on you with short notice to talk about a project in front of new employees or investors. If you are truly trying to become a better presenter in the professional realm, you have to be prepared for a live audience.

Effective Engineering Pedagogy for Upper Elementary Teachers

Article Review 4 – Lori Sowa

We expect quite a bit from our K-12 teachers – and with the increased focus on STEM education, we are now expecting teachers to incorporate engineering into their classrooms.  While all teachers have taken science and math classes during their high school and college years, it is safe to say the vast majority of teachers have never formally studied engineering (unless they are recovering engineers who decided to make teaching their career).   Quality professional development opportunities must be provided if we expect meaningful engineering experiences to find their way into K-12.

In a recent paper, a team of engineering education researchers (Guzey et al., 2014) provide a qualitative analysis of the results of a year-long professional development workshop on integrating engineering content for third- through sixth-grade science teachers. The results are measured by analyzing posters documenting each teacher’s implementation of engineering activities in their own classroom, using a framework that identifies the required components of a successful engineering activity. The professional development served 198 teachers and included 30 hours of face-to-face workshops (spread out in one- or two-day increments over the course of an academic year) along with 16 hours of time spent within professional learning communities developed within their school districts to reinforce what they had learned and share ideas. On the final day of the workshop, teachers presented their own implementations of engineering design projects in their classrooms through posters.

The framework the researchers used defined the necessary components of an engineering curriculum unit.  According to the framework, the engineering unit should:

  1. have a meaningful purpose and engaging context;
  2. have learners participate in an engineering design challenge for a compelling purpose that involves problem solving skills and ties to context;
  3. allow learners to learn more from failure and then have the opportunity to redesign;
  4. include appropriate science and/or mathematical content;
  5. teach content with student-centered pedagogies; and
  6. promote communication skills and teamwork.  (p. 141)

The teachers’ posters were electronically captured and then coded for each of the elements above.  The authors note that all of the posters included evidence of items 5 and 6 above (student-centered pedagogy and teamwork), thus these elements weren’t included in the coding.  Although not highlighted in the paper, I feel this is a substantial accomplishment in itself.  Forty-seven percent of the posters were found to incorporate all of the criteria above, with the remaining projects categorized based upon the missing components.

The use of the specific framework is somewhat limiting (I feel that successful classroom experiences can be produced that do not meet each goal above), but I think the framework does a reasonable job of outlining the elements of a robust engineering experience. It also gives teachers a framework against which to measure their own lesson plans, and it provides a pathway for improvement. I will use the framework myself to re-evaluate the project assignments I give in my own classroom.

Once again, we see the focus on “learning from failure.” Failure is a somewhat peculiar way to characterize this phenomenon, as many of the projects likely met the goals the first time (so not really a failure), but students were then given an opportunity for improvement that led to a better overall design. But I have seen true “failure” in my classroom with design projects. I use a scaffolded approach in my freshman engineering course, starting with small design projects and leading up to a large, final design project. In the second of three projects, I actually assign points for how well their design functions. The project is to create a device that will extinguish a candle in exactly 20 seconds, and I take off points for each second outside of this range. I always feel a little uncomfortable with this grading scheme, but there are other aspects to their project grade (final report, etc.). However, the level of effort usually goes up when their grade depends on the outcome. I felt particularly bad for a recent group of students who, despite their good effort, failed to produce a design that met the criteria. I allowed them an extra week (with a minor points penalty) to come back with an improved design, and was pleasantly surprised when they came back with an excellent design that met the criteria exactly. I learned quite a bit from this exercise myself.
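As a rough illustration of this kind of outcome-based scoring (the point values below are invented for the example, not the ones from my actual rubric), the deduction works something like this:

```python
# Illustrative only: hypothetical point values for an outcome-based design score.
def performance_score(seconds_to_extinguish, target=20, max_points=10, penalty_per_second=1):
    """Deduct points for each second the device misses the 20-second target."""
    error = abs(seconds_to_extinguish - target)
    return max(max_points - penalty_per_second * error, 0)

print(performance_score(20))   # 10 -> hit the target exactly
print(performance_score(26))   # 4  -> six seconds over costs six points
print(performance_score(40))   # 0  -> the score never goes below zero
```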

The focus on integrating engineering through science education, rather than as yet another, separate topic, is a valuable approach in a number of ways.  In recent conversations I’ve had about STEM education in K-12, a local school principal indicated that she thought that STEM activities were great – but were most likely to be implemented as after-school activities rather than full-class activities.  I have to disagree with this method of incorporation.  After-school activities can provide quality education experiences to those who choose to (or have the means to) participate, but full class activities reach all students.   Engineering can be a vehicle to apply math and science skills, but can also be used in a non-quantitative way, incorporating social and political aspects of technology and problem-solving.  Helping teachers incorporate it in this way is a challenge but is certainly not impossible.

Overall, I believe this article highlights a successful model for professional development aimed at providing guidance for upper elementary teachers to include engineering in the curriculum, and it uses a well-defined qualitative approach to measure the success of implementation. However, the authors do not address what I believe is one of the most important aspects of this project. Were the teachers given this feedback on their own projects? The teachers are “designing” engineering opportunities for their students – where is their opportunity to learn from “failure,” improve their lesson design, and try it again? I’m sure much of this will happen on its own, as teachers are constantly improving their teaching through experience, making adjustments, and trying again the next year. But the teachers themselves could benefit from having some specific feedback using this framework. By publishing the research, it is shared with the research community, but I would like to see this aspect built in as a major component of the professional development. It would be quite interesting to follow up on these particular teachers through a longitudinal study, looking at improvements in the implementation of engineering design projects over time. This study also begs for follow-up with both students and teachers in terms of measuring student learning gains, student attitudes and motivational factors, teachers’ satisfaction in implementing the projects, and their perceptions of the successes or “failures” in the classroom.

Guzey, S., Tank, K., Wang, H.-H., Roehrig, G., & Moore, T. (2014). A high-quality professional development for teachers of grades 3-6 for implementing engineering into classrooms. School Science & Mathematics, 114(3), 139-149.

Article Review #5, Bob Heath

See, A., & Teetor, T. S. (2014). Effective e-training: Using a course management system and e-learning tools to train library employees. Journal of Access Services, 11(2), 66-90. http://dx.doi.org/10.1080/15367967.2014.896217

It seems I have found the perfect article for my purposes.

Online instruction and e-Learning tools are increasingly being used in the academic setting for faculty to deliver course content; however, most libraries have yet to apply the advantages offered by these tools to employee training. This case study from the University of Arizona Libraries (UAL) presents the challenges of sustaining traditional training approaches and the steps to develop an online training program, including identifying specific competencies needed to create effective online training, an approach to prioritizing where to start your program, and requirements for training platform selection. (See and Teetor 2014)

I suspect that most academic libraries struggle with similar versions of this problem: limited permanent staff, multiple locations, extensive hours of operation, and many part-time student employees with frequent turnover and schedules that do not overlap with supervisors. The work consists of customer service, technology support, providing directions, basic research assistance, and building security.

Perhaps many instances of independent invention have occurred in academic libraries to address these issues. Our solutions at Colby College Libraries include a variety of tactics. We meet face-to-face as a staff at the start of each semester at each location. New employees receive focused instruction from a permanent staff supervisor during their first shift. Librarians likewise meet with as many student employees as possible and provide an hour of instruction on answering research questions, followed up by individual make-up sessions to catch the rest. We have a selective interview process for student supervisors and train them more extensively. They assist with both training and administrative tasks in managing their respective staffs. We created a peer mentor program where new employees are partnered with returning employees and so gain the benefit of their experience. We also created a website to supplant our old training manual. We have enriched that site with instructional videos created by student employees. We refer to that site when we answer employee questions to impress upon them that many answers are available to them through that resource. Finally, once a semester we meet for lunch with all service desk employees from all locations, and while the focus is fun, we sneak some training or review into these sessions as well. We, like the authors of the article, also systematically evaluate the employees’ job knowledge and retrain as necessary. This yields good, but not great, results, and I am feeling increasing pressure to achieve great outcomes.

The authors first describe a new position: a specific employee dedicated to creating their online instruction. They then describe the selection process for an LMS and the content areas of the LMS they use: “Checklist, Content, Quizzes, Dropbox, Grades, Classlist, Discussions, and Syllabus.” Because these categories are facets of the particular LMS, I will not spend a lot of time summarizing the details of their curriculum. However, their discussion of creating online content does bear some study. They used the Desire2Learn LMS, but for content creation they describe three tools: Adobe Presenter, Articulate Storyline, and Panopto.

They evaluated the results of the new training on cost savings, test results, and observation of task performance.

  • “In terms of cost savings, online training will likely result in cutting F2F time in half instead of eliminating it completely.”
  • “Similarly, UAL employees who have used the online training have been just as successful in passing tests as their counterparts who received predominantly F2F instruction.”
  • “While there has not been an in-depth comparison of performance when trained F2F versus online, employees have proven just as capable and have completed this stage of training just as quickly, regardless of how they were trained.”

Again, these conclusions are conservative, as with most academic writing. However, to my mind as an Assistant Director whose business is the same business, there is plenty to go on here. I have shared this article with my permanent staff and my student supervisors. We will be discussing it on 10/10/2014 at our supervisors’ meeting.

The authors finally end with this conclusion: “While we have received feedback from trainees about their desire to have a greater degree of F2F interaction, overall the online program has proven to save time while achieving the same degree of effectiveness in preparing employees to work at service sites. We plan to address this need by adopting a flipped classroom approach to supplement online learning with F2F activities and workshops.”

Over the last 5-6 years, we at my workplace have approached and shied away from using the LMS for these purposes. I decided on my way into work this morning that I was done with the indecision. I met with our Instructional Designer today to review the objections that have been raised in the past and to see whether they still had any bearing on the matter – she convinced me that none are meaningful any longer. In the morning, I will schedule a training meeting for next week between my staff and this person. Moreover, we will move aggressively into online learning in support of improved employee performance. Another important conversation today was with a new colleague, an Assistant Director in an adjacent department. We agreed to revisit a past initiative to create a career path for student employees in our library. Several years ago we did this hard work and had good success with it. Alas, we lost track of it in our reorganization. I think these two projects go hand in hand.

There, that was the easy part.

Increased Conceptual Understanding Through Peer Instruction in Engineering Dynamics

Article Review 3 – Lori Sowa

Contemplating 21st Century skills and gaming theory through the videos and writings of John Seely Brown, I jotted down a number of “big ideas” to research: embracing change, learning to join, power of play, not having a defined endpoint, tinkering, demand-based learning.  I searched through a number of articles on the benefits of video games, passion, obsession, and even addiction – but kept returning to the idea of peer instruction, which I think is central in Brown’s theories. My dissertation research is forming around a number of faculty who are using a flipped classroom approach to STEM courses at the freshman and sophomore undergraduate level.  The main question becomes – how can we best structure the in-class activities to promote deep learning?  The goal is to have students learning from each other, but how can we structure the class to promote this?

In Teaching Engineering Dynamics by use of Peer Instruction Supported by an Audience Response System, Schmidt (2011) describes a study where he implements Mazur’s peer instruction (PI) method in two engineering dynamics courses at a University in Denmark.  A third course, taught using the same methods but without the PI discussion questions, served as a control.  The author (who was also the instructor for all three courses) used a number of exams (a pre-test of engineering knowledge, final exams, and the cohort’s mathematics exam scores) and a class survey at the end of the course to look at learning gains and students’ dispositions related to the teaching style.  The questions on the final examination were broken into two categories: traditional problem solving and conceptual understanding.  The author found that scores on the traditional problem solving portion did not vary significantly among the groups, but that the two classes that used PI scored better on the conceptual questions.

The study was overall well conducted, as the author had a reasonable control group and made an effort to tease out the level of preparedness of the students in the study. However, one variable that was not controlled for was the language in which the course was taught. The program of study was highly international, so two of the sections (the control and one of the experimental groups) were taught in English, while the third course was taught in Danish. The latter group scored better all around, which could potentially be due to the course being taught in their native tongue. In addition, I would have liked to have seen some more detailed questions on the student survey at the end of the course. For example, one of the questions read “Give an assessment of your own preparation for classes”. Since the author encouraged students to read ahead in the text, and their doing so would likely have influenced their performance in the PI activities, a more specific question such as “I prepared for class by reading the assigned sections,” rated on a Likert scale (always, sometimes, rarely, etc.), may have provided better data. One of the common issues with flipped classrooms is students’ lack of preparation before class, so it would be nice to quantify this (to the extent you can actually rely on such data).

In my mind the most impressive result of the study was the increase in the number of correct responses after PI discussions and before instructor intervention. Figure 1 from Schmidt’s article (p. 418) shows that the percentage of correct answers increased in almost all cases from the students’ initial response (the x-axis below, before PI) to their response after discussion with peers (y-axis, after PI), many times quite substantially.

[Figure 1 from Schmidt (2011, p. 418): percentage of correct answers before PI (x-axis) versus after PI (y-axis)]

(apologies – the figure is much more clear in the original text)

Another important aspect of student learning using PI is students’ awareness that they do, indeed, make mistakes and have conceptual misunderstandings. With regard to the students who engaged in PI rating their own understanding of the material lower relative to the control group (who had a higher level of confidence in their grasp of the subject matter), the author provides this explanation:

It is believed that the discrepancy between the students’ assessment of their own outcome and the examination score is related to the quality of the clicker method to expose misunderstandings among students.  By taking part in PI-teaching, the student faces the fact that he/she makes quite a lot of mistakes when interpreting new methods and ideas.  Thus, the student gets the impression that the knowledge gained is not as profound as the student receiving traditional lectures feels regarding his or her outcome: at a traditional lecture it is tempting for the student to be fully satisfied with all the lecturer’s nice explanations! (p. 421)

There is a wealth of research showing that making mistakes and experiencing failure are truly important in the learning process. Many times students are afraid of failure, and this can inhibit their ability to learn. Schmidt (2011) also states that “…the goal was a safe study environment where the student had no reason to fear giving a wrong answer… [in] this way, it is believed that the most ‘honest’ answers and the best measure of the students’ conceptual understanding as possible were obtained” (p. 416). In searching for an article to reference on the importance of not being afraid of failure, I came across a powerful TED talk video with a focus on peer instruction and learning from mistakes.

Once again, the idea of students (novices?) learning from other students proves beneficial to the overall learning process.  While this instructional method was used in a face-to-face scenario, the method could be adapted to an online medium.

Schmidt, B. (2011). Teaching engineering dynamics by use of peer instruction supported by an audience response system. European Journal of Engineering Education, 36(5), 413-423.

Article Review 3: 21st Century Competencies

Voogt, J., Erstad, O., Dede, C., & Mishra, P. (2013). Challenges to learning and schooling in the digital networked world of the 21st Century. Journal of Computer Assisted Learning, 29, 403-413. http://onlinelibrary.wiley.com/doi/10.1111/jcal.12029/full

To delve more into the issues raised in this week’s readings about how the 21st Century classroom needs to change to benefit today’s learners, I searched for an article that discusses the competencies that today’s curricula should be targeting. I found a recent article by Voogt, Erstad, Dede, & Mishra (2013) regarding “information and communication technologies” (ICT). The focus of the paper is this list of competencies: “collaboration, communication, digital literacy, citizenship, problem solving, critical thinking, creativity and productivity” (p. 404, italics original). The authors note that these competencies are not unique to the 21st Century but “take on new dimensions in 21st century virtual interaction” (p. 405).

Many of the tools we have available today can be used in the service of building multiple competencies. Voogt et al. (2013) state that “Web 2.0 technology enables users to produce and share content in new ways” and “Digital camera and different software tools make it easier for students to show their work and reflect on it.” When I read this, I made a connection to how things are changing in the assessment of learning, and not just learning itself. I would have loved to see a deeper discussion in the article of how we alter our evaluation tools to determine progress on these competencies. The authors note that our assessments must change along with our teaching methods, but give no suggestions for how. With the current climate of controversy surrounding testing in K-12, we really need more people working on defensible, sensible assessment tools that keep up with change.

The memory that spurred the connection was from attending a conference presentation in which an evaluator described how, instead of using a traditional pencil-and-paper survey to assess children’s change in knowledge during a gardening program, she incorporated digital imagery. She compared before-and-after results of photographs students took when asked, at the beginning and end of the program, to take pictures of what “gardening” meant to them and what a “garden” looks like. I was fascinated by this methodology. Sometimes our learners may not yet have the vocabulary to explain what they’re thinking, but they can show us if we give them the tools.

Voogt et al. (2013) also tout the utility of “augmented reality” as a learning tool because it affords teachers “the ability to create activities not possible in the real world, such as interacting with a (simulated) chemical spill in a busy public setting,” which in turn “builds skills such as collaborative inquiry” (p. 406). My mind was immediately taken to the sci-fi example of Star Trek cadets participating in the Kobayashi Maru. No matter how descriptively written, case studies in books can seem a bit stale or distant. When you can take that case study and introduce moving elements like video or 3D representations, you are likely capturing more attention. I do like the idea of asking students to imagine a scenario themselves. But if you are trying to teach a specific lesson, providing the visuals for them may actually control for a lot of noise and get students to focus on the problem-solving task.

The authors end with a set of recommendations, but they are mostly a reiteration of the importance of digital literacy and other competencies, without any suggestions for how to ensure their inclusion or success in today’s curriculum. I think Voogt et al. (2013) provide a nice literature review related to the need for and benefits of aiming for the competencies they present. But I already agreed these things were important before reading this article. Folks have been writing about these ideas for quite a while (our reading for this week, written 7 years earlier, mentions some of the same benefits of going beyond traditional lecture). I’m hoping to find other articles in the future that provide more of a road map for how to move us closer to these goals.

Online Collaborative Learning

As an elementary and middle school student, I was always put with “the boys” for group work and projects. I can honestly say that, except for maybe one time, I never worked with another girl or someone at my level. I remember one project where I was paired with a girl who had learning difficulties, but at least she did her part of the work. The boys I was forced to work with usually did not do their part, and I was there to cover up the fact that they did not care about learning. The teachers probably hoped I would rub off on them. This made me despise group work and projects. Eventually I became so fed up with the structure of my education that I decided to be homeschooled and take control of my education so that it would be fun again. I have since become a fan of group discussions, cooperative learning, and collaboration. This is what drew me to Ku, Tseng, and Akarasriworn’s (2013) article, Collaboration Factors, Teamwork Satisfaction, and Student Attitudes Toward Online Collaborative Learning. Ku, Tseng, and Akarasriworn (2013) use Vygotsky’s Zone of Proximal Development to demonstrate “…that a learner cannot achieve an understanding of a new idea or concept unless he/she acquires help or feedback from a teacher or a peer (Vygotsky, 1978). In Vygotsky’s view, peer interaction is an important way to facilitate individual cognitive growth and knowledge acquisition, and the peer collaboration can help learners in problem solving” (p. 922). Vygotsky’s social development and learning theory underscores that collaboration matters as a teaching and learning practice because learning is, at its core, a social activity.

The purpose of Ku, Tseng, and Akarasriworn’s (2013) study was “…to extend Tseng et al.’s (2009) prior research by collecting a much larger sample size to examine the degree of relationship between teamwork satisfaction and online collaboration factors. In addition, students’ attitudes toward online collaborative learning experiences were also investigated” (p. 923). Data were collected over three consecutive years for an online instructional design course. There were 197 participants, all graduate students, most majoring in educational technology or school library education; 70% were female and 30% male. Below are the research questions explored.

  • What are the factors that underlie online collaborative learning components as measured by the student attitude survey?
  • Is teamwork satisfaction related to the extracted online collaboration factors?
  • How much of the variance in teamwork satisfaction can be explained by the extracted online collaboration factors?
  • What are student attitudes toward working collaboratively in an online setting?

Collaboration is usually a highly beneficial teaching and learning practice, but it comes with some frustration. The online learning environment and the Internet in general offer many modes for collaboration. No matter what, communication is key to successful collaboration. The online instructional design class that Ku, Tseng, and Akarasriworn (2013) surveyed with a Likert-scale instrument was conducted through Blackboard, and it appears most of the communication between students and instructor was done through it too. In my own experience as a student who has used Blackboard, it can be a blessing and a curse. I would find it limiting as a student if Blackboard were the only communication tool I could use. But I understand why it makes the study more reliable: fewer variables are introduced when the class is contained within Blackboard. I wonder, though, if the students had been allowed to freely use other online collaboration tools, whether the other 40% would have liked online collaboration better. One student wrote:

I find working collaboratively online much more difficult than in real life. I believe that collaboration is preferable when I can meet face-to-face. I prefer to be given assignments and just get the work done on my own in online classes, because it is so much less cumbersome. Trying to communicate with all members in a timely fashion is extra work, and if you have a weak member of a team, you feel both angry and responsible, because it feels like you have to include that person (responsible) but if they do not do the work you feel angry that you have to work so hard to include them. (p. 927)

While this student found online collaboration difficult, other students found that it improved their communication skills, broadened their ideas and perspectives, and produced a final product that turned out better than it would have if it were created individually. In fact, 73% stated they learned more as a collaborative group than they would have individually (Ku, Tseng, & Akarasriworn, 2013). Ku, Tseng, and Akarasriworn (2013) found that the three factors that made the online collaborative learning environment successful were team dynamics, team acquaintance, and instructor support. These three factors all contributed to teamwork satisfaction.

I thought it was interesting that team acquaintance was one of the major factors in the success of and satisfaction with online collaboration. I have always dreaded the getting-to-know-you part of a face-to-face or online course, probably because I am an introvert. I never really considered how important this could be for the success of an online course. It is something I have been taking for granted. In order to collaborate effectively you need to have some kind of working relationship. It is ironic in a way, because I always do community-building activities with K-12 students and make sure they get to know me too. I guess it should not be any different for higher education.

One final aspect of the study that was interesting was the fact that 70% of the participants were female. Ideally the study would have had a 50/50 split. The results were very positive, and I wonder if that was due to the majority of the participants being women. I am curious which gender had more negative comments, or whether it was even. Due to this variable and the positive results associated with it, the study needs to be replicated, or a study with mostly men needs to be done for comparison. It also would have been nice to know the cultural makeup of the participants. Culture and gender have always had an impact on the learning environment, but I never considered that one gender or culture might be more drawn to the online learning environment than another. Are women more likely to engage in online learning and online collaboration? How does culture affect online collaboration? Does it work better when there is a mix of cultures or when the group is homogeneous?

References

Ku, H., Tseng, H., & Akarasriworn, C. (2013). Collaboration factors, teamwork satisfaction, and student attitudes toward online collaborative learning. Computers In Human Behavior, 29(3), 922-929. doi:10.1016/j.chb.2012.12.019

Effective Online Models of Discussion – AR4

I was interested in Dixon’s (2014) article The Three E’s of Online Discussion because of what we discussed in our latest synchronous meeting. Over the years, I have been a part of many online classes that used both synchronous and asynchronous discussion with varying degrees of success. Being a somewhat introverted person, I often feel more comfortable with asynchronous sessions because they free me from time constraints and give me the ability to think through a comment before posting it. I have also used both online synchronous and asynchronous discussions in my own classroom and was dissatisfied with the outcomes. In reading this article I was hoping to find an effective model for online discussion that would be applicable to my own secondary-level classroom.

The author’s purpose was to produce a model of classroom discussion that addresses both fully online and blended classrooms and that is “friendly and workable” for practicing teachers and instructors of grades 7-12 and early college courses. The “Three E’s of Online Discussion” are defined as “Experience, Engagement and Evaluation” (Dixon 2014). The initial step, “Experience,” is to create an online community. The author gives specific suggestions for building online communities, including giving students the opportunity to ask each other questions and converse, and having regular contact under clearly defined expectations of “netiquette.” Dixon (2014) proposes that students be given a “trial-run” period of discussion before they are graded. He also suggests that this stage be used as a pre-assessment of student knowledge to later guide scaffolding of the course material.

The next phase of the model is the “Engagement” phase, defined as the “introduction of new material in a form that allows students to absorb it, examine it, and answer questions about it” (Dixon 2014). This is similar to what must happen in classroom discussion, but in online discussion, once a statement is made, it can be read again, leaving a digital footprint. It is critical that students understand the consequences of making statements that will remain visible and connected to them long after the course has ended. The educator’s role in producing engaged discussion should involve giving students practice asking each other “open-ended questions, asking questions that focus on higher levels of cognition, and asking probing questions using the Socratic method” (Dixon 2014). These questions should give students the chance to develop critical thinking skills by analyzing and synthesizing new information and by presenting it to each other to support their views (Dixon 2014). At the culmination of a successful discussion, the educator should be able to step back and allow discussion to continue with only occasional prompts.

Finally, the Evaluation phase is used to measure students’ clarity and comfort with the process of online discussion. According to Dixon (2014), effective evaluation requires clear expectations in the syllabus, including the types of discussion required for each section, the minimum number of posts required per week or month, the percentage of the grade reflecting participation in discussion, and a rubric to guide grading.

This article should be considered an early attempt at producing a useful model in an area where little information is available to draw from. The author looked at two existing models created for traditional classroom discussion and one model for online discussion and tried to synthesize them into one generalized model for online and blended applications. The article gave helpful hints and ideas on creating course discussions, but in an attempt to keep the model general enough to be useful to a broad audience, it lost much of its strength. An obvious area of improvement would be a test of the model.

Overall, this model offers a practical and flexible set of guidelines to aid in the design of online discussions. While much of the model might be considered traditional teaching practice, I feel that it is especially important to explore the “Experience” step in the model. In an online environment, are students given adequate time and practice opportunities to build confidence and familiarity with their online community? This is an area where I believe most courses fall short. I think it is notable that this section of the model gives us information on background knowledge and can greatly improve the effectiveness of our scaffolding practices during the “Engagement” phase. The use of a generalized model in creating discussions can also give students a sense of familiarity that might shorten the needed period for “Experience” when the model is used throughout a course or applied across content areas and grades.

This article inspired me to try online discussion again with my own students. I feel that by allotting more time and practice for my students to build an online community, I would improve the outcomes in my own classroom. I think that more research into online discussion is needed at both the secondary level and in higher education.

Works Cited

Dixon, C. S. (2014). The Three E’s of Online Discussion. Quarterly Review of Distance Education, 15(1), 1–8.

 

Successful Hands-on Labs for Online Classes

I chose the article by Reuter (2009), Online Versus in the Classroom: Student Success in a Hands-On Lab Class, because of Owen’s question, “Can substantial lab-based learning experiences be designed for online courses?” As a high school Biology teacher, I have asked myself the same question. It is hard for me to envision the type of online lab that can produce the same results as a hands-on, field-based lab, such as a traditional water quality biomonitoring project. Is it possible to have students conduct meaningful lab activities on their own? Is there something lost when students are asked to independently perform lab activities? Reuter’s article has caused me to rethink my previous assumptions, and it indicates that it is indeed possible to create labs for online courses that give students the same, if not better, understanding of the material than their on-campus counterparts. The cost savings and flexibility that online labs could provide for students and educational institutions would be very appealing. Reuter asks if it is possible, particularly for general education science classes that target non-majors, to offer a “hands-on, lab-based course that provides the tactile experiences that enhance student understanding of scientific concepts?”

Reuter’s study was conducted using a course titled “Soils: Sustainable Ecosystems,” an entry-level undergraduate class at Oregon State University that satisfied either the physical or biological science general education requirement. The study used 97 students and took place over two terms across two years (2007-2008). Students were able to select the course format in which they wished to participate. Lab activities for online students required them to purchase a lab kit consisting of a “garden-style semiquantitative soil test kit” along with common household materials. Hands-on labs primarily covered basic soil typing, such as soil collection, coloring, texturing, bulk density, pH and nutrients, soil survey, and soil profile description (Reuter 2009). Detailed lab methods were given to students and accompanied by photographs and digital video. Students were required to complete each lab independently and photo-document their work. Assessments were the same for both groups, though course information was given in synchronous lecture format for students in the face-to-face classroom, while online students were given readings to provide the information. Reuter (2009) found “no difference in overall grade or lab assignment grades between course formats.” However, he did find significant differences between pre- and post-assessments that tested knowledge and skills from the lecture and lab content of the course. Online students outperformed on-campus students in both cases and showed a “42% grade improvement from pre- to post- assessment; on-campus students had a 21% improvement” (Reuter 2009).

I was surprised at the level of improvement found in the online class. It is clear from this study that learning from independent, asynchronous labs is possible and can produce results that are at a minimum comparable to those found in synchronous, on-campus lab activities. I have always thought that the collateral learning produced by a group lab activity would create deeper meaning; however, Reuter’s results suggest that this is not necessarily true. Perhaps, as Reuter (2009) suggests, group work may allow work completion but not necessarily retention of the knowledge needed to solve the problem independently when tested at a later time. I think that most educators would agree that retention is more important than mere completion.

Overall, I thought the study was well conducted, but it could use further research and clarification in a few areas. One possible flaw in the research was a difference in age between the online and on-campus populations (the average age for the online class was 34, versus 25 for the on-campus class). The author tested this possibility and found only a weak correlation between overall grade results and the age of the student (R² = 0.07) (Reuter 2009). Even so, the older population suggests that online students may have had greater background knowledge or experience in the area of soil science. The author also noted a difference in gender ratio (a higher female population in the online course) between the two groups, though he did not study this possible effect. Most interesting to me was the idea that differences between the class settings might be a result of individual differences in the types of students drawn to online courses versus those who prefer to take on-campus classes. Reuter (2009) suggests that students who are successful in online classes need a greater level of maturity, time management, and ability to self-motivate, and that these factors are greater measures of success than delivery mode. All three of these flaws could have been controlled for by randomly assigning students to a delivery method instead of allowing them to choose. While this study was not conclusive, it opened up several questions for future research. Would students at the secondary level find the same success in online hands-on labs? Would strictly online labs give similar results?
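To make the R² figure concrete: a value of 0.07 means age accounts for only about 7% of the variance in overall grades. The short sketch below shows how such an R² comes out of a simple least-squares fit; the data are invented for illustration, since Reuter’s raw scores are not reported in the article.

```python
# Illustration of interpreting R² = 0.07 (made-up data, not Reuter's).
import numpy as np

rng = np.random.default_rng(0)
age = rng.uniform(20, 45, size=97)                  # 97 students, as in the study
grade = 75 + 0.3 * age + rng.normal(0, 8, size=97)  # grades only weakly tied to age

# R² from a simple least-squares fit of grade on age
slope, intercept = np.polyfit(age, grade, 1)
predicted = slope * age + intercept
ss_res = np.sum((grade - predicted) ** 2)
ss_tot = np.sum((grade - grade.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"R² = {r_squared:.2f}")  # small values mean age explains little of the grade variance
```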

Reuter, R. (2009). Online Versus in the Classroom: Student Success in a Hands-On Lab Class. American Journal of Distance Education, 23(3), 151–162. doi:10.1080/08923640903080620