Unit 2 Week 1: Article Critique
September 20, 2014
The article I selected for this assignment focuses on the effectiveness of assessment in an online learning course. The course was developed and implemented by the Open University, Milton Keynes, UK, and focused on web site development and implementation. The assessment considered the content and design of the final project web site; the analysis did not include a test-based option. At the time of the study, expectations of web-based courses specifically included the use of web-based technologies.
Weller indicated that "Web-based courses differ from traditional face-to-face lectures and print-based distance learning courses, in the manner in which the technologies influence the pedagogy, content, interactions and overall student experience. Bates (1995) states that there is a tendency with new technologies to attempt to mimic the traditional classroom face-to-face method of teaching, or to add new technologies onto an existing model. This is particularly the case with the assessment component of a course. If one accepts that assessment must be designed to react to the course pedagogy, aims and objectives, then the changes wrought in these by the shift to a web-based environment should necessarily be reflected in the assessment" (Weller, 2010, p. 110). This perspective is a well-discussed issue relevant to online course development as compared with the development of face-to-face courses. Careful consideration of the nuances of online learning is encouraged. Such nuances include the intentional measurement of learning objectives and the connection of new knowledge to students' existing knowledge.
The article cautions course developers to follow university procedures carefully for authentic and measurable assessment, in line with policy and practice for all courses regardless of modality.
The course enrollment totaled 850 students, many of whom were late returners to education or first-time university attendees. The course was developed initially as a pilot of sorts, to work out the bugs for future course offerings. Several adjustments were made to later versions of the course based on the findings of the assessment during the first iteration.
Findings included the following:
Be careful not to weigh the group assessment itself, but rather individual performance within the group process. For this particular course Weller states, "Students often feel they are 'punished' if the group does not work well, or others do not contribute. By assessing the analysis of group-work students could still score highly even if their group did not work cohesively." The philosophy for this course included "a strong group-work emphasis, particularly at the start of the course. This embodied the underlying course philosophy that the Internet is about two-way communication and not just a delivery mechanism" (Weller, 2010, p. 111).
Ensure that the technology used works for all students. If we are measuring students' use of technology, they must have the proper software and hardware to meet the course assignment expectations. Some of the students in the study were unable to turn in completed projects due to problems with HTML editors. Experience with the technology is a prerequisite for taking online courses. While this is a typical requirement for online courses, how can instructors be sure that students are competent before the course begins? Usually, competency of this nature is not assessed until the first part of the course, when students find they have trouble with site navigation or connectivity, as is sometimes the case in rural areas, especially in Alaska.
Plagiarism is a potential issue in all courses, and even more so with online learning. Weller indicates, "An issue of great concern to many with regards to web-based assessment is that of plagiarism" (Weller, 2010, p. 115). Weller goes on to emphasize a stair-step approach to developing summative assessments, asking students to turn in parts of the project in draft form for analysis prior to completion. This approach was implemented after the initial study and substantially reduced the possibility of plagiarism.
Qualitative data from the students suggest that their experience with this type of assessment was positive overall. However, no data are presented that compare it to traditional testing. It is my conclusion that the data are not reliable and valid enough to make a determination between the two differing types of assessment. I would be curious as to how both content knowledge and application of concepts were assessed. Does a student-developed web site assess both subject knowledge and application? How does the faculty determine what students know? Most would agree that web site development is a fair way to assess application of concepts, but I found that the article did not include commentary on both.
I found the article useful as a reminder that it is important to consider fairness in assessment application, fairness in expectations for completion of assignments, fairness in student work ethics, and fairness in measuring both application and course knowledge. I have the benefit of reflecting on the outcomes and recommendations of this article, which will directly affect my work with online courses this year. Many of our assessments are not test-based. I intend to share the findings of this article during our next faculty self-assessment process leading us toward accreditation.
Weller, M. (2010). Assessment issues on a web-based course. Assessment & Evaluation in Higher Education, 27(2), 109-116.