Author Archives: Alda

Article Review 4: Teaching Presentation Skills Online

Kenkel, C. S. (2011). Teaching presentation skills in online business communication courses. MERLOT Journal of Online Learning and Teaching, 7(3), 412-418. Retrieved from

This article discusses the importance of finding ways for students to practice presentation skills in courses that are primarily online. Many business courses drop the communication skills aspect when they are moved to the virtual realm, and as Kenkel (2011) notes, this is problematic: “Given the rapid growth in online programs, this deficiency could have serious consequences for business graduates.” Good interpersonal skills are something that employers and managers rate highly, but often students do not get much training beyond a single basic speech course, if one is even required.

The author proposes that to successfully integrate presentation skill-building into an online course, an instructor needs to provide a detailed rubric and direct students to use feedback from an initial presentation to improve subsequent ones. Part of that feedback should be peer assessment that can “verify and reinforce” faculty feedback. Rubrics, reflection, and peer editing exercises have long been recommended for face-to-face classes as well, so this is nothing new.

As a case study, results were presented from the development of online options for students in management courses at a large Midwestern campus. The online students had to post three videos of presentations for instructor and peer feedback. Students were given the option of using a private campus-hosted service or public video hosting sites like Facebook and YouTube. Students had to share feedback with each other on a discussion board. A rubric was given so that students could rate themselves and each other, and they received feedback from the instructor based on the same rubric. I was surprised that the students seemed willing to provide detailed feedback when it is all on the record and not anonymous. However, the author explained that only the instructor feedback counts toward the students’ grades. The peer feedback just gives them more of an idea of how an “audience” reacted to their speech, and often helps to reinforce the instructor’s grade (if several people mentioned lack of eye contact, for example).

What I really appreciated about this article is that it didn’t just stop at mentioning some potential challenges or areas for future research. The author actually includes a list of helpful tips/lessons learned. For example, some of the challenges faced in developing a course with online options for presentations included finding tech support, getting students access, and making expectations clear from the beginning. The author notes how common problems were addressed, such as describing the equipment needed in the syllabus with information on library checkout options; having students identify a friend who can assist them as a camera person; and sharing successful videos from previous semesters so students have a better idea of what their own recording should look like.

My major concern is with the student feedback example that the author uses to deem the class structure a success. From an evaluation: “I learned the same, if not more than I would have learned in a traditional classroom setting and I think it made it easier to do speeches not having 20 eyes staring at you while you prepared and gave your speech.” I have taught speech courses for seven years, and in my opinion, the student has not learned as much as (or more than) in an FTF class if they have not worked through anxiety about audience gaze. The different types of communication apprehension, and ways to address them, are something I spend the first full day (3 hours) discussing in my FTF classes. If you aren’t aware of public speaking’s status as something more feared than death, please watch this funny but informative video from CBS News:

So, how would I address the potential shortcoming of not practicing eye contact and controlling gaze-induced anxiety? I would do the same thing one of my instructors at UAF did when I took a fully online Business Training class. You record a video of yourself speaking, but there must be at least 5 other people in the room serving as the audience at the time. That also deters students from doing 50 takes until they get their speech perfect. You never know when your boss will call on you with short notice to talk about a project in front of new employees or investors. If you are truly trying to become a better presenter in the professional realm, you have to be prepared for a live audience.

Weekly Writing 5: Situational Factors for Interviewing Class

Following the categories given by Fink (2013), I will describe several key features of the interviewing class I’ll be teaching in Spring 2015, including its specific context; what the department and society expects in terms of interview preparation; the nature of the subject matter and how it is situated within the field of Communication; the relevant characteristics of the learners and instructor; and the “special pedagogical challenge” driving the course (pp. 76-77).

The class will be held at the main campus of UAF, a 4-year institution in Fairbanks, Alaska. An estimated 15 students will enroll in the class, which is approximately the number needed for the class to “make” its minimum. The class is a 200-level course offered by the Department of Communication and will be open to all levels of undergraduate students. Class meets two nights a week for 1.5 hours per session, and the course will be delivered primarily via live classroom instruction, with some video sessions possible if the instructor is traveling or otherwise unavailable in person.

The course will focus on employment interviews, teaching skills relevant to the roles of both interviewee AND interviewer. According to Meister (2012), writing in Forbes, the average person stays at a job only 4.4 years, and millennials are expected to stay for an average of less than 3 years. Thus, it is reasonable to assume that employment interviews are a recurring situation most people will face throughout their adult lives. Based on my experience, interviewees expect the interviewer to be prepared, attentive, and fair. Those of us who have been interviewers expect interviewees to be confident, engaged, and knowledgeable.

Because employment interviews can affect one’s income and career progression, it is also reasonable to assume that society places a high value on “good” interviewing skills. The UAF Dept. of Communication, realizing the need for instructional support of this skill-building, successfully argued to fund a class on interviewing. UAF’s curricular goals are reflected in the proposal document submitted for the course, in which the head of the Dept. of Communication wrote, “I do think that it may have a positive affect on programs such as Nursing, Marketing, Justice, or Business to mention a few. Basically any program that has an ‘interviewing’ component or necessity will benefit from the course.”

Though many popular press articles try to convince job-seekers that there are “right” answers to interview questions, and that employers just have to ask the “right” questions to build a good workforce, I would argue that the subject matter is actually divergent. What questions are best and what answers go over well are largely dependent on the localized communication situation. Interviewing is “primarily cognitive,” but there are some physical aspects related to nonverbal behavior that are also important. Eye contact, smiling, handshakes, and other physical behaviors are addressed as part of the skill-building. Interviewing as a field of study has seen various fads and new trends come and go; there are “competing paradigms” in which questions can be behavioral, historical, hypothetical, etc. Sometimes it’s just a conversation; other times you are asked to perform a task.

The course is an elective and will be offered in the evening, so there is a chance that both full-time traditional students and working adults will be attending. The prerequisite is the basic English course, so students should have college-level writing abilities. However, their experience with interviews may vary widely, as can their reasons for enrolling. Some may be using it to bolster their degrees in professional communication or public relations; others may be taking it as professional development. I will have to survey the class at the beginning of the semester to learn more about their history and motivations. Prior experience is not necessarily a plus, as some students may have to un-learn some bad habits or myths.

As the instructor, I am bringing some experience to the table, having taught the course (same book and format) for two semesters at Purdue University; this will be my first time teaching it at UAF. I feel that this is in my “zone of competence” because over the past two years I have presented on this subject for Staff Appreciation Day and for 4-H leaders and youth in order to keep up my skills. Overall, I’d say my “challenge” is that students often feel overwhelmed by or apprehensive about interviews because the power is imbalanced and the stakes can be high. By having students practice the process step-by-step and critically reflect on what’s happening, I can help students build skills that will help them advocate for themselves and feel like they have more agency in the process.

Fink, L. D. (2013). Creating significant learning experiences: An integrated approach to designing college courses. San Francisco, CA: Jossey-Bass.

Meister, J. (2012). Job hopping is the new ‘normal’ for millennials: Three ways to prevent a human resource nightmare. Forbes. Retrieved from

Article Review 3: 21st Century Competencies

Voogt, J., Erstad, O., Dede, C., & Mishra, P. (2013). Challenges to learning and schooling in the digital networked world of the 21st Century. Journal of Computer Assisted Learning, 29, 403-413.

To delve more into the issues raised in this week’s readings about how the 21st Century classroom needs to change to benefit today’s learners, I searched for an article that discusses the competencies that today’s curricula should be targeting. I found a recent article by Voogt, Erstad, Dede, and Mishra (2013) regarding “information and communication technologies” (ICT). The focus of the paper is this list of competencies: “collaboration, communication, digital literacy, citizenship, problem solving, critical thinking, creativity and productivity” (p. 404, italics original). The authors note that these competencies are not unique to the 21st Century but “take on new dimensions in 21st century virtual interaction” (p. 405).

Many of the tools we have available today can be used in the service of building multiple competencies. Voogt et al. (2013) state that “Web 2.0 technology enables users to produce and share content in new ways” and “Digital camera and different software tools make it easier for students to show their work and reflect on it.” When I read this, I made a connection to how things are changing in the assessment of learning, and not just learning itself. I would have loved to see a deeper discussion in the article of how we alter our evaluation tools to determine progress on these competencies. The authors note that our assessments must change along with our teaching methods, but give no suggestions for how. With the current climate of controversy surrounding testing in K-12, we really need more people working on defensible, sensible assessment tools that keep up with change.

The memory that spurred the connection was from attending a conference presentation in which an evaluator described how, instead of using a traditional pencil-and-paper survey to assess children’s change in knowledge during a gardening program, she incorporated digital imagery. She compared before-and-after results of photographs students took when asked at the beginning and end of the program to take pictures of what “gardening” meant to them and what a “garden” looks like. I was fascinated by this methodology. Sometimes our learners may not yet have the vocabulary to explain what they’re thinking, but they can show us if we give them the tools.

Voogt et al. (2013) also tout the utility of “augmented reality” as a learning tool because it affords teachers “the ability to create activities not possible in the real world, such as interacting with a (simulated) chemical spill in a busy public setting,” which in turn “builds skills such as collaborative inquiry” (p. 406). My mind was immediately taken to the sci-fi example of Star Trek cadets participating in the Kobayashi Maru. No matter how descriptively written, case studies in books can seem a bit stale or distant. When you can take that case study and introduce moving elements like video or 3D representations, you are likely capturing more attention. I do like the idea of asking students to imagine a scenario themselves. BUT if you are trying to teach a specific lesson, providing the visuals for students may actually control for a lot of noise and get them to focus on the problem-solving task.

The authors end with a set of recommendations, but they are mostly a reiteration of the importance of digital literacy and other competencies without any suggestions of how to ensure their inclusion or success in today’s curriculum. I think Voogt et al. (2013) provide a nice literature review related to the need for and benefits of aiming for the competencies they present. But I already agreed these things were important before this article. Folks have been writing about these ideas for quite a while (our reading for this week, written 7 years earlier, mentions some of the same benefits of going beyond traditional lecture). I’m hoping to find other articles in the future that provide more of a road map for how to move us closer to these goals.

Weekly Writing 4: Theory of Change

I watched the Eric Mazur video on interactive lecture in which he dropped pollen into an electrically charged circle and the pollen started to move clockwise. He then had students vote using classroom response technology. He had them vote on their own first, then discuss with their neighbors when the votes showed the crowd was split. They were actively trying to problem-solve and think through the possible answers. The peer-assisted problem solving, or “Peer Instruction,” is part of what he calls ConcepTests.

Similar to John Seely Brown’s example of architecture workers’ projects-in-progress being visible to all, the thinking of Mazur’s students is made visible in progress through the multiple class polls. The students work in groups and try to capitalize on diverse ways of thinking, explaining possible solutions to each other and choosing what they think fits best.

Both authors seem to be promoting the idea that learning practices must change in the 21st Century to accomplish the goals of “better” learning environments. Specifically, we must abandon old ideas of students being passive vessels to be filled with knowledge through straight lecturing. To be ready to thrive in a working world that is increasingly organized into diverse and digitally connected teams, it seems logical that students be given the opportunity to learn in a similar environment. As Brown (2006) notes in an illustration, this is “learning as enculturation into a practice” (p. 18).

In order for classroom models like this to work, though, we need to re-orient both students and teachers to what a classroom should look like and function like. As Brown (2006) also notes, teachers have to be trained “to produce activity-based learning rather than lecturing” (p. 19) which means changing the “epistemic frame” of courses such that teachers are mentors showing students “how to be” in a field (p. 20).

I am still pondering how this could be applied in my chosen field of interpersonal communication. Unlike “architect” or “physicist,” folks who study communication are not aiming for a specifically defined profession. Also, we all engage in communication every day whether or not we are studying “how to be” a communicator. It seems for my students it would be helping people to be better at certain life skills like empathy, explanation, and negotiation, regardless of setting or profession. It is easy for me to imagine mentoring students through a problem-solving situation in which there is a known “correct” answer: mathematicians agree on solutions for proofs; scientists agree on the laws of how electricity works. But in the social sciences, we are working with fuzzier concepts, like what is ethical and respectful in a given social situation; it is more of a case-study model.


Brown, J. (2006). New learning environments for the 21st century: Exploring the edge. Change, 38(5), 18-24.

Harvard Magazine [HM]. (2012, February 9). Eric Mazur shows interactive teaching [Video file]. Retrieved from

Mazur Group (2014). Home: Education. Retrieved from

Article Review #2: Cult of Personality

Daughenbaugh, R., Daughenbaugh, L., Surry, D., & Islam, M. (2002). Personality type and online versus in-class course satisfaction. Educause Quarterly, 3, 71-72.

Full paper available as published in the Proceedings of the 7th Annual Mid-South Instructional Technology Conference (Teaching, Learning, & Technology: The Connected Classroom) in Murfreesboro, TN, April 7-9, 2002.

UAF has an annual Academic Leadership Institute (ALI) in which faculty and staff meet with the Provost/Executive Vice Chancellor for Academic Affairs and discuss case studies and other topics related to leadership in higher education settings. The book that ALI participants read for the most recent meeting was Susan Cain’s Quiet: The Power of Introverts in a World That Can’t Stop Talking. The group spent some time discussing how new technologies may help or hinder the comfort level of introverts in an academic setting. Personally, I love using e-mail and instant messaging, but hate being “live” in a video chat. I purposely include discussion board posts as assignments in my FTF classes under the assumption that quieter students may find typing a more palatable way to participate. So, I decided to see what some research says about personality and online versus offline learning.

Daughenbaugh et al. (2002) utilized the Keirsey Temperament Sorter, which is a 70-item scale they feel is comparable to Myers-Briggs and rates extroversion/introversion as one of four sets of individual differences. The second measure was a course satisfaction survey. Both instruments were filled out on the Web by a total of 146 students taking “introductory computer courses” from the same department at the same southern university. About half of the participants were undergrads and half were grads. Most (78%) were female and most (81.5%) were taking an FTF course. The asymmetries in course type limit the findings. It would have been nice to see more even Ns for the FTF versus online students, since that is one of the major comparison points. The authors tested for gender effects and found none.

The break-out of the two types was roughly 56% extrovert and 34% introvert (9% uncategorized; the reported percentages did not total 100%). This is consistent with projections of the proportion of introverts in the general population. The general hypothesis was that introverted students would exhibit a higher preference for online courses compared to extroverted students. However, the findings did not bear that out. The authors summarize, “The extroverts liked the involvement of the chat rooms, threaded discussion, and e-mail correspondences of the online courses” while “introverts, by contrast, had little participation in chatting or threaded discussions, though they did participate in e-mail more than any of the other participatory activities.”

The differences seemed to be driven not just by extroversion/introversion but those traits in combination with scores on intuition/sensing and judging/perception. The authors also compared students taking a FTF class with students taking an online class. Here were additional findings (p. 72):

  • We found that the intuitive, rather than the sensitive, personalities preferred the online course environment to more traditional, in-class situations.
  • The perception group expressed stronger preferences for the amount of student interaction than the judging group.
  • We found that in-class students expressed much stronger satisfaction with the in-class environment than did students who were in the online courses.

There are 16 possible result combinations for the personality instrument, similar to Myers-Briggs (ESTJ, INTP, etc.). For the sake of brevity, I won’t further analyze this section.

The authors end by recommending that more research be done, that teachers pay attention to different learning styles that may be related to personality differences, and that classes incorporate “means to increase student interaction in online courses” (p. 72). Since the authors found that extroverted students really enjoyed and were active in the online environment, the authors’ conclusions seem to cater to them, with suggestions of group projects, face time, and even a “students-only” discussion board. As an introvert, all of those suggestions make me cringe. I think the authors are overlooking a huge question that needs to be answered: WHY weren’t the introverted students more engaged, given the supposedly less face-threatening environment of online discussion boards? WHY did they enjoy e-mail but not chats? If the current set-up is leaving them less satisfied, why on earth would you further alienate introverts by adding extrovert-slanted activities like group work?

I would also love to see this replicated for participants pursuing different subject matter. I found a discrepancy between the summary article and the original conference paper: the former claims these were students in introductory computer skills courses, while the conference paper says they were students in various different courses (an acknowledged limitation of the study). Replicating this study with a group of undergraduate students all taking the basic speech course, some online and some FTF, might be a tighter design (at UAF the basic speech course curriculum is standardized).

Weekly Writing 3

I was excited to see a meta-analysis as the assigned reading this week. I was part of a research team that worked on a series of meta-analyses at Purdue University, and I gained a lot of respect for what the process has to offer as far as helping us get closer to the “true” effect in a population. The public is often confused by contradictory studies: how can one researcher find an effect, but then another researcher finds no effect? Is coffee good for us or bad for us? Aack! Well, when you draw a marble out of a bag of mixed colors, even if there are 20 red and 2 blue, you may get a blue one on your first try. So relying on any single “result” can be misleading. A well-done meta-analysis can help us be much more confident by looking at many samples from the marble bag to get a better idea of what is truly in there.
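The marble analogy can be made concrete with a quick sketch (the numbers are just the hypothetical 20-red/2-blue bag from the analogy, not data from any study): a single draw can easily mislead, but pooling many draws converges on the true proportion, much as a meta-analysis pools many samples to approach the “true” effect.

```python
import random

# Hypothetical bag from the analogy: 20 red marbles, 2 blue.
bag = ["red"] * 20 + ["blue"] * 2
true_prop = 2 / 22  # the "true" proportion of blue, about 9%

random.seed(42)  # fixed seed just for reproducibility

# A single "study" is like one draw -- it can land on the rare outcome.
single_draw = random.choice(bag)

# Pooling many draws recovers the underlying proportion.
draws = [random.choice(bag) for _ in range(10_000)]
pooled_prop = draws.count("blue") / len(draws)

print(single_draw)
print(round(pooled_prop, 3))  # close to true_prop
```

Any one draw tells you almost nothing about the bag; the pooled estimate is what a well-done synthesis is after.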

I don’t have a problem with the number of studies overall in the analysis. I would be thrilled to have at least 45. In the meta-analysis I worked on, we only had 33, but we still found some meaningful effects to talk about. I am given pause by the idea of making assertions about a sub-group of the studies, though. We’re talking about trying to look at effects across only five studies to derive generalizations about K-12 education. If the five studies were extremely similar in design and setting, I would be less hesitant. But given the diverse description of the five, including one that was in Taiwan instead of the U.S., that’s just not enough for me as a researcher to be comfortable generalizing across.

A great resource if you are interested in learning how to do a meta-analysis is Practical Meta-Analysis by Lipsey & Wilson (2001). In their introduction, they mention the wide range of number of studies that can constitute a meta-analysis: “A meta-analysis conducted by one of the authors of this volume, for instance, resulted in a database of more than 150 items of information for each of nearly 500 studies (Lipsey, 1992). We hasten to add, however, that meta-analysis does not require large numbers of studies and, in some circumstances, can be usefully applied to as few as two or three study findings” (p. 7).

So, I must disagree with folks who feel that 45 studies is not enough, because overall in this case I think 45 is reasonable given the inclusion criteria. But when we get down to a comparison of as few as three or five, I would want to see that those studies are all from the same population and had a comparable number of participants. For example, you may have three different studies that each took a sample of roughly 100 students from the same incoming freshman class at a university. Due to error and other reasons, each study finds a slightly different effect. I think that averaging across the three, even though it’s “only” three studies, would be useful in that case.
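As a minimal sketch of what “averaging across the three” means in practice (all numbers here are made up for illustration, not from the report under review), the standard fixed-effect approach weights each study’s effect size by the inverse of its variance, so larger, more precise studies count more:

```python
def fixed_effect_mean(effects, variances):
    """Inverse-variance weighted average of study effect sizes:
    the basic fixed-effect meta-analytic estimate."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)  # variance of the pooled estimate
    return pooled, pooled_var

# Three hypothetical studies of ~100 students each from the same
# freshman class, finding slightly different standardized effects
# due to sampling error:
effects = [0.25, 0.32, 0.21]
variances = [0.04, 0.04, 0.04]  # similar Ns, so similar variances

mean, var = fixed_effect_mean(effects, variances)
print(round(mean, 3))  # with equal weights, just the plain average
print(round(var, 4))   # smaller than any single study's variance
```

With comparable samples the pooled estimate is essentially the plain average, and its variance shrinks below that of any single study, which is the statistical payoff of combining even “only” three results.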


Lipsey, M. W., & Wilson, D. B. (2001). Practical Meta-analysis. Thousand Oaks, CA: Sage.

U.S. Department of Education, Office of Planning, Evaluation, and Policy Development (2010). Evaluation of evidence-based practices in online learning: a meta-analysis and review of online learning studies. Retrieved from

Weekly Writing 2: Finding nuggets

The memorable learning experience that came to mind for me was not a specific activity or assignment. Rather, it was a comment during a lecture that managed to transform how I listened, took notes, and shared what I learned from then on. During an undergraduate communication class, the instructor was discussing the fact that we are bombarded with a lot of information every day. Over the long term, we remember only a fraction of what we originally hear in class. The instructor said his goal for us, and what he hoped we would come away with, was various “nuggets” of formative information that would stick with us long after we had forgotten the rest of the lecture that surrounded them.

I see reflections of each of the theories in this “lesson” that has stuck with me. There is cognitivism in thinking of nuggets as pieces of information collected for long-term storage. There is constructivism in thinking of nuggets as the type of information learners actively seek out, determine to be personally useful, and connect to prior knowledge. There is even an echo of behaviorism in the fact that my instructor hoped that by sharing the “fact” of nuggets, students would change how they listen to lectures.

This idea of nuggets, meant to describe other, more important lecture material we should be remembering, became a nugget in itself. Depending on your personal worldview, you may want to consider nuggets pieces of Truth, or of truths, or simply tools in a loosely organized toolbox. I personally found the idea of “collecting nuggets” to be motivational. I think we all have these collections in our brain in some fashion. We’ve got a pile of pieces of advice we plan to pass on to our children or others we care about. We have a pile of important facts about topic X or topic Y that we consider fundamental knowledge of that subject.

Nuggets could also be a way of preserving a sense of cohesion in the face of a fast-paced, challenging flow of online information. Siemens (2005) relates connectivism to insects trying to navigate across multiple pheromone paths. How do you decide what to pay attention to and what to discard, so you can claim somewhere in there you have staked out “your” path? The quote that stood out to me the most from the article on connectivism was that “Self-organization on a personal level is a micro-process of the larger self-organizing knowledge constructs created within corporate or institutional environments. The capacity to form connections between sources of information, and thereby create useful information patterns, is required to learn in our knowledge economy.”

For me, staying an active listener by giving myself the goal of gathering take-aways or “nuggets” to remember during individual learning experiences helps me eventually see patterns across the entire collection of experiences. What’s fascinating is, as Siemens (2005) points out, that computers can do a lot of the remembering and organizing for us now. If we take a cognitivist view of nuggets and see that as just another word for some organizing function of the brain, we can also see how technological tools like Evernote can organize those nuggets for us. Why remember an amazing cleaning tip or trick when my Pinterest board can remember it for me? Well, I don’t know about you, but I’ve got several files, online and offline, full of great ideas that I never went back and followed up on…


Siemens, G. (2005). Connectivism: A learning theory for the digital age. International Journal of Instructional Technology and Distance Learning, 2.

Article Review 1: Constructivism and Adult Learners

Huang, H. (2002). Toward constructivism for adult learners in online learning environments. British Journal of Educational Technology, 33, 27-37.

I’m particularly interested in the needs of adult learners, since that is the main audience for our non-credit outreach education at the Cooperative Extension Service. Thus, for this first article review, I kept an eye out for journal articles that related to learning theories, online learning, and adult learners. The article that caught my eye is one that looks at intersections between adult learning theory and constructivism. Since it was written in 2002 and the online learning landscape has changed much in the last decade, it should give us plenty to discuss as far as what is different and whether or not that is for the better.

To briefly summarize the article, Huang (2002) begins with a review of the differences of an online learning environment, including the capability for asynchronous communication and the benefit this has for adult learners who are often juggling multiple demands on their time. The author’s purpose in writing this paper was to “explore the impact of a constructivist approach in online learning for adult learners” based on the premise that the teacher’s philosophical approach to designing an online course will greatly impact how the students in that course interact (p. 28).

Huang (2002) summarizes the tenets of Constructivism by noting that multiple scholars “proposed that learners could learn actively and construct new knowledge based on their prior knowledge,” where the learner takes on the responsibility of the process and the teacher is more of a guide, encouraging growth (pp. 28-29). The learner is making connections to other parts of their lives, and is doing so within a social context as well. That social context can be tricky; although learning with others helps us reflect and engage more deeply, it also sometimes forces us to learn in a way that doesn’t come naturally. As the author notes, “collaborative learning is in conflict with individual differences” (p. 32). We have to do some perspective-taking in order to meaningfully comment on our classmates’ online discussion posts. Introverted students cannot merely lurk and absorb; the instructor often requires that they engage through a certain number of written responses, for example.

I appreciated Huang’s summary, but felt that there could be more reflection on how the online learning environment differs from the learning environment that would have been the paradigm for Dewey, Piaget, Vygotsky, and others. Later on, Huang (2002) does note that “Through Web mechanisms, the learner can search actively and discover rich resource to solve problems or construct his or her own knowledge” (p. 30). This definitely speaks to the learner-directed aspect of Constructivism, but doesn’t touch on the social aspect and how that can affect how and what we “know.” The mechanisms for collaboration in the early days of Constructivism were much more limited; mailing letters or attending conventions took much longer. One perhaps read books by others but rarely had the chance to speak with the author. Now, you can tweet the author and join in a discussion on his or her fan page, for example.

Huang (2002) goes on to highlight some tenets of adult learning theory, including the importance of helping adult learners see why the knowledge is important and how it can help them in their everyday life. The instructor must pay attention to what motivates adult learners, looking at the practical applications of the material and encouraging adult learners to use the new information to solve problems (p. 29). The process of reflection is much the same today as it was in the past; although we may be typing journal entries instead of penning them, the objective is identical. Think critically about what you’ve read, what you think it means, and how you can apply it in the future.

Another area I would have liked to see addressed is generational differences. When we’re talking about “adult” learners, we’re talking about folks beyond the “traditional” undergraduate age of teens to 20s, a group that can extend all the way to centenarians. Great ranges in age, and ranges in economic opportunity, mean that adult learners vary in their web literacy. I work with a volunteer group that has struggled to move from a paper-based newsletter to an online newsletter. Just the other day, I had a question from a member who was having a problem reaching the group’s web page. The member had tried to type an e-mail address into the URL bar. I had to explain the difference between an e-mail address and a web page address.

In sum, Huang (2002) touches on some interesting intersections between Constructivism and adult learning, and how the online environment can facilitate the self-directed, practical learning experiences that nontraditional students may demand. However, a decade later we are still working on how to provide a meaningful social experience in online venues, especially when not all adult learners start the class on even footing when it comes to web literacy and technological comfort.


Novices and Experts Navigating the Evaluation Learning Space

Benander (2009) compares novices and experts within the dynamic of student-teacher. This is a highly relatable paradigm since we’ve all been learners at some point. The difference in experience between novice and expert can also be observed in groups of workers with varying years of experience. I think there are parallels in the business world where veteran coworkers become mentors or “teachers” of the new workers.

Two years ago, I started pursuing a career as an “evaluator” and have been gaining experience through training, classes, and collaboration with agencies. The article by Cox (2004) about learning communities brought to mind the fact that I am part of a Facebook group for “novices” called Graduate Student and New Evaluator Topical Interest Group (GSNE TIG). I also have had many opportunities to meet with “expert” evaluators during the annual conference of the American Evaluation Association. Having an online community definitely helps combat the isolation that Cox (2004) mentions, especially for folks working in Alaska who may not have as many opportunities to network face-to-face with experts.

Often the teacher/learner dynamic in a conference setting is evident when novice evaluators attend “expert panels” to hear from folks who have been doing evaluation for decades. I’m sure those of you in other professional pursuits can relate to the idea of going to a conference and hoping to learn from and meet the “rock stars” of your field. Beyond what we can read in the guidebooks and manuals the rock stars have written, we can learn an immense amount from expert stories and tales of caution and success.

One of the reasons we gain so much from listening to experts is because they truly do have a different way of thinking and knowing the information of our field. As Benander (2009) notes, “novices and experts have different strategies for negotiating problems” and “experts have a different orientation…to learning about their subject matter” (p. 37). Listening to the advice and stories of experts helps novices grow in their abilities. But when you’re an expert, some things have become automatic or second-nature and you may forget how difficult it was at first for you to learn that skill. Thus, the author’s advice for experts to reflect on “what it’s like” for novices is definitely something to take to heart.

Benander (2009) also describes how “experiential learning is not just for students but can be a valuable tool to reflect on teaching and learning” because it helps you see the emotional side of learning and the differences in people’s learning styles. This reminds me of when I participated in an experiential evaluation class, where we had to choose a community organization and help them complete an evaluation of some aspect of their service. It was very beneficial to go through that experience and be both a learner (getting to know the organization, its history and people) and offer my expertise as to how best to construct surveys to gather the data they were looking for. I tried to constantly keep in mind the community partner’s perspective; while my focus might be “getting good data,” they are also concerned with “how is this going to be used” and “how might this change us.”

The experiential course also gave me some perspective outside of my everyday work. I consult with faculty at the Cooperative Extension Service on how to evaluate their outreach programming. Many new agents have not surveyed their workshop participants for changes in knowledge and behavior before. I was able to reflect on what I learned from working with another campus organization and carry over some of that knowledge to help solve problems for Cooperative Extension.

The experiential course I took part in was also a two-semester course sequence with a “cohort” of graduate students. Cox (2004) notes that “the community formed by a student cohort plays a key role in achieving better student learning outcomes for students in SLCs…” (p. 7). I definitely did feel a sense of community in that class, which surprised me because I was not part of the overall program offering the class (community psychology) and was merely trying to expand my professional knowledge. But spending two semesters in a row with the same fellow students helped us feel like a cohesive group, and paired projects helped us get to know each other and share ideas more freely.

On the teaching side, Benander (2009) also recommends that teachers “use the student view of the electronic interface” of whatever content management system they’ve chosen (p. 39). As an evaluator building surveys, it is definitely critical to preview the survey and look at it just as the public would. It is even better if you have the time to try the electronic view from several different browsers, including a smartphone, given today’s tech-savvy and on-the-go audiences.

Trying to be cutting-edge isn’t always a boon, though. I wanted to make a survey less boring and more “interactive,” so I threw in a couple of questions that were not multiple choice. One major detriment to that dataset was that many users found the ranking question confusing, because it asked them to drag and drop answer choices to reorder them. Many commented that they didn’t find it intuitive and would rather have had a straightforward set of boxes where they typed in their rankings. That’s the sort of problem that perspective-taking might have headed off, but at least through reflection I am learning from it now.

In closing, the authors mentioned here make compelling arguments for the value of experiential learning and learning communities for BOTH teachers and students. I believe this can also be applied to novices and experts in other similar dynamics, like new and veteran evaluators. Whether it’s in a formal college classroom or at an annual conference, we can all benefit from hands-on practice, focused reflection, and continued contact with others who share our learning goals.


Benander, R. (2009). Experiential learning in the scholarship of teaching and learning. Journal of the Scholarship of Teaching and Learning, 9(2), 36-41.

Cox, M. D. (2004). Introduction to faculty learning communities. New Directions For Teaching & Learning, 2004(97), 5-23.

GSNE TIG (2014). In Facebook [Groups]. Retrieved Sept. 19, 2014, from