Thursday, January 20, 2011

Week 2: Workshop Considerations and Self-paced Tutorials

Application of the "ADDIE" Model 
The ADDIE (assess, design, develop, implement, and evaluate) model for workshop planning, as laid out by Veldof, was particularly identifiable in the Yelinek et al. article, in which a tutorial was created to help students navigate the school's online curriculum.
 - Assess: Collect background information regarding the needs of the client (the school implementing the tutorial), the needs of the students who will use the tutorial, and what type of tutorial would be best for the given situation
 - Design: Design the tutorial to best address client and student needs, while considering Gagne's nine-step model of instruction
 - Develop: Create the tutorial following the design, using the chosen software
 - Implement: Allow students to use the tutorial
 - Evaluate: Solicit student feedback

The Johnston article about an information literacy tutorial followed a similar pattern: "assessment" was covered in the literature review, along with an explanation of what "graduate attributes" are and why they are important; "design," "development," and "implementation" were explained through the project outline and method section; and "evaluation" consisted largely of the results and discussion sections.

In both articles, I was somewhat disappointed that the "evaluation" focused largely on the students' perceptions of the tools themselves, rather than how well the tutorials actually taught the students what they needed to learn.  While favorable student feedback is important - if users do not like a tool, they are less likely to use it - I interpreted the "evaluation" phase of the ADDIE model to also include consideration of actual learning outcomes.  Since the authors' purpose in these articles was not specifically to demonstrate the ADDIE model, I can understand why this part of the evaluation was underemphasized.  However, since both articles were concerned with the overall effectiveness of the tutorials, I still would have expected more emphasis on what the students actually learned, versus how well they liked the tutorials.

Also, while improving and evaluating students' feelings of self-efficacy is important (e.g., whether students feel they improved their own understanding of information literacy), equally important is giving them feedback regarding their actual performance.  Yelinek et al. at least acknowledge that providing feedback and assessing performance are important parts of instruction, but neither they nor Johnston explicitly describe whether their tutorials provide this feedback.  Humans have a tendency to exaggerate our own skills and attributes, so I think it is important to provide learners with unbiased feedback regarding how well they are actually performing on tasks and what they can improve on in the future.

When Can Online Tutorials Be Effective?
According to Yelinek et al., teaching "procedural" skills (such as using software) can easily be accomplished through online tutorials, because it involves merely explaining the necessary vocabulary and then providing step-by-step instructions to complete a task.  To this, we may add some library-specific skills, such as how to find a database or how to place an interlibrary loan request - procedural skills which I notice frequently appear in virtual reference interactions and therefore could be useful to online users.  More broadly, as noted by Johnston, online tutorials may also be a good alternative for teaching skills which cannot be covered in the classroom due to time restrictions, or when students are learning from a distance.

Online tutorials do not seem to support more abstract skills, such as learning to construct an effective search or other general research skills.  Since these skills rarely have one right answer, I imagine they are more difficult to turn into purely step-by-step directions.  These types of skills require more time for development and refinement, as well as more human involvement in feedback and evaluation.  For example, it's one thing to have a tutorial explaining Boolean logic, but it's another to teach someone how to effectively apply it in their everyday searching.  Also, tasks which take a user away from the computer are likely difficult to incorporate into an online tutorial.  A tutorial on navigating the library stacks would be of little help to a student who must get up and leave the computer in order to find their book.

Unless, of course, someone develops a mobile app to address this problem... in which case, I know several students would love to find their way out of the Hatcher stacks!

3 comments:

  1. I agree with your point about the "evaluation" step in ADDIE, Kim. There seems to be little attempt to objectively evaluate the effectiveness of these tools. I would have liked to see the Yelinek assessment compare the results of the students who had used the tutorial to the results of students who didn't use it. The fact that all students who took the tutorial answered 40% of the assessment questions correctly was not impressive, especially with nothing to compare that to.

  2. I was really confused about their assessment, because they say, "all students answered 40 percent of the questions correctly; all but one of the students answered 25 percent of the questions correctly..." I am still scratching my head over this, and wish they had put a little more effort into explaining these results. (But the authors are students, so I have a hard time being too critical...)

  3. You bring up a good point: isn't evaluation of how well the participants understood and learned an important part of a workshop? Makes me wonder what the authors consider to be effective--turnout? Level of participation? Ease of teaching the workshop? Hmm...

    --ReadWriteLib
