Sunday, November 18, 2012

Section 3: Evaluating, Implementing and Managing Instructional Programs and Projects



Eisner’s Connoisseurship Model of Evaluation
When considering which two methods of evaluation to investigate, I reflected on my teaching experience, first as an art teacher and then as a librarian. The first evaluation process I investigated was created by Elliot Eisner (1985) in what he termed “educational connoisseurship.” Eisner’s expertise-oriented evaluation was influenced by his background as an artist and is grounded in the contextual paradigm. Eisner emphasizes the role of the evaluator as judge and relies heavily on qualitative data, as opposed to methods such as CIPP that favor quantitative data, accountability and control. When I taught art, it was often difficult to translate art success to administrators who defined achievement in terms of figures and percentages. Eisner’s method would be appropriate for evaluating visual art programs, in which evidence of achievement requires an evaluator with an “enlightened eye.” One lesson his method could evaluate is an introduction to conceptual art, such as students’ ability to illustrate abstract concepts (diversity, beauty, peace, etc.). The evaluator would need a level of art expertise to determine whether or not the lesson was a success; therefore, an art teacher from another school or campus should fill the role of evaluator-judge. Data collection would follow a naturalistic/qualitative approach in which the evaluator uses student critiques, interviews and the art itself as evidence of learning outcomes.
 
CIAO! Framework for Technology Evaluation
This framework was created by Scanlon et al. (2000) to evaluate educational and communication technologies for learning. It falls in the contextual category, in which learning depends on the tools available, and it seeks to evaluate learning in the context of the lesson rather than the educational technology alone. Scanlon et al. drew on several earlier methods of evaluation in creating CIAO!, including Parlett and Hamilton’s (1972) illuminative evaluation, because that framework did not separate the instructional innovation from the learning and focused on processes instead of outcomes (goals and objectives). CIAO! also uses mixed methods, such as questionnaires, observations and tests, to assess whether or not technology has an impact on learning.

CIAO! stands for context, interactions and outcomes. What distinguishes this method from others is that it emphasizes changes in attitudes and perceptions as a result of the technology as well as the learning outcomes. Context refers to the technology’s purpose and how it fits into the lesson’s objectives. Interactions focus on the students’ interactions with each other and with the technology. Outcomes include the students’ perceptions and attitudes, to describe more fully the impact the technology had on learning. This method would be appropriate for evaluating any lesson that integrates technology such as blogs, social media and wikis. Recently, my school received a grant to purchase class sets of iPods, and I am responsible for their maintenance and distribution. This method of assessment would be valuable for evaluating their effectiveness in classrooms, and the outcomes of the evaluation could provide data for future technology grants.

From the reading this week and from researching other methods of evaluation, I have concluded that programs should not be evaluated by learning outcomes and satisfaction alone. Many other questions need to be addressed when evaluating a program, especially in education, because the stakeholders who consider the results of the evaluation need to see the bigger picture of a program’s effectiveness. Even if a program did not deliver the end results it was intended for, the evaluation can answer questions that inform future decisions about other programs. The following questions would be beneficial for evaluating a program’s overall effectiveness:
  • Was the program cost-effective in terms of money, time and manpower?
  • Can the program’s effectiveness be reproduced in other settings or on other campuses?
  • Is the program sustainable, and can the program have an effect over time?
  • Did the program have an impact on the population it was meant for?
  • Besides the intended goals and objectives, were there any other benefits that occurred because of its implementation?
Leadership requires one to 1) diagnose and assess the situation in order to solve the problem, 2) adapt behavior to match whatever actions are required, and 3) communicate well so that others understand plans and goals (p. 117). A situational leader has the ability to influence and motivate people no matter the situation, which is a requirement for successful management.

This case of providing training during an economic decline applies to me because I was asked to provide technology integration training to teachers on my campus at the beginning of this school year; the purpose of the in-service was to present free educational Web 2.0 applications that teachers could use as a result of budget cuts. These chapters gave me insight into how I could improve my methods to include better resources and the leadership skills to motivate my colleagues. For example, I would have selected resources more carefully, because some of the applications I chose required cell phones, and some teachers did not feel comfortable with them because of security concerns. Hersey and Blanchard described four phases of Situational Leadership: S1) Telling--the leader tells the team exactly what to do; S2) Selling--the leader still directs, but is "selling" what he wants team members to do; S3) Participating--the leader works alongside team members and shares decisions; S4) Delegating--the leader passes on responsibilities and makes fewer decisions. According to these phases, I directed my training by "telling" teachers what to do; a better approach would have been to focus more on "selling" the tools I introduced, along with a participatory approach that allowed more time for input from the teachers. Hersey et al. (2001) stated that influencing people requires, among other things, the ability to assess the readiness level followers exhibit when performing a specific task. Looking back, I would have made more provisions for teachers' different levels of technology efficacy and readiness to accept the technology. Also, as a result of this week’s focus on program evaluation, I see the importance of following up my training by asking teachers about the effectiveness of the Web 2.0 applications I included in my presentation.

2 comments:

  1. Hi Kelly,
    How cool, I taught art also. And I totally agree with you that it was difficult to translate art success to administrators who always defined achievement in terms of figures and percentages. Our art program was the first to go when the budget got tight. And my admin staff wonders why our test scores are lower now than when we had art… I suppose they can’t see the forest for the trees.
    On another note, I think you bring up very good questions to ask for evaluating a program’s overall effectiveness. Your overall blog is very well done and so much easier to read and follow than the textbook. I should have used your blog as a type of “cliff notes” for dummies on section 3. LoL
    You express your thoughts so clearly; have you ever thought about writing books? I can tell you are very creative. Have a good Thanksgiving!

    ReplyDelete
  2. Hi, Kelly!
    Our art teacher has the same issue fitting his content into the prescribed evaluation criteria for our district. I'm glad that you think Eisner's method would be applicable in that situation. I'm still struggling with knowing which models work best with which types of instruction. I did a quick search, but did not find a simple way. It would be so great if there were some type of flow-chart for us to use to narrow down our search for the most effective evaluation to use according to the context, audience, etc.

    ReplyDelete