Sunday, February 19, 2017

Should teachers be evaluated? Yes, but it matters how.

Let’s be honest: being evaluated as a teacher isn’t exactly fun.  I generally associate the experience with a frowning supervisor at the back of the room scribbling away on a notepad.  It can feel invasive and unfair.  This person is going to come into my classroom for a single half-hour period and make judgments about the quality of my teaching for the whole term.  Oftentimes, the evaluator has his or her own favorite classroom indicators to focus on, which may or may not align with my own sense of what makes for good teaching and learning, or with the ways I feel I need to grow professionally.  But just because evaluation isn’t always handled well in schools doesn’t mean it’s a bad idea overall.  In this post, I’d like to profile a few evaluation approaches I see as big improvements over the traditional approach described above, and then outline my own vision for effective teacher evaluation.

Get the students involved

The video “Measures of effective teaching: student feedback” shows how useful it can be to stop and ask the students themselves how we’re doing as teachers.  In this case, middle school science teacher Paul Ronevich, from the Pittsburgh Science and Technology Academy, reflects on his results from the Tripod student survey, a survey that’s been used by over 100,000 teachers across the U.S.  Ronevich’s students gave him high marks overall, but they pointed out that he doesn’t always conclude his lessons in a clear and helpful way.  This feedback was practical, something Ronevich could apply in his classroom right away, and he did, with great results.  Students interviewed for the story said it felt great to be asked what they thought about their teachers.  And it makes sense: students are the ones who spend the most time with their teachers, by far.  Why not ask them what they think?

Adopt a clear, research-based framework

One of the best ways to enhance a teacher evaluation system is to rally around a proven framework.  Robert Marzano’s teacher evaluation framework would be an excellent choice, in that it’s backed by extensive research showing that its strategies improve student learning outcomes (Marzano, Toth, & Schooling, 2011).  If teachers know well in advance how they will be evaluated, and if they have support in learning and applying the indicators, the whole evaluation process starts to feel less intimidating.  The feedback can be far more specific and constructive, and teachers really do grow as a result.

The ideal system: it’s about balance

My ideal teacher evaluation system would take multiple factors into account.  No one metric can capture the quality of a teacher’s work, but careful consideration of multiple factors can offer a rich and insightful portrait.  I would look equally at student surveys, value-added measures (meaning student growth on standardized assessments), formal observations, and other evidence of student growth, to be selected by the teacher.

The student surveys should be carefully designed and administered, based on relevant research, to elicit the most helpful kinds of responses.  The standardized testing data should be analyzed for growth as opposed to grade-level conformity, so that we don’t inadvertently disincentivize working with students who are struggling academically.  The formal observations should take place at least twice a year to give continuity to the professional development process, and should include pre- and post-observation meetings, so that the teacher gets the most benefit from the process.  The observation format should be based on a research-backed framework, like Marzano’s evaluation model.  For the other evidence of student growth, a wide range of learning artifacts should be accepted, to be appraised based on a descriptive rubric prepared by the school.

If carried out in a balanced way, teacher evaluation can provide invaluable feedback to teachers for their professional growth; it can enhance the prestige of the teaching profession and attract talented new teachers; and, most importantly, it can lead to better student learning.

Reference list
Marzano, R., Toth, M., & Schooling, P. (2011). Examining the role of teacher evaluation in student achievement: contemporary research base for the Marzano causal teacher evaluation model. Marzano Center. Retrieved Feb. 19, 2017, from http://sde.ok.gov/sde/sites/ok.gov.sde/files/TLE-MarzanoWhitePaper.pdf

Teaching Channel. (2017). Measures of effective teaching: student feedback [online video].

Saturday, February 4, 2017

Differentiating based on pre-assessment

Differentiation flows naturally from the practice of pre-assessment: once you’ve surveyed your students’ prior knowledge and discovered that some of them already know the material you’re about to teach, while others lack the prerequisite skills to understand it, you can’t help but want to change your approach.

For my unit on engineering design, the pre-assessment will be an Edpuzzle quiz based on the YouTube video "Defining a problem: Crash Course Kids #18.1."

The pre-assessment focuses on the main concepts of defining a design problem, including identifying the criteria for success and constraints and determining whether the problem represents a want or a need.  It does so in an enjoyable format, since the video itself is quite entertaining and kid-friendly.  The only drawback of this format is that, because the video is only three minutes long and I didn’t want to interrupt it so often that it would no longer be fun to watch, the pre-assessment ended up with only six questions.  The majority of the questions are short answer and will provide useful data, but I do worry a little about making too many instructional decisions based on so few data points.  I can mitigate this concern by using my ongoing assessments to adjust my diagnosis of student needs.

Pre-assessments tend to reveal that most students have some superficial knowledge of the upcoming learning, while a few already know it well and a few others lack even a basic understanding of the concepts.  I will differentiate for those students who do exceptionally well on the pre-assessment by, for example, challenging them to extend their thinking on the first day's activity.  While the entire class will be making ReCap videos in which they identify their own example of a design problem and describe it in detail, I will challenge advanced students to develop a novel solution for the problem they have chosen.  In the same activity, I will differentiate for students who lack prerequisite knowledge by engaging them in small-group instruction.  Together, we will review each student's ideas and work to identify the criteria for success and constraints.  Once students feel comfortable with their examples, and I am satisfied that they understand, they will make their ReCap videos.

For students who possess a moderate understanding of the material at the start, but who need to be challenged to take that understanding to higher levels of cognitive complexity, I will differentiate via group roles.  I will give these students the role of peacekeeper in their teams.  The peacekeeper's job is to facilitate the collaboration, helping the team to evaluate each member's design and synthesize the best ideas.  The students from this segment of the class who cannot be peacekeepers (because there will only be around eight groups) will fulfill the artist role, creating a visual representation of the team's design synthesis.

After the pre-assessment, I have planned four ongoing or formative assessments and one summative assessment for the unit, to help me track students' progress and respond quickly to needs as they arise.  As I mentioned above, at the close of lesson one, students will make ReCap videos explaining the concept of a design problem with an example of their choice.  In the next lesson, students will complete graphic organizers with information about the egg drop challenge.  Struggling students will be given a template to follow, while other students will choose what format to use for their notes.  After students have prepared their individual design ideas for the egg drop challenge, they will meet in groups and I will assess their ability to collaborate, solve problems, and use time effectively, using a 21st century skills rubric.  Students will complete self- and peer-assessments in response to each student's design presentation.  At the close of the project, students will prepare reflections in the format of their choice (video, article, live presentation) as a summative assessment.


CLICK HERE to see a mind map version of the above differentiation strategies and forms of ongoing and summative assessment.

One goal I kept in mind in designing differentiated instruction for the unit was to make sure that all students arrive at a basic level of competency.  As Carol Ann Tomlinson asserts throughout her work, differentiation does not mean leaving some students behind; it means helping all students to succeed.  For example, in the first lesson, the use of small-group instruction will help struggling learners to demonstrate the desired learning in their ReCap videos.  In this same session, I plan to borrow a strategy described in the article "Differentiation: it starts with pre-assessment," and front-load the next day's instruction for these students.  That way, when I announce the egg drop challenge in the following lesson, these students will already know about it and can be the experts, teaching their classmates about the details of the challenge.

Reference list
Crash Course Kids. (2015, July 7). Defining a problem: Crash Course Kids #18.1 [online video]. Retrieved Feb. 4, 2017, from https://www.youtube.com/watch?v=OyTEfLaRn98

Pendergrass, E. (Dec. 2013/Jan. 2014). Differentiation: it starts with pre-assessment. Educational Leadership, 71(4). Retrieved Feb. 4, 2017, from http://www.ascd.org/publications/educational_leadership/dec13/vol71/num04/Differentiation@_It_Starts_with_Pre-Assessment.aspx