Wednesday, June 28, 2017

The future of teaching in five minutes



Reference List

Educational advantages of 3D printing in schools. (2017, May 11). Retrieved June 28, 2017, from http://www.blackcountryatelier.com/educational-advantages-of-3d-printing-in-schools/

The purpose of this article is to convince more schools to purchase 3D printers by explaining some of the many cool things you can do with them.  It's the kind of pep talk that makes me think that nowadays, if you can dream it, you can do it.

Lapowsky, I. (2015, May 8). What schools must learn from LA's iPad debacle. WIRED magazine online. Retrieved June 26, 2017, from https://www.wired.com/2015/05/los-angeles-edtech/

This article gives a vivid example of why it's important to have a solid vision for learning before investing in tech.  The progressive vision is the key ingredient.

Poh, M. (n.d.). 8 technologies that will shape future classrooms. Hongkiat. Retrieved June 28, 2017, from http://www.hongkiat.com/blog/future-classroom-technologies/

This is the article that talks about online physics teacher Andrew Vanden Heuvel, who took U.S. students on a virtual field trip to the Large Hadron Collider in Switzerland, using Google Glass. 


Monday, June 26, 2017

iPads are not enough


In 2013, Los Angeles Unified School District started handing iPads out to its students.  In fact, it was rolling out one of the biggest investments in school technology ever, with the goal of getting an iPad into the hands of every student in every school in the massive district.  LAUSD planned to invest $1.3 billion in the program, but it never really got off the ground.  From the start, the tech initiative was controversial; there were accusations that school leaders had made backroom deals with Pearson and Apple before opening the project to bids from vendors.  Then, once teachers and students started using the iPads and the Pearson learning platform, it simply didn't work.  The platform had major glitches, so much so that most schools stopped using it.

Issie Lapowsky, writing for WIRED magazine online in May 2015, cited a few key takeaways from LAUSD's misadventure in large-scale tech investment.  First, start with a vision for what you want teaching and learning to look like at your school.  Then hire a vendor who can make the vision happen.  If you do it the other way around, with the vendor first, you end up buying whatever the vendor has on offer, whether or not it really helps the school achieve its long-term goals.

To add to Lapowsky's observations, it's worth asking why we want to invest in technology in the first place.  What is it that the technology will allow us to do that we couldn't otherwise?  Tech devices can enable students to participate more in their learning, to create, to share and collaborate in real time across a variety of traditional boundaries, and to engage with content specifically suited to their learning needs.  The right content accessed through tech devices can appeal to today's learners on a fundamental level, due to the pervasiveness of digital devices in our students' lives.  When we teach using tech, it's as if we're speaking students' native language.  However, it's common for schools to use tech to try to spruce up the same tired lesson plans and outdated objectives.  In order for digital devices to truly revolutionize education, we need a revolutionary vision for what we want students to do with the tech.

Lapowsky suggests that when you're defining your school's vision, you should ask teachers and principals themselves to participate in the process.  The Milpitas Unified School District, also in California, took this approach when they began a Chromebook program in 2012.  "Any time you control things from the top, you get compliance," said school superintendent Cary Matsuoka.  "We wanted to say: here's the model.  Come up with your version of it and go test it."  This grassroots approach has been quite successful, and Milpitas is being cited now as a leader in tech integration.

Lapowsky's article got me thinking about my own vision for tech in the classroom.  I want to use tech as a creative tool, for blogging, making videos, recording podcasts, and making art.  Students can respond to the class content with their own creations, thus encoding the learning on a deeper level and playing an active role in a conversation that goes beyond the school walls.  I also want to use tech to facilitate an inquiry approach to learning rather than a one-size-fits-all approach.  That is, I want students to explore class topics and build their own "textbooks" online, by accessing quality resources (blogs, articles, videos) and contacting people who can help them (classmates, students at other schools, experts in the field).  Another important guideline is to get the right tool for the job.  If we're going to be writing essays, we'll probably want laptops.  If we're going to be reading online, making a video, or using a learning platform like Khan Academy, an iPad would probably be appropriate.  Finally, whatever I happen to be teaching, I will always be teaching life skills like collaboration and critical thinking.  I'll plan lessons based on both content standards and 21st century skills standards.  That way, I'll be reminded to structure the lessons themselves in ways that foster student autonomy, rich feedback and revision, and creative engagement.

One way to make this vision a reality is through a flipped classroom.  I can use our class website as a space for posting articles, videos, blog links, photos, and sound recordings related to our learning.  Students can explore this content at home, and be tasked with finding additional resources to enhance their understanding.  Then, when they come to class, they can work on creative projects related to the content, and my role can become one of coach and facilitator, helping students to achieve their own goals.  Rather than following a packaged program, students help to construct their own learning experience.  Another way I can achieve my vision is by planning my instruction in meaningful units, rather than piecemeal lessons.  Each unit can address a real-world problem and give students a chance to respond in creative ways.  For example, a unit on perimeter and area can culminate in students working in teams to put together proposals for the redesign of an underused urban space in their community.  Their proposal could include a blueprint (to scale, with accurate measurements), a budget, and a marketing plan.  Students could use tech to research public spaces, draw the blueprint, calculate the budget (spreadsheet), and create their marketing materials.  Perhaps it would be possible to have students share their ideas with an architect online and get a response, via video chat.
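To make the math in a unit like that concrete, here's a minimal sketch (in Python, with invented dimensions and prices, not taken from any actual curriculum) of the kind of perimeter, area, and budget calculation students would build in their spreadsheet:

```python
# Hypothetical example: budgeting a rectangular lot redesign.
# Dimensions and unit prices below are invented for illustration.

def perimeter(length_m, width_m):
    """Perimeter of a rectangular space, in meters."""
    return 2 * (length_m + width_m)

def area(length_m, width_m):
    """Area of a rectangular space, in square meters."""
    return length_m * width_m

def budget(length_m, width_m, fence_cost_per_m, turf_cost_per_m2):
    """Fencing follows the perimeter; turf covers the area."""
    return (perimeter(length_m, width_m) * fence_cost_per_m
            + area(length_m, width_m) * turf_cost_per_m2)

# A 30 m x 20 m lot, fencing at $12/m, turf at $8/m^2:
# perimeter = 100 m, area = 600 m^2, budget = 100*12 + 600*8 = $6,000
```

A spreadsheet would express the same formulas in cells; either way, the key insight for students is that fencing costs scale with the perimeter while surfacing costs scale with the area.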

If we take tech innovation as an opportunity to do something new in education, it's an exciting time to be a teacher.  The most important thing seems to be our underlying philosophy: are we content to carry on with business as usual, or are we dedicated to using digital technologies to their fullest potential?  It makes me think of film director J.J. Abrams' talk at TED, where he pointed at his Apple PowerBook and said, "It challenges me - it says, you know, what are you going to write worthy of me?"  It would be a decent question for us to ask ourselves as teachers.  What learning experience are we going to imagine that would be worthy of the phenomenal tools of creation, exploration, and connection that we have been given at this time in history?  19th-century assembly-line education isn't going to cut it anymore.

   
Reference list

Abrams, J. J. (2007, March). The mystery box [Online video]. TED. Retrieved June 26, 2017, from https://www.ted.com/talks/j_j_abrams_mystery_box?language=en

Lapowsky, I. (2015, May 8). What schools must learn from LA's iPad debacle. WIRED magazine online. Retrieved June 26, 2017, from https://www.wired.com/2015/05/los-angeles-edtech/

Robinson, K. (2007, January). Do schools kill creativity? [Online video]. TED. Retrieved June 26, 2017, from https://www.youtube.com/watch?v=iG9CE55wbtY




Sunday, February 19, 2017

Should teachers be evaluated? Yes, but it matters how.

Let’s be honest: being evaluated as a teacher isn’t exactly fun.  I generally associate the experience with a frowning supervisor at the back of the room scribbling away on a notepad.  It can feel invasive and unfair.  This person is going to come into my classroom for a single half-hour period and make judgments about the quality of my teaching for the whole term.  Oftentimes, the evaluator has his or her own favorite classroom indicators to focus on, which may or may not align with my own sense of what makes for good teaching and learning, or the ways I feel I need to grow professionally.  But just because evaluation isn’t always handled well in schools doesn’t mean it’s a bad idea overall.  In this post, I’d like to profile a few evaluation approaches I see as big improvements over the traditional approach described above, and then outline my own vision for effective teacher evaluation.

Get the students involved

The video “Measures of effective teaching: student feedback” shows how useful it can be to stop and ask the students themselves how we’re doing as teachers.  In this case, middle school science teacher Paul Ronevich, from the Pittsburgh Science and Technology Academy, reflects on his results from the Tripod student survey, which has been used by over 100,000 teachers across the U.S.  Ronevich’s students gave him high marks overall, but they pointed out that he doesn’t always conclude his lessons in a clear and helpful way.  This feedback was practical, something Ronevich could apply in his classroom right away, and he did, with great results.  Students interviewed for the story said it felt great to be asked what they thought about their teachers.  And it makes sense: students are the ones who spend the most time with their teachers, by far.  Why not ask them what they think?

Adopt a clear, research-based framework


One of the best ways to enhance a teacher evaluation system is to rally around a proven framework.  Robert Marzano’s teacher evaluation model would be an excellent choice, in that it’s backed by extensive research showing that its strategies improve student learning outcomes (Marzano, Toth, & Schooling, 2011).  If teachers know long beforehand how they are going to be evaluated, and if they have support in learning and applying the indicators, the whole evaluation process starts to feel less intimidating.  The feedback can be a lot more specific and constructive, and teachers really do grow as a result.

The ideal system: it’s about balance
My ideal teacher evaluation system would take multiple factors into account.  No one metric can capture the quality of a teacher’s work, but careful consideration of multiple factors can offer a rich and insightful portrait.  I would look equally at student surveys, value-added measures (meaning student growth on standardized assessments), formal observation, and other evidence of student growth, to be selected by the teacher.  

The student surveys should be carefully designed and administered, based on relevant research, to elicit the most helpful kinds of responses.  The standardized testing data should be analyzed for growth as opposed to grade-level conformity, so that we don't inadvertently disincentivize working with students who are struggling academically.  The formal observations should take place at least twice a year to give continuity to the professional development process, and should include pre- and post-observation meetings, so that the teacher gets the most benefit from the process.  The observation format should be based on a research-backed evaluation format, like Marzano’s evaluation model.  For the other evidence of student growth, a wide range of learning artifacts should be accepted, to be appraised based on a descriptive rubric prepared by the school.
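As a rough illustration of what "equal weight" could mean in practice, here's a small sketch in Python.  To be clear, the 0-100 scales and the simple average are my own invention for the sake of the example, not a published evaluation model:

```python
# Hypothetical composite evaluation score: four sources of evidence,
# each weighted equally.  Scales and weighting are illustrative only.

def composite_score(survey, value_added, observation, artifacts):
    """Average four equally weighted components, each on a 0-100 scale."""
    components = [survey, value_added, observation, artifacts]
    for c in components:
        if not 0 <= c <= 100:
            raise ValueError("each component must be on a 0-100 scale")
    return sum(components) / len(components)

# Example: strong observations can offset a weaker value-added year.
# composite_score(80, 70, 90, 85) -> 81.25
```

The design point is simply that no single component can dominate: a bad testing year, or one unlucky observation, moves the overall score by at most a quarter of its swing.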

If carried out in a balanced way, teacher evaluation can provide invaluable feedback to teachers for their professional growth; it can enhance the prestige of the teaching profession and attract talented new teachers; and, most importantly, it can lead to better student learning.

Reference list
Marzano, R., Toth, M., & Schooling, P. (2011). Examining the role of teacher evaluation in student achievement: contemporary research base for the Marzano causal teacher evaluation model. Marzano Center. Retrieved Feb. 19, 2017, from http://sde.ok.gov/sde/sites/ok.gov.sde/files/TLE-MarzanoWhitePaper.pdf

Teaching Channel. (2017). Measures of effective teaching: student feedback [online video].

Saturday, February 4, 2017

Differentiating based on pre-assessment

Differentiation flows naturally from the practice of pre-assessment: once you've surveyed your students' prior knowledge and discovered that some of your students already know the material you're about to teach, while others lack the prerequisite skills to understand it, you can't help but want to change your approach.

For my unit on engineering design, the pre-assessment will be an Edpuzzle quiz based on the YouTube video "Defining a problem: Crash Course Kids #18.1." 


The pre-assessment focuses on the main concepts of defining a design problem, including identifying the criteria of success and constraints and determining whether the problem represents a want or a need.  It does so in an enjoyable format, since the video itself is quite entertaining and kid-friendly.  The one drawback of this format is that, because the video is only three minutes long and I didn't want to interrupt it so often that it stopped being fun to watch, the pre-assessment ended up with a total of only six questions.  The majority of the questions are short answer and will provide useful data, but I do worry a little about making too many instructional decisions based on so few data points.  I can mitigate this concern by using my ongoing assessments to adjust my diagnosis of student needs.

Pre-assessments tend to reveal that most students have some superficial knowledge of the upcoming learning, while a few already know it well and a few others lack even a basic understanding of the concepts.  I will differentiate for those students who do exceptionally well on the pre-assessment by, for example, challenging them to extend their thinking on the first day's activity.  While the entire class will be making ReCap videos in which they identify their own example of a design problem and describe it in detail, I will challenge advanced students to develop a novel solution for the problem they have chosen.  In the same activity, I will differentiate for students who lack prerequisite knowledge by engaging them in small-group instruction.  Together, we will review each student's ideas and work to identify the criteria for success and constraints.  Once students feel comfortable with their examples, and I am satisfied that they understand, they will make their ReCap videos.
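The sorting behind that differentiation can be sketched in a few lines of Python.  This assumes the six-question pre-assessment described above; the cutoff scores and tier names are invented for illustration, not fixed rules:

```python
# A minimal sketch of tiering students by pre-assessment score (0-6).
# Cutoffs are hypothetical and would be adjusted to the actual data.

def tier(score, advanced_cutoff=5, basic_cutoff=2):
    """Classify a 0-6 pre-assessment score into an instructional tier."""
    if score >= advanced_cutoff:
        return "extend"       # challenge with a novel-solution task
    if score <= basic_cutoff:
        return "small-group"  # review prerequisites before the task
    return "core"             # on-level work, differentiated via roles

# tier(6) -> "extend"; tier(1) -> "small-group"; tier(4) -> "core"
```

In practice the cutoffs would shift with each class, and ongoing assessments would move students between tiers as the unit progresses.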

For students who possess a moderate understanding of the material at the start, but who need to be challenged to take that understanding to higher levels of cognitive complexity, I will differentiate via group roles.  I will give these students the role of peacekeeper in their teams.  The peacekeeper's job is to facilitate the collaboration, helping the team to evaluate each team member's design and make a synthesis of the best ideas.  The students from this segment of the class who cannot be peacekeepers (because there will only be around eight groups) will fulfill the artist role, creating a visual representation of the team's design synthesis.

After the pre-assessment, I have planned four ongoing or formative assessments and one summative assessment for the unit, to help me track students' progress and respond quickly to needs that arise.  As I mentioned above, at the close of lesson one, students will make ReCap videos explaining the concept of a design problem with an example of their choice.  In the next lesson, students will complete graphic organizers with information about the egg drop challenge.  Struggling students will be given a template to follow, while other students will choose what format to use for their notes.  After students have prepared their individual design ideas for the egg drop challenge, they will meet in groups and I will assess their ability to collaborate, solve problems, and use time effectively, using a 21st century skills rubric.  Students will complete self and peer assessments in response to each student's design presentation.  At the close of the project, students will prepare reflections in the format of their choice (video, article, live presentation) as a summative assessment.


CLICK HERE to see a mind map version of the above information on differentiation strategies and forms of ongoing and summative assessment

One goal I tried to keep in mind in designing differentiated instruction for the unit is to make sure that all students arrive at a basic level of competency.  As Carol Ann Tomlinson asserts throughout her work, differentiation does not mean leaving some students behind; it means helping all students to succeed.  For example, in the first lesson, the use of small-group instruction will help struggling learners to demonstrate the desired learning in their ReCap videos.  In this same session, I plan to borrow a strategy described in the article "Differentiation: it starts with pre-assessment," and front-load the next day's instruction for these students.  That way, when I announce the egg drop challenge in the following lesson, these students will already know about it and can be the experts, teaching their classmates about the details of the challenge.

Reference list
Crash Course Kids. (2015, July 7). Defining a problem: Crash Course Kids #18.1 [online video]. Retrieved Feb. 4, 2017, from https://www.youtube.com/watch?v=OyTEfLaRn98

Pendergrass, E. (Dec. 2013/Jan. 2014). Differentiation: it starts with pre-assessment. Educational Leadership 71(4). Retrieved Feb. 4, 2017, from http://www.ascd.org/publications/educational_leadership/dec13/vol71/num04/Differentiation@_It_Starts_with_Pre-Assessment.aspx



Sunday, January 29, 2017

High-stakes testing: finding the right balance

Stanford researcher Linda Darling-Hammond talks at a TED conference about the trouble with too much high-stakes testing, and promising moves toward a more effective and meaningful standardized assessment system.

In this post, I'll compare my experience with high-stakes testing at an international school in Mexico City to reports about high-stakes testing at public schools in the U.S.  This comparison is especially relevant for me now, as it looks like next year I'll be moving back to the U.S. to teach.  To be honest, high-stakes testing is one of the things that worries me the most about shifting to the U.S. teaching environment, and I'm hoping that recent moves toward a more balanced approach to high-stakes testing, such as the performance-based assessment pilot program that came as part of the Obama administration's Every Student Succeeds Act (2015), will gain momentum.

A brief history of high-stakes testing in the U.S.
High-stakes testing has been around in one form or another for more than a century (Nichols & Berliner, 2007).  Large-scale federal involvement grew out of U.S. concern over Russia's advances in the space race, most famously the launch of the Sputnik satellite, which prompted the National Defense Education Act of 1958; it was felt at the time that national investment in education would spur educational advances and help us compete with the Russians.  The Elementary and Secondary Education Act (ESEA) of 1965 expanded the federal role further, and by the 1970s many states required public school students to pass basic assessments in reading and math in order to graduate.

In time, these minimum competency tests came under criticism for being too easy, establishing a floor for educational achievement rather than inspiring growth.  After the economic stagnation of the 1970s, the seminal report A Nation at Risk, published in 1983, warned that unless we overhauled our education system, we would lose our status as a global leader and innovator.  Since its publication, many eloquent and damning critiques have been written about the flawed logic of A Nation at Risk, but the report had its desired effect.  The government took steps to reform public education, including another increase in high-stakes testing.

Testing was expanded even further with the passage in 2001 of No Child Left Behind, the most sweeping, and many would argue intrusive, piece of U.S. education legislation ever passed.  The law required that tests be administered once a year in grades 3-8, and once again in high school, and school funding was tied to the outcomes.  Growth targets needed to be reached, or schools would be in danger of losing funding, having to pay to transfer students to other schools, and eventually being taken over by state education agencies.  The Obama administration's Race to the Top grant program added incentives for states to tie teachers' pay to test scores.  Now, as No Child Left Behind has been left behind and the Every Student Succeeds Act takes its place, the U.S. is a nation in which high-stakes testing plays a pivotal role in school funding, teacher evaluation and in some cases teacher pay, student promotion and graduation, and the public's perception of the effectiveness of our education system (Nichols & Berliner, 2007).

Colegio Peterson, Mexico City
I currently work at a bilingual international school in Mexico City, where I am the English coordinator of the primary section.  At our school, we administer a standardized test called MAP three times a year, in August, January, and May.  Students from pre-first through first grade are tested in reading and math, and second through fifth graders are tested in reading, math, and language.  Each round of testing takes up an average of four to five hours of class time.  One interesting feature of the MAP test is that it is adaptive: the computer gives students harder or easier questions based on the accuracy of their responses.  In this way, the program zeroes in on each student's current level.
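The adaptive mechanism can be pictured with a toy sketch.  To be clear, this is not NWEA's actual algorithm (real adaptive tests like MAP use item response theory); it's just the core idea of difficulty homing in on a student's level after each response:

```python
# Toy model of computer-adaptive testing: difficulty steps up after a
# correct answer and down after a miss, within the test's scale.
# This is an illustration, not NWEA's actual adaptive algorithm.

def next_difficulty(current, answered_correctly, step=1, lo=1, hi=10):
    """Move difficulty one step up or down, clamped to [lo, hi]."""
    if answered_correctly:
        return min(hi, current + step)
    return max(lo, current - step)

def run_test(start, responses):
    """Simulate a sequence of responses; return difficulty after each."""
    level = start
    history = []
    for correct in responses:
        level = next_difficulty(level, correct)
        history.append(level)
    return history

# A student who alternates right and wrong answers oscillates around a
# stable level: run_test(5, [True, True, False, True, False])
# -> [6, 7, 6, 7, 6]
```

Even this crude version shows why adaptive tests can pinpoint a level efficiently: each answer halves the uncertainty about where the student sits on the scale, rather than wasting questions that are far too easy or too hard.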

The test results are shared with teachers, for the purpose of adapting instructional strategies to meet the diverse needs of their learners.  Teachers receive a basic report that shows each student's overall scores and a rating of low, low average, average, high average, or high in each subcategory.  For example, the subcategories in reading are literature, informational text, and vocabulary acquisition and use.  To move beyond this basic report, teachers can sign into their NWEA accounts and access information on each student's progress.  The information I personally find most helpful is the learning continuum, which shows the skills each student needs to learn next, in each subject.

The results of each campus are shared with the other campuses, with a comparison against U.S. norms and the norms of our conference (the Tri-Association), but the scores do not affect teacher evaluation or pay.  Their primary purpose is simply to enhance student learning.  The testing does not seem to affect teachers' instructional focus.  Curricular focus is much more affected by department expectations, such as the use of our literacy curriculum, Core Ready, our curriculum for word work, Words Their Way, and our STEM-based approach to science instruction.  Student promotion is generally not affected, although MAP results are one factor parents, teachers, and administrators look at when evaluating a student's academic progress.  The test results also affect the way teachers differentiate for learners, so students might be placed in a particular reading group because of the Lexile score obtained from MAP.  Anecdotal observation suggests that students feel a healthy amount of pressure related to the tests, but not too much.  They tend to express distaste for the testing process, while at the same time showing genuine excitement when their scores improve.

Interestingly, the MAP test has revealed inequities in student achievement.  For example, boys tend to outperform girls in math, and girls tend to outperform boys in reading and language.  Students born outside Mexico tend to outperform students from Mexico, when looking at the results overall.  Further investigation is merited to determine why these inequities exist.  In general, the MAP test would seem to be a useful measure of student progress and, most importantly, a data source for improving instruction.  

U.S. public schools
In writing about the current prevalence and impacts of high-stakes testing in the U.S., I am reporting on what I have read rather than what I have witnessed firsthand.  My own experience with high-stakes testing as a child consisted of taking the Iowa Test of Basic Skills once a year, and if my teachers or administrators were stressed about the test, I wasn't aware of it.  Personally, I enjoyed the testing experience as a novel, once-a-year challenge.  When I entered the teaching profession, I taught in the university, so I have not experienced high-stakes testing from a teacher's perspective in the U.S.

What I have read about it, though, gives me pause.  While the rationale behind high-stakes testing is to provide accountability; motivate schools via a system of rewards and consequences; and increase educational equity (High-stakes test, 2014), the reality is often very different.  What follows is a brief description of the effects of high-stakes testing in the U.S. in recent years, according to student learning, teacher evaluation and pay, and educational equity.

Student learning
An over-reliance on high-stakes testing would seem to both narrow the scope of student learning and limit its depth.  Because high-stakes testing has such sweeping impacts on students, teachers, and schools, the tendency is to target instruction toward the test, limiting the range of learning (WMHT, 2013).  Schools tend to administer practice tests to get students ready for the high-stakes tests (Kamenetz, 2015), and less time remains for instruction in science, social studies, music, art, research and writing, physical education, and world language studies, pursuits that cognitive science has shown to expand our cognitive capacity and basic intelligence.  The cognitive expansion sparked by this variety of subjects has been shown "[to raise] achievement and accomplishment in a variety of domains" (Darling-Hammond, 2015).

High-stakes testing, as it is currently undertaken in the U.S., can also limit the depth of learning.  The tests themselves tend to ask questions at the remembering and understanding levels of Bloom's taxonomy, at the same time that the workplace demand for higher-order thinking skills and communicative competency is skyrocketing, and the demand for routine cognitive and manual skills is decreasing, due to the use of technology to automate basic tasks (Darling-Hammond, 2015).

Strangely, although students are spending more time on high-stakes testing, their results on these tests have stayed the same or even gone down slightly.  Meanwhile, U.S. results on international standardized assessments have fallen.  While in the 1970s the U.S. led the world in education, we currently rank between 21st and 32nd on the various parts of the PISA exam, largely because this exam calls for higher-order thinking skills and the ability to apply knowledge to new problems (Darling-Hammond, 2015).

U.S. teachers have expressed an internal conflict between their desire to teach using student-centered pedagogies such as inquiry, discovery, and problem solving, and their belief that traditional methods are the best way to raise test scores (Bulgar, 2012).  In one survey, 85% of teachers said that high-stakes testing undermines student learning (Darling-Hammond, 2015).  In short, too much high-stakes testing limits rather than motivates student learning, and the very format of most high-stakes tests in the U.S. tends to elicit a shallow understanding.

Teacher evaluation and pay
In recent years, student performance on high-stakes tests has become an important factor in the teacher evaluation process, potentially affecting decisions related to compensation, tenure, hiring, and firing (High-stakes test, 2014).  While linking teacher evaluation to test scores may be intended as a motivator for better teaching, the motivator often has unintended and harmful effects.  One educational commentator argued, "They [high-stakes tests] can't count so much that you have teachers feeling that the last student they want to teach is a student that's challenged, because if that student doesn't get all the supports that he or she needs, then their career depends on it.  And when we need the best teachers in the most challenged schools, we're not going to get them as long as they feel that their job is in jeopardy" (WMHT, 2013).  In this example, we see the linking of teacher evaluation and test scores prompting teachers to avoid challenging teaching environments because of the risks involved for their careers.  Linking teacher evaluation to test scores has also incited some teachers to outright cheating on the tests (High-stakes test, 2014).

Whenever we put incentives in place to try to guide human behavior, we need to be careful that the incentives don't cause unexpected, negative reactions.  In the case of linking teacher evaluation and test scores, experience has shown that the incentive system doesn't work the way it was intended.    

Equity
Perhaps the most convincing argument in favor of high-stakes testing is that of educational equity.  Another commentator in the news program cited above (WMHT, 2013) shared that, as a teacher, she had seen students from minority backgrounds being passed through the system without appropriate instructional support or accountability.  Minority students have tended to be ill served by their schools, and schools with higher minority populations have tended to be ill served by state and federal government.  For this reason, she advocated for high-stakes testing.  It provides concrete evidence of the achievement gap, and thus makes it more likely that real, lasting change will happen.

Having said that, serious questions have been raised about the impact of high-stakes testing on traditionally underserved students.  The Glossary of Education Reform asserts that high-stakes testing results in a narrowing of the curriculum, diminishing the quality of education for the very students high-stakes testing was intended to benefit.  When teachers feel pressured to teach to the test, students of color and students from lower-income homes "may be more likely to receive narrowly focused, test-preparation heavy instruction instead of an engaging, challenging, well-rounded academic program" (High-stakes test, 2014).  In support of this concern, the Glossary of Education Reform also points out that high-stakes testing "has been correlated in some research studies to increased failure rates, lower graduation rates, and higher dropout rates, particularly for minority groups, students from low-income households, students with special needs, and students with limited proficiency in English."

Another concern with high-stakes testing and equity is that teachers may feel pressure to teach to the middle, rather than appealing to all of their students' needs.  Teachers in some states have their bonuses tied to the performance of a certain percentage of their students (40%, let's say).  An unintended consequence of this motivator is that teachers tend to direct their instruction to on-level students, leaving behind gifted students and students with special needs (WMHT, 2013).

Finally, state and federal governments send a mixed message about equity and accountability.  At the same time that they implement high-stakes testing requirements for the supposed purpose of increasing educational equity, they impose financial sanctions on schools that fail to meet the new requirements.  Typically, the schools failing to meet high-stakes testing requirements are schools in low-SES neighborhoods.  These schools receive less funding and face other disciplinary measures, which has a negative impact on the very students the new laws were supposed to assist.  Schools in low-SES neighborhoods are further undermined by an emphasis on charter-school funding at the expense of public education (Croft, Roberts, & Stenhouse, 2015).

Perhaps as a result of the above reasons, No Child Left Behind, with its over-reliance on standardized testing, did not actually close the achievement gap as it was designed to do (Nichols & Berliner, 2007).          

Finding the right balance: alternatives to the current approach to high-stakes testing
It's a basic tenet of research that the act of observation changes the phenomenon being observed.  In physics, it's called the observer effect.  In psychology, it's called the Hawthorne effect.  In education, it could be called the testing effect.  High-stakes testing is a form of observation intended to ensure the quality of the educational process.  There's nothing wrong with observation in and of itself: in fact, observation is necessary to ensure that students are learning.

Unfortunately, this act of observation, especially as it has become more and more prevalent, has had a big impact on the phenomenon it's intended to observe.  In many cases, students are learning more poorly because of high-stakes testing.  The testing narrows the curriculum, pressures teachers to play it safe with traditional methods instead of innovating, and hurts the educational chances of underserved student populations, such as minorities, low-income students, special needs students, and students whose first language is not English.

One way to strike a better balance on high-stakes testing would be to de-emphasize the importance of the tests for major decision making, like school funding, teacher evaluation and pay, and student promotion and graduation.  We could give fewer tests, and consider the results in a more holistic manner, taking into account a host of other factors that go into educational quality.  This is the approach taken at the school where I work in Mexico City.

Another way to strike a better balance would be to follow the lead of researchers like Linda Darling-Hammond, who argue that we are testing for the wrong things.  Instead of asking basic questions on the remembering and understanding levels of Bloom's taxonomy, we ought to be calling on students to apply, analyze, evaluate, and create.  One example of this approach is the Graduation Portfolio System that's been adopted by several U.S. schools.  Under this system, high school students complete projects in scientific investigation, literary analysis, social science research, mathematical application, world language proficiency, and artistic performance.  They develop their work in light of a clearly described standard, and they revise and revise their work until it meets the standard.  They then present the work as they would a dissertation, with an expert panel of judges, often professionals from the community.  Kids from schools with the Graduation Portfolio System go to college at higher rates and graduate from college at twice the rate of the average American student (Darling-Hammond, 2015).  When asked why they enjoy more success in college, students from these schools tend to talk about the Graduation Portfolio System, and how it taught them life skills, such as how to receive and make use of critical feedback, how to persevere, and how to be resourceful.  This movement is in keeping with an international push among the nations ranked highest in educational performance, toward the nurturing of higher-order thinking skills to address real-world problems.  Darling-Hammond describes a shift that's beginning in the design of standardized tests in the U.S., toward more performance-based assessment, rather than just bubble filling with questions of lower cognitive demand.

Other alternatives include statistical sampling, as is used on the PISA test, rather than asking every student to take every test, as well as the use of big data, for example tracking student performance in computer-aided learning experiences, without the students even knowing that they are being "tested."  This data could be interpreted in a longitudinal fashion to draw conclusions about the quality of U.S. education, and there would be no need to intrude into the teaching and learning process with frequent high-stakes testing (Kamenetz, 2015).
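The sampling idea above can be illustrated with a short simulation.  This is a minimal sketch using invented numbers (the score distribution and sample size are assumptions for illustration, not figures from PISA): a random sample of a few hundred "students" estimates the whole population's average closely, which is why sampling can stand in for testing everyone.

```python
import random
import statistics

random.seed(42)  # fixed seed so the simulation is reproducible

# Hypothetical "population": test scores for 10,000 students.
# The distribution parameters (mean 75, sd 10) are invented for illustration.
population = [random.gauss(75, 10) for _ in range(10_000)]

# Instead of testing every student (a census), draw a random sample
# and estimate the population mean from it.
sample = random.sample(population, 500)

estimate = statistics.mean(sample)
true_mean = statistics.mean(population)

print(f"true mean: {true_mean:.1f}, sample estimate: {estimate:.1f}")
```

With a sample of 500 out of 10,000, the estimate typically lands within a fraction of a point of the true average, so conclusions about the system as a whole don't require putting every individual student through a high-stakes exam.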

If enough people with enough power come to the conclusion that high-stakes testing is out of balance in the U.S., they will not lack for viable alternatives.  This is all about student learning, right?  So let's do what's best for our students and bring balance back to high-stakes testing.

Reference list
Bulgar, S. (2012, May-July). The effects of high-stakes testing on teachers in N.J. Journal on Educational Psychology, 6(1), 34-44.

Croft, S. J., Roberts, M. A., & Stenhouse, V. L. (2015). The perfect storm of education reform: high-stakes testing and teacher evaluation. Social Justice, 42(1), 70-92.

Darling-Hammond, L. (2015, June 29). Testing, testing [online video]. Retrieved Jan. 28, 2017, from https://www.youtube.com/watch?v=2G_vWcS1NTA


High-stakes test. (2014, Aug. 18). In S. Abbott (Ed.), The glossary of education reform. Retrieved Jan. 28, 2017, from http://edglossary.org/high-stakes-testing/

Kamenetz, A. (2015, Jan. 22). The past, present, and future of high-stakes testing. NPR online. Retrieved Jan. 28, 2017, from http://www.npr.org/sections/ed/2015/01/22/377438689/the-past-present-and-future-of-high-stakes-testing

Nichols, S. L. & Berliner, D. C. (2007, March 4). A short history of high-stakes testing. In Collateral damage: how high-stakes testing corrupts America’s schools. Cambridge, MA: Harvard Education Press.


WMHT. (2013, Jan. 30). High stakes testing and student success [online video]. Retrieved Jan. 23, 2017, from https://www.youtube.com/watch?v=czlZG8brjC0

Friday, December 16, 2016

Formative assessments for an engineering design challenge

As part of the unit I'm creating on engineering design (specifically about how to protect an egg from breaking when the egg is dropped from a third-story window), I'd like to consider how I can integrate formative assessment.  Multiple educational researchers have identified formative assessment as an effective strategy, including Black & Wiliam (2001), who found effect sizes of between .4 and .7 for formative assessment on student achievement.  This effect size is larger than most instructional interventions, and Black & Wiliam argue that a careful integration of formative assessment into teachers' daily practice could dramatically improve our schools.

The objective I'll focus on for this exercise is, "Design a solution to the egg drop challenge, in the form of a clear, labeled drawing, and speak persuasively about the advantages of the design." The lesson for this objective will consist of three main parts.  First, I'll introduce the activity, offer direct instruction on drawing the design and speaking persuasively about it, and give examples.  Second, students will work independently to draw their designs and plan arguments for why their idea is the best.  Third, students will meet in their project teams to share their ideas.  Each student will attempt to persuade the group that his or her design is the best.

Formative assessment #1- self-assessment checklist
Self-assessment is well supported as a means of formative assessment.  Researcher John Hattie is well known for having conducted meta-analyses of hundreds of educational studies.  Based on this research, he has formulated a list of the instructional interventions that have the most and least impact on student achievement.  Near the top of the list is student self-assessment, or self-reported grading (ranked 3rd) (Visible-learning.org, 2016).  Hattie has said, "the biggest effects on student learning occur when teachers become learners of their own teaching, and when students become their own teachers."  A big aspect of students becoming their own teachers is learning to evaluate their progress or self-assess (Victoria dept. of ed., 2010).

John Hattie's 2011 book Visible learning for teachers points out implications of his extensive research for classroom practice.


The self-assessment I would like to use for this unit is a simple checklist.  I'll invite students to do a rough sketch of their design solutions independently, and then I'll project the following checklist to help them evaluate the feasibility and functionality of their designs:

My design . . .
1) requires only the available materials.
2) can be built in the available time and with the know-how of the group.
3) includes multiple safeguards (redundancy) against egg breakage, so that if one safeguard fails, the egg will still be protected.

Students will respond to the checklist using a T-chart in their notebooks.  On one side of the chart, they will write "yes" or "no" in response to each criterion.  On the other side, they will write a supporting statement.  For example, in response to question #1, the student might write "yes" and go on to say, "My design requires a plastic grocery bag, string, five popsicle sticks, and a toilet paper tube."

The purpose of this formative assessment is to ensure quality in students' designs, before they draw their final sketches and share their ideas with their group.  Based on students' responses to the checklist, I will be able to reteach how to make our designs feasible and ensure a high chance of success.  This check-in will also raise the rigor of the team's discussion about each member's design, because they will have practiced applying criteria for success.

Formative assessment #2: survey
Another formative assessment I'd like to include before students do their final drawings is a survey.  I plan to show students three examples of design drawings.  I will ask them to evaluate each of the drawings based on its clarity.  Students' survey results could be displayed immediately, for example via a Kahoot survey.  The Kahoot app allows teachers to display survey data in real time, as students respond to the questions via iPad.

The Kahoot app allows students to complete quizzes, surveys, and more via iPads, with the results projected for the whole class to see in real time.

Completing a self-assessment checklist and a survey before students do their final drawings will take significant class time.  However, I'm comfortable with using time in this way, because I believe these formative assessments will make students more aware of the learning goals and will lead to better results.  As Black & Wiliam (2001) say, "Many of the initiatives that are needed take more class time, particularly when a central purpose is to change the outlook on learning and the working methods of pupils.  Thus, teachers have to take risks in the belief that such investment of time will yield rewards in the future, whilst 'delivery' and 'coverage' with poor understanding are pointless and even harmful."  I want students to understand that creating a design sketch is really about communicating clearly, and I'm willing to take the time to accomplish this goal.

By comparing example sketches, students will see clearly the traits that make some drawings easier to understand than others.  The survey will demonstrate to students that there is a great deal of consensus among people as to which traits are most effective for communicating visually.  It will also get them thinking about the traits they would like to incorporate in their own final drawings.

Formative assessment #3: 3 do's and don'ts about speaking persuasively
This formative assessment idea comes from the article "Ten assessments you can perform in 90 seconds," on the Teach Thought website.  The assessment consists of having students list three things to do and three things not to do in reference to a given topic.  In the case of the lesson I'm planning, students will be listing three things to do and three things not to do when speaking to persuade.


As explained earlier, once students have completed their design drawings independently, they will meet with their teams, sharing their designs, and trying to persuade each other that their design is the best.  To preface this activity, I might model a right way and a wrong way to persuade, without telling students which strategies to look for.  I could then ask students to reflect on the two role plays and decide for themselves which strategies were effective and ineffective.  They will share their thinking via the "do's and don'ts" assessment.  

Conclusion
Formative assessment gives students a chance to try, get feedback, and make improvements, all before grades have been assigned.  Wormeli (2010) points out that students can learn with or without grades, but they cannot learn without formative assessment and descriptive feedback.  Most teachers, myself included, have tended to put the emphasis on summative or graded assessments, but Wormeli makes a convincing case that formative assessment deserves the bulk of our attention.  Summative assessment is "post learning," because by that point it's often too late to intervene.  Formative assessment, however, is assessment for learning.  I will implement formative assessment much more often in my teaching going forward.

Reference list
Black, P. & Wiliam, D. (2001, Nov. 6).  Inside the black box: raising standards through classroom assessment. King's College London School of Education.  Retrieved Dec. 16, 2016, from http://weaeducation.typepad.co.uk/files/blackbox-1.pdf

Teachthought.com. (2016).  Ten assessments you can perform in 90 seconds.  Retrieved Dec. 16, 2016, from http://www.teachthought.com/pedagogy/assessment/10-assessments-you-can-perform-in-90-seconds/

Victoria Department of Education and Early Childhood Development. (2010, April). Visible learning: what's good for the goose . . . Retrieved Dec. 16, 2016, from http://www.education.vic.gov.au/Documents/about/research/ravisiblelearning.pdf

Visible-learning.org. (2016). Hattie ranking: 195 influences and effect sizes related to student achievement. Retrieved Dec. 16, 2016, from http://visible-learning.org/hattie-ranking-influences-effect-sizes-learning-achievement/

Wormeli, R. (2010, Nov. 30). Formative and summative assessment. Retrieved Dec. 16, 2016, from https://www.youtube.com/watch?v=rJxFXjfB_B4


Sunday, December 11, 2016

Understanding and applying standards: there's more to it than I thought

Introduction

This unit showed me that learning standards are at once more important and more helpful than I had assumed before.  My basic understanding of standards up until now had been that they let us know, in a general sense, what needed to be taught in a given year.  But until I completed the activities in this unit, I hadn't realized how much useful information is packed into each standard.  From now on, standards will be my starting point for planning, rather than an adornment once the planning process is already well underway.

Unpacking a standard

For me, learning to unpack a standard has meant learning to slow down.  In the past, I've tended to gloss over standards, picking out what seemed to be the topic statement and paying little attention to the rest.  However, this unit has shown me it's worthwhile to dig a little deeper into the language of the standard; there's a hidden richness for those who are patient enough to read carefully.  

I learned from the video "How to unpack a standard," from the Imperial County Office of Education, to analyze standards in terms of their verbs, concepts, and contexts.  By breaking standards down in this way, I found that standards often set higher expectations for students than I have tended to set.  For example, the first standard in the unit I am currently planning says, "Define a simple design problem reflecting a need or a want that includes specified criteria for success and constraints on materials, time, or cost."  When I broke the standard down, my attention was drawn to the standard's verb: define.  In the past, I as the teacher have done the heavy lifting of defining the problems students needed to solve, including the criteria for success and the constraints.  But this standard says that students themselves should learn to define the problem, and that realization changed how I planned the unit.  Now the plan starts with a lesson on identifying design problems and describing them in detail.  This lesson will prepare students to determine the criteria for success and inherent constraints in the unit's culminating activity, the egg drop challenge.

Backward mapping

Grant Wiggins, in his article "What is a big idea?" and his video lectures, makes the case that we should start the planning process by defining our big-picture goals.  What do we hope students will be able to do when they finish this unit, class, school year, or when they leave our school?  He points out that many teachers and educational systems seem not to know their own goals, or seem to be working from day to day without keeping the goals in mind.  He gives the example of critical and creative thinking.  Most educational systems hold the skills of critical and creative thinking to be among their major goals for students, but in the day-to-day classwork, students aren't often invited to practice critical and creative thinking.  At most schools, you can get straight As without developing these skills, he says, and I would have to agree.

Famous proponent of "backward design," Grant Wiggins, in a lecture from 2013.

My biggest takeaway from Wiggins' work (and the work of his co-author, Jay McTighe) is always to ask myself "Why?"  Instead of just covering content because it's in the standards, or doing an activity because it sounds fun, I need to ask myself, "Why this content?  Why this activity?"  If I start there, my planning and, consequently, my teaching will more likely focus on meaningful skills, taught in a way that enables students to transfer the skills to new situations.  Wiggins offers an example of how to teach a math unit on statistical measures like mean, median, and mode, with a sense of purpose.  He would start the unit with the question, "What is fair?"  because it's a question students are always raising.  "When you say what your Mom did wasn't fair, what do you mean?"  After facilitating students' reflections on questions of fairness, he would give them scenarios that involve fairness and can be addressed through mathematics (mean, median, and mode).  Students would come to see that the concepts of mean, median, and mode actually help us to answer the question, "What is fair?" 

Wiggins' example struck me, because as a 4th grade teacher I've covered mean, median, and mode.  I tried to make the unit fun by engaging the class in a simple ball game and keeping statistics on students' performance, and it was fun, but Wiggins' example gets closer to what kids really care about, and it means more in the end.  The example showed me that a backward design approach can lead to more meaningful lesson ideas.
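Wiggins' fairness question can be made concrete with a quick computation.  Here's a minimal sketch (the allowance figures are invented for illustration, not taken from Wiggins): when one value is an outlier, the three "averages" give three different answers to "What is fair?", which is exactly the tension students can explore.

```python
import statistics

# Hypothetical weekly allowances, in dollars; one kid is an outlier.
allowances = [5, 5, 5, 10, 10, 15, 100]

mean = statistics.mean(allowances)      # 150 / 7, pulled upward by the $100 outlier
median = statistics.median(allowances)  # middle value of the sorted list
mode = statistics.mode(allowances)      # most common value

print(f"mean = {mean:.2f}, median = {median}, mode = {mode}")
# → mean = 21.43, median = 10, mode = 5
```

Asking students which of these three numbers is the "fair" answer turns an abstract vocabulary lesson into an argument they actually care about.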

In terms of understanding standards and applying them to lessons, the backward design approach has taught me to start with the standards.  Before, I started with what I thought were cool activity and project ideas and then went looking for standards that fit my ideas.  In embracing backward design, I will start with the standards, the school's educational model, and my own big-picture goals for student learning.  Only after I've clearly defined the overall goals will I start generating unit and activity ideas.  In this way, I'll ensure a much closer match between the standards and our daily learning experiences.

Writing objectives

To be completely honest, when I've planned lessons in the past, I've tended to use the standards as my objectives.  I just listed the standards at the top of my lesson planning template and started writing the lesson.  So it's a step forward for me to generate objectives for a lesson based on the standards.  In this unit, I've come to understand that objectives help you define the smaller steps students need to take in order to meet the standard.  The standard is complex and almost always requires multiple lessons to achieve.  Writing objectives makes you stop and think, "What exactly are the steps I will need to guide my students through in order to help them reach the expected outcomes?"  

For instance, in the standards for the unit I'm currently planning, students are asked to generate and compare multiple possible solutions to a problem based on criteria and constraints.  In writing objectives for this standard, I realized that to compare multiple solutions to a problem, students would need to practice sharing their solutions with each other and giving feedback.  They would need to develop a deep understanding of the best criteria to use to evaluate their solutions, as well as the constraints presented by the situation.  A standard I might have "covered" in a single lesson before suddenly became the source for at least two, maybe three or four, lessons.  The process of writing objectives showed me several intermediate steps I really ought to be teaching my students, and not just expecting them to pick up on their own.

One more thing

The video "Think alouds: unpacking standards" by Sarah Brown Wessling introduced me to Appendix B of the Common Core English/Language Arts standards.  In this appendix, you can find text exemplars and sample performance tasks, including for social studies and science.  These texts and performance tasks give an idea of the level of rigor expected by the CCSS, in terms of the kinds of texts students are expected to master, and the higher-level thinking that's expected of them in relation to all of their literacy activities.

Reference list

Imperial County Office of Education. (n.d.). How to unpack a standard [Online video file]. Retrieved Dec. 4, 2016, from https://www.mydigitalchalkboard.org/portal/default/Content/Viewer/Content?action=2&scId=100028&sciId=829

Wessling, S. B. (n.d.). Think alouds: unpacking the standards [Online video file]. Retrieved Dec. 11, 2016, from https://www.teachingchannel.org/videos/understanding-the-common-core-standards

Wiggins, G. (2013, Feb. 28). Understanding by Design (1 of 2) [Online video file]. Retrieved Dec. 11, 2016, from https://www.youtube.com/watch?v=4isSHf3SBuQ

Wiggins, G. (2013, Feb. 28). Understanding by Design (2 of 2) [Online video file]. Retrieved Dec. 11, 2016, from https://www.youtube.com/watch?v=vgNODvvsgxM

Wiggins, G. (2010, June 10). What is a big idea?  Big ideas: an authentic education e-journal.  Retrieved Dec. 4, 2016, from http://www.authenticeducation.org/ae_bigideas/article.lasso?artid=99