Why We Should Cultivate a Growth Mindset in Our Students

Notes from the Classroom


A cynic might say that a “growth mindset” expects students to work with diligence in an area for which they are not genetically gifted, practicing something in which they will never gain excellence.  

My journey with growth mindsets has been different.

I’ve found that it’s not necessarily about motivating students to work harder. Instead, it’s an effort to propagate a way of thinking and talking. It’s helping high-achieving students realize that it’s normal to face challenges, and that being challenged is an opportunity to push forward and grow.

Why Our Learning Zones Should Be Risk Tolerant

Daniel Coyle, author of The Talent Code, tells us that “to get good, it’s helpful to be willing, or even enthusiastic, about being bad.”

But what if I am a student who is not enthusiastic about learning from mistakes?

We all have students who know how “to do” school. They’re the ones who get straight A’s without much effort.

For these students, mistakes are not an opportunity to learn. Instead, they’re a stamp of disapproval. So we need to be conscientious about our feedback, focusing it on things students can control: their effort, challenge-seeking, and persistence.

This means feedback should also avoid praising children’s “smartness.” At the same time, we can help students understand that effort is not simply doing something for a long time, or doing the same thing over and over, but seeking out challenges, setting goals, making plans, and using creative strategies to achieve those goals.

Language Matters when We Give Feedback

We need to support our students with lots of growth-mindset language. Students need to be praised for taking risks. That might mean saying, “Thank you! You just stretched our learning today.”

In that way, we show that mistakes are building blocks to our learning. Students need to be praised for looking at situations in new and different ways, and thanked for giving the learning community the opportunity to explore their thinking.

Yes, We Need High Expectations

Grant Cardone, in The 10x Rule, reminds us that success is important to our self-identity. “It promotes confidence, imagination, and a sense of security and emphasizes the significance of making a contribution,” he writes.

It’s an important lesson, but one that has been twisted over the years.

So many of our students have a sense of entitlement. Many times “the target” is lowered in order to make the student feel “successful.” But is that success? I don’t think so.

Success is about setting goals, working hard–and then even harder–until you reach your target. It’s not enough to just play the game.

Perhaps most overarching is the idea that we are all unfinished human beings. There is always room for change. Even when you think you have reached the top of your game, a person with a growth mindset is continually looking to reach higher: not to please others, but simply to become a better human being.

Tina Luchow (@tluchow25) is a fifth grade teacher at Oakwood Elementary, in the Brandon School District. She is in her eleventh year teaching upper elementary students, fifth and sixth grade, in the areas of math, reading, writing, and language conventions. Tina has studied reading and writing workshop practice, conducted action research, and is a 2017 Oakland Writing Project Teacher Consultant. Tina attended Baker College for her undergraduate degree in education, and Marygrove College for her Masters in the Art of Education with a focus in Reading. Unlike your average perfectionist, Tina understands that “good enough” falls around the 50th consecutive attempt to hang a poster completely level. How does she do it? An unwavering commitment to the sole source of her strength: yogurt, granola, and tea.

Good Teacher, Bad Data

Literacy & Technology Notes from the Classroom Professional Learning

I think it’s safe to say that there’s a bit more mathematical calculation in your normal English classroom pedagogy than there was, say, five years ago.

And you know what? That’s a good thing—a great thing if you’ve found meaningful ways to use the data gathered from formative and summative assessments.  

But data can also be pretty misleading.

The idea of using data to improve instruction has always been presented as a simple and elegant solution: gather data that shows which students miss which questions and, voilà, you know where to direct differentiated instruction to help every student reach mastery of the learning goals.

To wit: an easy question about an author’s tone shows that 90% of your students can correctly identify and explain the tone, but a second tone question on the same assessment, testing the same learning goal with a much more challenging passage, reveals that only 50% of your class can really decipher tone when the going gets tough (or the tone gets subtle).

This is really fantastic information to have! Ten percent of your kids need to go back, review their notes, and probably do some formative practice. But another 40% need to work on applying their newfound skill. They clearly know what tone is, but when the tone isn’t smacking them in the face, they aren’t that great at recognizing it in writing. The needs of these two groups are different, but now you know whom to direct to which formative task!
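If it helps to see the logic of that split laid out explicitly, here is a minimal sketch of the sorting it implies. Everything here — the function name, the students, the scores — is invented for illustration; the point is only that results on a paired easy/hard question map each student to one of three formative paths.

```python
def triage(results):
    """Sort students into formative groups based on a paired
    easy/hard question that tests the same learning goal.

    results: dict mapping student name -> (easy_correct, hard_correct)
    """
    review_basics = []          # missed even the easy question: review notes
    practice_application = []   # got the easy one, missed the hard one
    mastered = []               # got both: ready to move on
    for student, (easy, hard) in results.items():
        if not easy:
            review_basics.append(student)
        elif not hard:
            practice_application.append(student)
        else:
            mastered.append(student)
    return review_basics, practice_application, mastered

# Hypothetical class results for the two tone questions
sample = {
    "Avery": (True, True),
    "Blake": (True, False),
    "Casey": (False, False),
}
basics, application, mastered = triage(sample)
```

In the 90%/50% scenario above, `basics` would hold the 10% who missed the easy question, and `application` the 40% who only stumbled on the subtle passage.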

The Signal, The Noise, The Headache

The funny thing about data, though, is that numbers aren’t as clear and objective as all those charts and bar graphs would have us believe. If you don’t want to take an English teacher’s word for that, get ahold of Nate Silver’s excellent book The Signal and the Noise, which reveals just how difficult it can be to get data to tell you the truth.

Or, for that matter, believe your own experience, since I’m fairly certain you’ve also experienced the sort of data debacle I’m about to describe.  

A few years ago, my professional learning community rewrote all of our assessment questions so that they were clearly labeled by learning goal. When we tested a student’s ability to support an argument using textual evidence, the question might look like this:

Using Evidence: Using at least one quote, explain how Jon Krakauer establishes Chris McCandless’s desire to live a more primitive lifestyle in Into the Wild.

Now everything should be clean and easy to parse—if kids get the question right, they have mastered the use of textual evidence. If they get it wrong, they have not. And if they can explain Krakauer’s methods but fail to use a quote, we can presume they’re halfway there.

So would it surprise you to learn that my PLC ended up getting incredibly muddled data from this question? And that we eventually had to rethink how we were interpreting much of the data? Here are some of the issues that we encountered:

  • How can you tell when a student lacks a skill versus when they lack vocabulary? Three of my stronger students asked me what primitive meant—in my first period alone!
  • Did all the students recognize the implicit meaning of the verb explain? Have you been clear about what various verbs (contrast, analyze, challenge) demand of them in an assessment?
  • How do you decide whether a student just hasn’t written enough? And what should the takeaway be when students can vocalize an answer that is thorough and accurate?
  • How much should you be concerned when a student’s example is the one you’ve already used in a class discussion? What if that brand of example shows up on every single assessment a student takes?
  • If you give the students one passage to focus on, is a correct answer an indication of mastery of this skill or only partial mastery (since on their own they might not have been able to select the relevant part of the text from, say, an entire chapter)?

Any of these are good reasons to have a careful data discussion in your PLC. But let’s just take that first one—lacking a skill versus lacking vocabulary—as an example. 

I couldn’t write off the students who asked what primitive meant as a trivial minority; these were the grade-conscious kids who were good about asking questions. If they didn’t know the term and said so, then there was a good chance that a lot of the other kids also didn’t know the meaning of primitive. They just didn’t bother to ask.

Is Data Doomed?

All of a sudden, our data about this fundamental writing skill seemed really murky. And this was a learning goal we thought was pretty transparent and objective!  There was a sudden temptation to go back to the more instinctive, less numbers-driven approach to gathering feedback about students.

Even though gathering good data in English is tougher than it seems, it is both possible and essential for effective instruction. I’ll revisit my own case study in my next blog post, in order to explain a few of the countermeasures my PLC took to help avoid “fuzzy” data points.

In the meantime, think about the next assessment you give to students. Whatever data you take from it, ask yourself whether more than one “theory” about the kids’ performances on it would fit the data you’re staring at.

Michael Ziegler

Michael Ziegler (@ZigThinks) is a Content Area Leader and teacher at Novi High School. This is his 15th year in the classroom. He teaches 11th Grade English and IB Theory of Knowledge. He also coaches JV Girls Soccer and has spent time as a Creative Writing Club sponsor, Poetry Slam team coach, AdvancEd Chair, and Boys JV Soccer Coach. He did his undergraduate work at the University of Michigan, majoring in English, and earned his Master’s in Administration from Michigan State University.

Standards-Based Grading (Part 2)

Formative Assessment Notes from the Classroom Oakland Writing Project

In writing about standards-based grading, I’ve described how grades should reflect learning, and how assessments should be connected to standards. Rick Wormeli, an expert on the subject, also argues that in order to coach a student toward academic standards, we must use descriptive feedback.

Descriptive feedback tells students what they have accomplished toward a particular standard, and what else they need to do to meet it. This feedback should be given consistently to all students; it is the role that formative assessment (tasks completed on the path to mastery) plays in education.

Continuing my work with the Galileo Leadership Consortium, I met another expert on the subject: Dr. Ellen Vorenkamp, from Wayne County RESA, who helps me use formative assessment in my classroom. Formative assessment, I’ve learned, should be aligned with data, and it should always be planned and used in a timely, purposeful way.

Vorenkamp offers five pillars of formative assessment. These pillars help me assess which methods I am already using and which I still need to add in my classroom. They are:

  • Pillar I:  Clear Learning Targets
  • Pillar II:  Effective Questioning
  • Pillar III:  Descriptive, Actionable Feedback
  • Pillar IV: Students as Self-Assessors
  • Pillar V:  Students as Peer-Assessors

How This Looks in My Classroom

In my classroom we write claim (thesis) statements for argumentative and informative writing. I will focus here on argumentative writing.  

For 8th grade, the academic standard is, “Introduce claim(s), acknowledge and distinguish claim(s) from alternate or opposing claims, and organize the reasons and evidence logically.”

Pillar I, the learning target, is: Students will write a claim statement that includes the main topic of the argument, a summary of evidence, and an opposing side.  

Students are introduced to this task with a learning chart and examples from me and past students. They then write a claim statement and turn it in. This is a great spot for formative assessment. I read the students’ statements, and I offer them the stages of development toward this learning target, along with student samples at each level. And from earlier work in establishing a growth mindset in my classroom, students understand that they can revise their work to show more mastery of the standard.

The stages of development in my rubric, along with descriptive criteria, are:

  • 1 (working to meet standard)
    • unknown topic or argument
    • written in question form
    • includes extraneous or unrelated information
    • argument is not logical
  • 2 (mostly meeting standard)
    • straightforward – includes topic and evidence but no opposing side OR
    • represents both sides equally
  • 3 (meeting standard)
    • includes topic, evidence, opposing side – needs some word clarity
    • includes more details than is necessary for a claim
    • creative structure (evidence first)
    • multiple pieces of evidence listed for supporting side
  • 4 (exceeding standard)
    • includes topic, evidence, opposing side – clear
    • includes multiple pieces of evidence for both sides
    • argument is clear

Here, I have taken my instruction and student practice through Pillar I (assigning a clear learning target), Pillar II (effective questioning, by clarifying the difference between levels of achievement), and Pillar III (descriptive, actionable feedback, by telling students what they have accomplished and what they still need to accomplish).

The Use of Exemplars

A shift I made is to make these levels clear to students, with student examples at each level of achievement. With this shift, I can take on Pillar IV, students as self-assessors, as they compare their work to the achievement levels and exemplars I have shared.

With this learning target, I still need to add in Pillar V: students as peer-assessors. So I now give students my own work. Students are asked to apply the rubric to my example, and to give me a next step to improve my work. They indicate my score by holding up their fingers. Then, quickly, students discuss with a partner their reason for the score, and a next step for my work.

By practicing peer assessment in this way, students gain comfort with the practice. Later, they can move comfortably into the roles of self- and peer-assessors, with clear targets for achievement, because they know that these formative assessments are not a judgment, but rather a process to guide their learning.

Each of these steps is just a small shift in my classroom. Yet together they allow better achievement of academic standards. What small shift will you make to address all five pillars of formative assessment?

Amy Gurney is an 8th grade Language Arts teacher for Bloomfield Hills School District. She was a facilitator for the release of the MAISA units of study. She has studied, researched, and practiced reading and writing workshop through Oakland Schools, The Teacher’s College, and action research projects. She earned a Bachelor of Science in Education at Central Michigan University and a Master’s in Educational Administration at Michigan State University.

Standards-Based Grading (Part 1)

Consultants' Corner Formative Assessment Oakland Writing Project

For the next two years, I will be part of the Galileo Leadership Consortium, which works to advance teacher leadership. In this role, I am asked to learn what I can do better for the profession.

Sometimes all we have to realize is that we can do better. And one of the things the consortium believes we can do better is grading. We believe that grading doesn’t have to be punitive, and that it can actually relay information about what a child has learned. To help you understand and implement these beliefs, I will share what I have learned from experts, and how I have used that learning in my 8th grade Language Arts classroom.

Wormeli’s Views of Grading

The first expert I encountered was Rick Wormeli, best known for his research on 21st-century teaching practices. Five minutes into a day with him, though, and you realize that he has put into practice all of the research he presents.

From Wormeli, I learned three things:

  • Teachers are ethically responsible for the grades which they report.

This means that grades that are padded—with scores for neatness, following directions, turning work in on time—are grades that do not represent the amount of learning the student has achieved.

  • Grades should accurately report the amount of mastery that a student has on the standards of that subject.

This means that an A in Language Arts represents that the student has achieved an A level of mastery in reading and writing narrative, argument, and informational texts. An A level of mastery is typically labeled “Exceeding Mastery.”

  • Assessments should be connected to standards.

This means that every activity we ask of students should be related to the learning they should be achieving, and every assessment should be connected to a standard, so that the student’s grade can reflect achievement of that standard.

What This Looks Like in My Classroom

So, what does standards-based grading mean in literacy instruction, and what does this look like in my classroom?

I don’t have a definitive answer, but I have some experiences that I will share.

Shift 1: I connect daily lessons to standards, and present these connections to students.

At the beginning of each lesson, I name my teaching point. Students and I refer to our Self-Evaluation Standards page (you can click on the thumbnail image to the right). On this page, we underline key words and compare how our work achieved that standard. As you can see, the columns represent achievement levels. Students can mark a date and a level they think they achieved. From here, students can see their growth towards achieving each standard.

Shift 2: For every standard that I teach to kids, I create a rubric of achievement.

I share the rubrics with kids along with exemplars (thumbnail on the right) at each achievement level. Students can see where their work aligns with that of their peers, and with the way the work should look. The rubric and examples help kids work to meet the standards.

Shift 3: I insist that students walk away each day having learned something.

In order to quantify this, I have started giving quick and varied formative assessments. For one concept, it may be a post-it of key terms, with verbal descriptive feedback. For another, it may be written descriptive feedback about how to move forward in achieving a standard (an example is on the right). For any assessment, students may re-try the work in order to show more learning. This helps kids learn and apply the material, but it also shows them that they are active, successful learners.

Every day in my classroom looks a little different, and I am using different strategies to reach all of my learners. But in doing so, I know which standards students have achieved and which they are working on. We also have a great community that wants to work and grow and show its best work. And all this came from three small shifts.

Amy Gurney is an 8th grade Language Arts teacher for Bloomfield Hills School District. She was a facilitator for the release of the MAISA units of study. She has studied, researched, and practiced reading and writing workshop through Oakland Schools, The Teacher’s College, and action research projects. She earned a Bachelor of Science in Education at Central Michigan University and a Master’s in Educational Administration at Michigan State University.