
Data is the New Bacon

Notes from the Classroom | Oakland Writing Project

Over the summer, the literacy researcher Nell Duke tweeted that she saw a shirt that said “Data is the new bacon.”

Both she and I are vegetarians, but I can understand the sentiment. In the education world, data is king–for better or worse. I think this shirt was trying to say that data is everywhere, and everyone loves it (just like bacon). But we should also be careful, because like bacon, too much data can be bad for your health.

In my new role as ELA Curriculum Coordinator for my district, I am responsible for our continuous school/district improvement initiatives and our multi-tiered systems of support. These two areas, in particular, require data in order to make instructional decisions, monitor progress, and reflect on those decisions.

Why Use a Data Protocol?

In our district, we use a data protocol modeled after Bruce Wellman and Laura Lipton’s book, Got Data? Now What? For those who are unfamiliar with the term, a data protocol is a structured way to look at and talk about data. A protocol like this helps shape our conversations, which in turn shape our thinking. A data protocol, in particular, allows us to talk about data in a safe and structured way that brings all voices into the conversation.

When thinking about data, it’s useful to get out of the mindset that data has to be numbers. Data sets can represent anything you want to think deeply about. Take, for example, a new data warehouse that we rolled out at the beginning of the year. This is an online repository for data on student demographics, assessments, behaviors, and so forth; it maintains all the data in one place, and allows various reports to be run from that data.

This warehouse turned out to be a good data set for our teachers. After teachers received training in the data protocol, they used the data warehouse platform itself as a data set to be analyzed.

Intellectual “Hang Time”

I am an avid practitioner of yoga, and often my yoga teachers will tell the class to stay in the pose, to give it some “hang time” before we move on to the next pose. That’s the hard part, though. When it’s hard and you want to get out, that’s where the work happens. The same is true when talking about data–the power is in our ability to give ourselves intellectual “hang time.”

A data protocol allows us to do this. It also prevents us from moving too quickly to judgment and action, or from looking at too much data at once (the “too much bacon is detrimental” stuff).

I coach a Formative Assessment for Michigan Educators team in my district, and I recently had the amazing opportunity to attend a workshop facilitated by Bruce Wellman, which was called “Using Data to Mediate Thinking.” Throughout the day, Wellman reiterated that power and deep understanding emerge only when we’ve allowed ourselves the time to observe the data without evaluation and just be uncertain, because “uncertainty is the foundation of inquiry and research.”

The Three Phases

Wellman and Lipton’s data protocol is broken into three phases:

  • Activating and Engaging. This is where participants bring experiences and expectations to the surface and voice predictions and assumptions about the data.
  • Exploring and Discovering. This is where groups analyze the data. It is a time for observation without judgment. This is where that intellectual “hang time” really comes into play, as groups must resist the urge to jump to conclusions and try to take action.
  • Organizing and Integrating. This is where groups identify areas of concern, determine causation, and begin developing theories of action.

Like bacon, data is great, but too much of it or rushing through it can be a problem and won’t yield the solutions we need to improve teaching and learning. Allowing ourselves a dedicated time and way of talking about data can help us resist those tendencies.

Jianna Taylor (@JiannaTaylor) is the ELA Curriculum Coordinator for the West Bloomfield School District. Prior to this role, she was a middle school ELA and Title 1 teacher. She is a MiELA Network Summer Institute facilitator and is an Oakland Writing Project Teacher Leader. Jianna earned her bachelor’s degree from Oakland University and her master’s degree from the University of Michigan. She also writes reviews of children’s books and young adult novels for the magazine School Library Connection.

Bad Data, Good Data, Red Data, Blue Data

Notes from the Classroom

Back in part one of this post, I explored a problem that my PLC had while attempting to gather accurate data from student assessments.

This post, while still recognizing some problems with data, is more upbeat, and provides some reassurance that you can (and should!) continue to gather and use data.

In my last post, I described how the word primitive was a foreign term even to some “A” students, and how this proved to be a problem for an assessment on the use of textual evidence. Beyond such content-specific vocabulary, there’s a secondary issue of assessment lingo: we ask kids to examine, analyze, compare, and evaluate, but few teachers directly instruct exact meanings for these terms.

The solution here is simple, but often bothers English teachers: define terms for the kids. When students ask you what you mean by contrast, you should be willing to explain that for them every time.  

Why? Because the term itself isn’t the skill you want data about. If the term is the learning target, as diction might be, then independence is obviously an expectation. For all the other assessments, though, you’re damaging your own data if you don’t make sure the students understand every word.

Aim Small, Miss Small

Here’s one of the most common data failures we inflict on ourselves: writing questions that attempt to assess too many things at once.

If I’m writing a short-answer question for an assessment about a passage’s tone, my expectation is for:

  • complete sentences;
  • a clear response to the question;
  • a quote (embedded and cited) to help prove the answer is correct;
  • and an analysis of the quote to tie it all together.  

Even without getting into partially correct responses, you can see how my expectations have created six (!) potential point reductions (the quote requirement alone accounts for three).

But what have I done to my data if I take off one of two possible points for, say, not including a quote? If students paraphrased the text effectively and were right about the tone of the passage, then they’ve actually provided me two separate pieces of data about two different learning goals; they have mastered tone analysis, but they are deficient in using textual evidence to prove their arguments.  

When we conflate the two and give them a ½ on the question, we have provided ourselves a sloppy data point. And by the time we’ve graded a set of 120 of those assessments, we might come to a wrong-minded, broad conclusion that sets the class back needlessly. Do they even know what tone is? Or are they just averse to quotes?
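
To make the contrast concrete, here is a minimal sketch (in Python, with entirely hypothetical responses) of what the same set of papers looks like when recorded as one blended grade versus two separate data points, one per learning goal. The student values are invented for illustration, not pulled from any real gradebook.

```python
# Minimal sketch: the same four hypothetical responses, scored two ways.
# True/False values are invented for illustration only.

responses = [
    # (student, tone_correct, used_quote)
    ("A", True,  False),
    ("B", True,  True),
    ("C", True,  False),
    ("D", False, True),
]

# Blended scoring: tone and quote collapsed into a single fraction per student.
blended = [(tone + quote) / 2 for _, tone, quote in responses]
print("Average blended score:", sum(blended) / len(blended))  # 0.625 -- looks like a general problem

# Separate data points: each learning goal gets its own column.
tone_mastery = sum(tone for _, tone, _ in responses) / len(responses)
quote_usage = sum(quote for _, _, quote in responses) / len(responses)
print("Tone analysis:", tone_mastery)    # 0.75 -- mostly mastered
print("Textual evidence:", quote_usage)  # 0.50 -- this is the real gap
```

The blended average hides which skill actually failed; the two separate columns point straight at it.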

Consider that tone example once more. Does the question need to be rewritten? Maybe not.

As long as you’re willing to grade the assessment question for only the core skill (tone or textual evidence, but not both at once), then it can provide you some excellent data. 

Writing questions that address one clear skill is ideal. But sometimes a question that entails multiple skills can be highly useful—as long as you aren’t attempting to score it for every skill at once.  

Post-Assessment Interventions

Logic suggests a problem with this, though. Even if I narrow the learning target I’m assessing, it doesn’t clarify the problem’s source. Did a student choose her quote poorly because she doesn’t know tone, or because she lacks the ability to choose textual evidence well?

The solution to this, I think, is the post-assessment toolbox most teachers already put to use. Conference with students for a couple of minutes. They can speak effectively to where things went wrong, and the data then becomes much more reliable.

When you don’t have time for one-on-one conferencing, having students self-reflect while you go over the assessment as a class can be just as useful. Ask students to make follow-up marks that you can look over later (“T” meaning “I didn’t understand the tone that well,” or “Q” for “I didn’t know what quote to select”). This might seem like an inelegant solution, but think about what you’ve created: a robust data set that includes your initial impressions of their skills, alongside a self-evaluation where students have provided input on exactly what skill failed them.
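
If it helps to picture what that combined data set might look like, here is a small sketch, assuming made-up records and the hypothetical field names student, score, and mark; it simply tallies the self-reflection codes for the responses that lost points.

```python
# Small sketch with invented records: tally students' self-reflection codes
# ("T" = unsure about tone, "Q" = unsure which quote to select) for the
# responses that lost points on the initial grading.

from collections import Counter

records = [
    {"student": "A", "score": 0.5, "mark": "Q"},
    {"student": "B", "score": 1.0, "mark": None},
    {"student": "C", "score": 0.5, "mark": "T"},
    {"student": "D", "score": 0.5, "mark": "Q"},
]

missed_marks = [r["mark"] for r in records if r["score"] < 1.0 and r["mark"]]
print(Counter(missed_marks))  # Counter({'Q': 2, 'T': 1}) -> evidence selection, not tone, is the gap
```

Even a tally this crude tells you which skill to reteach before you spend class time reviewing the wrong one.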

There are obviously dozens of other solutions to the problem of inexact data, but I think the simplest takeaway is to be vigilant about communicating what you want your students to know, and explicit about how your grading rubric measures each learning goal in isolation.

It takes time to fix these sorts of systemic problems. But I’d argue that it amounts to less time than we spend reviewing concepts in class that we’ve misidentified as problematic, having listened to the lies of Bad Data.

Michael Ziegler

Michael Ziegler (@ZigThinks) is a Content Area Leader and teacher at Novi High School. This is his 15th year in the classroom. He teaches 11th Grade English and IB Theory of Knowledge. He also coaches JV Girls Soccer and has spent time as a Creative Writing Club sponsor, Poetry Slam team coach, AdvancED Chair, and Boys JV Soccer Coach. He did his undergraduate work at the University of Michigan, majoring in English, and earned his master’s in Administration from Michigan State University.