
How to grade reading comprehension?

August 20, 2018

How do you grade reading comprehension?

This has been a hard question for me to answer for quite a few years now. I have felt somewhat torn about which reading skills I want to assess in order to fit my vision for a proficiency-oriented course. Finally, I have a plan, and I feel confident that it will support my students' progress on the path to proficiency.

I unveiled my new reading assessment rubric during a Facebook LIVE on Friday. With many teachers who have no background in Comprehension Based teaching or Standards Based Assessment adopting the SOMOS Curriculum this year, I knew that I needed to bring some clarity to my vision for assessment. If you'd like to understand my complete vision, please search #newtosomos and #assessment in the SOMOS Curriculum Collaboration group.

Today, I want to share with you one small piece--and one that I have been wrestling with for a few months: assessing reading and listening comprehension (the Interpretive mode).

GRADING THE PRESENTATIONAL MODE

Soon after I made the switch to Standards Based Assessment (well, as much as one can in a traditional grading system), I began evaluating all of my students' productive work (presentational writing and presentational speaking) using this Proficiency Targets rubric. The rubric, which is adapted from a rubric that Crystal Barragan created, evaluates student production in reference to the language used to describe proficiency sub-levels in the ACTFL Proficiency Guidelines. Because the Proficiency Guidelines describe real-world performance, this rubric is really a performance rubric (similar to ACTFL's Performance Descriptors, but broken down into sub-levels of proficiency). To me, it was a really easy decision to evaluate production in this way because student performance in the presentational mode in the classroom setting very closely mimics their performance in the real world (their true proficiency).

Still following? Phew!

GRADING THE INTERPERSONAL MODE

I don't evaluate beginning students in the Interpersonal mode, so that rubric was easy because I didn't make one.

GRADING THE INTERPRETIVE MODE

The interpretive mode, though...well...that one was trickier. I saw two routes:

  1. Basing the rubric on the Performance Descriptors. This would be the best predictor of real-world interpretive proficiency, and I like that. This would require all interpretive assessments to involve authentic resources, since the Proficiency Guidelines and Performance Descriptors all describe interpretive ability in relation to authentic materials.
  2. Basing the rubric on traditional reading comprehension rubrics. This would evaluate the same reading skills that L1 teachers are looking to develop, and I like that. This would allow interpretive assessments to involve teacher-created texts.

There are many interpretive assessments in SOMOS that involve teacher-created texts, so going with Option 1 would require a total overhaul of the assessments in SOMOS 1 and 2. And while it would be a labor...I would be totally game for that labor if I felt that it would best support my curricular goals. Having already gone the Performance Descriptors/Proficiency Guidelines route with my Productive rubric, I have felt very drawn toward Option 1. I would love for my students to see their progress toward the goal of being able to interpret authentic resources with ease, understanding even the nuances contained within. However, the stress that comes with student interaction with an authentic resource--especially in a testing situation--caused me to think twice before making the switch.

What is the goal of my program?

PROFICIENCY

How do my students get there?

PROFICIENCY-ORIENTED INSTRUCTION

How can assessments best support the goal?

  1. Providing opportunities for students to demonstrate progress toward the goal
  2. Encouraging students to continue the journey
  3. Giving me the information I need to plan future instruction and interventions

I have decided that using authentic resources as the basis for all of my interpretive assessments does NOT best support the goal of proficiency. While #authres-based assessments can certainly allow students to demonstrate progress, I think that the inevitable stress that comes with being graded on their ability to figure out what a resource means--even a well-chosen resource--undermines what I know to be true about language acquisition. I have decided to keep #authres in their current role in the SOMOS curriculum, as sources of comprehensible input and intrigue--appearing frequently in low-stakes roles.

How can I assess learner comprehension of teacher-created texts?

Now that I have recommitted to my decision to use teacher-created texts as the basis for my interpretive assessments, I needed to bring clarity to how I recommend using them. In the past, I gave vague answers that involved some mix of depends-on-how-many-questions-they-missed and depends-on-which-questions-they-missed and depends-on-how-they-answered-the-questions. Phew! I did not have a duplicatable model for interpretive assessment...so I set out to create one!

Use this interpretive rubric to assess reading and listening comprehension in world language courses

As you can see, this rubric evaluates comprehension from four angles:

  1. Comprehension of individual words and phrases
  2. Comprehension of concepts (main ideas and details)
  3. Ability to cite textual evidence to support conclusions
  4. Ability to infer (interpret meaning of unfamiliar words based on context, extract information not explicitly stated in the text)

I feel confident providing this rubric to teachers using my curriculum because it evaluates progress within the framework of comprehension based instruction. In comprehension based courses, students develop what Terry Waltz calls 'micro-fluency' in her fantastic manual, TPRS with Chinese Characteristics. In the classroom context, our students seem to almost skip over the Novice proficiency level in the interpretive mode altogether. (Keep in mind that ACTFL's Proficiency Guidelines do not describe performance in the classroom setting, but in the real world! Students in comprehension based programs do NOT skip over the Novice level in the real world.) Because of this micro-fluency, I think that evaluating student interpretive comprehension with rubrics aligned to more traditional L1 reading comprehension rubrics better communicates student progress toward the goal of proficiency and better informs my instruction.

Soooo...how is your vision for a proficiency-oriented language course realized? Do your assessments support that goal? How? What lingering questions are YOU wrestling with? How does my vision for interpretive assessment match yours, and what are the points of departure?

Let's talk!!
