Thursday, July 21, 2016

SE2R Comments in Grade 9 Science

I have been distracted from blogging but am trying to catch up and am back to trying to deprivatize my practice more. And so, I am resolved to blog about my assessment practices (including failures and questions) and hope that I can spark conversation, collect some feedback, and crowd source some ideas. So I have one request - if you are reading these entries, please share them with someone else and/or comment at the bottom and join the conversation. :) The first entry from this series can be found here.

Our wonderful Instructional Coordinator, Assessment, Kristen Clarke, organized some assessment-related book chats on Twitter in the latter half of the school year. I was able to participate in two of them and helped moderate a third. One of the books we discussed was Mark Barnes' Assessment 3.0: Throw Out Your Grade Book and Inspire Learning. Mark is the pioneer of the Teachers Throwing Out Grades movement in education (check out #TTOG on Twitter).

[We also discussed Rethinking Letter Grades (which inspired most of my Overarching Learning Goal and Learning Map blog entries) and Myron Dueck's Grading Smarter Not Harder (which inspired this entry related to reformatting tests and using learning goals).]

Barnes' book is largely about the feedback process that allowed him to go gradeless in his classroom. By giving students an appropriate avenue to find out what they had done well and what they could improve on, the traditional need for grades virtually disappeared. Reading his book has inspired me to make further efforts toward the same goal - a classroom of students who want to learn and grow (not students who want to do what they think I want and get marks). I want to build a community of students with a growth mindset who therefore believe in themselves as learners and can reflect appropriately on their own work and the work of others.

Barnes' feedback method is referred to as SE2R. This means that every time he gives feedback he follows this pattern (and thus, teaches his students to do the same):

  • Summarize (what has the student done to meet the requirements of the specific assignment)
  • Explain (what mastery of skill/learning is shown)
  • Redirect (indicate what lessons should be reviewed to master concepts/skills not yet mastered)
  • Resubmit (encourage student to review and rework and give directions for resubmission)

I tried to put these ideas into practice a couple of times in the following weeks to test them out. Here is what I tried and how I did it:

  1. I posted a paragraph-writing question on Edmodo as an assignment and told students it would be on the unit test. Students were told what the learning goals were and that this was a chance to answer the question and get feedback before the test.
  2. I was able to practice writing comments that followed the SE2R model. Here is an example of what I wrote:
  3. I created a "comment bank" using the SE2R model and based on the learning map I had created for the course. I sometimes had to modify comments to better fit an individual student's work, but in the end I saved myself time AND was able to be more consistent and focus only on the important areas.
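
To give a sense of how a comment bank like this could be organized (the learning goal key, template wording, and student details below are made-up placeholders, not my actual comments), here is a minimal sketch:

```python
# Minimal sketch of an SE2R-style comment bank (hypothetical learning goal and wording).
# Each entry stores a template for the four SE2R moves; student-specific details
# are filled in before the comment is delivered.

COMMENT_BANK = {
    "ecosystems_interactions": {
        "summarize": "You wrote a paragraph describing {observed} in your ecojar.",
        "explain": "This shows you can connect biotic and abiotic factors ({strength}).",
        "redirect": "Review the lesson on {lesson} to strengthen the part about {gap}.",
        "resubmit": "Revise your paragraph and resubmit it on Edmodo by {deadline}.",
    },
}

def build_comment(goal_key: str, **details: str) -> str:
    """Assemble a full SE2R comment for one student from the bank."""
    template = COMMENT_BANK[goal_key]
    parts = [template[move].format(**details)
             for move in ("summarize", "explain", "redirect", "resubmit")]
    return " ".join(parts)

if __name__ == "__main__":
    print(build_comment(
        "ecosystems_interactions",
        observed="how fertilizer affected your plants",
        strength="cause and effect between a human impact and the ecosystem",
        lesson="nutrient cycles",
        gap="why oxygen levels dropped",
        deadline="Friday",
    ))
```

The point of the structure is simply that the four SE2R moves stay consistent while the student-specific details change.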

Sunday, July 10, 2016

LGs vs SC vs Task Requirements

Discussing assessment in the education world often leads to more questions than answers and creates a lot of discourse. I have learned that the questions and disagreements are a necessary part of change and that we need to acknowledge and tackle them, not shy away from them. It is easy to feel defensive when our practices are questioned or challenged. It is easy to feel like we are being told that what we have been doing was wrong.

I have also learned that when someone asks "why," they are usually genuinely curious OR they are trying to get me to think about what I am doing. It has taught me to always ask myself why.

I have brought this up because one of the things that has come up a lot recently is the discussion comparing learning goals (LGs), success criteria (SC), and task requirements. It can be easy to write a statement that does not clearly fit into one category or another. But it is also important to try to understand and recognize the differences.

Here is my understanding of each:
LGs are statements that describe what a student should know and be able to do, specific to a lesson or set of lessons. In Ontario these will often be based directly on the overall expectations of the curriculum.

SC describe what the learning will look or sound like when the student is meeting expectations. They should be based on agreed-upon statements developed with a course team, but when employed in class they may often be co-constructed with the students.

Task requirements are things that a student is asked to do that do not fall under that course's expectations but are necessary to help the teacher focus on evaluating the learning without distraction. These cannot be evaluated, but a student may be asked to resubmit work when they are not followed.

I will be the first to admit that through my journey this year there were times when these things were misused or mixed up. This exploration has allowed me to gain a much better understanding of them and why they are important, and has forced me to rethink the what, why, and how of my classroom assessment.

Through my use of overarching learning goals, learning maps, learning goals, etc., I hope to spend more time giving feedback in the future and less time giving grades. By continuing to build my assessment literacy I can continue to build my students' assessment literacy.

My hopes for the future include students who...

  • take ownership of their learning
  • can self- and peer-assess
  • question what we do and why we do it (purposefully)
  • enjoy class more
  • focus on and prioritize learning not marks
and include my hopes to...
  • bring focus to all classroom tasks and evaluations
  • shorten written evaluations
  • grade less
  • discuss more
Comments and questions are welcome and encouraged. Use the comment feature below or connect with me on Twitter :) 

Thanks for reading!

Thursday, July 7, 2016

Math Learning Map Journey

I have been distracted from blogging but am trying to catch up and am back to trying to deprivatize my practice more. And so, I am resolved to blog about my assessment practices (including failures and questions) and hope that I can spark conversation, collect some feedback, and crowd source some ideas. So I have one request - if you are reading these entries, please share them with someone else and/or comment at the bottom and join the conversation. :) The first entry from this series can be found here.

Today I am hoping to share the process and evolution of my experiences working toward a learning map for a math course. I think this is a specific journey worth deprivatizing because it has been a complicated one that involved a lot of lengthy discussions.

This entry requires you to know what an Overarching Learning Goal (OLG) is: I described it in an earlier entry as big ideas written as broad-spectrum learning goals that marry the "know" and "do" that we hope a student leaves a course with. I have discussed OLGs in a number of other entries since that one, which can show you some of my journey through understanding and using them as well.

Starting in early 2015 I dove into OLGs and started working with a colleague to write some for a math course. At the time we were both teaching Gr 10 Academic ("Principles of Mathematics"), so we tried to tackle it. This was the first attempt at the process for either of us, for any course, so it was an exploration of the process itself - and a discovery that every time we tried it we wanted to make different decisions. [One thing I would critique us on, looking back, was our neglect of the front matter of the curriculum.]

Here is a look at the 2 different sets of OLGs we landed on in our two attempts.

In 2016 I had the opportunity to gather with some math and assessment colleagues from around the board to take a real look at designing OLGs and a Learning Map (LM). [I showed a sample LM for my Science course in this entry earlier if you would like some context.] An LM takes the OLGs and describes what the learning should look like at each level. This map can then be used for many purposes.
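
As a rough sketch of the shape of a learning map (the OLG wording and level descriptors below are invented placeholders, not our actual map), it is essentially a grid of OLGs by achievement level:

```python
# Rough sketch of a learning map's structure: each overarching learning goal (OLG)
# is described at each achievement level. All wording below is placeholder text.

LEARNING_MAP = {
    "OLG 1: Solve problems using mathematical processes": {
        1: "Applies a given strategy to simple, familiar problems with support.",
        2: "Selects a strategy for familiar problems; solutions are partially complete.",
        3: "Selects and carries out appropriate strategies for most problems.",
        4: "Selects, adapts, and justifies strategies for new and complex problems.",
    },
}

def describe(olg: str, level: int) -> str:
    """Look up what the learning should look like for an OLG at a given level."""
    return LEARNING_MAP[olg][level]

print(describe("OLG 1: Solve problems using mathematical processes", 3))
```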

Over the course of many discussions with colleagues at my school, one of our math resource teachers (@MashelleKaukab), and the above-mentioned gathering, we went through a process of unpacking the MFM 1P (Gr 9 Applied - Foundations of Mathematics) course and the Math Processes (the front matter of the Ontario math curriculum). It involved a lot of debate with well-reasoned points - and a lot of learning! Oh, how our brains hurt at the end of that day!

Our team decided to create "skeleton" OLGs that focused on the processes that could then theoretically be used to finish OLGs for any course at any level (perhaps with rewording needed). Here is where we landed:

Our team left that meeting still feeling like things were a work in progress, but I am sharing a draft of our work in the hope that you will contribute to the discussion by providing feedback. Please visit a copy of the document here.

The hope is that this map will become the foundation for every decision, evaluation, and report completed for the course, and the backbone of my backward design for the course.

Thank you for reading and for joining the discussion!
Happy summer!

Monday, June 27, 2016

Triangulating Evidence - Formal Interview Attempt

I have been distracted from blogging but am trying to catch up and am back to trying to deprivatize my practice more. And so, I am resolved to blog about my assessment practices (including failures and questions) and hope that I can spark conversation, collect some feedback, and crowd source some ideas. So I have one request - if you are reading these entries, please share them with someone else and/or comment at the bottom and join the conversation. :) The first entry from this series can be found here.

In this entry I would like to share my first formal attempt at integrating conversation (via an interview) into my class.

As part of the Peel Teacher Assessment Working Team (PTAWT) we discussed triangulated evidence and worked together to create something to use this semester. At the time my Grade 9 students were in the middle of their Ecojar labs and had been asked to write a lab report and to do some research about the impacts they were studying in the local ecosystems. It was getting late in the semester, so I decided it would make sense to take the research part out of the report and try it as an interview.

I worked with a couple of others at the PTAWT meeting and we checked the Overarching Learning Goals (OLGs) that the assignment aligned with, pulled out the specific learning goals, and then created a Google Form to use to record the interviews. It was a good experience and it gave me another chance to use my Learning Map to make sure we were doing things that would align with our plan. Below are the questions that I used:

1. Student Name (in the future I would do this with a drop-down menu so I can reuse the form, and use an add-on to send the data to a student file instead of an assignment file to make reporting easier - see the sketch after the question list)

2. Wondering?
What are you wondering as a result of working on this lab?

3. Observations
Would you change the observations you chose? Why or why not?

4. Research - Pros/Cons
What have you found out about your impact in Ontario? What are we doing? Do things seem like they may change in the future?

5. Research - Appropriate?
What sources did you use to find your info? Why did you choose them?
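
The add-on idea in question 1 is still hypothetical for me; as a rough sketch of what that reorganization could look like, here is one way the form's export (one row per interview, with an assumed "Student Name" column) might be regrouped into one file per student, so evidence accumulates by learner rather than by assignment:

```python
# Hypothetical sketch: regroup a Google Form export (one row per interview)
# into one CSV file per student. The column name "Student Name" and the
# export file name are assumptions, not the actual form setup.

import csv
from collections import defaultdict
from pathlib import Path

def split_by_student(export_csv: str, out_dir: str = "student_files") -> None:
    """Write each student's interview rows into their own CSV file."""
    rows_by_student = defaultdict(list)
    with open(export_csv, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        fieldnames = reader.fieldnames or []
        for row in reader:
            rows_by_student[row["Student Name"]].append(row)

    Path(out_dir).mkdir(exist_ok=True)
    for student, rows in rows_by_student.items():
        safe_name = student.replace(" ", "_")
        with open(Path(out_dir) / f"{safe_name}.csv", "w", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=fieldnames)
            writer.writeheader()
            writer.writerows(rows)

# Usage (assumed file name):
# split_by_student("ecojar_interviews.csv")
```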

Sunday, June 26, 2016

Learning Goals & Reflection on Quizzes in 1P Science

I have been distracted from blogging but am trying to catch up and am back to trying to deprivatize my practice more. And so, I am resolved to blog about my assessment practices (including failures and questions) and hope that I can spark conversation, collect some feedback, and crowd source some ideas. So I have one request - if you are reading these entries, please share them with someone else and/or comment at the bottom and join the conversation. :) The first entry from this series can be found here.

In a more recent entry I blogged about the basics of using learning goals (LGs) in my Grade 9 applied science class. You can find this entry here. In today's blog you will find a little more about what I have tried to do with the LGs, particularly related to quizzes.

Here are some of the things I wish to share and/or reflect on:

Challenges of writing in student-friendly language:
It is not always easy to maintain the integrity of the vocabulary necessary for the course while making statements accessible to all students. I try to give them a chance to review the LGs and to ask for clarification if needed, but it is not always easy to get them to admit that they do not understand. And sometimes it makes more sense to revisit the goals at the end of day 1 when the vocab has been introduced through the lesson.

LGs vs success criteria (SC)
It was pointed out to me at one point that the statements I was using were better suited as SC than LGs. The more I have learned and discussed A&E with colleagues, the more I am inclined to agree. I would now zero in on the overall expectations in the curriculum to help with LG writing and let the types of statements I was writing become the success criteria. What I was doing did serve these students pretty well (but having real learning goals would have benefited me a lot when determining their grades, as would knuckling down to use the Learning Map that I started to create for the course). I discuss these ideas in this entry.

Quiz layout and using learning levels:
I was inspired by Myron Dueck's book Grading Smarter Not Harder to change the format I was using on written evaluations. I now group questions based on learning goal. In addition, I no longer give marks on them. Instead I have a grid with the learning goals at the top, I have students reflect on where they think they are (and then I show them where I think they are), and I give feedback within each question. This has led me to write evaluations that much better reflect what I want them to know and do (and I test what is valued without over-testing topics).

Now the unit test is essentially a "re-do" opportunity:
I use the unit test as a chance for students to show me what else they have learned and to give them a different way to show me the same learning goal. I make sure that the goal is tested using a different style of question (e.g. maybe it was a graphic organizer on the quiz and the next time they label a diagram). This was definitely time consuming the first time through, but it was worth it (and creating a brand new (good) unit test is time consuming anyway).

It is much easier to evaluate using levels now than when I gave "marks":
I have been trying for a while now to think in levels when I mark. For instance, if I am looking at a Gr 11 physics test and the student has solved a problem, I want to know what level of knowledge and skill they have demonstrated. I then assign a mark based on this level (instead of "taking marks off" for mistakes made, which can be quite arbitrary). Going to levels has removed the idea of 1 out of 4 meaning 25%, when really it shows the student is starting to get it but isn't there yet.
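
To make that arithmetic concrete, here is a minimal sketch contrasting the two approaches; the level-to-percentage values are my own rough assumption (loosely based on Ontario's achievement levels), not numbers from this post:

```python
# Minimal sketch contrasting points-off marking with level-based marking.
# The level-to-percentage mapping is an assumption (rough midpoints of
# Ontario's achievement-level ranges), not the exact scheme I use.

LEVEL_TO_PERCENT = {
    1: 55,   # Level 1: limited effectiveness
    2: 65,   # Level 2: some effectiveness
    3: 75,   # Level 3: considerable effectiveness (provincial standard)
    4: 90,   # Level 4: thorough effectiveness
}

def points_off_mark(earned: int, total: int) -> float:
    """Traditional approach: each step of the solution earns an equal share of the marks."""
    return 100 * earned / total

def level_based_mark(level: int) -> int:
    """Level approach: judge the overall quality demonstrated, then map the level to a mark."""
    return LEVEL_TO_PERCENT[level]

# A student who completes 1 of 4 steps but shows a beginning understanding:
print(points_off_mark(1, 4))   # 25.0 -- reads as failure
print(level_based_mark(1))     # 55   -- reads as "starting to get it"
```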

Students need better feedback from me than I am giving:
It is still really easy to just circle things and use check marks and question marks. This is not good enough. There is no description being provided when we do this, and many students do not ask about these marks. I know this is something I need to work on, and I also think that adjusting the LG vs. SC issue mentioned above could make this easier, as I would not have as many things that I feel I have to give feedback on.

Most of my 1P students were doing better at midterm than they thought:
There was a lot of surprise in the room that they were doing well. Many of them had never felt like they did well in science before (although they all like the subject). I did notice some of them thinking "oh, I can try less now," but it also spurred some of them on to try harder. Some may think this means the course was made "easy," but I believe they were actually showing knowledge and skill in the course. My practices had allowed me to remove the "noise" from evaluation (e.g. grammar was much less of a distraction), and it was easier to identify students to have conversations with to prove that they knew more than they had written down.