Monday, June 27, 2016

Triangulating Evidence - Formal Interview Attempt

I have been distracted from blogging but am trying to catch up and get back to deprivatizing my practice. And so, I am resolved to blog about my assessment practices (including failures and questions) in the hope that I can spark conversation, collect some feedback, and crowdsource some ideas. So I have one request - if you are reading these entries, please share them with someone else and/or comment at the bottom and join the conversation. :) The first entry from this series can be found here.

In this entry I would like to share my first formal attempt at integrating conversation (via an interview) into my class.

As part of the Peel Teacher Assessment Working Team (PTAWT) we discussed triangulated evidence and worked together to create something to use this semester. At the time my Grade 9 students were in the middle of their Ecojar labs and had been asked to write a lab report and to do some research about the impacts they were studying in local ecosystems. It was getting late in the semester, so I decided it would make sense to take the research part out of the report and try it as an interview.

I worked with a couple of others at the PTAWT meeting and we checked the Overarching Learning Goals (OLGs) that the assignment aligned with, pulled out the specific learning goals, and then created a Google Form for recording the interviews. It was a good experience and it gave me another chance to use my Learning Map to make sure we were doing things that would align with our plan. Below are the questions that I used:

1. Student Name (in the future I would make this a drop-down menu so the form can be reused, and use an add-on to send the data into a per-student file instead of a per-assignment file, which would make reporting easier - see the sketch after this list)

2. Wondering?
What are you wondering as a result of working on this lab?

3. Observations
Would you change the observations you chose? Why or why not?

4. Research - Pros/Cons
What have you found out about your impact in Ontario? What are we doing? Do things seem like they may change in the future?

5. Research - Appropriate?
What sources did you use to find your information? Why did you choose them?
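
To make the wish in question 1 concrete, here is a minimal sketch (mine, not something from my actual workflow yet) of filing responses under the student rather than the assignment. It assumes the form responses have been exported to a CSV, which Google Forms can do; the file name interview_responses.csv, the output folder, and the column names are all hypothetical.

```python
# Minimal sketch: route exported Google Form responses into one file per
# student instead of one file per assignment. Assumes responses were
# downloaded as a CSV; the file name and column names are hypothetical.
import csv
from pathlib import Path

RESPONSES = Path("interview_responses.csv")
STUDENT_DIR = Path("student_files")
STUDENT_DIR.mkdir(exist_ok=True)

with RESPONSES.open(newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        name = row["Student Name"].strip()
        out = STUDENT_DIR / f"{name.replace(' ', '_')}.txt"
        # Append, so every interview for this student lands in one file.
        with out.open("a", encoding="utf-8") as student_file:
            student_file.write("--- Ecojar interview ---\n")
            for question, answer in row.items():
                if question != "Student Name":
                    student_file.write(f"{question}: {answer}\n")
```

The point of the per-student file is that at reporting time the evidence is already gathered by learner, instead of scattered across one spreadsheet per assignment.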

Sunday, June 26, 2016

Learning Goals & Reflection on Quizzes in 1P Science

I have been distracted from blogging but am trying to catch up and get back to deprivatizing my practice. And so, I am resolved to blog about my assessment practices (including failures and questions) in the hope that I can spark conversation, collect some feedback, and crowdsource some ideas. So I have one request - if you are reading these entries, please share them with someone else and/or comment at the bottom and join the conversation. :) The first entry from this series can be found here.

In a recent entry I blogged about the basics of using learning goals (LGs) in my Grade 9 applied science class. You can find that entry here. In today's blog you will find a little more about what I have tried to do with the LGs, particularly related to quizzes.

Here are some of the things I wish to share and/or reflect on:

Challenges of writing in student-friendly language:
It is not always easy to maintain the integrity of the vocabulary necessary for the course while making statements accessible to all students. I try to give them a chance to review the LGs and to ask for clarification if needed, but it is not always easy to get them to admit that they do not understand. Sometimes it makes more sense to revisit the goals at the end of day one, once the vocabulary has been introduced through the lesson.

LGs vs success criteria (SC)
It was pointed out to me at one point that the statements I was using were better suited as SC than LGs. The more I have learned and discussed assessment and evaluation (A&E) with colleagues, the more I am inclined to agree. I would now zero in on the overall expectations in the curriculum to help with writing LGs, and let the types of statements I was writing become the success criteria. What I was doing did serve these students pretty well, though having true learning goals would have benefited me a lot when determining their grades, as would knuckling down and using the Learning Map I started to create for the course. I discuss these ideas in this entry.

Quiz layout and using learning levels:
I was inspired by Myron Dueck's book Grading Smarter Not Harder to change the format I was using on written evaluations. I now group questions by learning goal. In addition, I no longer put marks on them. Instead I include a grid with the learning goals across the top: students reflect on where they think they are, then I show them where I think they are, and I give feedback within each question (a rough sketch of the grid follows below). This has led me to write evaluations that much better reflect what I want students to know and do, and I test what is valued without over-testing topics.
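
To make the grid concrete, here is a rough sketch; the learning goals and levels are invented for illustration, and in practice this lives on the paper quiz rather than in code.

```python
# Hypothetical sketch of the quiz feedback grid: learning goals across
# the top, the student's self-assessed level beside the level I assign.
# Goal names and levels below are made up for illustration.
goals = ["Describe food chains", "Interpret a food web", "Explain bioaccumulation"]
self_assessed = {"Describe food chains": 3, "Interpret a food web": 2, "Explain bioaccumulation": 2}
teacher = {"Describe food chains": 3, "Interpret a food web": 3, "Explain bioaccumulation": 1}

print(f"{'Learning goal':<28}{'Student':>8}{'Teacher':>8}")
for g in goals:
    print(f"{g:<28}{self_assessed[g]:>8}{teacher[g]:>8}")
```

The side-by-side comparison is what sparks the conversation: where the two levels disagree, the student and I talk about why.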

Now the unit test is essentially a "re-do" opportunity:
I use the unit test as a chance for students to show me what else they have learned and to give them a different way to demonstrate the same learning goal. I make sure the goal is tested using a different style of question (e.g. a graphic organizer on the quiz, labelling a diagram on the test). This was definitely time-consuming the first time through, but it was worth it (and creating a brand new, good unit test is time-consuming anyway).

It is much easier to evaluate using levels now than when I gave "marks":
I have been trying for a while now to think in levels when I mark. For instance, if I am looking at a Grade 11 physics test and the student has solved a problem, I want to know what level of knowledge and skill they have demonstrated. I then assign a mark based on this level (instead of "taking marks off" for mistakes made, which can be quite arbitrary). Going to levels has removed the idea that getting 1 out of 4 means 25%, when really it shows the student is starting to get it but isn't there yet.
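
As a worked example of the arithmetic, here is a minimal sketch under my own assumptions, not necessarily the author's exact conversion. The percentage bands follow common Ontario conventions (Level 1 corresponds to roughly 50-59%, up to Level 4 at 80-100%), and I use the midpoint of each band.

```python
# Minimal sketch of level-based marking (an illustration, not a definitive
# scheme). Each question is evaluated as a level 1-4, and the mark comes
# from a percentage band per level; the bands are assumed Ontario-style.
LEVEL_TO_PERCENT = {1: 55, 2: 65, 3: 75, 4: 90}

def level_mark(level: int) -> int:
    """Return a percentage for a demonstrated achievement level."""
    return LEVEL_TO_PERCENT[level]

# Contrast with points-based marking: 1 out of 4 points reads as 25%,
# while Level 1 means "starting to get it" and converts to about 55%.
print(level_mark(1))  # 55, not 25
```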

Students need better feedback from me than I am giving:
It is still really easy to just circle things and use check marks and question marks. This is not good enough: those symbols carry no description, and many students do not ask what they mean. I know this is something I need to work on, and I also think that fixing the LG-versus-SC issue mentioned above could make it easier, since I wouldn't have as many things I feel I have to give feedback on.

Most of my 1P students were doing better at midterm than they thought:
There was a lot of surprise in the room that they were doing well. Many of them had never felt like they did well in science before (although they all like the subject). I did notice some of them thinking "oh, I can try less now", but it also spurred some of them on to try harder. Some may think this means the course was made "easy", but I believe they were actually showing knowledge and skill in the course. My practices had allowed me to remove the "noise" from evaluation (e.g. grammar was much less of a distraction), and it was easier to identify students to have conversations with to show that they knew more than they had written down.