Thursday, November 6, 2008

Improve the improvement process

The recent Tasmanian School Report Cards attempted to accurately communicate school improvement but failed on at least three points

  • The community read the reports as measures of performance (see the previous posting), and thus
  • Excellent performance was hidden by a 'Trend Downward' result when a performance indicator had declined only slightly (a fraction of a percent)
  • And the reports focused on 'measurable' items without fully explaining
    • That the improvements were based on measures of different cohorts, e.g., this year's Kinder group is different from last year's, however
    • That the different cohorts were presumably assumed to be equivalent (which is highly unlikely in small cohorts such as staff (Staff Attendance) and Kinder (School Readiness))
    • The statistical limitations arising from small sample sizes, particularly in small schools
    • Why the measures were individually and collectively valid for inclusion in the school report card (staff attendance?)
    • Why the things measured were sufficiently significant to be included (staff attendance?)
    • The quality of the data (how current, comprehensive and complete it is) and the limitations on the data available
      • Staff attendance was measured as a percentage of possible attendance, and in only two successive years.
      • Presumably a single staff member with an emerging chronic health problem could 'cause' a significant decrease in staff attendance (see the sketch after this list).
    • The highly specific (narrow) nature of the data used in some measures.
      • Readiness for school was only measured for late Kinder students, whereas readiness for school is an ongoing, daily issue for some students
    • The interaction of most of the indicators
      • Attendance, retention, literacy, numeracy, student satisfaction, parent satisfaction and readiness for school all interact, each reinforcing the positive or negative effects that emerge for individual students.
    • What other measures were not included (and perhaps why)
      • The Report Cards did not include any information on the schools' (improving?) provision for students with special needs, disabilities, disorders... in the cohorts being reported.
      • Similarly, the schools' provision for students with behaviours of concern, and for families in distress (thus requiring support), was not reported, yet these are some of the major constraints on schools and on student and staff success and well-being.
      • The need to deal with problematic student behaviour is such that it determines aspects of the actual organisation of many schools. It may also consume a large proportion of the resources available ... resources that could be used to provide higher quality education.
      • Certainly problematic student behaviour is far more significant than staff attendance in every school with which I am familiar. And some limited data is available in this area. Why was it not included?
      • The report cards did not contain any contextual information related to, say, the percentages of students with additional needs (behavioural or special needs)
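
To make the small-sample point concrete, here is a minimal sketch (all figures hypothetical) of how a single staff member's extended absence can move a whole-school staff attendance percentage:

    # A minimal sketch, assuming staff attendance is reported as a
    # percentage of possible staff-days. All figures are hypothetical.

    STAFF = 10           # a hypothetical small school
    SCHOOL_DAYS = 200    # hypothetical school days per year

    def attendance_rate(days_absent_per_staff):
        """Staff attendance as a percentage of possible staff-days."""
        possible = STAFF * SCHOOL_DAYS
        attended = possible - sum(days_absent_per_staff)
        return 100 * attended / possible

    # Year 1: routine absences only.
    year1 = attendance_rate([3, 4, 2, 5, 3, 4, 2, 3, 4, 3])
    # Year 2: identical, except one staff member develops a chronic
    # health problem and is absent 40 days.
    year2 = attendance_rate([3, 4, 2, 5, 3, 4, 2, 3, 4, 40])

    print(f"Year 1: {year1:.1f}%   Year 2: {year2:.1f}%")
    # Year 1: 98.3%   Year 2: 96.5% -- a drop of almost two points,
    # which a naive two-year comparison would label 'Trend Downward'
    # even though nothing systemic has changed.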
Since the School Report Cards will become the direct focus of initiatives to achieve real improvements, it is important that the data reported is comprehensive, valid and useful in relation to the overall success of the school, its students, staff and community. With questions over the current form (language, content...) of the Report Cards and the possibility that key data has been omitted, the next step must be to learn from this experience and act quickly on what that learning reveals.

There is clearly a need to improve the improvement process. By doing so, the system will model the very actions and strategies that it is hoping to promote in its schools. If it fails to do so, it runs the risk of alienating the very people upon whom it is dependent for achieving the improvements it desires.

Useful starting points for improving the improvement process might include
  • Collating, summarising and reporting the same data at various departmental levels: cluster, learning service, and whole of system (a rough sketch follows this list)
  • Inviting schools to respond by reporting how, and to what extent, they make sense of their own report cards. As one recent correspondent wrote:

    "We spent some time y'day on our school report, personally I don't think they're going to be a big deal. It's too hard to draw worthwhile conclusion about your own school from them, let alone any real comparisons with other schools."
