External Support and Internal Improvement Teams
The aim of school improvement is to ensure that learners get the opportunity to maximise their potential. As part of the strategies for ensuring that all students benefit from high-quality teaching, the last section outlined how support, mentoring and coaching within the school, through the adoption of a professional community of learners approach, can be effective. School improvement can also be assisted by external support and by closer monitoring of a school's performance against expected practice. It is generally accepted, however, that the best combination for sustainable school improvement is support together with performance monitoring.
Judging School Performance
There are many reasons for wanting to judge school performance. Most countries want to know how well their education systems are performing and what impact new education or school reforms have on students' outcomes. Accountability requirements at government level provide stakeholders with evidence of the value of the state's investment in education and, in today's tight financial climate, the education spend is under particular scrutiny. National monitoring of school outcomes also ensures that the education system has the information it needs to intervene when a school starts to fail (Barber & Mourshed, 2007). School performance data, whether taken from student exam results or from reviews of school performance, can be used both to judge school performance and to stimulate school improvement.
Schools want feedback on what is working well and what needs to be improved, and parents and students also want to know how well their schools are performing, especially when making educational choices (Gorard, 2010). The school performance data generated by exams not only informs parents' choices of schools and courses for their children, but is also used as evidence in the school review process. Inadequate school performance can lead to different types of action, ranging from recommendations for inclusion in school improvement planning, to compulsion for the school to produce an action plan to address the weaknesses, and even to sanctions, where the government may impose restrictions on an institution's activities, including the cessation of programmes or limits on enrolments. Despite the widespread use of national or international exam results and of school performance review reports, measuring school performance is complex and, as was indicated previously, simply reporting student outcomes cannot be taken as a measure of school effectiveness: to describe a school as 'effective' implies that it has done something more than recruit able students who would have done well even if not taught well. Consequently, many schools described in the school effectiveness research made use of value-added models, as Dumay, Coe, and Anumendem (2013) note. However, the question remains: how can effectiveness or improvement be measured, and what counts as evidence? (Aksit, 2007).
Measuring Improvement
There are a number of different ways of judging school performance and monitoring the quality of teaching and learning, such as examinations and school performance inspections (Barber & Mourshed, 2007), student attendance, student enjoyment of learning, and value-added measures (Gorard, 2010). Examinations, particularly standard national and internationally benchmarked ones, test students' knowledge, understanding, and skills, providing objective measures of actual outcomes (Barber & Mourshed, 2007). Countries such as the USA and the UK have established policies that judge education outcomes on the basis of test scores (Aksit, 2007). However, the performance of schools cannot be accurately assessed only in terms of students' attainment in national or international exams (Gorard, Hordosy, & Siddiqui, 2013).
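To make the distinction between raw attainment and value added concrete, the sketch below estimates a simple value-added score from hypothetical data: a regression of exam outcomes on prior attainment is fitted across all students, and a school's score is the average amount by which its students exceed (or fall short of) their predicted outcomes. All school names and figures are illustrative, and real value-added models, such as those compared by Dumay, Coe, and Anumendem (2013), are considerably more sophisticated, typically multilevel and including contextual covariates.

```python
# Minimal sketch of a simple value-added estimate using hypothetical data.
# Illustrates the core idea only: compare actual exam outcomes with the
# outcomes predicted from students' prior attainment.
import numpy as np

# Hypothetical records: (school, prior attainment score, exam outcome)
students = [
    ("School A", 52, 61), ("School A", 48, 55), ("School A", 70, 78),
    ("School B", 65, 64), ("School B", 58, 57), ("School B", 75, 72),
    ("School C", 40, 52), ("School C", 45, 58), ("School C", 55, 66),
]

prior = np.array([s[1] for s in students], dtype=float)
outcome = np.array([s[2] for s in students], dtype=float)

# Fit a simple regression of outcome on prior attainment across all students.
slope, intercept = np.polyfit(prior, outcome, 1)
predicted = slope * prior + intercept
residual = outcome - predicted  # positive = better than expected

# A school's "value added" is the mean residual of its students.
for school in sorted({s[0] for s in students}):
    mask = np.array([s[0] == school for s in students])
    print(f"{school}: value added = {residual[mask].mean():+.1f}")
```

The point of the example is that a school admitting lower-attaining students can show strong value added even when its raw exam results look weak, which is exactly why raw results alone are a poor measure of effectiveness.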
Unlike examinations, school reviews assess the performance of a school against a set of indicators or criteria. School inspections measure both student outcomes and school processes, and provide critical reports identifying specific areas of strength and opportunities for improvement. Inspections also enable systems to measure some of the more complex desired outcomes of a school system that are difficult or impossible to capture in examinations (Barber & Mourshed, 2007), such as support and guidance, leadership and management, and their impact on students' progress. In New York, Qatar and Bahrain, all schools are reviewed by external reviewers on a regular cycle and the resulting performance reports are made public. Nevertheless, Barber and Mourshed (2007) argue that while publishing performance reports spurs good schools to improve further, inadequate schools seldom improve for this reason alone.
Schools are usually held accountable for their students' achievement, on the assumption that they are responsible for the largest share of it (Tortosa-Ausina, Thieme, & Jimenez, 2013). This approach also assumes that underperforming schools are able to take action to improve student performance (Anderson & Kumari, 2009). Some countries use standardized test scores to hold schools accountable (Kupermintz, 2003), through what is seen as a rigorous external accountability system. Schools do best when they compare their performance against standards (Fullan, 2000); external accountability systems therefore generate data that let schools know their level of performance so that they can improve accordingly (Scheerens, Bosker, & Creemers, 2001). Mausethagen (2013), however, found that accountability has reduced the opportunities for teachers to develop caring relationships with their students, arguing that the amount of time teachers spend connecting with students is reduced as a consequence. It seems that some forms of accountability shift schools from teaching for learning to teaching for testing.
Structures and Systems
In many top-performing systems, responsibility for assuring the quality of performance is separated from responsibility for improving it (Barber & Mourshed, 2007), so there is often both an internal and an external structure to assist school improvement. However, while it is accepted that one cannot improve what one does not measure, inaccurate performance indicators are equally misleading (Barber & Mourshed, 2007).
Both external and internal teams can be important in effective school improvement. The internal improvement team works as a group of critical friends who observe practice in the school, facilitate reflection on school performance, ask questions, probe for justification and evidence, measure progress, provide support and guidance, and prepare the school to meet accountability requirements (Sutherland, 2004). The improvement team acts as a critical friend in the sense of monitoring and evaluating the school's performance, challenging its decisions, and applying pressure to improve in line with the staff's capability.
In contrast, the external team is responsible for ensuring good-quality learning in the school (Barber, Stoll, Mortimore, & Hillman, 1995); in most countries this would be either a government quality assurance agency (such as the QQA in Bahrain) or an independent quality agency or accredited company. According to Barber and Mourshed (2007), building school capacity for sustainable improvement requires a combination of monitoring and effective intervention to ensure that good learning is taking place across the school. A mentoring system is important to assist teachers in implementing the key components of any intervention (Aksit, 2007). There is clear evidence that initiatives originating outside the school are implemented more effectively when external support is provided (Fullan, 1985). School improvement initiatives require significant effort in monitoring implementation, informing all school stakeholders, linking multiple improvement projects, and providing the support everyone needs to make the desired progress (Fullan & Miles, 1992). External support stimulates and reinforces improvement, especially at the initiation stage (Fullan, 2007).
To sum up, the combination of pressure and high support is essential (Fullan, 1985); pressure without support leads to unwanted behaviours such as teaching to the test and drilling students on examination questions (Barber & Mourshed, 2007). Several studies link external support with school improvement (Fullan, 1985), but if a school does not know how to improve its performance, or how to build its capacity for improvement, then pressuring it will not lead to improved learning (Barber & Mourshed, 2007). The purpose of external support is therefore not only to provide guidance, but also to monitor progress, judge performance and plan for improvement.
Dr. Ahmed AlKoofi
References

Aksit, N. (2007). Educational Reform in Turkey. International Journal of Educational Development, 27(2), 129–137.
Anderson, S., & Kumari, R. (2009). Continuous Improvement in Schools: Understanding the Practice. International Journal of Educational Development, 29(3), 281–292.
Barber, M., & Mourshed, M. (2007). How the World’s Best-performing School Systems Come Out on Top. London: McKinsey & Company.
Barber, M., Stoll, L., Mortimore, P., & Hillman, J. (1995). Governing Bodies and Effective Schools. DFE.
Dumay, X., Coe, R., & Anumendem, D. N. (2013). Stability Over Time of Different Methods of Estimating School Performance. School Effectiveness and School Improvement, 25(1), 64–82.
Fullan, M. (1985). Change Processes and Strategies at the Local Level. The Elementary School Journal, 85(3), 391–421.
Fullan, M. (2000). The Three Stories of Education Reform. The Phi Delta Kappan, 81(8), 581–584.
Fullan, M. (2007). The New Meaning of Educational Change. New York: Teachers College Press.
Fullan, M., & Miles, M. B. (1992). Getting Reform Right: What Works and What Doesn’t. The Phi Delta Kappan, 73(10), 744–752.
Gorard, S. (2010). Serious Doubts About School Effectiveness. British Educational Research Journal, 36(5), 745–766.
Gorard, S., Hordosy, R., & Siddiqui, N. (2013). How Unstable are “School Effects” Assessed by a Value-added Technique? International Education Studies, 6(1).
Kupermintz, H. (2003). Teacher Effects and Teacher Effectiveness: A Validity Investigation of the Tennessee Value Added Assessment System. Educational Evaluation and Policy Analysis, 25(3), 287–298.
Mausethagen, S. (2013). A Research Review of the Impact of Accountability Policies on Teachers’ Workplace Relations. Educational Research Review, 9(0), 16–33.
Scheerens, J., Bosker, R. J., & Creemers, B. (2001). Time for Self-Criticism: On the Viability of School Effectiveness Research. School Effectiveness and School Improvement, 12(1), 131–157.
Sutherland, S. (2004). Creating a Culture of Data Use for Continuous Improvement: A Case Study of an Edison Project School. American Journal of Evaluation, 25(3), 277–293.
Tortosa-Ausina, E., Thieme, C., & Jimenez, D. P. (2013). Value Added and Contextual Factors in Education: Evidence from Chilean Schools.