Photo: Cindy Cui

In February 2017, Marc Garneau made headlines as the third fastest-improving school in the GTA [1] on the Fraser Institute’s highly anticipated annual “report card,” a list of high school rankings by province. Using “objective, publicly-available data such as average scores on province-wide tests,” also known as standardized testing, the Fraser Institute ranks schools on their students’ basic mathematical and English proficiency [2].

While standardized tests can be a reasonable indicator of education quality, relying on them alone is dangerous. Despite this, parents habitually use these rankings to determine where to send their children to school; unfortunately, the report’s faulty presentation and lack of transparency regarding its methodology make it easy for parents to be misled by the data.

The basics of the Fraser Institute’s data processing are simple: each school is given a rating out of 10, which is then used to determine an overall rank out of the 740 schools on the list. Readers often assume that a mark below 5 signifies a “failing” grade, indicative of exceptionally poor school performance. In reality, the Fraser Institute designs its school ratings so that the average rating is 6. Hence, a rating lower than 6 merely indicates that a school performed below average relative to the other schools in Ontario; it says nothing about the school’s absolute performance on standardized tests.

Although relative ratings are useful when comparing schools to one another, they can be deceptive when used to judge a school’s actual performance. This is because ratings are based on other schools’ performances, rather than being a literal representation of student results. Therefore, a difference of one or two points between ratings is not directly proportional to the difference in teaching quality between schools.

A plot of each school’s rating, out of 10, against its overall provincial ranking produces a trend with a small slope of -0.0077 in the middle region. This indicates that many schools are clustered together with little variation in rating. For example, St. Francis Secondary School, with a rating of 6.8, is ranked 259th out of 740 Ontarian schools, while Chatham-Kent Secondary School, with a rating of 6.3, is ranked 358th. Because these schools lie within the large block of average-scoring schools, their Fraser Institute ratings differ by only 0.5, yet their rankings are separated by nearly 100 places. This distribution is problematic because a minimal difference in ratings results in a deceptively substantial disparity in ranking.

The way the Fraser Institute presents its rankings is also flawed. Ontario’s schools vary in size and draw students from diverse socioeconomic backgrounds, yet these differences are not clearly indicated on the institute’s “report card.” While the website does offer options to filter the data by specific criteria, the process is far from straightforward. No distinction is made between schools with different levels of resources, such as per-student funding. More affluent schools, such as private schools, teach students whose parents can offer more academic support; unsurprisingly, these schools tend to have higher scores [4]. On the other hand, schools with a larger population of ESL students score lower on standardized literacy tests and inevitably receive a lower rank, even if they maintain a high quality of teaching. Given these variations, and the relative nature of the ratings, the Fraser Institute should be more explicit about the variables it accounts for, and should rank private and public schools separately.

In addition, parents and readers should consider that other factors influence the quality of a school. The Fraser Institute’s ratings often fail to reflect the quality of teaching at schools: in fact, schools have been known to sacrifice precious class time to concentrate on standardized test preparation [3]. In these cases, the quality of education and emphasis on learning might actually be compromised, as schools focus more on achieving higher EQAO scores than on teaching students relevant skills. Moreover, the “report card” ratings are certainly not indicative of a school’s atmosphere or extracurricular opportunities. Because the ratings are based purely on test scores, they cannot account for specialized programs or advanced-level courses.

Too often, readers take the Fraser Institute’s rankings to be the ultimate determinant of a school’s quality, and choose schools accordingly. Parents have even bought property solely for the sake of sending children to select institutions [4]. With such high stakes, it is essential that the Fraser Institute ensure that readers of its “report card” are aware of its underlying methodology. Currently, only select details about the institute’s processes can be found online, and the overall procedure is not available for public viewing. As a result, parents overestimate the significance of the Fraser Institute’s rankings, an unfortunate mistake.

Clearly, there are many pitfalls to the Fraser Institute’s highly touted school ranking system. The “report card” is one of many tools available to gauge the quality of a school, and should be viewed as nothing more. Readers should be wary when forming conclusions, as the report is not necessarily representative of a school’s teaching and overall quality. The Fraser Institute must therefore make it more apparent that its ratings reflect only a school’s relative performance on standardized tests, not its overall calibre. Until such improvements are made, the Fraser Institute will continue to fail us.