Are we asking the right questions about differentiation in Ontario higher education? A look at the HEQCO report (Part 1)
When thinking about systems of higher education, three books I often return to are Burton Clark's The Higher Education System, Diana Crane's Invisible Colleges, and James C. Scott's Seeing Like a State. All three provide insightful prisms through which to view HEQCO's most recent discussion of differentiation.
For at least the last half-dozen years, HEQCO has been ruminating about differentiation. So has the province, but for even longer. Depending on how one counts, at least five provincial committees and commissions have taken up the question of differentiation since the late 1960s. That works out to about one commission per decade.
Some proffered sound advice. Some ducked. But, as with the weather, everyone has talked and no one has done anything: no government has restructured the system or, as some well-informed cynics might say, even accepted that there is a system. The track record is one of advice proffered and advice denied. Will HEQCO do any better? A puckish observer might say, with some reason, that this is a sequel to a sequel to a sequel, and so on.
A "strong central hand" in Ontario?
The first section of the report is almost a separate report. It is a hortatory exegesis on the virtues of centralized plans and systems. This is where Clark, Scott, and others who have thought about the role and efficacy of systems come into play. HEQCO's vision, which, by the way, does not arise from the data and analysis in the rest of the report, is remarkably simplistic: there is the state and there is the university, either singularly or collectively. There are no other interests, stakeholders, or socio-economic forces in the equation.
The report says, correctly, that an "exclusively or overly centralized approach has little chance of success," but its vision of what systems are, how they work, and how they affect institutional behaviour is centralized and exclusionary. After transubstantiation, the deepest mystery may be the belief held by successive Ontario governments, and apparently now HEQCO, that it is possible to put less skin into the higher education game and at the same time claim larger stakes. There is a lot of retro-think in the report.
The report, in accurately exposing the disadvantages of homogeneity, calls for a "strong central hand" as a necessary means of counteracting the tendency of "institutions [to] drift towards homogeneity more than . . . strive for diversity." This is a hard pill to swallow. Why? There are three reasons.
First, this report and its immediate predecessor, a report on where we are and where we are going, describe a system that is diverse in institutional performance if not in form. This report goes even further in that direction.
Second, although the concession is somewhat back-handed, the report itself confirms what virtually everyone else has known for some time: the policies pursued so far have done little or nothing to change the status quo.
Third, and most seriously mistaken, is blaming the universities for homogeneity. One has to look no further than the mechanics of the graduate expansion program, the Access to Opportunity Program, and the funding program to accommodate the double cohort to see that Ontario has persistently promoted homogeneity. The report itself describes the province's funding formula and tuition policy as "forces of homogenization." Ontario's universities have not drifted; they have been fiscally manipulated. A more sophisticated appreciation of higher education systems, Clark's, for example, would reveal the differences between institutional drift and responsiveness to legitimate forces other than the state.
Having said that, I think the report misses a key question that arises from its own data and its description of the funding formula: how can the report and its predecessor identify extensive differentiation of performance in the face of "forces of homogenization"?
What the funding formula does: homogeneity or heterogeneity?
Two facts about the funding formula are often overlooked or misunderstood. The first is that the formula funds programs; more specifically, it funds degree programs. It does not fund institutions. The operating grant that a university receives is based almost entirely on the sum of the funding generated by each degree program. Thus a nominally homogeneous formula can and does result in heterogeneous grants. The second is this statement, which has appeared in every Operating Grant Manual since the formula's inception:
It should be noted that the distribution mechanism is not intended to limit or control the expenditure of funds granted to the institutions, except in the case of specifically-targeted special purpose grants.
In other words, how a university spends its operating grant need have nothing to do with how that income, grant and fees alike, was generated by the formula. Add to this matching-fund programs like the Ontario Student Opportunity Trust Fund, which required matching funding from the private sector, and looser regulation of international student tuition fees, and one should ask why the report concludes that the degree of differentiation is "surprising."
It should not be a surprise at all. In practical effect, it belies the conception that a system can be built on an oversimplified and centralized two-sided relationship between state and institution.
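To make the first of those two facts concrete, here is a minimal sketch, in Python, of how a program-based formula works. The program weights, enrolments, and unit value below are invented for illustration; they are not Ontario's actual parameters.

```python
# Illustrative sketch only: the weights, enrolments, and unit value are
# invented numbers, not the actual parameters of Ontario's formula.

# The formula funds degree programs, not institutions: each program's
# enrolment is weighted, and the grant is the sum across programs.
PROGRAM_WEIGHTS = {"arts": 1.0, "engineering": 2.0, "medicine": 5.0}
UNIT_VALUE = 9_000  # hypothetical dollars per weighted enrolment unit

def operating_grant(enrolments: dict[str, int]) -> float:
    """Grant = sum over programs of enrolment x program weight x unit value."""
    return sum(PROGRAM_WEIGHTS[p] * n * UNIT_VALUE for p, n in enrolments.items())

# The same formula applied to two different program mixes...
comprehensive = {"arts": 10_000, "engineering": 3_000, "medicine": 800}
undergraduate = {"arts": 12_000, "engineering": 500}

print(operating_grant(comprehensive))  # 180,000,000
print(operating_grant(undergraduate))  # 117,000,000
```

The rules are identical for every institution, yet the grants diverge because the program mixes differ: a nominally homogeneous formula producing heterogeneous grants.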
Indicators: Questions that need to be asked
"Where we are and where we should go?" sounds something like "The Once and Future King." I really admire the work that Martin Hicks and Linda Jonker have done in this report and its predecessor. Another of their reports, on teaching loads and research output, was equally splendid. It could have played a larger role in this report. Theirs is an uphill struggle in a jurisdiction that is data-poor and sometimes secretive about data that exist but are not accessible. Thanks to their efforts we do indeed know a lot more about where we are, even if the report overall is a bit fuzzy about why we are where we are.
Instead of discussing the indicators one-by-one, let's look at them from the perspective of the report's five over-arching observations. The first is addressed below, and a subsequent post will discuss the others.
Are these the right data?
There are two different ways of thinking about this question. Are these the right data to validate inductively the type of system and "clusters" that the report proposes? Or are these the right data to inform deductively the design of a new system that does more than rationalize and formalize the status quo?
The answer to the first version of the question is more often "yes" than "no," but there are some surprising omissions. For example, with regard to equity, data about institutional spending on need-based financial aid would be a useful means of validating the "equity of access" of each cluster. We know where under-represented students, or some of them, are now, but we do not know much at all about how they got there.
What were their choices? What offers did they receive? In other words, we need to know more about choice of access. The absence of data about yield rates is an omission, albeit one that the report acknowledges. Nevertheless, from a system-differentiation perspective it would be helpful to know what the "right data" should be.
The second version of the question is the more complex because the report does not tell us what other system models, if any, were considered. There is a curious footnote that might imply that the system proposed in the report is like the California system. It isn't. Would a Canadian version of the California system work? We don't know, but as an analytical exercise it could have been done.
The report talks about sustainability. Question: using the proposed "cluster" model, how volatile has the nominal system been? If the same data sets were used to recalculate the institutional assignments for, say, 2011, 2006, 2001, and 1996, would the results be the same? If they were different, a question could follow about the factors that change institutional performance.
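As a sketch of the volatility test being proposed, suppose one had the same indicators by institution for each of those years. Everything below is my assumption, not HEQCO's method: the file name, the column names, the choice of k-means, and the use of three clusters to loosely mirror the report's groupings.

```python
# Hypothetical sketch of the volatility test: re-derive cluster assignments
# from the same indicators for several years and ask how much institutions
# move between clusters. File name, columns, and k-means are assumptions.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("indicators_by_institution_year.csv")  # hypothetical file
indicator_cols = [c for c in df.columns if c not in ("institution", "year")]

assignments = {}
for year in (1996, 2001, 2006, 2011):
    # Assumes the same institutions report in every year.
    snapshot = df[df["year"] == year].sort_values("institution")
    X = StandardScaler().fit_transform(snapshot[indicator_cols])
    assignments[year] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# An adjusted Rand index near 1.0 means the clusters were stable across
# years; lower values mean institutions migrated between clusters.
for year in (1996, 2001, 2006):
    print(year, adjusted_rand_score(assignments[year], assignments[2011]))
```

If the assignments held steady across two decades, the proposed clusters describe something durable; if not, the interesting question becomes what moved.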
A more specific version of this question would ask about the capacity of the "regional" and "mostly undergraduate" clusters to have accommodated, for example, all of the increased demand for access generated by the double cohort. In terms of capacity, what is the statistical fit between the proposed clusters and the surplus teaching capacity identified by the earlier report on teaching and research?
A big category of missing data, regardless of which version of the question one asks, concerns the fiscal status of the universities in each cluster. In other words, what is the statistical relationship between fiscal differentiation and "performance" differentiation? Using COFO-UO data, the average and institutional distributions of income and expense could be compared, and the results would fit the wonderful five-dimensional displays already used in the report.
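A minimal sketch of that missing analysis, assuming COFO-UO income and expense figures had been merged with the report's indicators by institution. Every file and column name below is hypothetical:

```python
# Hypothetical sketch: relate fiscal differentiation to "performance"
# differentiation. All file and column names are invented; COFO-UO data
# would first have to be merged with the report's indicators by institution.
import pandas as pd

merged = pd.read_csv("cofo_plus_indicators.csv")  # hypothetical merged file

fiscal_cols = ["income_per_student", "expense_per_student"]
performance_cols = ["research_intensity", "graduation_rate", "equity_of_access"]

# First cut: the cross-correlation between fiscal and performance measures.
print(merged[fiscal_cols + performance_cols].corr().loc[fiscal_cols, performance_cols])

# And the spread of fiscal measures within each of the report's clusters.
print(merged.groupby("cluster")[fiscal_cols].agg(["mean", "std"]))
```

Even this first cut would show whether the "performance" clusters are, underneath, fiscal clusters.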
Since HEQCO now prefers "regional" instead of "in between" to categorize some universities, are more data needed about the rate of students studying away from home? The UK and the United States, jurisdictions not unlike Canada and Ontario in terms of university education, have much higher rates of this form of student mobility than we do. (Thanks to Alex Usher for pointing this out.)
Why? Is "regional" a synonym for what "catchment" meant previously for colleges? To what degree does the definition of "regional" depend on mobility? What if the Ontario rate were higher? In terms of equity of access, are there variations in mobility? Is there a possibility that "equity of access" per se could rise but with ghettoization as an unwelcome by-product?
The next post will discuss other questions raised in the report: How robust are the data? Do the data reveal, and is this study about, university performance? Does size matter?