A Look at the HEQCO Report on Differentiation Part 2: Indicators and Data

A previous post discussed how the recent HEQCO report, The Differentiation of the Ontario System: Where we are and where we should go?, is the latest instalment of an ongoing debate in Ontario about differentiation. This post addresses questions the report itself raises about how its conclusions about differentiation were drawn, including the indicators and data used.

 

How robust are the data?

It is unusual for indicators that compare systems to conflate "equity" and "access" as the report does. Access is normally about broad demographic demand and sometimes workforce demand. Taken in that context, as George Fallis has argued, Ontario has neither an access problem nor a capacity problem, nor is it likely to for some time.

Equity is a different matter. More different than the report admits.

Whether or not demand for access is being met, whether the system is growing or shrinking, it is still possible that students from certain backgrounds have less (or more) choice of access than others. In other words, access is about participation writ large, while equity is about the proportional composition of participation whatever the motivation. Do under-represented students get into the programs that they want? Do they get into their first or third choice university?  These crucial questions are more about equity and choice than access.
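To make the distinction concrete, here is a minimal sketch in Python, with invented numbers, of how the two measures come apart: overall participation (access) can be high while a group's share of enrolment falls well short of its share of the population (equity).

```python
# A minimal sketch, with invented numbers, of the distinction drawn above:
# access as participation writ large, equity as the proportional
# composition of that participation.

population_share = {"group_a": 0.80, "group_b": 0.20}  # share of the age cohort
enrolled_share = {"group_a": 0.90, "group_b": 0.10}    # share of enrolments

overall_participation_rate = 0.45  # access: fraction of the cohort enrolled
print(f"access (participation): {overall_participation_rate:.0%}")

for group, pop in population_share.items():
    # A representation index of 1.0 means the group participates in
    # proportion to its presence in the cohort; well below 1.0 signals
    # an equity gap even when overall access is high.
    index = enrolled_share[group] / pop
    print(f"{group}: representation index = {index:.2f}")
```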

The UK has recognized this distinction between equity and access for some time, much to the discomfort of some. (The UK, by the way, has excellent data and indicators of access by under-represented groups.) In terms of robustness, it would be better to treat equity and access separately. If equity were addressed separately, more dimensions could be added to it.

A primary example is the growing shift of emphasis from access to attainment. Ontario has very high rates of participation. The high school graduation rates of under-represented groups may now impede participation in higher education more than admission does. In terms of rates of employment and post-graduate earnings (both factors in the proposed indicators), attainment may soon become the better measure of equity. For first-generation students, who we know are persistent to graduation, this may already be the case. Also, if equity were addressed apart from access, data might be developed to indicate equity in faculty appointment and promotion.

None of the data are robust enough to explain the re-classification of universities from "in between" to "regional." "Regional" could have a number of different meanings within the context of a system taxonomy. Economic impact is one that quickly comes to mind.

There are differing views about universities and regional development. One holds that it is the infusion of graduates that drives regional economies. Another holds that universities attract graduates from other jurisdictions to their regions, and that this infusion in turn bumps up regional economies, regardless of where the graduates were educated and whether or not the regional university employs them. The latter appears to have the greater impact.

And there are different views about university "spillovers" as instruments of innovation and economic growth. Maybe "regional" within the context of the report simply means where students come from. The University of Windsor's concept of regional, as an example, goes beyond that. "Regional" might indeed be the right categorization, but more data and analysis are needed to validate it.

 

Do the data reveal, and is this study about, university performance?

There are two ways to parse this question. If the question is principally about performance, the answer is in most cases "yes." When, however, "university" is added as a qualification, the answer becomes problematic, for two reasons. First, the data may be too highly aggregated, as Diana Crane and Steven Scott might suspect. Second, they may confuse the institutional with the supra-institutional.

Let's give some thought to aggregation first. It is well known that students more often than not select programs before they select institutions. Employers think in terms of programs far more often than they think in terms of institutions. Indicators of equity, access, rates of graduation and employment, average earnings, and "good experience" would all, therefore, be far more robust if calculated at the program level by institution.
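A toy illustration, with invented figures, of why program-level calculation matters: a single institution-level employment rate can conceal sharply different program-level outcomes.

```python
# A toy illustration, with invented figures, of how aggregation hides
# program-level differences in an institution-level indicator.

programs = {
    "nursing": {"graduates": 500, "employed": 475},
    "fine_arts": {"graduates": 300, "employed": 180},
}

total_grads = sum(p["graduates"] for p in programs.values())
total_employed = sum(p["employed"] for p in programs.values())
print(f"institution-level employment rate: {total_employed / total_grads:.0%}")

for name, p in programs.items():
    # The single institution-level figure (82%) conceals a 95% rate in
    # one program and a 60% rate in the other.
    print(f"{name}: {p['employed'] / p['graduates']:.0%}")
```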

Here Martin Hicks and Linda Jonker probably hit roadblocks that on some days must have them tearing their hair out. The Ontario Graduate Survey is full of data that could be cross-referenced and analyzed at the program and institutional levels, even with low response rates. But the data are for the most part not accessible. Only highly aggregated data from the surveys appear in the report.

The report is right about yield rates: they would be far more robust than applicant-to-registrant rates, especially if calculated by program as well as by institution. The report is also right that "applications" is not the same as "applicants." However, it is beyond rational belief that universities do not know their yield rates and do not disaggregate them by program.
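A minimal sketch of both corrections, using hypothetical application records: deduplicate applications into applicants, then compute yield as registrants over offers, by institution and program.

```python
# A minimal sketch, using hypothetical application records, of the two
# corrections discussed above: deduplicating applications into applicants,
# and computing yield (registrants / offers) by institution and program.

from collections import defaultdict

# Each record: (applicant_id, institution, program, offer_made, registered)
records = [
    ("a1", "U1", "biology", True, True),
    ("a1", "U2", "biology", True, False),  # one applicant, two applications
    ("a2", "U1", "biology", True, False),
    ("a3", "U1", "history", False, False),
]

print(f"applications: {len(records)}, applicants: {len({r[0] for r in records})}")

offers = defaultdict(int)
registrants = defaultdict(int)
for _, institution, program, offer_made, registered in records:
    offers[(institution, program)] += int(offer_made)
    registrants[(institution, program)] += int(registered)

for key, n_offers in offers.items():
    if n_offers:  # yield is undefined where no offers were made
        print(key, f"yield = {registrants[key] / n_offers:.0%}")
```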

Would any of this change were the system more centrally planned? Would performance funding provide sufficient incentives to make data more accessible? Or is this the sort of "Eaton's doesn't tell the Bay" healthy strategic institutional behaviour that the Operating Grants formula deliberately allows, and which explains the degree of differentiation already achieved?

Regarding availability of data, and at the risk of sounding like Donald Rumsfeld, there are data that exist and are accessible, data that exist and are not accessible, data that exist in theory but not in actuality, and data that have yet to be imagined. The report is, of course, correct to say that at some point one has to move ahead and use the best data that are available even if one would prefer to have better data. But this works only as a provisional answer. Depending on what "system" finally means to HEQCO, particularly if the concept is meant to be dynamic and responsive to a wider array of influencing factors, the question of what the data reveal and whether or not they measure university performance will have to be revisited.

What about the supra-institutional? Some indicators have much greater meaning at the system level than at the institutional level. This is what Steven Scott describes as the state not "seeing" what local institutions "see." For example, if we were to ask a university registrar whether internally there is any regularly calculated statistic that measures "equity of access," the answer would much more likely be "no" than "yes." Identifying information about the socio-economic and ethnic background of applicants resides, if it resides at all, with supra-institutional agencies (OSAP, for example).

The Student Assistance Guarantee and the new arrangements (which are very smart and long overdue) make such financial-need statistics redundant at the institutional level, unless an institution chooses to define financial need more generously than provincial policy does. Some Ontario universities do.

A university may wish to be more "research intensive" for purposes of reputation and ranking, but the report eschews ranking. Fair enough, but to an institution (as opposed to the system) the indicator is useful only to locate the institution within one of the "clusters" proposed by the report. In terms of provincial fiscal policy about investments in research, aggregate research performance counts far more than where the research originated institutionally. In other words, the indicator has a lot of supra-institutional utility but little institutional utility. Here we bump into a fundamental issue that the report skirts.

The Bovey Commission (1984) was the first to discuss differentiation and, specifically, research intensity. The commission's report categorized universities in terms of research intensity much as the HEQCO report does. It went further and devised a fiscal metric for research intensity that was scalable: institutions could move from fiscal category to fiscal category as their research intensity changed.
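The post does not reproduce the commission's actual metric, but the idea of a scalable measure can be sketched: a continuous fiscal ratio with thresholds, so that an institution's category follows its ratio rather than being fixed by fiat. The ratio, thresholds, and figures below are all invented for illustration.

```python
# Not the Bovey Commission's actual formula, which the post does not
# reproduce; a hypothetical sketch of a "scalable" fiscal metric: a
# continuous ratio with thresholds, so that an institution's category
# follows its ratio rather than being fixed. All figures are invented.

def research_intensity(research_income: float, total_income: float) -> float:
    """Sponsored research income as a fraction of total income (illustrative)."""
    return research_income / total_income

def category(intensity: float) -> str:
    # Threshold values are invented for illustration only.
    if intensity >= 0.30:
        return "research intensive"
    if intensity >= 0.15:
        return "in between"
    return "primarily undergraduate"

# An institution can move between categories as its ratio changes.
print(category(research_intensity(90.0, 520.0)))   # ~0.17 -> "in between"
print(category(research_intensity(200.0, 520.0)))  # ~0.38 -> "research intensive"
```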

HEQCO's conception of a system so far appears to be static instead of dynamic. There is no discussion of indicators or other equations that would permit a university, using research intensity as the most probable example, to move from one "cluster" to another. That is something that the data and indicators do not "reveal." This could be because the conceptual part of the report is less developed than the data and analysis part. Whatever the explanation, for a future report HEQCO might be wise to sort the data and indicators into institutional and supra-institutional categories.

 

Does size matter?

Every report has to have at least one shaggy dog story. This is it. The reader begins with the expectation of what could be a discussion of a genuinely important question about economy of scale, unused capacity, and efficiency. But the story ends with a correct but minor discussion of the differences between sheer numbers and percentages that most readers could have figured out on their own.

But in raising the question at all, the example of international enrolment at Algoma and Toronto underscores its importance and demonstrates that size and economy of scale do matter . . . a lot.

First, is the percentage that really makes a difference in system terms the share that each institution contributes to accommodating international enrolment? If so, the indicator is about demand, in which case the data should be presented and interpreted in terms of response to demand, something neither the reported sheer numbers nor the percentages do.

Second, is the significance of the example more about economy of scale than student demand? One of the four goals set by the report for reforming the system is, after all, financial stability. What the data in question show, in addition to demand from international students, is that Algoma is heavily reliant on a single source of income over which neither it nor the government has control. In terms of provincial subsidies, the remaining 1,100 or so domestic students at Algoma are very expensive. At either 1,100 or 1,360 students, Algoma is far from any conventional definition of economy of scale. For that matter, so are the University of Toronto and York University. We often forget that beyond a certain size, universities begin to lose economies of scale.
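The underlying cost logic can be sketched with a stylized (and entirely invented) average-cost curve: fixed costs dominate at Algoma's scale, while a crude congestion term stands in for the diseconomies that very large institutions encounter.

```python
# A stylized sketch of the economy-of-scale point; every number here is
# invented. Average cost per student falls as fixed costs are spread over
# more students, then rises past some size as coordination costs grow.

def avg_cost_per_student(n: int,
                         fixed: float = 20_000_000,  # hypothetical fixed costs ($)
                         variable: float = 8_000,    # hypothetical cost per student ($)
                         congestion: float = 0.5) -> float:
    # The quadratic congestion term is a crude stand-in for the
    # diseconomies that very large institutions experience.
    return fixed / n + variable + congestion * (n / 1_000) ** 2

for n in (1_100, 10_000, 40_000, 90_000):
    print(f"{n:>6,} students: ${avg_cost_per_student(n):,.0f} per student")
```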

So, yes, size does matter. If financial stability is a goal of the new system, HEQCO still has a lot of work to do. The Americans are already onto this.

 

Audience orientation?

This part of the report is a polite and diplomatic disclaimer. Fair enough. But it is significant for two things it doesn't say. The word "system" does not appear. Most of what is said is about what Burton Clark and other students of higher education systems would call a "market," in this case students making choices among programs and universities. And the discussion of systems with which the report opens, and the kind of system envisioned, make no allowance for forces other than government to influence the shape of the system and, in turn, institutional behaviour.

If such an allowance were made, Martin and Linda could have concluded that a selection guide is needed. Taking into account what Nobel laureate Michael Spence has said about the high degree of imperfection in the higher education market, they could further have concluded that such a guide would promote differentiation by correcting the imbalance of information available to students.

Finally, look what creeps in at the very end: "program approvals." If the reference to programs is not an inadvertent semantic throw-away, it casts a very different light on the entire exercise, which to this point has been oriented entirely to institutions.
