The Innovation Journal: The Public Sector Innovation Journal, 7(1), 2002, article 2.


Partners in Progress:

S&T Productivity in NRC and Canada's Research Universities


Hassan Masum1 and Jack Smith2

1 - Department of Computer Science, Carleton University, Ottawa, Canada.
2 - Leader of Corporate Strategy, National Research Council, Ottawa, Canada



As in most countries, Canada's two key players in basic scientific research are universities and government research organizations. Through private sector spinoffs and partnerships, both of these players have commercialized a great deal of valuable research in recent years, to the benefit of Canada's economy, society, and reputation. This paper discusses issues involved in assessing R&D performance, and surveys sources and statistics for several major research universities and NRC. We demonstrate that NRC's performance compares favorably with that of Canada's research universities relative to its funding, show that both parties would benefit from an increased level of collaboration, and suggest a methodology for future work in this area.

Keywords: NRC, Canadian Universities, measuring R&D, IP Commercialization, research collaboration.


1. Introduction: NRC and Canada's Universities in Context

Canada's National Research Council (NRC) is Canada's public national science and technology organization, with laboratories, 3000+ personnel, and academic and industrial collaborations across the country. On a relatively modest budget, NRC has demonstrated a strong capacity for both basic research and technology prototype development.

The research universities of Canada occupy a position that is in many ways complementary to that of NRC. They are a diverse group of institutions, each of which combines teaching, research, and public service functions in a unique way. One fundamental difference from NRC is the large student presence on campus, although NRC does provide opportunities to many postdocs, visiting scientists, and industry researchers in its facilities.

Traditionally, university performance has been strongest in the early-stage basic research area, where fundamental scientific and technological principles are derived and tested. While NRC also does a fair amount of basic and strategic longer-term research, it has during the past decade strengthened its skills in the later-stage development of prototypes and commercialization of research results. With this and the geographical diversity of its institutions in mind, it is natural that NRC should strengthen its relationship with the university research establishment, to the mutual benefit of both parties.

Accordingly, many NRC managers and personnel have been working toward closer ties with their university colleagues. In early May 2001, senior NRC management discussed these issues with the University Advisory Group (UAG), a group of Research VP's from various Canadian universities. Representatives from Industry Canada, CFI (the Canadian Foundation for Innovation), NSERC (Natural Sciences and Engineering Research Council of Canada), and related agencies were also present. (Many of these stakeholders also discussed related issues in other meetings and the CAURA conference.)

In preparation for this meeting, we did comparative research into the R&D capabilities and performance of both NRC and Canadian Universities, and developed a slide show and report based on the results. Our goals were:

  • To describe the existing linkages between NRC and Canadian universities, and suggest avenues for future collaboration.
  • To objectively assess the R&D performance of NRC and selected Canadian research universities, in both a Canadian and global context.
  • To demonstrate the capacity of NRC as a proven competent innovator -- an organization universities would want to partner with to mutual advantage.

This article summarizes our findings. We discuss the issues and research sources involved, examine general issues in measuring R&D performance, investigate commercialization in and collaboration between NRC and several universities, and summarize relevant resources. We hope this information will be of long-term use to decision-makers in the R&D research and management fields. (As a side note, several members of university ILO's (Industry Liaison Offices) and Research Offices expressed interest in reading this paper even before it was complete. We believe that there is a shortage of practical and current research in this area in Canada, which this document and its accompanying data may begin to address.)


2. Measuring R&D Inputs and Outputs


In order to compare NRC with universities, or NRC with other national research organizations, one must first consider how to measure the inputs and outputs of the R&D process.

Unlike the output of physical manufacturing processes, the production of intangible goods and services is often subjective and ambiguous to measure. How does one decide which time and money allocations should be attributed to the R&D function rather than to other functions? What are the best proxies for the "output" of NRC? Unlike well-developed goods and services, government and academic R&D output may not translate into products or sales for several years, and the ultimate commercial user may be several steps removed from the initial research group.

As unique societal institutions, universities also have distinct points of view on this question. Since they perform teaching and public service functions in addition to their research role, it is difficult to directly compare costs and benefits with a research-only organization. As well, universities have a strong "public good" character, providing basic research that is explicitly meant to benefit society as a whole and not a single corporate entity. Given such a mandate, a university research officer may with some justification argue that it is misleading to compare research outputs primarily in commercial terms.

Given all these issues, we see that measuring R&D is a tricky process. Nevertheless, it's important to address the question as fully as possible, since R&D is one of the cornerstones of a modern economy. While realizing the inaccuracies involved in any such estimate, one still wants to get a feel for the relative productivity level of different sectors and institutions, in order to discover, reward, and propagate best practices.

Although previous work on comparing universities and governmental research labs appears to be quite limited, the more general problem of measuring R&D is one of much current interest. [Boer 1999] is a very readable overview of tools and techniques for valuing technology R&D, written from a managerial perspective. [Oslo 1997] discusses statistical methodology for collecting and analyzing technological innovation data; the analysis of such data forms a key part of measuring technological R&D. OST (Observatoire des Sciences et des Technologies) are experts in Canadian bibliometrics; the use of bibliometric data to assess R&D output is especially common in academic settings. The Science, Innovation, and Electronic Information Group at Statistics Canada is also actively exploring statistical methods for assessing R&D inputs and outputs, in all three performing sectors (university, government, industry). Other links to measuring R&D can be found in the "Further Links" section below, and in the References.

Let's take a closer look at some specific input and output measures:


Inputs - Headcount

Measuring headcount for NRC is relatively easy. Accurate records are available for the number of FTE's employed in various lines of business, as well as for temporary workers in the form of guest workers, postdocs, co-op students, and so forth (see e.g. [NRC Performance Report 2000]).

Although one might wish to partition the workers into "research" and "support" personnel to get a better idea of primary research workers and proportion of support staff, it is acceptable to consider them together for a first pass. This is in fact similar to StatCan's methodology for counting R&D personnel in industry; such personnel are counted by measuring the aggregate headcount of the smallest corporate entities primarily organized for R&D, i.e. possessing their own budget and staff. In the most recent survey on R&D in Canadian industry, the breakdown of R&D personnel was approximately 65% professional, 25% technicians, and 10% other (e.g. administrative); see [StatCan Industrial Research & Development 2000].

Measuring headcount for universities is more problematic, for two primary reasons. First, Canadian universities in general do not have the same kind of centralized data-keeping on their internal operations as NRC does; it can hence be necessary to look up and count the number of professors by faculty or even by department.

Second, how does one account for the contributions of graduate students to the university research effort? As anyone who has spent time in universities knows, graduate students are responsible for a substantial proportion of research activity. However, they may or may not be regarded as employees, depending on their involvement in research projects and individual scholarships. As well, their very modest financial incomes would highly skew the statistics on per-researcher productivity if they were included.

This latter issue in particular needs to be addressed in any comparison of university research versus government or private-sector research. One potential method is to derive a standard "conversion ratio" between grad students and researcher FTE's (full-time equivalents): a ratio describing the productivity of the average grad student relative to the average researcher. (Whether such a conversion ratio is feasible depends on several factors, such as the variance of productivity and the size of the error in measuring the long-term benefit from relatively short grad student careers.)
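As a minimal sketch of the conversion-ratio idea, the calculation could look like the following; note that all figures, including the 0.5 ratio, are hypothetical placeholders rather than empirically derived values:

```python
# Sketch: converting graduate-student headcount to researcher FTE's
# using a hypothetical conversion ratio. All numbers are illustrative.

def effective_researcher_fte(researchers, grad_students, conversion_ratio=0.5):
    """Return total research capacity in researcher FTE's.

    conversion_ratio: assumed productivity of an average grad student
    relative to an average full-time researcher (0.5 is a placeholder,
    not an empirically derived value).
    """
    return researchers + conversion_ratio * grad_students

# e.g. 400 faculty researchers plus 1200 grad students at ratio 0.5
total = effective_researcher_fte(400, 1200)  # 400 + 0.5 * 1200 = 1000.0
```

In practice, deriving a defensible conversion ratio is the hard part; the arithmetic itself is trivial once the ratio is agreed upon.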


Inputs - Budget

Next to personnel, budgets are the most important R&D inputs. This includes both operating and capital budgets: operating budgets can be further broken down into salaries, equipment maintenance, travel, etc; capital budgets include long-term physical plant and equipment, and (depending on one's categorization) endowment investments for universities.

Again, NRC has good aggregate data available in the [NRC Performance Report 2000] and other budget documents. The best source for Canadian university data is the annual report [CAUBO 2000], the most comprehensive publicly available consolidated data on Canadian university finances.

A fruitful avenue for further research would be to examine the NRC budget in more depth, extracting aggregate figures for salaries, equipment capital expenditures, operating research dollars, and so on. In particular, this could be done on a per-researcher (not per-FTE) basis, giving an accurate picture of both the capital and operating research funds available to the average front-line NRC researcher. This naturally leads to the idea of comparing the operating funds of government and university researchers, including researchers, technicians, professors, and grad students. Although such a comparison would be difficult, given differences in available statistics, organizational structures, and goals, it would shed light on the adequacy of funding for these two complementary approaches to producing innovation outputs.

The main difficulty with comparing operating budgets of universities and NRC is the allocation of university budgets among their different functions of teaching, research, and public service. Since universities are not just research organizations, it's necessary to decide e.g. what proportion of a professor's salary should be counted as a "research" expense: 20%, 40%, 60%? The same question arises in attributing other operating expenses and capital expenses among functions: is a library a teaching expense or a research expense?

These issues have been recognized by Statistics Canada in their most recent survey of R&D in Canadian Universities, and are dealt with at some length in the report [StatCan R&D in Higher Ed 1998-99]. After discussing the problems related to both attribution between teaching and research and measuring indirect costs, they suggest a general formula that takes several variables into account, in order to estimate an equivalent research expenditure value; this formula provides a way of going from raw data on university expenditures broken down by function to a (substantially smaller) budget that the university would require if it did only research.

We came up with a similar but simpler formula (simpler because we wanted to minimize the number of potentially error-inducing variables, and did not have StatCan's access to primary data). The formula and accompanying data are available in the form of spreadsheet data. Essentially, it looks like this:

Science&Tech R&D Budget = CS * (Sponsored Research + CR * (General Operating + Ancillary + Plant))

where the amounts on the right are derived from the figures given for each university in [CAUBO 2000], and CS and CR are derived coefficients for the amount of science research as a percentage of all research and the percentage of faculty time devoted to research respectively. While the formula gives a reasonable ballpark figure, it could benefit from additional refinement; one can look at StatCan's methodology for ideas on this front. As well, one could do a regression analysis of this derived formula against other concrete statistics, like NSERC funding for each university (both absolute values and per student or researcher capita); this could give valuable insight into the relative efficiency of universities in terms of winning grants, as well as validating the formula.
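As an illustration, the formula can be computed directly. The coefficient values below match those used later in this paper (CS = 0.75, CR = 0.40), while the dollar figures are invented purely for the example:

```python
# Minimal sketch of the Science & Tech R&D budget estimate described above.
# CS and CR are the coefficient values used in this paper; the input
# dollar figures are made up for illustration.

def st_rd_budget(sponsored_research, general_operating, ancillary, plant,
                 cs=0.75, cr=0.40):
    """Estimated Science & Tech R&D budget (same units as the inputs).

    cs: science research as a share of all research
    cr: share of faculty time devoted to research
    """
    return cs * (sponsored_research + cr * (general_operating + ancillary + plant))

# Example with invented figures (in $M):
estimate = st_rd_budget(sponsored_research=150,
                        general_operating=400, ancillary=50, plant=60)
# 0.75 * (150 + 0.40 * 510) = 0.75 * 354 = 265.5
```

The point of the sketch is that the estimate is highly sensitive to CS and CR, which is why we flag both coefficients as candidates for further investigative research.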

One other important note: in Canada, many grants do not include amounts for indirect costs associated with research (as discussed in [AUCC Funding 1999]). As a result, universities have adopted widely varying policies mandating that different amounts of overhead be included in research grants. This situation is in contrast to the US, where indirect costs are usually included. One should be aware of this fact when doing comparisons between Canada and other countries, and also when measuring the R&D funding of universities.


Inputs - Qualitative

People and money are the two essential resources for most enterprises. However, other inputs are also important in determining how efficiently these resources are translated to desired outputs.

One key input is the influence of management expertise and "culture" of an institution. Along with most management theorists, we believe that these factors can dramatically affect any institution's effectiveness. Developing and applying methods for quantifying these factors is an open question of great interest, especially in an academic and research context.

NRC employs various management strategies to select its best candidate projects. These include a technology assessment and gating procedure, examination of prospective research receptors, and some strategic analyses of future scientific potential for breakthroughs. Each Institute has individualized procedures which are adapted to its technologies and markets. The existence of an entrepreneurial program is also significant; this is one area in which universities have recently also been investing substantial resources, via the development of Industry Liaison Offices which actively promote licensing and faculty entrepreneurship.


Outputs - Quantitative

The beneficial outputs of R&D may be difficult to measure, since they can appear after many years, and may be captured or commercialized by an organization several steps removed from the original research group. Nevertheless, R&D metrics are still needed; here are some common and not-so-common ones, that have a primarily quantitative nature:

  • Publications - Number, and citation impact. In Canada, the primary source of bibliometric data is OST; they have several relevant publications on their site, and also did the report [StatCan Bibliometric Flows 1998]. However, raw numbers do not tell the whole story -- as every academic knows, there are many instances of people who publish a lot of relatively low-quality material. Hence the use of citation impact and analysis gives a better idea of the worth of publications as perceived by the scientific community -- in general, articles that are cited more have convinced more people that they are worth reading. The leader in this field is ISI: they have recently introduced the product Essential Science Indicators, which aggregates citation data to rank institutions and journals; their companion product Web of Science provides a fairly comprehensive citation analysis of the scientific literature.
  • Commercialization and IP - Spinoff companies (both number and revenue), licensing agreements and revenue, patents, estimates of entrepreneurial activity levels among staff. Although patents are a widely-used measure of institutional innovation due to the ease and lack of ambiguity in measuring them, it's not clear that raw patent numbers represent a good measure of "research value added". More sophisticated methods of analyzing patent data have been developed, such as the Technology Indicators from CHI Research.
  • Awards - especially internationally-recognized and longstanding ones.
  • Guest workers and students - Number of guest workers, co-op students, grad students, and postdocs. Although these numbers are inputs, they are also outputs since good research and R&D environments attract more students, whose training and research has positive societal value.
  • International collaboration - Committees, membership in standards bodies, and involvement in "big science" activities like astronomy and physics. (As with several of the other metrics, this is both quantitative and qualitative -- although one can measure the number of personnel and number of collaborations easily enough, a more sophisticated approach would weight such collaborations by their "importance", which is hard to define objectively.)
  • "Infrastructural activities" - for NRC, this includes CISTI, IRAP, journals, training, networking, forums, technical advisory groups, setting standards, and so on; universities have corresponding activities. By "infrastructure", we mean activities which form the basis for internal and external R&D activities without necessarily having a direct R&D output value themselves; such activities are often shared between institutions or provided by public institutions, due to their public good character.
  • Public education - Lectures, interviews, articles about the institution, and books for a popular audience. This would measure the "degree of presence" of the institution in the mind of the educated public. (Note that some advertising companies regularly measure such metrics for large clients.)
  • Web footprint - We believe it would be interesting to measure the number of links to and number of sites at *, *, and so on. Doing a regression analysis of these measured variables against other R&D output and input metrics might show a correlation -- intuitively, it seems reasonable that institutions with more output and stature will both produce more and perhaps more importantly get linked to more. The excellent search engine Google uses a similar algorithm. (As an informal experiment, we did a linear regression of number of sites linking to each of ten major Canadian universities against the total value of NSERC grants awarded to that university; the resulting coefficient of determination of 0.80 shows that the two variables are strongly correlated. Further study is called for, to determine the extent to which the observed correlation is due to a third mediating causative variable like the size of each institution, and to better characterize the validity of this metric in general.)
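The informal web-footprint experiment just described can be sketched as a simple linear regression reporting the coefficient of determination. The data below is invented for illustration; the real analysis used inbound-link counts and total NSERC grant values for ten major Canadian universities:

```python
# Sketch of the informal "web footprint" experiment: regress inbound-link
# counts against NSERC funding and report the coefficient of
# determination (R^2). The data below is invented purely for illustration.

import statistics

def r_squared(xs, ys):
    """Coefficient of determination for a simple linear regression of ys on xs."""
    mean_x = statistics.fmean(xs)
    mean_y = statistics.fmean(ys)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    syy = sum((y - mean_y) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy)

# Hypothetical data: inbound links (thousands) and NSERC grants ($M)
links = [12, 30, 8, 45, 22]
grants = [20, 55, 15, 80, 40]
r2 = r_squared(links, grants)  # close to 1 for this made-up data
```

As noted above, a high R^2 alone does not establish a causal link; a mediating variable such as institution size could drive both quantities.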

It is argued in [Tassey 2001] that industry (and perhaps the R&D sector as a whole) underinvests in infrastructural and basic research; primary reasons are the high risk involved, and the difficulty of capturing benefits from doing the research. One might therefore suggest that developing R&D output metrics for such research could yield long-term benefits; while more difficult than measuring direct commercialization activity, being able to accurately assess the value generated by early-stage research would encourage investment in such research by allowing objective rewards to basic research that yields demonstrable benefits.

In an industrial setting, many of the same R&D output metrics remain applicable. However, at some point in the analysis of a decision-maker, they are usually translated into financial terms; in combination with available financial resources and the set of possible alternative investments, estimated financial output (or benefit) of R&D will determine the degree to which it is invested in. [Boer 1999] discusses financial and managerial issues involved with this process, and explains several practical metrics used to estimate the value of R&D portfolios.


Outputs - Qualitative

Not every benefit of R&D can be adequately quantified with current methods. In such cases, it is still important to track successes (and failures), using case studies and other tools. Study, careful thought, and consultation with experts from areas outside the usual R&D sphere may suggest ways of partially quantifying or analyzing seemingly qualitative properties -- metrics for tracking the success of advertising are a case in point.

  • Stars and success stories - As an example, NRC was one of the key players in Ottawa in SSOC, the Solid State Optics Consortium -- this consortium promoted WDM which is now fundamental to Ottawa's large photonics industry. Incidentally, the time lag in this case of more than a decade between initial investment and market payback illustrates the difficulty of measuring long-term output.
  • "Perception of institution" - One could measure this by surveys and more informal methods, both within and outside the institution. This yields both qualitative and quantitative data on how the institution is perceived by an audience of interest.
  • Outstanding publications - seminal articles or books.
  • Charitable and community activities - these can strongly influence how an institution is perceived, in the mind of both the public at large and potential employees, visitors, and collaborators.
  • Institutional myths and folklore - Hard to measure, but it definitely has an impact on how the institution is perceived, especially by peers. As an example, consider the many strange and wonderful tales that have been told about MIT pranks, shenanigans, and discoveries...the sense of "playful genius" serves a dual purpose of intriguing and entertaining the public, and attracting further genius. [Mayer 1999] performs a similar service for NRC, in his book surveying technologies and inventions produced at various labs and the personalities behind them.


Further Links

Several groups around the world are doing research on the important question of measuring R&D inputs and outputs:

  • CHI Research specializes in analyzing the patent portfolios of companies and public-sector organizations.
  • EuroStat is the statistical office of the EU.
  • ISI is the big player in the global science citation analysis field.
  • NSF, the US National Science Foundation, has long been a leader in measuring and managing science R&D.
  • PRIME is a research group at the University of Ottawa, doing work on science and innovation policy.
  • OECD, the Organization for Economic Cooperation and Development, does relevant statistical research.
  • OST is Canada's expert on bibliometric analysis. Members belong to several universities, primarily UQAM.


3. NRC's Performance

Inputs, Outputs, and Strengths

As can be seen from the following chart, NRC is comparable to Canada's larger research universities in scale of R&D:


There are several points to note. First, the figure for "Estimated Total $S&T" was derived in the manner discussed previously; we set CS and CR equal to 0.75 and 0.40 respectively, after analyzing several university budgets and StatCan reports. (These coefficient values and the formula in which they are used could both benefit from additional investigative research.) Second, the figure for NSERC funding represents only direct grant and scholarship expenditures -- there are additional NSERC-related funds for universities through avenues like the Networks of Centres of Excellence, as well as other federal sources of funding like the Canada Research Chairs and Canada Foundation for Innovation. (We use NSERC funding as an independent check on the validity of the total S&T formula, since the ratio between the two quantities should be similar for most institutions; in order to get the sectoral funding envelopes for institutions, one would need to add the federal sources mentioned above, and separately calculate private-sector and other sources by institution.) Finally, defining exactly what qualifies as "Science & Tech R&D" takes some thought and judgement. For example, expenditures reported under the heading of "medical research" could actually be anything from fundamental genomic research to clinical trials and product development; similarly, interdisciplinary networks may have only part of their expenditures qualifying as science.

Some view NRC as a competitor to universities -- but NRC would like to be regarded as a partner and collaborator. Universities and NRC are each necessary, but not sufficient; together they can be more than the sum of their parts. Universities have students, intellectual breadth, and a research mandate focusing more on basic research. NRC has:

  • broad geographical reach,
  • longstanding linkages to innovative firms,
  • a national research organization,
  • leading capacity in new domains,
  • complementary strengths in infotech and "big science" (e.g. Canadian Astronomy Network),
  • support activities (e.g. CISTI),
  • standards research and technical ability,
  • IRAP and CTN as national resources, and
  • clusters for regional innovation.

Reports on NRC (and federal science in general) include:


NRC in a Global Context


In evaluating and understanding NRC and Canadian Universities, it is useful to place them in a broader context. Characterizing the global R&D research establishment in a unified manner is a huge task -- one that we suggest would be very worthwhile, both for understanding the relative strengths of each national institution in a global context, and for cross-fertilizing best practices. As a first step, one can examine the allocations that national research establishments receive; the chart above shows a cross-section of budgets for several well-known research groups (all of which are converted to Canadian dollars).

Reading through the open literature also gives substantial qualitative insight into strategies, tactics, and points of view on R&D productivity. Here are starting points for several research institutions and organizations:

  • AAAS. The American Association for the Advancement of Science. Hosts one of the largest cross-disciplinary annual meetings in the world.
  • Academia Sinica. Taiwan's well-known academic and research institution.
  • AIST. Japan's National Institute of Advanced Industrial Science and Technology.
  • CNRS. France's Centre National de la Recherche Scientifique. They have a nice list of research institutes from around the world.
  • CSIRO. Australia's Commonwealth Scientific and Industrial Research Organization.
  • Max Planck. Germany's Max Planck Society for the Advancement of Science.
  • NSF. America's National Science Foundation. Many useful publications, like Science and Engineering Indicators 2000.
  • UK Research Councils.
  • UNESCO. United Nations Educational, Scientific, and Cultural Organization.


4. Canadian University R&D

Scale and Commercialization

In the previous section, we presented a chart showing R&D budgets for several major Canadian Universities. Further detailed information is available in the sources pointed out in the references; three key references are [CAUBO 2000] for detailed financial expenditures, [StatCan R&D in Higher Ed 1998-99], and tables available at NSERC.

Note that it is only relatively recently that most Canadian universities have started dedicating significant resources to the commercialization of their IP. Excellent reports on Canadian University R&D levels and the state of their IP commercialization are:

Some interesting points from the StatCan report:

  • Canadian Universities received $22.7M from licensing and similar IP revenues, and spent a similar amount ($23.5M) on IP management and incubator administration.
  • 454 active spinoff companies, 1109 active licenses, 1826 patents held.
  • 325 patents issued in 1999, of which 168 (about half) were U.S. patents. (Note that looking at U.S. patent figures may be more accurate than overall patent figures, since the former precludes multiple-counting the same invention.)
  • 170 FTE's dedicated to IP management, 15 research parks and incubators.
  • IP ownership policy for inventions: approximately half give researchers ownership, one-quarter keep institutional ownership, and one-quarter use joint ownership.
  • In Canada in 1995, 16 800 scientific publications were produced by universities, and 3 900 by hospitals.

How does this compare with NRC? With respect to the measures of patents and active spinoffs, NRC is at the same scale as Canada's largest research universities:


The comparable magnitude of NRC and research universities is evident: in important measures of both R&D input and output, the scale of the institutions is similar. We are interested in doing further work in this area to better understand and quantify the areas of strength of each sector, with a view toward developing strategies for increasing research productivity.


Research Offices and Remarks

The ILO's (Industry Liaison Offices) and Research Offices of universities are important resources for determining the commercialization performance of Canadian universities. Here are the sites for several major Canadian research universities:

[Carleton Research] - [Dal Research] - [U of Alberta ILO] - [UBC UILO] - [UMontreal Research] - [U of Ottawa Research] - [USask Research] - [U of T Tech Transfer] - [U of T Research and International Relations] - [Waterloo Research].

In discussions with us, representatives from some of the above institutions brought up relevant issues:

  • Using raw publication numbers as a metric of research excellence has many shortcomings, since quantity is not equivalent to quality. Weighting publications by number of citations and journal quality gives a better metric (see the databases at ISI).
  • Many universities' IP portfolios are skewed toward certain areas of science, e.g. biotech, because it may be more customary and more useful to get patents in those areas.
  • AUTM (Association of University Technology Managers) and the Bayh-Dole act in the US are highly relevant to understanding their system. The Patent & License Exchange at pl-x is an interesting portal for the valuation and management of IP. (On a related note, we found [Boer 1999] to be an excellent book on technology valuation.)
  • Focusing on US patents instead of global patents might be better since this avoids double-counting, and since most valuable patents wind up getting registered in the US. Linking patent numbers to productivity may be a mistake, with licensing income being a better measure; this needs more research.
  • Professor Heather Munroe-Blum is the VP at U of T RIR, as well as being chair of the University Advisory Group. She has written a provocative and influential report on improving Ontario's University research and innovation.
  • U of T's network is so large and distributed that figures do not include the affiliated hospitals. Also, U of T and most Canadian Universities do not appear to have central publication figures -- they would need to be counted department by department. (Alternatively, bibliometric data can be derived from citation databases, as discussed previously; even with current information technology tools, getting an accurate figure remains a labor-intensive task.)
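The citation-weighting idea raised above can be sketched concretely. The snippet below is a minimal illustration, not an established bibliometric formula: the publication records, citation counts, and journal quality weights are all invented for the example, and real analyses (e.g. using ISI citation databases) would use field-normalized citation rates and validated journal indicators.

```python
# Hypothetical sketch: comparing a raw publication count with a metric that
# weights each paper by citations and journal quality.
# All figures and weights below are invented for illustration only.

def weighted_score(publications):
    """Sum over papers of (citations x journal quality weight)."""
    return sum(p["citations"] * p["journal_weight"] for p in publications)

papers = [
    {"citations": 40, "journal_weight": 2.0},  # heavily cited, high-impact journal
    {"citations": 5,  "journal_weight": 1.0},  # mid-tier journal
    {"citations": 0,  "journal_weight": 0.5},  # uncited workshop paper
]

raw_count = len(papers)          # naive metric: 3 publications
score = weighted_score(papers)   # 40*2.0 + 5*1.0 + 0*0.5 = 85.0
print(raw_count, score)
```

Under this kind of weighting, an institution with fewer but more influential papers can outscore one with a larger raw publication count, which is exactly the distinction between quantity and quality that the representatives emphasized.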


5. Sharing Resources: A Positive-Sum Game

NRC-University Collaborations and Clusters

We found many types of collaborations between NRC and universities, some of which are noted in [NRC Performance Report 2000]. These collaborations are of both a formal and informal nature, and can be split into two categories: personal collaborations and institutional collaborations. (Of course, the dividing line is not sharp between the two; the former tend to be more informal connections between a small group of researchers, while the latter tend to be formal collaborations at the level of a research group or lab.)

Personal collaborations are many and varied. In 1999-2000, 1 in 8 R&D personnel at NRC (or 246 out of around 2000) were adjunct professors at postsecondary institutions. Every year, there are many guest workers, students, and postdocs who work at NRC labs. The governing council of NRC has university representation, which affects the research effort and perspective of the whole institution. Lectures and presentations are an important avenue of cross-fertilization between research establishments; the IIT series (at NRC's Institute for Information Technology in Ottawa) is a good example. Finally, much direct research collaboration (e.g. joint publications and projects) takes place.

At an institutional level, NRC transfers substantial funds to large-scale projects such as TRIUMF, astronomy ventures, the Canadian Light Source, and the Sudbury Neutrino Observatory. MoUs and formal agreements are often signed for larger projects (generally at least $50K). Joint conference hosting and development sometimes occurs. And at an international level, NRC sends people to represent Canada as a whole on various advisory groups, standards bodies, and other such organizations.

NRC has several on-campus sites that benefit from the synergy and ambience of profs, students, firms, and facilities that naturally cluster in a university environment. Major collaborative institutes and ventures include:

Clearly, NRC already has very significant interactions with Canadian Universities; equally clearly, these interactions could be strengthened to mutual advantage. Developing better assessment methodologies for measuring R&D and removing structural and financial barriers to doing innovative research will create a stronger synthesis of Canada's research institutions.


Conclusion: Partners in Research and Commercialization

All sectors of the R&D community face the challenges of efficiently sharing knowledge and best practices, and of using the principle of competitive advantage to focus on what each does best while collaborating to develop successful products, processes, and societal solutions. Many national governments have recently made this area a priority: Trans-Forum is an excellent portal maintained by Strategis (Industry Canada) for linking university ILOs, industrial partners, and government research institutes.

Through a survey and analysis of many of the available data sources on R&D in Canada's public and higher-education sectors, we hope to have laid the groundwork for future work in this key area for national competitiveness. Areas for further study include developing a standard suite of quantitative and qualitative indicators for R&D input / process / output, understanding the relative advantage of different institutions and institutional cultures, suggesting more detailed strategies for successful collaboration, assessing the effectiveness of R&D portfolios at both sectoral and project-specific levels, incorporating private-sector R&D into the research framework, and adopting best practices from institutions in other countries.

Although the payoffs are longer-term than most investments, there is no longer any doubt that technological innovation is crucial for institutions, countries, and civilizations as a whole. Since limited resources are a fact of life, it is essential to understand the measurement of innovation, and to promote strategies for R&D collaboration and cooperation.

About the Authors:

Jack Smith: Canada Mortgage and Housing Corporation, 1977 - 1981; Energy, Mines and Resources Canada, 1981 - 1989; National Research Council of Canada, 1989 - 2002.

Hassan Masum: Ph.D., Carleton University, 2003; National Research Council; Industry Canada.

6. Acknowledgements and References


Special thanks to Dr Peter Hackett, VP of Research at NRC, for supporting this research.

Helpful comments and suggestions were received from a number of contributors, including Darlene Gilson, Daood Hamdani, Mike Hamilton, Annie Hlavats, Caroline Lachance, Ben Matthews, Monique McNaughton, Jeffrey Roy, and Kanu Sikka.


Articles and Books

[Acs 2000] Regional Innovation, Knowledge and Global Change. Zoltan J Acs (Editor), 2000. Collection of articles on "innovation clusters" (regional innovation systems that benefit from geographic proximity).

[AUCC Funding 1999] AUCC Research File, May 1999, Vol.3 No.1; "The level of funding for university research in Canada and the United States: Comparative study."

[Boer 1999] The Valuation of Technology: Business and Financial Issues in R&D. Peter Boer, 1999. Excellent overview of management methods, metrics, and algorithms for valuing technology.

[CAUBO 2000] Financial Statistics of Canadian Universities and Colleges 2000. Available directly from CAUBO for $100, or at many university libraries. This is the best source for consolidated financial reports of Canadian Universities, with expenditures broken down by category and university.

[De la Mothe and Paquet 2000] Information, Innovation, and Impacts. John de la Mothe and Gilles Paquet (Editors); 2000. A joint project between researchers at U of Ottawa and Statistics Canada, this volume brings together points of view on the impacts of innovation on modern information economies. (Note that this book is one of a series on "Economics of Science, Technology, and Innovation", from Kluwer Academic Publishers.)

[Doern and Sharaput 2000] Canadian Intellectual Property: The Politics of Innovating Institutions and Interests. G Bruce Doern and Markus Sharaput, 2000. Concise review of the main players and processes in Canadian IP.

[Innovation Policy 2000] Science, Technology, and Innovation Policy. ISBN 1-56720-271-3; 2000. Excellent collection of articles on science and innovation policy, including such topics as clustering, the changing role of universities, national science policies, knowledge management, sustainability, and R&D in the developing world.

[Mayer 1999] Scientific Canadian: Invention and Innovation from Canada's National Research Council. Roy Mayer, 1999. Interesting profile of scientific discoveries and engineering developments by innovators in NRC.

[NRC Performance Report 2000] Very useful -- summarizes NRC inputs (headcounts and financial figures) and outputs (papers, collaborations, guest workers, patents).

[Oslo 1997] Oslo Manual: Proposed Guidelines for Collecting and Interpreting Technological Innovation Data. Available from OECD / EuroStat; ISBN 92-64-15464-7. Interesting overview of statistical methodology for conducting and analyzing surveys on innovation.

[Ruttan 2001] Technology, Growth, and Development: An Induced Innovation Perspective. Vernon W. Ruttan, 2001. Wide-ranging exploration of the role of technological change in economic growth and development.

[Tassey 2001] Gregory Tassey. "R&D and Long-Term Competitiveness: Manufacturing's Central Role in a Knowledge-Based Economy". Paper from NIST, Technology Administration, Dept of Commerce (USA), April 2001; Well-written report on the essential role of manufacturing, and the importance of recognizing the different stages and requirements of basic, "infrastructural", and applied research.

[Zacks 2000] Rebecca Zacks. "The University Research Scorecard". In MIT Technology Review, July / August 2000; pp 88-90.


Canadian Organizations

ACST Advisory Council on Science and Technology. Several useful reports.

AUCC Association of Universities and Colleges of Canada.

AUTM Association of University Technology Managers. They do an annual survey of IP commercialization in both Canada and the US that is well worth reading; the summary is available on their Web site.

CAUBO Canadian Association of University Business Officers. They publish an annual consolidated report of the financial statistics of Canadian universities, referenced above in the "Articles and Books" section.

CAURA Canadian Association of University Research Administrators.

NRC Canada's National Research Council.

NSERC Natural Sciences and Engineering Research Council of Canada. The site has quite a bit of useful information regarding NSERC grants and related expenditures. A complete breakdown by university of NSERC funding received is available for the past decade.

OST Observatoire des sciences et des technologies -- Canada's bibliometrics experts.

Research Money. Newsletter on Canadian R&D, science and technology research, and innovation.

StatCan Statistics Canada is a useful resource, in particular the Science, Innovation, and Electronic Information Group. See the separate section on StatCan below.


Statistics Canada

(Most of these papers are available at the Research Papers page of the Science, Innovation, and Electronic Information Group.)

[StatCan Bibliometric Flows 1998] StatCan Catalogue # 88F0006XPB No. 10. "Knowledge Flows in Canada as Measured by Bibliometrics".

[StatCan GERD 1989-2000] StatCan Catalogue # 88F0006XIB-01001. "Estimates of Canadian Research and Development Expenditures (GERD), Canada, 1989 to 2000, and by Province 1989 to 1998."

[StatCan IP Commercialization 1999] StatCan Catalogue # 88F0006XIB-00001, May 2000; "Survey of Intellectual Property Commercialization in the Higher Education Sector, 1999".

[StatCan Industrial Research & Development 2000] StatCan Catalogue # 88-202-XIB, January 2001; "Industrial Research and Development: 2000 Intentions (with 1999 preliminary estimates and 1998 actual expenditures)".

[StatCan R&D in Higher Ed 1998-99] StatCan Catalogue # 88F0006XIB-01002, February 2001; "Estimation of Research and Development Expenditures in the Higher Education Sector, 1998-99".

There are many people in this group at StatCan who have valuable insights into the R&D and Innovation measurement process; for measuring R&D and Innovation, this is one of the most proficient groups in Canada.

Published January - April 2002

Revised March 13 2002

Last updated: November 25 2016