
“Research Fortnight Benchmarking” – FAQ

Contents:

QUESTIONS ON THE RAE 2008 RESULTS

QUESTIONS ABOUT GRANTS AND AWARDS DATA

TECHNICAL QUESTIONS

QUESTIONS ON THE RAE 2008 RESULTS

Q: How is the Research Fortnight Quality Index calculated?

A: There are different perspectives on how to generate league tables, and the Research Fortnight Benchmarking application allows users to generate different tables based on different principles. These include the weightings given to the different star categories, and whether all institutions should be included (e.g. separating out specialist institutions).

There’s more detail in the breakdown below:


As a general principle, all aggregations are weighted to take account of the FTE staff in each submission.

  • Weight, Power and Market Share are different ways of looking at the same thing. They all combine both Quality and Volume.
  • For each submission, we generate the “Weight” by multiplying the RF Quality Index by the FTE.
  • Within a UoA, we generate “Market share” as a percentage by calculating Weight / (Total Weight of all submissions)
  • Within a UoA, we generate “Power” as a mark out of 1 by calculating Weight / (Maximum of Weights).
  • So an HEI’s overall Power rating is a mark out of 1, calculated by dividing the total of its Weights across all UoAs by the corresponding total for the institution with the largest total Weight (last time this was Oxford).
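For anyone who wants to reproduce the arithmetic, here is a minimal sketch in Python; the institutions and figures are invented for illustration, not taken from the tool:

    # Sketch of the Weight, Market Share and Power calculations described above.
    # Institutions and figures are invented.
    submissions = [
        # (institution, RF Quality Index, FTE staff submitted)
        ("HEI A", 62.5, 40.0),
        ("HEI B", 55.0, 25.0),
        ("HEI C", 70.0, 10.0),
    ]

    # Weight = RF Quality Index * FTE, per submission
    weights = {hei: quality * fte for hei, quality, fte in submissions}

    total_weight = sum(weights.values())
    max_weight = max(weights.values())

    for hei, weight in weights.items():
        market_share = 100 * weight / total_weight  # percentage within the UoA
        power = weight / max_weight                 # mark out of 1 within the UoA
        print(f"{hei}: weight={weight:.0f}, market share={market_share:.1f}%, power={power:.3f}")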

Q: On the pre-release version of the Benchmarking Service, in “Tuning”, the “Restrict UoAs” button stops at 54, so it isn’t a complete list of UoAs. Where are the rest?

A: This is a font bug, which we’ve fixed. They will be there in the release version.

Q: Again, on the pre-release version, in the Restrict UoAs option, if I go through and uncheck all the UoAs where we didn’t enter, I’ll be left with our 14. Does this cascade down to all of the reports elsewhere? E.g., if you remove UoAs 1-14 and then look at the University of Edinburgh (via comparator sets / Scotland’s Big Six / cash), the amounts remain the same regardless of the areas that have been removed. I would have expected these figures to fluctuate depending on which UoAs are included.

A: Yes, the calculations “cascade”.

Q: If a submission has, say 3 research groups within it – will we be able to determine the Quality Index for each of these groupings within the overall UoA submission?

A: Joint submissions are given a single Quality Profile in the RAE, so all the components will also get the same Research Fortnight Quality Index.

Q: How does one compare a specialist institution with the pertinent department(s) in a university?

A: The quick answer is to browse the relevant unit of assessment in the RAE screens. A longer one is to set up a Comparator Set with all the organisations in it, then tune out all the UoAs you’re not interested in (possibly 66 of them).

Q: For joint submissions (e.g. 2 HEIs submitting to 1 UoA together), will tables give power rating, national ranking and market share for the joint submission (i.e. both HEIs together)?

A: Yes, but exactly what happens depends on the context. For example, in grand league tables of institutions across all UoAs, each component of a joint submission is counted as part of its own parent institution. But when looking at a single UoA, the joint submission is treated as a single entity (though all participating institutions are shown).

Q: How will we know that the ingestion of the raw data into the benchmarking tool is complete, given that it takes 15 minutes or so? Will a flag be flown to say “complete”, or something similar?

A: Yes.

Q: Will support also be available Thurs/Fri? A lot of time on Weds will be spent on meetings / disseminating results / PR activity. It may well be Thursday before any detailed work via the benchmarking service is undertaken.

A: Yes.

Q: Once we’ve ingested the data, can we then copy the grant_win folder to a USB drive and use this on another PC?

A: Yes, but please make sure you take the entire folder, including all the files.

Q: Will Research Fortnight itself be giving greater weight to the league table produced using the Quality Index OR the Power calculation?

A: We’ll be looking at the data and making our decision then.

Q: Is there an existing pre-defined comparator set of post-1992 HEIs (i.e. ‘modern universities’)?

A: Yes, and many others. Look under “University sectors”.

Q: Is there a Home Button?

A: Yes – click the Research Fortnight logo (or the text next to it). But note that there are in fact two distinct “Homes” – the pink RAE screens and the green Benchmarking screens.

Q: Is there a Back button?

A: No. Use the breadcrumbs at the top of the page to navigate. Also, the listings themselves are generally live. So, if you are looking at the Physics UoA, you can click through to Durham. Similarly, you can click through from Durham to the Physics UoA.

Q: Will we be able to set up our comparator sets in advance of ingesting the data?

A: Yes.

Q: Is there a guide on how to use the Benchmarking Service? E.g. an online help manual on how to do these tasks (set up a comparator group etc.)?

A: The online help starts here…

Q: With regards to the Funding in England tab, is there going to be a Funding in Scotland tab too?

A: Not for now.

Q: Can you compare yourself within a region – e.g. in Wales?

A: Yes. This is what the Collections are for.

Q: If you restrict UoAs in the one-by-one tab, will this restriction be carried through to the main and super panel screens?

A: No.

Q: Does the tuning only apply to the comparator set you created? If you do another query, do you need to readjust the tuning?

A: The Tuning settings persist until you change them.

Q: If I have limited it to a specific field in a UoA and then want to include all fields again in a different analysis, do I have to remember to remove the previous restrictions?

A: Yes.

Q: I understand how the weightings will be applied to the 2008 RAE results and the quality score calculated, but how are the results for 2001 and 1996 scaled for the purposes of comparisons?

A: We asked users last week about three different ways of comparing with 2001. The preferred option was to leave the two scoring systems on separate standards, and that’s what we’re doing. This was probably because all the alternatives are complex and leave users wondering what the numbers really mean.

Q: After fine tuning, is there a restore to original settings feature?

A: Yes. There is a button on the main Tuning page.

Q: Is there any restriction on the number of downloads of the benchmarking service within the institution?

A: No.

Q: Can you define your own Comparator sets?

A: Yes. Use the “My Comparator Sets” tab.

Q: Will we be able to automatically define a comparator set by RAE2001 banding (e.g. all institutions which were banded 4 for a particular UoA in 2001) – or will we have to manually list all the relevant institutions to user-define the set?

A: Yes. Use the “Search Organisations” button (not in the pre-release version).

Q: Will the benchmarking tool have the facility to cut the data by unit of assessment and institution? From what I have seen, you can cut the data by either HEI or UoA, but not both. Is this something that will be possible? It would be handy to look at how an institution has done compared to others in a given UoA.

A: Yes, it does that in several ways – e.g. using Comparator Sets and Tuning out other UoAs.

Q: Will there be any volume indicators showing proportion of staff submitted to RAE?

A: This is a very long story. But the short answer is: Not until 2009.

Q: How are 2001 results ranked?

A: For Quality, it is: first by grade; then by proportion of staff submitted; then by …

QUESTIONS ABOUT GRANTS AND AWARDS DATA

Q: For medical subjects the large medical research charities are at least as significant as the MRC for example. So will you have data from e.g. Cancer Research UK, Diabetes UK, British Heart Foundation etc.?

A: We aim to. But it depends on them.

Q: What type of funding awards are covered?

A: Pretty much everything, down to and including small Engagement awards.

Q: Will the database include applications data and/or information on success rates?

A: Not at the moment.

Q: Will it be possible to scale the data in a way that takes account of the relative size of institutions?

A: You’d probably want to export the data to Excel to do this.
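As a rough illustration of the kind of scaling we mean: once the award totals are exported, you can divide them by whatever size measure you hold. The figures and the FTE measure below are invented:

    # Illustrative only: normalise exported award totals by a size measure.
    # Award totals and FTE figures are invented.
    awards = {"HEI A": 12_500_000, "HEI B": 4_200_000}  # total cash awarded (GBP)
    fte = {"HEI A": 1100, "HEI B": 310}                 # e.g. academic staff FTE

    for hei in awards:
        print(f"{hei}: GBP {awards[hei] / fte[hei]:,.0f} awarded per FTE")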

Q: Will market share percentages be available to more than one decimal place?

A: Yes, if you export the data.

Q: Wouldn’t it be better to use the Research Grants and Contracts data from HESA, as this is a more complete picture of an institution’s income?

A: Not for performance management. The HESA data records income, so compared to awards made it’s a lagging indicator. Also, any year’s HESA figures are the result of many complex interactions, so it’s unclear what “performance” those numbers refer to. Finally, there is no way to relate the HESA figures to the ups and downs of funders’ own expenditure, so there’s no way to generate the key figure – market share.

Q: In future years, will the benchmarking data be available for all years, or will it always be available on a 3-year rolling basis?

A: The grants data will increase as the years go by.

Q: Does your dataset include the length of individual awards so that we can smooth internally?

A: It’s patchy at the moment. So we don’t make any use of it at present. In due course we’d like to do that for you.

Q: You mentioned the financial years for the funders are different. How do you take this into account when producing total cash awarded?

A: Each funder’s figures are comparable. So the totals are also comparable.

Q: Do you have an estimate for the proportion of research grants an institution may receive which have yet to be incorporated into the tool?

A: No.

Q: Do you have any PGR benchmarking in the tool?

A: No.

Q: As the application is generic, is there, or might there ever be, a way of plugging our own spreadsheets into this?

A: Yes. Ideas are always welcome. There’s also no reason why you shouldn’t connect the application to your internal databases via the unique ID funders give to their awards.
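As a sketch of what that linkage could look like, suppose you export awards (including the funder’s award reference) to CSV and hold the same reference in your own records. The file and column names here are hypothetical:

    import csv

    # Hypothetical files and columns, for illustration only.
    # exported_awards.csv : award_ref, funder, value
    # internal_records.csv: award_ref, cost_centre, pi_name
    with open("internal_records.csv", newline="") as f:
        internal = {row["award_ref"]: row for row in csv.DictReader(f)}

    with open("exported_awards.csv", newline="") as f:
        for award in csv.DictReader(f):
            local = internal.get(award["award_ref"])
            if local:
                print(award["award_ref"], award["funder"], local["cost_centre"])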

Q: Could you add an option to change the divisor in the quality index formula to calculate a simple GPA? I.e. divide by the sum of the weights?

A: No.

Q: Please could you indicate who you have categorised as the main UK funders other than those displayed already?

A: The funders currently included are the research councils, the Wellcome Trust, the Leverhulme Trust and the DoH in England. More are on the way.

Q: If you use income from Regions, this will lead to a bias in the results (e.g. excluding Scottish-based NHS awards). Wouldn’t it be better to ensure that you have full UK coverage before including funders in your tables?

A: The English NHS funding is excluded from the default set of funders. But for those who want it, it can be switched on.

Q: We found on several occasions that university UoAs visible on screen were missed out when we exported tables to Excel. The missing HEI was random, so each time we spotted this we had to manually insert the data.

A: The existence of Joint and Multiple submissions means we have to treat Quality and Power slightly differently and this may cause some confusion. For example, in the tables for each UoA:

In Quality tables, all *submissions* are listed separately.

In Power tables, all *institutions* are listed separately.

(This is mentioned on the screens.)

So, if you are comparing a listing from a Power table with HEFCE RAE data, the rows may not match – due to multiple submissions.

And if you are comparing a listing from a Quality table with HEFCE RAE data, the rows may not match – due to joint submissions.

Q: Why are different rankings given to organisations that have the same score, e.g. in the Research Fortnight Quality Index?

A: The listings only show the result rounded to a certain number of decimal places. We use the additional places to separate institutions for the purposes of rankings.
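To illustrate with invented numbers: two institutions can display the same rounded score yet rank differently, because the ranking uses the unrounded value:

    # Ranking uses the full-precision score; the display rounds it.
    # Scores are invented.
    scores = {"HEI A": 54.642, "HEI B": 54.648}

    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    for rank, (hei, score) in enumerate(ranked, start=1):
        print(f"{rank}. {hei}: displayed={score:.1f}, full={score}")
    # Both display 54.6, but HEI B ranks above HEI A.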

Q: I assume the adjusted quality figure here is an error?
106 104 UHI Millennium Institute 0.02 69 29.3

A: No. It’s correct. But we don’t know what the explanation is.

Q: The background section highlights that only 94% of the Mainstream QR funding is allocated according to the results of the 2001 RAE. Can you clarify whether the 6% you’re referring to is allocated using the 2007 Research Activity Survey, as indicated in the first table on the HEFCE website at http://www.hefce.ac.uk/research/funding/QRfunding/? The full QR figure for Warwick in 2008 was £24,317,841, which is the amount that appears in the “2008 Allocation” column of the “Funding in England” tab in Figure 2. However, if you’re only using 94% of the allocation in the scenario modelling, then the change column will be inaccurate (as you’re comparing 100% in 2008 with only 94% for 2009).

A: The scenario works on 100 per cent of Mainstream QR, not 94 per cent. So the numbers are comparable.

TECHNICAL QUESTIONS

Q: When I click on the ‘grants.exe’ file, I get an error message reading “This application has failed to start because the application configuration is incorrect … re-installing the application may fix this problem”.

A: Errors of this nature tend to occur when the .zip file has not been completely extracted, and seem most likely to affect people using Windows XP Pro. To ensure that the file is correctly extracted, carefully take the following steps:

1. Click on the link to download the .zip file from this page, and when prompted choose the “Save” option rather than “Run” or “Open”.

2. Once the download is complete, right-click on the .zip folder and select either “Extract all” or “Extract to…” (picking a location for the latter) and follow the extraction wizard through the necessary steps until the directory is fully unzipped.

3. Open the new directory (either called “Grants” or containing a folder called “Grants”, depending on which option you selected in step 2) and enter the Grants directory.

4. Locate the file called either “Grants” or “Grants.exe” (the name will depend on your specific system configuration) and double-click it. The file’s icon is a blue and silver folder.

5. You should then be presented with a window requesting a username – please enter the username provided to you for your version of the Benchmarking application.

6. Another window will then request both a username and password, again, please enter the username and password provided to you for your version of the Benchmarking application.

7. You should then see a Research Fortnight logo that will, after 2 seconds, turn into the first screen of the Benchmarking tool. You’re in!

Please note: These steps will also resolve issues where the error message is along the following lines:

“Application has failed to start xerces.dll”

Q: How can I tell whether I’m using the pre-release version or the production release?

A: The first “splash” screen of the application, containing the Research Fortnight logo, will give the version number under the logo. As of Monday 16 February this is “v 1.12”.

Q. Can I save my Comparator Sets?

A. They are saved! Click through to the first Comparator Sets page. Then, from the drop down list, choose the option “User”.

Q. Wouldn’t it be better to have a progress bar for the slow operation of generating the Funding in England Scenarios?

A. You can have a progress bar to tell you how close you are to completion. Go to League Tables > Dynamic > Funding in England. Instead of clicking through to view the table, click the export button. This will generate an export of the same data – and show you a progress bar as it does so.

Q. Can you explain the Provisionally Adjusted Research Fortnight Quality Index?

A. In this RAE, data on staff not submitted has not been published. So users of ranking tables do not have any indication of how representative a result is of the underlying department as a whole. There are strongly differing views on how to deal with this. But we think the point sceptics have to take on board is that we have a big problem to deal with. No matter how many times we say “We’re comparing submissions”, most people (even in universities) hear “We’re comparing departments/universities”. So the raw data itself is also misleading, and it’s a question of the lesser of two evils.

It is possible to approximate the missing data using proxy data from the staff returns made to HESA, though the accuracy of any such approximation is open to doubt. See Luke Georghiou’s analysis. At the moment, we are waiting to receive more detailed data from HESA, which in turn is subject to institutions approving the release of the data for this purpose. Once we get that, we’ll be able to decide whether the results are good enough to base rankings on.

In the meantime, the Provisionally Adjusted Research Fortnight Quality Index does the job using the publicly available data. It is calculated as follows.

Research Fortnight Quality Index * FTE Staff submitted / HESA staff total for the university.

In this case, the HESA staff total for the university is calculated with reference to table 26b in the CD packaged with “Resources of HEIs 2006/7”. From this, we take the total staff numbers for all academic cost centres, with the exception of continuing education.
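In code, the adjustment is a single scaling step; the figures below are invented for illustration:

    # Provisionally Adjusted Research Fortnight Quality Index, as defined above.
    # All figures are invented.
    rf_quality_index = 55.0    # Research Fortnight Quality Index for the submission
    fte_submitted = 320.0      # FTE staff submitted to the RAE
    hesa_staff_total = 900.0   # HESA staff total (table 26b basis)

    adjusted = rf_quality_index * fte_submitted / hesa_staff_total
    print(f"Provisionally Adjusted Quality Index: {adjusted:.1f}")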

There are numerous problems with this. And we don’t consider it anywhere near reliable enough for rankings. But among consenting adults, it gives a first attempt at answering the question with the data that is currently available.

Q: Does it have a clean uninstall or if not can you describe how this application would be un-installed?

A: All the files created and used by the application are contained within the application folder. Hence, to uninstall the application, simply delete that folder.