How does your university honestly compare?
The major UK league tables use proprietary methodologies you can't verify. This ranking is built entirely from open government data: HESA graduate outcomes, NSS student satisfaction, and REF research quality. Every number links to its source. No paywall, no black box.
How do UK university rankings actually work?
UK university rankings — published by The Guardian, The Times/Sunday Times, and The Complete University Guide — are composite indices built from underlying metrics published by government agencies. The core data sources are HESA (Higher Education Statistics Agency), which publishes graduate employment and degree outcomes data under open licence; the National Student Survey (NSS), published by the Office for Students; and the Research Excellence Framework (REF), a comprehensive peer review of UK research quality conducted every 7-8 years. The ranking publishers combine these sources with different weightings, producing different league table positions for the same institutions.
The weightings used by the major UK ranking publishers differ substantially, which is why a university can rank 10th in one table and 25th in another. The Guardian weights student experience and graduate outcomes heavily. The Times weights research quality more prominently. The Complete University Guide includes entry tariff (the A-level grades of admitted students) as a significant component, which means research-intensive universities with high entry requirements score well even when their student-outcome metrics are weaker. Understanding which metrics drive each ranking is essential for interpreting the results: a university ranked 5th primarily because of high entry tariffs (competitive admissions) may not offer a better student experience than one ranked 20th with lower entry tariffs but higher NSS scores and graduate employment rates.
This page builds its ranking directly from the open government datasets — HESA Graduate Outcomes, NSS satisfaction data, and REF research quality scores — with all weightings published transparently. The underlying data is the same data that The Guardian, Times, and Complete University Guide use; the difference is that the weights and methodology are fully disclosed, allowing users to understand exactly why each institution scores as it does rather than accepting a black-box composite number.
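The effect of weighting choices can be sketched in a few lines of code. The institutions, metric values, and weighting schemes below are invented purely for illustration; the point is that the same open data can produce opposite league-table orders under different weights:

```python
# Two hypothetical weighting schemes applied to the same open metrics,
# showing how identical data can yield opposite league-table orders.
# All names and values are invented for illustration.

metrics = {
    "Uni X": {"satisfaction": 90, "employment": 80, "research": 50},
    "Uni Y": {"satisfaction": 72, "employment": 78, "research": 95},
}

schemes = {
    "student-focused":  {"satisfaction": 0.50, "employment": 0.35, "research": 0.15},
    "research-focused": {"satisfaction": 0.20, "employment": 0.30, "research": 0.50},
}

for name, weights in schemes.items():
    # Weighted sum of each institution's metrics under this scheme
    scores = {uni: sum(weights[m] * vals[m] for m in weights)
              for uni, vals in metrics.items()}
    order = sorted(scores, key=scores.get, reverse=True)
    print(f"{name}: {order}")
```

Under the student-focused weights, Uni X comes first; under the research-focused weights, the order flips, even though not a single data point changed.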
Russell Group rankings: are they what they seem?
The Russell Group is a self-selected group of 24 UK research-intensive universities that formed an association in 1994. Membership is a matter of institutional choice and peer agreement rather than independent assessment — universities apply to join, and existing members vote on admissions. The group includes the most research-intensive UK institutions (Oxford, Cambridge, Imperial, UCL, Edinburgh, Manchester, and others) but also includes some institutions that rank below non-Russell-Group universities on student satisfaction and graduate outcomes metrics. Russell Group status correlates with but does not directly measure teaching quality, student experience, or employment outcomes.
For prospective students, the relevant question is not "is it Russell Group?" but "how does this specific institution and course perform on the metrics I care about?" HESA Graduate Outcomes data shows significant variation within the Russell Group on graduate employment rates. NSS data shows significant variation in student satisfaction, with some non-Russell-Group universities consistently outperforming Russell Group institutions on teaching and academic support scores. The Russell Group's practical value lies in employer recognition (some graduate schemes explicitly prefer Russell Group graduates) and research environment (Russell Group institutions receive a disproportionate share of research funding), neither of which directly translates to a better undergraduate teaching experience.
Russell Group rankings do not exist formally — the group does not publish internal rankings of its members. When media refers to "Russell Group rankings," it typically means the position of Russell Group members within external league tables like the Times or Guardian. The positions of Russell Group universities relative to each other vary considerably depending on which metrics are emphasised. The FTN ranking on this page rates each institution on published open data rather than group membership, allowing comparison of Russell Group and non-Russell-Group institutions on equal metrics.
FTN ranking methodology (transparent weights)
| Metric | Source | Weight |
|---|---|---|
| Graduate professional employment (15 months) | HESA Graduate Outcomes | 25% |
| Student satisfaction (overall) | NSS | 20% |
| Teaching satisfaction | NSS | 15% |
| Research quality (4*/3* combined) | REF 2021 | 20% |
| Continuation rate | HESA | 10% |
| Degree completion rate | HESA | 10% |
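Given the weights in the table above, the composite score can be computed as below. The min-max normalisation step is one common choice for putting metrics on a shared 0-1 scale (z-scores are another), not a documented part of any publisher's method, and the institution data is placeholder values only:

```python
# Sketch of a transparent composite score using the published FTN weights.
# Normalisation choice and all metric values are illustrative assumptions.

WEIGHTS = {
    "employment":   0.25,  # HESA Graduate Outcomes, 15 months
    "nss_overall":  0.20,  # NSS overall satisfaction
    "nss_teaching": 0.15,  # NSS teaching satisfaction
    "ref_quality":  0.20,  # REF 2021, 4*/3* combined
    "continuation": 0.10,  # HESA continuation rate
    "completion":   0.10,  # HESA degree completion rate
}

def minmax_normalise(values):
    """Rescale one metric to 0-1 across all institutions."""
    lo, hi = min(values.values()), max(values.values())
    span = (hi - lo) or 1.0
    return {k: (v - lo) / span for k, v in values.items()}

def composite_scores(data):
    """data: {metric: {institution: raw_value}} -> {institution: 0-100}"""
    normed = {m: minmax_normalise(vals) for m, vals in data.items()}
    institutions = next(iter(data.values())).keys()
    return {
        inst: 100 * sum(WEIGHTS[m] * normed[m][inst] for m in WEIGHTS)
        for inst in institutions
    }

# Illustrative raw metrics (percentages) for three fictional providers
data = {
    "employment":   {"A": 82, "B": 74, "C": 78},
    "nss_overall":  {"A": 76, "B": 85, "C": 80},
    "nss_teaching": {"A": 78, "B": 86, "C": 81},
    "ref_quality":  {"A": 90, "B": 60, "C": 75},
    "continuation": {"A": 96, "B": 92, "C": 94},
    "completion":   {"A": 93, "B": 88, "C": 90},
}

for inst, score in sorted(composite_scores(data).items(),
                          key=lambda kv: -kv[1]):
    print(inst, round(score, 1))
```

Because every weight and every input is visible, anyone can rerun the calculation with their own weights and see exactly why the order changes.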
Frequently asked questions
Why build another UK university ranking from open data?
The three main UK university league tables (The Guardian, The Times, The Complete University Guide) all use proprietary methodologies with varying degrees of transparency. Their published rankings are copyrighted and cannot be freely reproduced. Meanwhile, the underlying government data is entirely open: HESA publishes under CC BY 4.0, NSS results are freely available, and REF results are public. By building a ranking directly from these open sources, we offer full transparency, no paywall, and customisability.
What is the Russell Group, and does membership matter?
The Russell Group is an association of 24 research-intensive UK universities, including Oxford, Cambridge, Imperial, UCL, Edinburgh, Manchester, and others. Membership signals a commitment to research excellence. Russell Group universities receive approximately 75% of UK university research funding. In practical terms, membership matters for research reputation and some employer recognition, though many non-Russell-Group universities outperform Russell Group members on student satisfaction.
Why do some Russell Group universities score lower on student satisfaction?
Several factors likely contribute. Research-intensive universities tend to prioritise research over undergraduate teaching. High-achieving students have higher expectations that are harder to meet. Russell Group universities also tend to be larger. And assessment and feedback, the consistently lowest-scoring NSS theme, may be weaker at institutions where academic incentives reward research output.
What is the Research Excellence Framework (REF)?
The REF is a national assessment of UK university research quality, conducted approximately every seven years (most recently 2021). Expert panels assess submitted research outputs, research environment, and impact case studies across 34 subject-based Units of Assessment. Results are reported as the proportion of research rated 4* (world-leading), 3* (internationally excellent), 2*, and 1*. REF outcomes determine how approximately £2 billion in annual quality-related (QR) research funding is distributed across universities.
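A REF star profile feeds two simple summary numbers: the 4*/3* combined share (the research metric used in this page's ranking) and a grade-point-average figure often quoted in press coverage. The sketch below uses an invented profile, not any institution's real results:

```python
# Two common summaries of a REF quality profile.
# The profile percentages here are made up for illustration.
profile = {"4*": 40, "3*": 45, "2*": 12, "1*": 3}  # % of submitted research

# Share rated world-leading or internationally excellent
# (the "4*/3* combined" metric in the table above)
combined_4_3 = profile["4*"] + profile["3*"]

# Grade point average: each star level weighted by its share
gpa = sum(int(star[0]) * pct for star, pct in profile.items()) / 100

print(combined_4_3, gpa)
```

The two summaries can rank institutions differently: a profile concentrated at 3* can match the 4*/3* share of a more polarised profile while having a lower GPA.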
What is the Graduate Outcomes survey?
The Graduate Outcomes survey is run by HESA and contacts graduates approximately 15 months after completing their degree. It asks about current employment status, job type, salary, and whether graduates feel their degree was worthwhile. Response rates vary by institution and subject. The survey replaced the older DLHE (Destination of Leavers from Higher Education) survey from 2018-19 onwards. HESA publishes headline figures for each provider, and this data underpins employability scores in most UK ranking systems.
Is this ranking better than the established league tables?
It is different, not definitively better or worse. The Guardian and Complete University Guide use additional proprietary metrics including spend per student, student-to-staff ratios, and their own weighting formulas. Our ranking uses only fully open data with transparent weights you can inspect. Neither approach is objectively superior. Our ranking is particularly useful if you value transparency and want to understand exactly which data points are driving a result. For subject-specific guidance, The Guardian and Complete University Guide subject rankings are worth consulting alongside this tool.
Does grade inflation distort university rankings?
Potentially, yes. First-class degrees have risen from around 16% of all UK degrees in 2010 to over 30% by 2023, and the proportion varies significantly by institution. If a ranking includes degree attainment as a metric, universities with higher first-class rates appear stronger on that dimension regardless of whether student learning has improved. Our ranking uses Graduate Outcomes employment data and NSS satisfaction scores rather than degree classification rates, which partially insulates it from grade inflation concerns.
Should I look at subject-level rankings instead of overall rankings?
For most students, yes. Subject-level rankings are more actionable because university departments vary enormously within the same institution. A university ranked 50th overall may be in the top 10 for your specific subject, or the reverse. The NSS publishes subject-level satisfaction data, and HESA publishes graduate outcomes by subject and institution. The Complete University Guide and The Guardian both publish extensive subject-level league tables built from this granular data, and these should be your first reference when comparing departments.
Which UK university is best?
The answer depends on what "best" means. For research output, Oxford and Cambridge consistently rank first and second in every UK ranking, with Imperial College London and UCL close behind. For student satisfaction (NSS scores), smaller specialist institutions and some post-92 universities regularly score above larger research-intensive ones. For graduate employment rates, some specialist institutions in healthcare, engineering, and business outperform general universities. The Times and Sunday Times University Guide has frequently placed Oxford first overall due to its performance across research, entry standards, and student outcomes. For subject-specific excellence, the picture changes: Manchester is considered a leading institution for business, Edinburgh for medicine, Bath for engineering, and so on. "Best university" is therefore a question that cannot be answered without knowing the subject, career aspiration, and personal priorities of the student. This calculator allows filtering by individual metrics so prospective students can identify the top performers on the criteria that matter most to their specific situation.
Do university rankings affect graduate employment prospects?
UK university rankings have a measurable but frequently overstated effect on graduate employment. Certain high-profile graduate schemes, particularly in investment banking, management consulting, and law, explicitly restrict applications to a list of target universities, typically 10-20 institutions dominated by Russell Group members and Oxford/Cambridge. Outside these specific employers and schemes, the evidence for a general ranking effect on employment outcomes is much weaker. HESA Graduate Outcomes data shows that graduate employment rates at 15 months post-graduation vary significantly within university tiers: some lower-ranked universities produce strong employment outcomes in specific regions or sectors, while some highly-ranked universities have lower overall employment rates due to a higher proportion of graduates pursuing further study. The most reliable predictors of graduate employment are the specific subject studied, the presence of a placement year, geographic flexibility, and the quality of the individual application, not the institution's position in an aggregate league table. Employer reliance on "university attended" as a selection factor appears to be declining, with many large graduate employers moving toward skills-based assessment as their primary filter.
- HESA Student Record and Graduate Outcomes data 2022-23. CC BY 4.0. hesa.ac.uk.
- National Student Survey (NSS) 2024. Office for Students. officeforstudents.org.uk.
- Research Excellence Framework (REF) 2021. ref.ac.uk.
This ranking is calculated by Find The Norm using publicly available data from HESA, the National Student Survey, and the Research Excellence Framework. It is not affiliated with or endorsed by HESA, OfS, any UK funding body, or any university. Different weightings produce different rankings. Use this ranking as one input among many when making university decisions.