Open, rigorous evaluation of quantitative research — published with DOIs, not locked behind journal decisions.
The Unjournal publishes expert evaluations of quantitative research informing global priorities. Each evaluation includes a structured written assessment and numerical ratings across seven criteria, with evaluator-provided confidence intervals. All evaluations are open-access and assigned DOIs via PubPub and Crossref.
We work with researchers whose work bears on important global challenges — in global health and development, animal welfare, catastrophic risks, environmental policy, and related areas. Evaluators are field specialists selected by our management team and our field specialist network.
We are not a journal. Authors do not "submit to" The Unjournal for acceptance or rejection. We evaluate research that already exists — working papers, preprints, and published papers — and publish those evaluations openly alongside the work.
Authors are notified when their work is selected for evaluation and receive the evaluations before publication, with the opportunity to respond. Evaluator identity may be kept confidential at the evaluator's discretion.
Research enters our evaluation process through one of two routes: direct selection by our team, or submission and suggestion by authors and other researchers. Both routes use the same evaluation standards, the same publication format, and the same author engagement process.
Each evaluator rates the paper on seven criteria using a 0–100 scale. For each rating, evaluators also provide a lower and upper confidence bound, representing uncertainty in their assessment. The criteria are:
| Criterion | Description |
|---|---|
| Overall assessment | Holistic quality and contribution of the research |
| Advancing knowledge | Contribution to understanding and practice in the field |
| Methods | Justification, reasonableness, and validity of methods used |
| Logic and communication | Clarity, rigor, and accessibility of the argument |
| Real-world relevance | Applicability to policy, practice, or real decisions |
| Global priorities relevance | Relevance to causes of high importance to humanity |
| Open science | Data availability, pre-registration, reproducibility |
In addition to the 0–100 ratings, evaluators provide two further assessments: a predicted journal tier (ordinal ranking of the venue where this paper would likely be published), and a judgment of whether the paper merits journal publication (yes / borderline / no).
All numerical ratings are accompanied by evaluator-specified lower and upper confidence bounds, making uncertainty explicit rather than hidden.
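As an illustration of the rating structure described above — this is a hypothetical sketch, not an official Unjournal schema — a single evaluator's score on one criterion, with its confidence bounds, might be represented like this:

```python
from dataclasses import dataclass

@dataclass
class Rating:
    """One criterion rating on the 0-100 scale, with confidence bounds.

    Field names here are illustrative assumptions, not an official schema.
    """
    criterion: str
    midpoint: float  # evaluator's best estimate, 0-100
    lower: float     # evaluator's lower confidence bound
    upper: float     # evaluator's upper confidence bound

    def __post_init__(self):
        # Enforce the ordering the rating scale implies.
        if not (0 <= self.lower <= self.midpoint <= self.upper <= 100):
            raise ValueError(f"bounds out of order for {self.criterion!r}")

overall = Rating("Overall assessment", midpoint=78, lower=65, upper=88)
print(overall.upper - overall.lower)  # width of the stated interval: 23
```

A full evaluation would then be a collection of seven such ratings (one per criterion), plus the journal-tier prediction and the publication-merit judgment.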
We prioritize quantitative research in areas where rigorous evidence most directly bears on reducing suffering and improving outcomes at scale.
- **Global health and development:** RCTs, quasi-experimental studies, and systematic reviews of health interventions, poverty alleviation, education, and public policy in low- and middle-income contexts.
- **Animal welfare:** Research on consumer behavior, corporate campaigns, welfare science, and policy affecting farm and wild animals.
- **Catastrophic risks:** Research on biosecurity, nuclear risk, AI safety and alignment, and governance of emerging technologies.
- **Environment:** Climate policy, natural capital, biodiversity, and the economics of environmental interventions.
- **Institutions and governance:** Political economy, behavioral public policy, development institutions, and evidence-based governance.
- **Behavioral and social science:** Experimental and applied research on behavior change, attitudes, persuasion, and decision-making relevant to global priorities.
- **Metascience:** Research evaluation reform, open science, replication, and the sociology and economics of research production.
- **Cause prioritization:** Theoretical and empirical work on how to prioritize interventions across cause areas, including cross-cause comparisons.
All evaluations are published open-access with DOIs. You can browse them on our PubPub community or explore aggregated ratings and trends using our interactive dashboard.
The interactive dashboard shows ratings distributions across papers, trends over time, and breakdowns by research area and evaluator. Data are updated as new evaluations are published.
Authors may submit their own working papers or preprints for consideration. Researchers may also recommend others' work. We review all submissions for fit with our research priorities.
Typical turnaround from submission acceptance to evaluation publication varies based on evaluator availability and research complexity.
All evaluations are open-access, citable with DOIs, and published alongside the research — not hidden behind editorial decisions.