
12 papers retracted, 7 editor positions removed, and the "open secret" of Elsevier’s elite paper mill exposed.
On Christmas Eve, 9 “peer-reviewed” economics papers were quietly retracted by Elsevier, the world’s largest academic publisher.
This includes 7 papers in the International Review of Financial Analysis (a good journal—it has an 18% acceptance rate):
Plus two more retractions in Finance Research Letters (29% acceptance rate):
Two days later, three more papers were retracted at the International Review of Economics & Finance (30% acceptance rate):
Combined, these 12 papers have 5,104 citations.
All 12 papers had one thing in common: Brian M Lucey, Professor of International Finance and Commodities, Trinity College Dublin — the #1 ranked economics and business school in Ireland — as a co-author.
Lucey published 56 papers in 2025, one paper every 6.5 days. Lmao.
Lucey has published 44 papers in Finance Research Letters alone, an Elsevier journal he edited.
I emailed Lucey for comment, but he did not respond.
Brian Lucey… where have I heard that name before?
Oh yeah, he bullied me on Twitter in 2023.
‘If you wait by the river long enough, the bodies of your enemies will float by.’
— Sun Tzu
The stated reason for the retractions was that: “review of this submission was overseen, and the final decision was made, by the Editor Brian Lucey, despite his role as a co-author of the manuscript. This compromised the editorial process and breached the journal’s policies.”
In plain terms, Lucey was serving as editor while approving his own papers. The result was a complete bypass of peer review—an abuse of editorial authority that functioned as a citation-cartel scheme.
Apparently this was an open secret in the profession for many years, with EJMR comments going back 5+ years explicitly calling him out as a cheater:
Along with the 12 retractions, Lucey was removed as an editor at 5 journals: International Review of Financial Analysis, the International Review of Economics & Finance, Finance Research Letters, Financial Management, & Energy Finance.
Lucey remains as editor-in-chief at Wiley’s Journal of Economic Surveys.
I emailed Wiley, and they provided me with this statement:
We are aware of these concerns and have investigated Prof. Lucey’s activity on Journal of Economic Surveys. Our research integrity team did not find any concerns regarding conflict of interest or mishandling of papers, nor has Prof. Lucey published any papers in the journal since he joined the editorial team as a co-editor in 2024. We expect full commitment and adherence to our editorial practices and standards, and we will be monitoring the situation to ensure that there is no improper handling of papers at the journal.
In response to Wiley’s statement, one EJMR user wrote: “I am baffled how they could possibly still have confidence in him, given his serious and systematic ethical lapses in editorial positions. Sounds somewhat naive to expect ‘full adherence to our editorial practices and standards’!”
Until being purged from the leadership of these 5 journals, Lucey played a central role in coordinating Elsevier’s Finance Journals Ecosystem, which allows “participating journals to suggest transferring a rejected manuscript to another journal in the system without the need for resubmission and the associated cost.”
That system, and the editors involved, “came under fire last year when a preprint suggested it might facilitate citation stacking as a way to boost journal impact factors. The analysis in the preprint also suggested a citation ring involving Elsevier editors could be at work.”
I emailed the anonymous “Theophilos Nomos” who wrote this paper, but they did not respond.
That pre-print names Samuel Vigne, a finance professor at Luiss Business School, former PhD student of Lucey, and prolific Lucey co-author (they have published at least 33 papers together) as a core node of Lucey’s citation cartel.
Multiple publications by Vigne and Lucey are flagged on PubPeer.
This example neatly illustrates how their co-authorship trading scheme operated:
It describes a draft uploaded to SSRN with three authors:
After that draft was submitted to the Elsevier finance ecosystem, it was scrubbed from SSRN, and in the final published version an additional author (Samuel Vigne) appeared, with an “equal contribution” statement. The two versions are otherwise identical, containing the same figures, sections, and text.
Co-authorship trading is only one part of the operation. The other is citation stacking. In this model, a small, tightly linked group funnels an enormous volume of papers into the same handful of journals, then systematically stuffs those papers with citations to one another. The result is a rapid, artificial explosion in citation counts that makes them look like influential geniuses.
Take John Goodell, a professor at the University of Akron and a Lucey co-author. Goodell has published 68 papers in Finance Research Letters alone, a journal edited by Lucey. If each paper contains even a modest 50 references, that amounts to roughly 3,400 references recycled through a single outlet. In 2024 alone, Goodell published 61 papers. He’s not doing research. He’s farming citations.
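The citation-farming arithmetic above is easy to check. Here is a minimal back-of-envelope sketch; the 50-references-per-paper figure is the rough assumption stated above, not measured data:

```python
# Back-of-envelope check of the citation-recycling arithmetic above.
# The 50-references-per-paper figure is an assumption, not measured data.
papers_in_frl = 68        # papers in Finance Research Letters
refs_per_paper = 50       # assumed "modest" reference list length
recycled = papers_in_frl * refs_per_paper
print(recycled)           # 3400 references funneled through one outlet

papers_2024 = 61          # papers published in 2024 alone
print(round(365 / papers_2024, 1))  # roughly one paper every 6.0 days
```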
Following Lucey’s retractions, Samuel Vigne was removed as the editor-in-chief of International Review of Financial Analysis and Finance Research Letters.
In addition to that anonymous pre-print, there is also a 2025 paper, written by actual professors using sophisticated econometric analysis and graph theory, which describes the citation cartel in much more detail. Its conclusion: “Elsevier ecosystem journals benefited from the creation of the ecosystem … Elsevier journals in the ecosystem have overlapping editors and Elsevier appoints these editors in coordination with a single academic [Brian Lucey] that manages the fleet of ecosystem journals.”
Brian Lucey posted a reply to this paper. It is extremely weak: it contains no tables or figures, mostly ignores the data and structural model of the citation ring, and instead leans on Lucey’s “lived experience” as an editor (“we have experience shepherding…”), while nitpicking semantics and phrasing, such as complaining that the authors called him a “professor of finance” instead of his full honorific, “professor of international finance and commodities.”
The Elsevier ecosystem web page went live on 4 November 2020, according to Lucey’s rebuttal. Below is a visualization of the network before and after this transition date, which shows a clear distortion of the citation network. During 2021–2025, citations per article within the Ecosystem are 103% higher.
2016-2020: (Before)
2021-2025: (After)
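For readers who want to verify a figure like the 103% uplift: citations per article is just total citations divided by article count, and the uplift is the ratio of the after and before values. The counts below are invented placeholders for illustration, not the preprint’s actual data:

```python
# Illustration of how a "citations per article" uplift is computed.
# The counts below are invented placeholders, not the preprint's data.
def citations_per_article(total_citations, n_articles):
    return total_citations / n_articles

before = citations_per_article(12_000, 1_000)   # 2016-2020 window
after = citations_per_article(24_360, 1_000)    # 2021-2025 window
uplift_pct = (after / before - 1) * 100
print(f"{uplift_pct:.0f}%")  # 103%
```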
2020 is also the year where Brian Lucey’s citation profile exhibits an exponential “J-curve,” a hallmark of citation rings. Did he suddenly become a well-respected genius in 2020? Or did he figure out how to cheat the system?
In a comment to Retraction Watch, Lucey further argued that citation cartels are not a crime, because everyone does it.
“Because here’s the thing: Elsevier are aware of [editors publishing in their own journals] as a pretty common practice in finance and economics. We’ve given them evidence of hundreds of instances of this. And nothing has happened, which does raise the question, you know, maybe they’re going to go back and go look at all these. Presumably, they will treat everything the same.” Lucey shared his list of such instances. It includes 240 articles, 133 of which are in Science of the Total Environment, which was delisted from Clarivate’s Web of Science in November.
(N.B.: As several commenters have noted, the list linked above includes citations to editorials and special issue introductions, which are typically penned by editors-in-chief. The disclaimer at the top of the document Lucey provided reads, “In no way is this meant to suggest any ethical or other breaches. It is a list of persons who occupied a EiC or similar role in the Journal mentioned at the same time as a paper in which they were an author or coautho[r].”)
Dr. Thorsten Beck, in a blog post, confirmed that no, not everyone does it, and yes, it is a crime.
This incident raises an important question: is this common practice across academic journals? And are there rules for editors publishing in ‘their’ journals? As I was editor across three journals for a total of 11 years, I can certainly speak to this (and clearly say NO).
…
I don’t have formal confirmation but I have been told by several independent sources that ultimately even Elsevier realised that this editor was seriously damaging the reputation of the journal, appointing a second editor and then easing out the ‘doubtful’ editor from his responsibilities.
The fallout from the Lucey–Vigne era extends far beyond a handful of retracted PDFs. What it exposes is a structural weakness in how academic “excellence” is manufactured, measured, and monetized. By presiding over a coordinated cluster of journals, a small group of editors effectively gained the ability to print their own academic currency.
However, blaming Lucey and Vigne alone ignores the hand that fed them. Elsevier did not just “allow” this to happen; they engineered the environment for it to flourish, because of incentives: Elsevier’s internal metrics (Impact Factors) directly benefitted from this behavior. It was a symbiotic corruption: the editors received a fast-track to academic stardom, and Elsevier received a high-margin, high-volume production line of citable content.
This is the “paper mill” reimagined for the elite: not a basement operation in a third-world nation, but a polished, corporate-mandated factory within the halls of the world’s most powerful publisher. This is the natural result of a corporate mandate to maximize profits by bundling journals into monopoly-priced packages, forcing universities to pay for the very “prestige” that Elsevier’s own staff helped to dilute. As one EJMR commenter noted, “The tragedy isn’t that they cheated; it’s that the system was designed to let them thrive for a decade before anyone bothered to look at the data.”
The question now is whether Trinity College Dublin will fire Lucey.
They did not respond to my inquiry.
An editor of a psychology journal was offered $1,500 per accepted paper.
Richard Tol, a professor of economics at the University of Sussex, wrote that he was offered $5,000 per paper.
Muhammad Ali Nasir, a professor of Macroeconomics at Leeds University, wrote about how common selling papers is in European finance journals: “I had been made such offers from anonymous emails but I choose not to engage and in one case forwarded the email to EiC. I will be surprised if any editor is not approached by these people.”
This raises a multi-million-euro question: given their documented corruption, are the various “educational consultancies” and special-purpose vehicles operated by Brian Lucey and Samuel Vigne used to circulate ecosystem funds, conference fees, or “consultancy” payouts from authors seeking a shortcut to publication?
One anonymous economist says:
Here is a hypothetical outline of how such a cash-flow scheme could function.
“Hello [unknown, distant institutions], we offer consulting services: €€€ for excellent advice on how to publish in top-tier finance journals. Our advice yields results.”
Money flows into companies.
Papers flow into journals.
Another anonymous economist says:
I’m not going to provide details on how to corruptly have a paper published. I’m just going to speculate on what could be going on in a situation like this. It could be based on “consultancy fees” for advice on publishing that you or your institution pay to one of those companies. They give some advice, including what papers to cite, etc, and if you follow their advice you are likely to be published in one of their journals. This could be attractive for researchers and institutions in, e.g., China and the Middle East.
Another anonymous economics professor I spoke to told me:
Universities in East and West Asia pay cash bonuses for publications. Some authors hire a broker (many advertise openly on Facebook), other authors contact the editor directly. The cash bonus is shared between the author, broker, and editor.
Besides selling papers, they also sell special issues, which allow the guest editors to do what they want.
And they sell positions on the editorial board, which are important for promotion to the next academic rank.
Some payments are in cash, others in kind.
Finally, they organize conferences. Registration fees more than cover the costs of putting on a conference. The conference name suggests it is organized by a society, but it really is Lucey who pockets the profits.
Brian Lucey and Samuel Vigne operate four private companies in Ireland and the UK classified under “other education,” likely functioning as consultancies or special-purpose vehicles for academic or policy work.
The existence of these consultancies warrants investigation into potential conflicts of interest and financial misconduct.
It doesn't surprise me it happens within the Elsevier ecosystem. Elsevier has a long tradition of scientific misconduct and scientifically immoral behavior (see Wikipedia).
The operating margin of Elsevier is around 40%, which is huge! In the end it is mostly paid with taxpayer money.
Personally, I never review or publish with Elsevier.
You are in very, very good company. The British mathematician Timothy Gowers famously boycotts Elsevier as well.
https://gowers.wordpress.com/2012/01/21/elsevier-my-part-in-...
Huge numbers of academics have signed up to the Elsevier boycott, see http://thecostofknowledge.com/
I am skeptical it is a problem isolated to Elsevier. Given the LLM craze now prioritizes open access, https://andrewpwheeler.com/2025/08/28/deep-research-and-open..., it would not surprise me people start gaming MDPI in the same way for example.
MDPI is gamed by design. I think that while Elsevier is awful, MDPI is even worse, with 100s of special issues where you are guaranteed to land a publication in journals with quite a nice IF (which is inflated by publishing a large proportion of reviews and less original research).
I wonder if the term "published" as a binary distinction applied to a piece of writing is a term and concept that is reaching the end of its useful life.
"Peer reviewed" as a binary concept might be as well, given that incentives have aligned to greatly reduce its filtering power.
They might both be examples of metrics that became useless as a result of incentives getting attached to them.
Both metrics are supposedly binary but in reality have always depended heavily on surrounding context. Archival journals have existed all along. Publication is useful as an immutable entry in the public record made via a third party. Blog posts have a tendency to disappear over time.
I'm certain that the comment you responded to never claimed that it was "isolated to Elsevier" in the first place, nor is it very compelling to speculate about how in the future something even worse might emerge.
Right now Elsevier is by far the biggest offender and also happens to be the topic of the conversation and the article.
Exactly. Elsevier is a dominant company. Of course it's going to have a huge share of anything that goes into journals. They probably also have a huge share of the Nobel prize winning papers too.
That being said, I'm happy to encourage open access.
This is one of the reasons why, in Germany, universities were able to collectively negotiate better open publishing deals with Wiley and Springer, while Elsevier flat out refused to agree to any better terms for three years.
(See Project DEAL: https://deal-konsortium.de/en/agreements/elsevier)
Happened in other countries as well, see e.g. https://www.timeshighereducation.com/news/elsevier-boycott-l...
I’m not sure why I’ve never really concerned myself with Elsevier, but that makes a lot of sense, knowing a rather vile and slimy con artist snake that works/ed for them.
[dead]
I've heard of Chris but not too well. This guy does not f*c$ around, don't get on his bad side.
The state of research is dire at the moment. The whole ecosystem is cooked. Reproducibility is non-existent. This obvious cartel is a symptom and there should be exemplary punishment.
Publishers are commercially incentivized to simply maximize profit and engagement. The main actors are academics and most of them try to uphold the high standards and ethics. Yes there is free-riding, backstabbing and a lot of politics but there is also reputation and honesty.
A few academics give academia a bad name, at the worst possible time, just when society needs honest, reliable, reproducible and targeted research the most.
About Chris, this 3.5 years old post made me wonder what he's all about. https://www.chrisbrunet.com/p/this-princeton-economics-profe...
Liking free speech, disliking affirmative action, being critical of those he disagrees with but also giving them a chance to respond.
edit: is what he seems to be about based on the linked article
Huh? The linked article is nothing more than "this guy is black, so therefore helping any underprivileged black people gain university admissions is bad"
It's outrageous racism. A conclusion about all minorities based on one person's math mistake, where the logic is entirely based on shared skin color.
If you replace the races and make it a conclusion about legacy admissions or something, it's obviously stupid and illogical, right?
"This white guy doesn't know Afghanistan from Kazakhstan. More proof legacy admissions is bad!"
It's just this but with race this time.
Much has changed since this was published in 2008
Are we reading the same article? The focus is on a white woman.
There's a bunch of needlessly inflammatory bullshit in that article. "Innumerate woke Bolshevik" and making fun of someone because he thinks she looks like a Harry Potter character. This guy seems like nothing more than a high school bully. E-mailing someone asking them to respond is nothing more than a fig leaf.
And he seems so nuanced ...
> Anyone who signed that petition is not only my personal enemy, but the enemy of free speech, the enemy of the spirit of the academy, and the enemy of western civilization.
[flagged]
[flagged]
[flagged]
[dead]
[dead]
All of academic publishing has fallen victim to Goodhart's law.
Our metrics for judging the quality of academic information are also the metrics for deciding the success of an academic's career. They are destined to be gamed.
We either need to turn peer review into an adversarial system where the reviewer has explicit incentives to find flaws and can advance their career by doing it well, or else we need totally different metrics for judging publications (which will probably need to evolve continuously).
We assume far too much good faith in this space.
I have no doubt that there are honest academics who publish research which actually contributes to humanity's corpus of knowledge. Whether that is some new insight into the past, observations on nature and man's interaction with it, clever chemical advances, or medical innovations which benefit mankind. People who publish works which will be looked upon as seminal and foundational in a decade or two, but also works which just focus on some particular detail and which will be of use to many researchers in the future.
But I can't shake the impression that a lot, perhaps the vast majority, of science consists of academics (postdocs and untenured researchers in particular I suppose) stuck in the publish-or-perish cycle. Pushing pointless papers where some trivial hypothesis is tested and which no one will ever use or read — except perhaps to cite for one reason or another, but rarely because it makes academic sense. Now with added slop, because why wouldn't you if the work itself is already as good as pointless?
The system, as you say, is fucked.
Most scientists want to do good science. They get intrinsic meaning and satisfaction in doing so. But with any large group of people there will be a few bad faith actors that will manipulate any exploit in the system for their own personal benefit. The problem here is that 'the system' of academic appointments, and even more importantly, funding sources, are built around this publishing metric. This forces even the good faith scientists to behave poorly because it was a requisite to even being able to exist as a working researcher.
0. I think your perspective is really detached from the actual scientific enterprise. I think this kind of take exists when there are cultural clashes combined with a strong focus in the media and online with the mistakes and issues in science, not its successes.
Science is actually progressing at an amazing rate in recent years. We are curing diseases and understanding more about life and the universe faster than ever.
Just briefly skim some top journals right now:
Here's an amazing 'universal vaccine' for respiratory viruses in mice https://www.science.org/doi/10.1126/science.aea1260
here are brand new genome editors in human cells https://www.science.org/doi/10.1126/science.adz1884
Here's amazing evidence of an ancient lake on Mars https://www.science.org/doi/10.1126/science.adu8264
Here's a meta-analysis of 62 (!) different studies on GLP1 receptor agonists to figure out whether they can contribute to pancreatitis https://onlinelibrary.wiley.com/doi/full/10.1002/edm2.70113
(covered here https://www.nature.com/articles/d41586-026-00552-6)
Here's identification of a new mechanism of resistance in Malaria https://www.nature.com/articles/s41586-026-10110-9
Here's curing a genetic disorder using gene editing in mice https://www.nature.com/articles/s41586-026-10113-6
Here's a study that has figured out that as CO2 levels rise, there's less nitrogen in forests https://www.nature.com/articles/s41586-025-10039-5
and here's personalized mRNA vaccines curing people of breast cancer https://www.nature.com/articles/s41586-025-10004-2
Like all of these are just from the past month or two and are pretty astounding advances. And they are just a subset of all of the scientific advances recently. All of them have contributors in academia (and science performed outside of academia would not exist without academia, as it depends upon it for most of the conceptual advances as well of course as for scientist training).
1. Stuff like paper mills and complete fraudsters exist, but for the most part, these things are the exception, not the rule. Your average scientist doesn't even hear or think about these things and the weirdos who cause them, to be honest. Nobody has ever heard of "International Review of Financial Analysis" outside of an extremely niche economics subfield.
2. "Publish or perish" is not a cycle, really. While I believe it's not good for people to be constantly working under pressure, the fact that academia is so competitive currently is a healthy sign. It's because we have so many people with extremely impressive resumes and backgrounds, doing extremely impressive work, that funding is so competitive. And when funding is competitive, it's no wonder that funders prefer to fund people who have produced something and told the world about it ("publish").
3. Fraudsters and hucksters have been in science forever. Go read an account of science in the early 19th century. There are tons and tons of stories of crazy scientists who believed ridiculous things, scientists who kept pushing wrong dogma, and so on. And yet nobody knows about them today, because the evolutionary process of science works: the truths that are empirically verifiable win out, and, given enough time, the failures are selected against.
Fantastic effort post and the necessary dose of fresh air to balance out hedonic skepticism.
The collapse in faith in institutions, in various ways and for different reasons, has created a vibe that gives any criticism of any institution a whiff of plausibility, and these days that's all some people need to treat it as settled fact. That is basically what I think the poisoned, anti-intellectual attitude of hedonic skepticism is all about.
The pace of technological advance over the past 5-10 years is staggering in so many ways. If our era weren't known for collapse of democracies and conflict, it could have been heralded as a major historical moment of technological advance on a number of levels.
> Like all of these are just from the past month or two and are pretty astounding advances
Assuming their claims are true, which is a big assumption.
But the point of all these investigations is that they might not be true and universities/journals wouldn't care if they weren't.
Elsevier is certainly evil, but I would say the root issue is the practices of the institutions where these "authors" are employed. This kind of thing is academic misconduct and should result in them losing their jobs.
This goes deeper than the institutions, actually. The KPI for many (non-industrial) researchers is the number of publications and citations. That's what careers and funding depends on.
Goodhart's law states "When a measure becomes a target, it ceases to be a good measure", and that's what we see here. There is a strong incentive to publish more instead of better. Ideas are spread into multiple papers, people push to be listed as authors, citations are fought for, and some become dishonest and start with citation cartels, "hidden" citations in papers (printed small in white-on-white, meaning it's indexed by citation crawlers but not visible to reviewers) and so forth.
This also destroys the peer review system upon which many venues depend. Peer reviews were never meant to catch cheaters. The huge number of low-to-medium quality papers in some fields (ML, CV) overworks reviewers, leading to things like CVPR forcing authors to be reviewers or face desk rejection. AI-written papers and AI reviews of dubious quality add even more strain.
Ultimately the only true fix for this is to remove the incentives. Funding and careers should no longer depend on the sheer number of papers and citations. The issue is that we have not really found anything better yet.
As for an alternative, how about using the social fabric of researchers and institutes instead? A few centuries of science ran on it before somebody had the great idea to introduce "objective" metrics which made things worse. Reintroducing that today would probably cause a larger spread in the quality of research, which is good: research is kind of a "hit-driven industry" - higher highs are the most important thing. The best researchers will do the best research, probably better without carrot and stick than with.
> As for an alternative, how about using the social fabric of researchers and institutes instead? A few centuries of science ran on it before somebody had the great idea to introduce "objective" metrics which made things worse.
Oh boy, you seem to be missing the forest for the trees. When science was a hobby of the rich, there was no need to measure output. Only when "scientist" became a career and these scientists started demanding government funding (which only really crystallized in the 20th century) did we start needing a way to measure output.
You could try doing away with an objective measure of academic output and replace it with the "social fabric of researchers and institutes" (whatever the fuck that means) instead, but then all you'd have is a good ol' boys club funded by taxpayer money.
If the metric is publication and citation count and funding is awarded by panels of experts, how is that better than cutting out the flawed metric and continuing to award funding via panels of experts? Either it's a good ol' boys club or it isn't but I don't think a horribly flawed metric is going to change that.
That said, as far as I'm aware those metrics aren't explicitly considered by said panels (NIH for example). Any issue in that regard is presumably due to either unconscious bias or laziness on the part of said experts when exposed to such metrics.
> If the metric is publication and citation count and funding is awarded by panels of experts, how is that better than cutting out the flawed metric and continuing to award funding via panels of experts?
I agree it's not perfect but that's still several steps removed from "Billy is one of us, he should get that tenured position" and, as this article shows, it requires openly unethical behavior, which others can recognize and eventually prosecute (even if that isn't being done often enough).
It's almost like saying "well corruption happens anyways so why do we even criminalize it and have public hearings? Just skip those bits and openly auction votes instead".
I interpreted "social fabric" to mean "panel of relevant professionals" which is what we currently have but perhaps you interpreted it differently?
I think most interviews can essentially be described as "Billy is one of us, he should get position X" if one is feeling cynical.
> I interpreted "social fabric" to mean "panel of relevant professionals" which is what we currently have but perhaps you interpreted it differently?
That would be one way to implement his suggestion of getting rid of objective (if flawed) measures of a researcher's performance.
I agree that, inevitably, there will be a subjective human decision in there, but I argue that dropping all objective measures of performance and going just on vibes is kicking the door wide open for corruption, while it's merely cracked right now. And the exploit mentioned in this article is a very public and explicit one, which is why other researchers were so aware of it and it eventually caught up to him. If it all gets moved behind closed doors instead, it will be even harder to detect and prosecute this sort of behavior.
I was thinking of basically "This person is good, we should hire them. Their results are going to improve our institute's reputation."
What's the guarantee that folks won't abuse this system in the same way they do the citation system? The recommendation letter system is often abused for the pettiest of reasons...
There is no guarantee. The current system is also not a guarantee for good results, though.
This will be a hard argument to make.
The decision makers who are the target audience for these metrics value "objective" data. They value the appearance of being quantitative, but lack the intellectual tools to distinguish between quantitative science and pseudoscience with numbers bolted on.
That's modern bureaucracy in a nutshell.
A few centuries of science by white males. While I agree that the system with "objective metrics" has a lot of problems, just removing it would bring us back to the old days when almost all science was done by a few privileged white men.
Almost all science was done by "a few privileged white men" because Europe and the Americas were the only places that had modernized with large central states, university systems, and educational systems. Even in that scenario, before the "objective metrics" of the post-war system came about, we still had people like Madame Curie and Ramanujan being able to work with stellar results. The idea that somehow academia would stonewall all of the non-whites is absurd.
It’s really hard to come up with better examples of the exceptions that prove the rule than Marie Curie and Ramanujan. How many more names can you come up with?
I’d even argue that still today women and minorities are strongly disadvantaged at many institutions. I’d say that as a white male that recently left academia myself. I have seen how some of my colleagues have been treated.
What you describe is still a problem with the institutions, because it is ultimately the institutions that provide the incentives (in the form of jobs). You're right that they're using bad metrics, but it is the institutions who are making those bad decisions based on the bad metrics.
There are lots of better things, like people making hiring and firing decisions based on their evaluation of the content of papers they have actually read, instead of just a number. If someone is publishing so many papers that a hiring committee can't even read a meaningful fraction of them, that should be a red flag in itself, rather than a green one.
It's true that hire and tenure decisions are under the institution's control. But a lot of funding comes from external sources, and most public funding uses some sort of publication-based metric. There are exceptions, but that's the game. The CVs of your PhDs are often judged by their publication lists and the corresponding citations. The research institutes where they might go, other universities, large companies, etc. will look at this. It's difficult to change this system as an isolated player, and coordinated efforts have so far failed on the "what else" question.
A problem with the public sector in this instance is it has money to spend, but no way of allocating it particularly well.
It will just pick the best allocation metric it has available, even if that metric would never stand up to scrutiny in the private sector, or any more directly measured domain, public or private.
I think the state could simply allocate money to long-lived scientific institutions and let the experts there handle things as long as there is no obvious corruption.
Self-regulation has a tendency to either work well for a few years, then gradually become corrupted... or be corrupt from the beginning.
A distressingly high percentage of humans like zero-sum status games. More people are happier when status is recognized as a semi-unbounded positive-sum game.
What does "let the experts handle things" look like in practice that's much different to fakeable impact metrics such as citation count?
It doesn't foster bad publication practices (volume over quality).
To dig even deeper into the problem: you have to get a large number of institutions to agree to stop this at once; none will voluntarily risk their (generally) working pipeline and system first. It disrupts a lot of different things and takes them out of the currently established model that everyone still uses to measure success. It reminds me of how most people who say "well, not everyone should go to college!" are obviously omitting "…except for my kids of course." It borders on an expressive response; it's not something anyone wants to actually take action on.
There’s not a whole lot to gain for the individual or even the institution unless they hit an absolute home run on the first try that also shows positive results very quickly. More than likely the decision will be questioned at every turn
And incorrect assumptions. As I understand it, "I did a study on this and it turns out there's no connection" generally results in the study not being published (if the study was testing for the validity of the connection)... which is sad, because that's still useful information to have.
Those are separate issues.
Publishing uninteresting science for the record is different from an incentive to go against the crowd to refute incorrect claims.
Both would be good especially these days.
Exactly. There should be much greater incentives to (in)validate prior publications. That is what science is about.
There's imperfect ways to work with goodhartable metrics. https://www.lesswrong.com/posts/fuSaKr6t6Zuh6GKaQ/when-is-go... talks about some of them (in the context of when they go bad).
Evil Seer would be a good anagram if only Elsevier did any of the actual [re]viewing themselves
The needle is beginning to move on this I believe: https://www.nature.com/articles/d41586-026-00321-5