Still, it might be interesting information to have access to as someone running the model. Normally we read the output and try to build an intuition for the patterns it produces when it's hallucinating versus when it's generating something that happens to align with reality. Surfacing this signal could help with that, even if it isn't always correlated with reality itself.
I've been hearing talk for years about a "web of trust" system that could filter spam simply by having users vouch for each other and filtering out anyone not vouched for. However, I haven't seen a functioning system based on this model yet.
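The vouching idea could be sketched roughly like this: treat vouches as edges in a graph and only show users reachable from people you already trust. This is just an illustrative sketch, not any real system; the function, data shapes, and depth limit are all my own assumptions.

```python
from collections import deque

def vouched_users(seeds, vouches, max_depth=3):
    """Hypothetical web-of-trust filter: return users reachable from your
    trusted seeds by following vouch edges, up to max_depth hops.

    seeds:   set of users you trust directly
    vouches: dict mapping a user to the set of users they vouch for
    """
    seen = set(seeds)
    queue = deque((user, 0) for user in seeds)
    while queue:
        user, depth = queue.popleft()
        if depth >= max_depth:
            continue  # vouch chains longer than max_depth don't count
        for other in vouches.get(user, ()):
            if other not in seen:
                seen.add(other)
                queue.append((other, depth + 1))
    return seen

# Illustrative data: a long vouch chain is cut off by the depth limit,
# which is one way to keep a compromised account from vouching in a farm.
vouches = {"me": {"alice"}, "alice": {"bob"}, "bob": {"spam_farm"}}
print(vouched_users({"me"}, vouches, max_depth=2))  # alice and bob, but not spam_farm
```

The depth limit is the interesting knob: anyone not reachable within a few hops of someone you trust simply never appears in your feed.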
Personally, I'd love to see something like the old Slashdot comment model, where people would mark content as "helpful", "funny", "insightful", "controversial", etc., and depending on how much you trust the people applying the labels, content could be filtered out or brought forward.
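One simple way to combine those labels with per-user trust is a weighted sum: each label contributes its weight scaled by how much you trust the labeler. This is a minimal sketch under my own assumptions; the label weights, trust values, and threshold are all illustrative, not Slashdot's actual scheme.

```python
def comment_score(labels, trust, weights):
    """Score a comment from Slashdot-style labels, weighted by trust.

    labels:  list of (user, tag) pairs applied to the comment
    trust:   dict mapping user -> trust in [0, 1] (unknown users count as 0)
    weights: dict mapping tag -> signed weight (negative tags push content down)
    """
    return sum(trust.get(user, 0.0) * weights.get(tag, 0.0)
               for user, tag in labels)

# Illustrative weights and trust values.
weights = {"insightful": 2.0, "funny": 1.0, "controversial": 0.5, "spam": -3.0}
trust = {"alice": 0.9, "bob": 0.4, "mallory": 0.0}
labels = [("alice", "insightful"), ("bob", "funny"), ("mallory", "spam")]

s = comment_score(labels, trust, weights)  # 0.9*2.0 + 0.4*1.0 + 0.0*-3.0 = 2.2
print("surface" if s > 0 else "filter", round(s, 2))
```

The nice property is that mallory's "spam" label is inert here because you don't trust mallory; the same label from alice would sink the comment.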