The authors are basically asking for the alignment problem to be well-defined and easy to model. I sympathize. Unfortunately, the alignment problem is famously difficult to conceptualize in its entirety. It's more like 20 different, difficult, counterintuitive subproblems, and it's the combined weight of all of them that makes up the risk. Of course the probabilities are all over the place. It'll remain tricky to model right up until we build a superintelligence, and if we don't get that right, it'll be way too late for government policy to help.