Huh. I'm a professional data scientist, and my master's was in signal processing. In one class the final exam required us to transcribe Fourier transforms of speech into the actual words; in another, the final exam required us to perform 2D FFTs in our heads.
Please be careful about generalizing.
I agree that many 'data science' programs don't teach these skills, and you certainly have evidence behind your assertion.
Wait, did you see the part where the person you are replying to said that writing the code themselves was essential to correctly solving the problem?
Because they didn't understand the architecture or the domain models otherwise.
Perhaps in your case you do have strong hands-on experience with the domain models, which may indeed have shifted your job requirements to supervising those implementing the actual models.
I do wonder, however, how much of your actual job also entails ensuring that whoever is doing the implementation is also growing in their understanding of the domain models. Are you developing the people under you? Is that part of your job?
If it is an AI that is reporting to you, how are you doing this? Are you writing "skills" files? How are you verifying that it is following them? How are you verifying that it understands them the same way that you intended it to?
Funny story-- I asked an LLM to review a call transcript to see if the caller was an existing customer. The LLM said True. It was only when I looked closer that I saw that the LLM meant "True-- the caller is an existing customer of one of our competitors." Not at all what I meant.
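One way I could have guarded against that ambiguity (a minimal sketch, assuming the model is told to return JSON; the label names and field names here are hypothetical, not from any real API) is to force an explicit enum instead of a bare boolean, plus a supporting quote, and validate the reply before trusting it:

```python
import json

# Explicit labels instead of True/False, so "customer of a competitor"
# can't silently collapse into "True". (Hypothetical label set.)
ALLOWED = {
    "existing_customer_of_ours",
    "customer_of_competitor",
    "not_a_customer",
    "unclear",
}

def parse_caller_status(raw_response: str) -> str:
    """Validate a hypothetical LLM JSON reply of the form
    {"status": "...", "evidence": "..."} against the allowed labels."""
    data = json.loads(raw_response)
    status = data.get("status")
    if status not in ALLOWED:
        raise ValueError(f"unexpected status: {status!r}")
    if not data.get("evidence"):
        raise ValueError("no supporting quote from the transcript")
    return status

# Simulated reply illustrating the failure mode from the story above:
reply = (
    '{"status": "customer_of_competitor",'
    ' "evidence": "caller mentions their current provider"}'
)
print(parse_caller_status(reply))  # -> customer_of_competitor
```

The enum makes the competitor case a first-class answer rather than something the model has to shoehorn into True or False, and the required evidence field gives you something to spot-check.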