Can you tell us more about the motivation for this project? I'm very curious if it was driven by a specific use case.
I know there are specialized trading firms that have implemented projects like this, but most industry workflows I know of still involve data pipelines in which scientists do intermediate data transformations before feeding anything into these models. Even the C-backed libraries like numpy/pandas still explicitly depend on the CPython API and can't be compiled away, and this data-feed step tends to be the bottleneck in my experience.
That isn't to say this isn't a worthy project - I've explored similar initiatives myself - but my conclusion was that unless your data source is pre-configured to feed directly into your specific model without any intermediate transformation steps, optimizing inference time has marginal benefit for the overall pipeline (see the sketch below). I lament this as an engineer who loves making things go fast but has to work with scientists who love the convenience of Jupyter notebooks and the APIs of numpy/pandas.
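To put rough numbers on that claim, here's a back-of-the-envelope sketch. The 90/10 split is a hypothetical I'm assuming for illustration, not a measurement from any real pipeline; it's just Amdahl's law applied to the two stages. If the pandas/numpy feed step is 90% of wall time, even an infinitely fast inference step caps the end-to-end speedup near 1.11x:

```python
# Hypothetical pipeline split: the fractions below are made up for
# illustration, not measurements from any real system.
feed_time = 0.90   # fraction of wall time in pandas/numpy transformations
infer_time = 0.10  # fraction of wall time in model inference

def overall_speedup(inference_speedup: float) -> float:
    """Amdahl's law: only the inference fraction benefits from the speedup."""
    return 1.0 / (feed_time + infer_time / inference_speedup)

for s in (2, 10, 1_000_000):
    print(f"{s:>9}x faster inference -> {overall_speedup(s):.2f}x overall")
# ->       2x faster inference -> 1.05x overall
#         10x faster inference -> 1.10x overall
#    1000000x faster inference -> 1.11x overall
```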
I mean, you're making assumptions about the author's intent going one way, but not the other. What if the polite tone is what they intended? And how do you know they didn't review the output for phrasing and fabrications?
The author acknowledged they used AI to translate. Given the tools they actually had available, isn't the translation they decided to publish by definition the most authentic and intentional version that exists?
All of this aside, how do you think tools like Google Translate even work? Language isn't a lookup table with a 1:1 mapping. Even the other translation tools being suggested still incorporate AI. Should the author manually look up words in dictionaries and translate word by word, when dictionaries themselves are notoriously politicized and policed, too?