When submitting a TeX file to DeepL, the result is not so good, but not so bad either. DeepL sometimes does a good job with environments, macros, equations, etc., but my guess is that it has no idea what it is actually doing. So, is there a nice way --with not too much work-- to preprocess TeX code so that DeepL makes no mistakes?
Thank you for any help or experience, --Fred
I have tried pandoc TeX to Docx... and back. Not so good.
The problem is that machine translation is neural-network-based, so there is no guarantee that your LaTeX markup will be ignored (it is essentially out-of-distribution unless you specifically train on it) and kept intact in the output.
Instead, what you could do is transform your LaTeX document to a temporary XML format and back. DeepL supports ignore tags in XML, which exclude a marked part of a sentence from translation - this way you can guarantee your markup is left untouched and preserved in the output. Basically, you would need to wrap all LaTeX commands like
\section{...} etc. in XML tags like this: <x>\section{...}</x> and add x as an ignoreTag in your request. Docs for Ignore tags
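A minimal sketch of the wrapping step, assuming a simplified view of LaTeX (it handles \command[opt]{arg} with non-nested braces and $...$ inline math, not nested arguments or verbatim environments; real input would also need XML-escaping of characters like & and <). The function names here are just illustrative:

```python
import re

# Matches a simplified subset of LaTeX tokens:
#   \command, \command[opt]{arg}, and inline math $...$.
# Nested braces and environments are NOT handled by this sketch.
LATEX_TOKEN = re.compile(
    r"""(
        \\[a-zA-Z@]+            # control word, e.g. \section
        (?:\[[^\]]*\])?         # optional [..] argument
        (?:\{[^{}]*\})*         # simple {..} arguments, no nesting
      | \$[^$]*\$               # inline math
    )""",
    re.VERBOSE,
)

def wrap_latex(text: str) -> str:
    """Wrap each LaTeX token in <x>...</x> so DeepL's XML ignore
    tags (tag_handling="xml", ignoreTags=x) skip it."""
    return LATEX_TOKEN.sub(r"<x>\1</x>", text)

def unwrap_latex(text: str) -> str:
    """Strip the <x> wrappers from the translated output."""
    return text.replace("<x>", "").replace("</x>", "")
```

You would then send the wrapped text to the DeepL API with XML tag handling enabled and x listed as an ignore tag, and run unwrap_latex on the response. Expect to iterate on the regex for your own documents - robust LaTeX parsing really needs a proper parser.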