I am not going to name names here, but I recently heard a story that was both puzzling and amusing. Say you work on some sort of science experiment that measures something relative to a particular coordinate system, with some known accuracy, and you want to transform the result into another coordinate system. The question is, assuming the transformation from the initial to the final system is non-trivial, do you need to injure yourself writing code to make the transformation accurate to 1/1,000,000th of the accuracy of your initial measurement?
While I understand that one needs to avoid introducing errors in the coordinate transform, I strongly suspect that making it six orders of magnitude more precise than the initial measurement is a waste of time. Independent error sources combine in quadrature, so a transform error that is 10^-6 of the measurement error inflates the total error budget by roughly one part in 10^12, which is far below anything you could ever detect.
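To make the point concrete, here is a quick numerical sketch (with made-up numbers, not from the actual experiment) of how little a transform error at 1/1,000,000th of the measurement accuracy contributes when independent errors are combined in quadrature:

```python
import math

def combined_error(sigma_meas, sigma_transform):
    """Independent error sources add in quadrature."""
    return math.hypot(sigma_meas, sigma_transform)

# Hypothetical numbers: measurement accuracy of 1.0 (arbitrary units),
# coordinate transform accurate to one millionth of that.
sigma_meas = 1.0
sigma_xform = 1.0e-6

total = combined_error(sigma_meas, sigma_xform)

# Fractional growth of the total error budget due to the transform.
increase = total / sigma_meas - 1.0
print(increase)  # on the order of 5e-13
```

Even a transform only as accurate as one *thousandth* of the measurement error would inflate the budget by about one part in two million, which is still invisible in practice.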