Castellaro and Bormann (2007) used numerical simulations to study the performance of various two-dimensional (2D) regressions between different magnitude scales. Their study consisted of (1) generating sets of magnitude pairs (xi,yi) with a given true slope βtrue, following a Gutenberg–Richter distribution with b=1, and adding initial errors (ui,ei); and (2) assessing how far the slopes β estimated from those data sets by standard (βSR), inverted standard (βISR), orthogonal (βOR), and generalized orthogonal (βGOR) regression were from βtrue. Studies assessing the best regression method are important because misuse of the common standard regression easily leads to magnitude conversion errors of 0.2–0.3 units. A different approach to the magnitude conversion problem, proposed before Castellaro and Bormann's (2007) work, is the χ2 method, which relies on the assumption that the errors (ui,ei) and the xi are independent and normally distributed. In this work, we derive mathematical explanations for the results of Castellaro and Bormann (2007) in terms of the χ2 method and find that the two approaches agree for mean initial errors <0.5 magnitude units. Our results demonstrate the importance of knowing, and taking into consideration, the true initial errors in regression analysis.
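The simulation procedure described above can be sketched as follows. This is a minimal Monte Carlo illustration, not the original study's code: the parameter values (βtrue = 1, error standard deviations of 0.2, the minimum magnitude, and the sample size) are illustrative assumptions, and the estimators use the textbook standard-regression and Deming-type orthogonal-regression slope formulas.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not the study's values)
beta_true = 1.0          # true conversion slope
b = 1.0                  # Gutenberg-Richter b-value
su, se = 0.2, 0.2        # std devs of initial errors (u_i, e_i)
n = 100_000              # number of magnitude pairs
m_min = 4.0              # minimum magnitude

# True magnitudes follow a Gutenberg-Richter (exponential) distribution
x_true = m_min + rng.exponential(scale=1.0 / (b * np.log(10.0)), size=n)
y_true = beta_true * x_true

# Observed magnitudes: add independent normal initial errors
x = x_true + rng.normal(0.0, su, n)
y = y_true + rng.normal(0.0, se, n)

def slope_sr(x, y):
    """Standard regression of y on x (beta_SR)."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

def slope_isr(x, y):
    """Inverted standard regression: regress x on y, then invert (beta_ISR)."""
    return 1.0 / slope_sr(y, x)

def slope_gor(x, y, eta=1.0):
    """Deming-type regression slope; eta = var(e)/var(u).
    eta = 1 gives orthogonal regression (beta_OR); using the true
    error-variance ratio gives the generalized version (beta_GOR)."""
    sxx, syy = np.var(x), np.var(y)
    sxy = np.cov(x, y, bias=True)[0, 1]
    d = syy - eta * sxx
    return (d + np.sqrt(d * d + 4.0 * eta * sxy * sxy)) / (2.0 * sxy)

print(f"beta_SR  = {slope_sr(x, y):.3f}")   # attenuated below beta_true
print(f"beta_ISR = {slope_isr(x, y):.3f}")  # biased above beta_true
print(f"beta_OR  = {slope_gor(x, y):.3f}")
print(f"beta_GOR = {slope_gor(x, y, (se / su) ** 2):.3f}")
```

With these settings the standard and inverted standard slopes bracket βtrue, while the generalized orthogonal slope recovers it closely, consistent with the bias pattern that motivates the comparison of regression methods.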