An ideal gravity meter has a linear response, so its calibration is represented by a single scale factor. Microscopic variations in manufacturing, however, lead to significant departures from this ideal. These departures from linearity are measured by observing the response of the gravity meter to the addition of a test mass to the gravity meter beam at intervals throughout the meter's range. This procedure measures the slope of the calibration curve at these points and therefore serves to define its shape. A compromise must be reached among accuracy, reading resolution, and cost: high accuracy requires a large mass difference, which lowers resolution and demands many overlapping observations, which in turn raises cost. The current procedure provides precision and resolution consistent with a precision of 10 µGal (1 µGal = 10 nm/s²) over short ranges.

The resulting calibration curve is scaled to gravity units with repeated observations over a 241.9-mGal gravity range at Cloudcroft, New Mexico. This gravity interval is now known to be too small by a few parts in 10 000, and updating it awaits absolute gravity measurements at the site. For consistency, the old value will continue to be used until it can be replaced with a more accurate absolute value.

Periodic calibration errors, referred to as circular errors, arise from irregularities in manufacturing the measuring screw and gear train and are measured at three points along the travel of the screw. A screw is rejected if the peak-to-peak amplitude of the periodic error exceeds 40 µGal for model G meters or 8 µGal for model D meters.
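The construction of a calibration curve from slope measurements can be sketched numerically: the test-mass observations sample the derivative of the curve, so integrating those samples yields the curve's shape, which is then scaled so that it spans the known gravity interval. The positions and slope values below are purely illustrative, not actual calibration data, and the helper name is hypothetical.

```python
# Hypothetical slope samples: counter positions at which the test mass was
# added, and the measured relative scale factor (slope) at each position.
# All numbers are illustrative, not real calibration data.
positions = [0.0, 100.0, 200.0, 300.0, 400.0]   # counter units
slopes = [1.000, 1.002, 0.999, 1.001, 1.000]    # relative slope of calibration curve

def integrate_slopes(x, s):
    """Trapezoidal integration of slope samples into a relative calibration curve."""
    curve = [0.0]
    for i in range(1, len(x)):
        curve.append(curve[-1] + 0.5 * (s[i - 1] + s[i]) * (x[i] - x[i - 1]))
    return curve

relative = integrate_slopes(positions, slopes)

# Scale the relative curve to gravity units using a known interval, here taken
# (for illustration) to span the whole sampled range, as the 241.9-mGal
# Cloudcroft range scales the real curve.
known_interval_mgal = 241.9
scale = known_interval_mgal / (relative[-1] - relative[0])
calibrated = [scale * v for v in relative]   # curve values in mGal
```

Because the slopes only fix the curve's shape, any error in the scaling interval propagates proportionally into every calibrated value, which is why the few-parts-in-10 000 error in the Cloudcroft range matters.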
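The periodic-error check can likewise be sketched: residuals sampled along one turn of the measuring screw are fit with a sinusoid of the screw's period, and the fitted amplitude gives the peak-to-peak error compared against the rejection limits. The sample spacing, residual values, and variable names below are assumptions for illustration only.

```python
import math

# Hypothetical residuals sampled evenly over one full turn of the measuring
# screw; a pure 15-µGal-amplitude sinusoid is used here so the expected
# answer is known. Real residuals would come from overlapping observations.
n = 12
phases = [i / n for i in range(n)]                       # fraction of one screw turn
resid = [15.0 * math.sin(2 * math.pi * p + 0.7) for p in phases]  # µGal

# Least-squares fit of resid ~ a*sin(2*pi*p) + b*cos(2*pi*p). With evenly
# spaced samples over a full period the normal equations decouple, so the
# coefficients reduce to the first-harmonic Fourier sums.
a = 2.0 / n * sum(r * math.sin(2 * math.pi * p) for r, p in zip(resid, phases))
b = 2.0 / n * sum(r * math.cos(2 * math.pi * p) for r, p in zip(resid, phases))

peak_to_peak = 2.0 * math.hypot(a, b)    # fitted peak-to-peak periodic error, µGal

# Apply the rejection limits stated in the text: 40 µGal (model G), 8 µGal (model D).
accept_model_G = peak_to_peak <= 40.0
accept_model_D = peak_to_peak <= 8.0
```

With this synthetic input the fit recovers a 30-µGal peak-to-peak error, so the screw would pass the model G limit but fail the tighter model D limit.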