Independent quality assurance and quality control (QA/QC) programmes are required by reporting codes for publicly listed companies and are necessary to optimize data quality at all stages of the sampling, preparation and analytical processes involved in mineral exploration, resource estimation and mining grade control. QA/QC programmes should be adjusted over time to meet changing requirements in data quality at different stages of mineral resource development and exploitation. Certified reference materials are used to monitor accuracy and bias at the project laboratory relative to consensus values for the material from round-robin certification analyses. They are also used to monitor drift over time within an individual laboratory and to identify significant failures in QC at the analytical batch level caused by abrupt changes in concentration related to re-calibration of instruments or procedural changes at the laboratory. Duplicate analyses of sample material are generated at key stages of sampling and preparation to estimate the precision of data generated at each stage. Invariably, the largest source of uncertainty occurs during the initial sampling. Coarse blanks are used to monitor cross-contamination between samples or from sample preparation equipment. Furthermore, each of these QC sample types can be used to discover possible sample mix-ups.

Supplementary material: An Excel spreadsheet for the calculation of the average coefficient of variation (CVAVE) (Appendix C) is available at

Thematic collection: This article is part of the Reviews in Exploration Geochemistry collection available at:

Quality assurance (QA) refers to the procedures and protocols (i.e. the processes) put in place to maintain data quality within a mineral exploration or resource definition programme. The level of QA required will depend on the project stage. Early-stage mineral exploration may utilize only basic QA procedures focused on those aspects critical to early-stage decision-making. The QA procedures necessarily become more detailed as projects advance from exploration through resource estimation, scoping and feasibility studies (Long 1998). Nevertheless, there is an argument to be made for implementing a full quality assurance and quality control (QA/QC) programme from the first exploration drill hole in the expectation that it may eventually be brought into a resource. By contrast, quality control (QC) refers to the specific checks undertaken at various stages of data collection to test whether data quality expectations have been met, often using test materials known as controls or QC samples. As with QA, early-stage exploration programmes, such as soil sampling surveys, may involve a low frequency of QC checks, but the frequency and type of QC samples would be expected to increase as mineral projects advance to the drilling stage.

The interpretation of QA/QC data should be quantitative to allow for benchmarking of performance against comparable projects and to measure improvements in data quality when changes are made in sampling, preparation and analytical procedures. An effective QA/QC programme should also extend to other important parameters within a mineral exploration and resource development programme, such as sampling protocols, logging of drill material, the measurement of dry bulk densities, locating of drill-hole collars, down-hole surveys and data management. Larger programmes should be accompanied by site and laboratory visits, and/or audits by a qualified person (QP) or competent person (CP) so that a clear understanding of sample flow and procedures can be embedded into the QA/QC programme. Significantly, QA/QC programmes should be accompanied by clear and concise documentation in the form of standard operating procedures (SOPs) that are periodically reviewed and updated as changes are made to the programme. A successful QA/QC programme will help to reduce uncertainties in the project data and thus reduce the risks associated with decisions based on those data. A glossary of terms commonly used in QA/QC programmes is provided in  Appendix A.

Following the discovery of mineral exploration reporting irregularities in Australia and Canada in the 1970s and 1990s, respectively, the Toronto Stock Exchange (TSX), the Australian Securities Exchange (ASX), the Johannesburg Stock Exchange (JSE), the Securities and Exchange Commission (SEC) and other stock exchanges worldwide required that all publicly traded resource companies abide by the rules established by their respective regulatory authorities – such as JORC (Australasia), NI 43-101 (Canada), SAMREC (South Africa), Sarbanes-Oxley (SOX; United States) and the Canadian equivalent of SOX (Bill 198) – in reporting exploration and drilling results, resources, corporate mineral assets and significant news releases. In the case of news releases on resource or reserve estimates, the reporting must be approved by a CP or QP as defined by the relevant codes, and in the case of corporate reporting of overall mineral assets, the executives of a company are responsible for the accuracy and validity of the data contained in the quarterly or annual reports to shareholders, as required by SOX and Bill 198. The purpose of the various codes is to protect the investing public from unsubstantiated or fraudulent claims by publicly traded mineral resource companies. However, none of the codes actually defines what ‘acceptable’ practices are, only that the methods used, and the resultant findings, be reported truthfully, and that the CP or QP has followed industry best practices, if they exist, or a practical or theoretically justified alternative procedure. While these codes are a step forward in creating an overall sense of quality in mineral resource reporting, none of the rules or guidelines will stop unscrupulous practitioners from swindling the investing public.

This paper reviews examples of some of the sections of the various codes, with emphasis on the more widely used JORC, NI 43-101, Canadian Institute of Mining, Metallurgy, and Petroleum (CIM) Mineral Exploration Best Practice Guidelines and SOX as they apply to publicly traded resource companies. It can act as a guide to those given the responsibility to abide by the rules and to apply industry best practices. Several examples and suggestions are included.

Some of the reporting standards of different countries or jurisdictions are listed in Table 1. The Committee for Mineral Reserves International Reporting Standards (CRIRSCO) is an umbrella body representing the organizations responsible for developing mineral reporting codes and guidelines, including those listed in Table 1. CRIRSCO has developed the International Reporting Template (IRT), which draws on the best of the CRIRSCO-style reporting standards. These reporting standards are recognized and adopted worldwide for market-related reporting and financial investment.

A comprehensive QA/QC programme must start in the field where collection of samples, insertion of QC samples, packaging, shipping and data recording take place. The field protocols should be specified in a SOP and routinely assessed for compliance. A disorganized field camp or sampling system will likely result in erroneous data that are not the result of poor laboratory performance but of data-entry errors. In the authors’ experience over many decades, up to 70% of all QC mistakes occur because of field errors (Fig. 1). Additionally, measurements of bulk density are usually made in the field on selected samples. The protocol for this important measurement must be documented and the data recorded in such a way as to be compatible with the geological observations. Appendix B contains a comprehensive checklist of observations that should be routinely made at the sample collection site. The checklist is self-explanatory and should be periodically reviewed by the project manager or an external auditor.

Data supporting a news release, or resource or reserve estimation, must be recorded in a format that is amenable to thorough review or audit. As such, an established protocol for recording data, either from the field and/or the laboratory, as well as related calculations, must be contained in an auditable format, usually a database. In the case of exploration data, a senior CP or a QP should review all data before release and sign off on its accuracy. In the case of a reserve or resource summary, such as contained in an annual report, a third-party auditor should be retained to review the supporting information and confirm the contained figures when required by the applicable code or corporate policy. Any discrepancies or questions must be noted. In both cases, the reviewer must have carried out a visit to the project to verify that all statements are true and accurate, and that industry best practices are being followed.

The laboratories used for establishing the concentration of the commodity elements and any deleterious elements are an integral, and arguably significant, part of any exploration project. The exploration and mining team must be familiar with the laboratory, its personnel, and its analytical and QC methods. Laboratory criteria for accepting or rejecting analytical results should be transparent and, most importantly, the batch sizes used for each target element should be noted, as laboratory batch sizes may influence the frequency of insertion of independent QC samples. For instance, for a project that uses both fire-assay and acid-digestion methods for the elements being sought, a laboratory may use one batch size for fire assay and another for acid digestion. An audit will determine which batch size is the smallest, thus guiding the insertion frequency of QC samples.

NI 43-101 (and SAMREC) includes several sections dealing with the competency of the laboratories used for drilling assay results and resource calculations, while JORC mentions the laboratory requirements and CP review requirements in the Table 1 reporting template. The following excerpt from Form 43-101F1 (2011), Item 11, addresses the topics that a QP or CP should know about regarding the lead and secondary laboratories:

(a) sample preparation methods and quality control measures employed before dispatch of samples to an analytical or testing laboratory, the method or process of sample splitting and reduction, and the security measures taken to ensure the validity and integrity of samples taken;

(b) relevant information regarding sample preparation, assaying and analytical procedures used, the name and location of the analytical or testing laboratories, the relationship of the laboratory to the issuer, and whether the laboratories are certified by any standards association and the particulars of any certification;

(Form 43-101F1, Item 11, 2011).

No guideline requires that a laboratory be certified; the required disclosure is whether or not a laboratory is certified. Many modern international mineral laboratories state that they are accredited to the ISO 17025 standard, which is specifically for laboratories. The ISO 17025 standard is specific to individual analytical methods and is preferred over the more generalized ISO 9001 standard. ISO 17025 accreditation requires rigorous documentation, measurement uncertainty tests and traceability, as well as participation in a semi-annual round robin. Accreditation under ISO 17025 applies to specific analytical methods, but typically only the most frequently requested methods are included. Most mine laboratories are not accredited to any standard.

The CIM Mineral Exploration Best Practice Guidelines 2018 also state that the QP is responsible for ensuring that the laboratories are using industry-accepted practices in assaying. These guidelines also state:

The sample preparation procedures used in each mineral exploration program should be appropriate for the objectives of the program. Where the volume of individual field samples is reduced prior to shipping to a laboratory for analysis, unbiased splitting procedures to obtain representative subsamples should be tested, verified, and then applied.

(CIM Mineral Resource and Mineral Reserve Committee 2018, p. 12).

Some form of proof is therefore required that the samples being analysed are representative of the samples taken in the field. To achieve this, stepwise monitoring of the sample-reduction process, from the field sample, through sample preparation, to the final analysis, must be undertaken using duplicates.

In the case of drill cores or cuttings, often the most important mineral intersections are cannibalized for mineralogy or environmental studies and therefore are not complete. This leaves the original reject or pulp from the laboratory as the only representation of the original samples that support a resource estimate. At least one of these fractions should be stored for the life of the project.

All professional organizations require that the QP/CP examining and signing off on publicly disclosed reports be competent in the topics being reported on. In the case of laboratory visits or audits, a QP/CP familiar with laboratory methods may be required to report on both the lead laboratory and the secondary or check laboratory. At the very least, the project manager and QC or database manager should visit the lead laboratory to become familiar with laboratory personnel responsible for handling the client's samples, as well as the processes being used on those samples.

An advantage of routine laboratory visits is building relationships with laboratory management to ensure that any issues can be readily resolved. A short laboratory visit can assess overall cleanliness, orderly storage and retrieval of samples, sample backlog, instrument availability and other criteria which could impact assay quality and turnaround time.

Detailed audits are best conducted under the direction of knowledgeable experts. The critical aspects include:

  • potential for sample contamination, including sample preparation and instrument-cleaning protocols;

  • quality systems and specifications;

  • instrument calibration methods;

  • compliance with accreditation standards;

  • health and safety.

It is recommended that information for laboratory visits be collected systematically. An audit form covering all aspects of sample preparation and analytical processes, or inspection software such as Field Eagle, should be used to collect information and photographs so that comparisons can be made between visits. Laboratory visits may be triggered by a new phase of drilling, changes in laboratory management/ownership and/or an increase in the number of QC failures. In some jurisdictions, a laboratory visit may be required to support a technical report.

The commonly used QC procedures are the insertion of field blanks, CRMs and duplicates, with the submission of check samples to a second laboratory at some stage of the programme. QC programmes are most effective when there is adequate attention to the details associated with what materials to use, how much to insert and how to interpret results.

Field blanks

Field blanks are samples of natural material with similar characteristics (e.g. hardness, abrasiveness, rock type, sample media) to those being collected in the field that contain acceptably low concentrations of the elements of interest. They are inserted into the sample stream to test for potential cross-contamination, or carry-over, between samples. Cross-contamination of samples may occur during sample preparation and for this reason a coarse field blank is preferred in rock-chip sampling and drilling programmes. Several of the CRM manufacturers sell pulverized field blanks which, if used in a drilling programme, will fail to detect cross-contamination between samples during sample preparation as they would bypass the sample preparation process. They are, however, useful for soil and stream-sediment surveys, where the pulverized material can be passed through sieves in the field and/or at the laboratory to test for cross-contamination. Specific instructions to do so may be required in the case of sieving.

Cross-contamination may also occur when there is insufficient time for flushing of a sample within highly sensitive analytical equipment, such as inductively coupled plasma mass spectrometers (ICP-MS). This potential contamination source is monitored by laboratories using method blanks, which consist of a solution containing concentrations of the elements of interest below their lower limit of detection (LLD). Method blanks are inserted into the sample stream, usually at random, and are the responsibility of the laboratory undertaking the analyses. Analytical results for the method blanks should be supplied to the client and assessed as part of reviews of data quality. In any case, sample batches in which method blanks have failed should not be released by the laboratory; failed method blanks appearing in a client's assay certificate are a red flag indicating that the laboratory is not closely monitoring its own internal QC results.

Insertion of field blanks

While field blanks may be inserted into the sample stream at regular intervals, along with CRM and field duplicate samples, they should certainly be included following some individual high-grade samples or within high-grade intervals. Some cross-contamination is unavoidable in a commercial assay or geochemical laboratory as the preparation equipment is generally only cleaned with compressed air between samples to maximize the sample throughput. An acceptable amount of cross-contamination between samples for many commercial laboratories would generally be of the order of 1–1.5% of the concentration of the preceding sample. However, in the case of high-grade samples, the amount of carry-over to the next sample could be significant, particularly if surrounding samples are low grade or barren. From a client's perspective, an ore-grade value in a field blank would be unacceptable, but it may be within the 1–1.5% maximum carry-over allowed by the laboratory. By contrast, the carry-over to another high-grade sample may not be significant as a proportion of the contained element of interest. Evidence of cross-contamination in a field blank is an indication of a more widespread problem with cleanliness given their typically low rate of insertion into the sample stream.
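The carry-over arithmetic described above can be illustrated with a short sketch. The function name, grades and the chosen rate are illustrative; the 1–1.5% tolerance is the figure quoted above:

```python
def carry_over(previous_grade, rate=0.015):
    """Estimated carry-over into the next sample, as a fraction
    (here 1.5%) of the preceding sample's concentration."""
    return previous_grade * rate

# A 30 g/t Au sample followed by a field blank at a 1.5% carry-over
# rate would deposit ~0.45 g/t Au in the blank: clearly ore-grade
# contamination from the client's perspective, yet within the
# laboratory's stated tolerance.
into_blank = carry_over(30.0)

# The same 1.5% carried into another high-grade (25 g/t) sample is
# negligible as a proportion of its contained gold (~1.8% relative).
relative_effect = carry_over(30.0) / 25.0
```

This makes the asymmetry explicit: a fixed relative carry-over tolerance can be catastrophic for a blank or near-barren sample yet immaterial for an adjacent high-grade sample.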

It is also important to recognize that the purpose of field blanks is to test for the possibility of cross-contamination in the sample stream, and not to clean the sampling equipment after high-grade samples. However, the blank material should have a similar hardness to the routine samples to ensure removal of all contamination when run through the preparation equipment. The configuration of sample preparation equipment varies from company to company, between laboratories within specific companies, and even within a single laboratory when occasional equipment breakdowns or maintenance are considered. It can therefore be difficult to predict whether a field blank will directly follow a high-grade sample through crushing and pulverizing. Many laboratories use multiple banks of pulverizers and so a field blank directly behind a high-grade sample in the sample stream may pass through a different pulverizer from the one the high-grade sample has passed through. It can also be difficult to determine whether carry-over occurred during crushing or pulverization without specific knowledge of the sample preparation layout used at the laboratory. This distinction is important. If cross-contamination occurs during pulverization, that master pulp can be discarded, and a second split taken from the excess coarse crush, or reject, material. If cross-contamination occurs during crushing, then the entire sample is compromised, and a fresh sample must be obtained if the contamination is significant and relevant. It is therefore advisable to discuss the sample preparation layout with the laboratory carrying out the work so that field blanks can be placed within the sample stream to test potential cross-contamination from high-grade samples at both the crushing and pulverizing stages. Multiple field blanks may be required to achieve this, but it is far from certain they will be effective. Asking the laboratory for a list of the sample preparation order can help sort out some of these issues and will identify those blanks which may have immediately followed a high-grade sample through a preparation step.

Contamination by high-grade samples

Given the variability of sample preparation streams, a better approach would be to inform the laboratory of the presence of high-grade samples within the sample submission and to request the use of barren washes immediately following their passage through preparation equipment. This ensures that any contamination of the preparation equipment is collected by the barren wash and not by the next sample. An analysis of the barren wash could also be undertaken to determine the extent of the carry-over from the preparation equipment, and to determine whether the issue is occurring during crushing or pulverizing, or both. In the case of high-grade free gold, it may be necessary to request a double barren wash after high-grade samples given the propensity of gold to smear onto crusher and pulverizer surfaces. It is important to also include coarse field blanks with the submission even where barren washes are requested to monitor whether the barren washes have indeed been undertaken and to ensure they are having the desired effect of removing contamination from the sampling equipment.

Selection of material used for field blanks

Consideration must also be given to the selection of the material to be used for coarse field blanks. The amount of material submitted should approximate the typical mass of sample being submitted. This avoids the laboratory recognizing the presence of field blanks within the sample stream and ensures that the amount of carry-over is not overstated in field blanks of significantly smaller mass than typical samples. Therefore, large quantities of coarse field-blank material will be required for large sampling programmes. It is useful, however, to submit a set amount of blank material that is just different enough from the routine samples so that it is identifiable in the assay results by its received weight. A rock type of similar composition, hardness and abrasiveness to the typical project lithology is desirable, with the proviso that ideally it will have homogeneous contents of the element(s) of interest below the method's LLD. Quartz makes an ideal candidate for gold projects but may not always be readily available. Granite, basalt and limestone are also commonly used, but will often contain low levels of base metals and, in the case of limestone, cause incomplete fusion during fire assay so that the carry-over may not be detected. Procuring sufficient material for field blanks that meets all these requirements can be a challenge in some circumstances.

Field blanks may also be incorporated into surficial sampling programmes. A sand barren in the element(s) of interest may be inserted in the case of stream-sediment programmes. Aeolian sand may be suitable where available; where a suitable natural material is not readily available, or is demonstrated to contain significant quantities of the element(s) of interest, certified pulverized rock blanks could be used. The laboratory should be instructed to pass the powdered material through the sieves along with stream-sediment or soil samples to test for cross-contamination. Some practitioners will insert pulverized blank material into the sample stream with rock or drill-core samples, but this material will obviously not test carry-over from a jaw crusher and will be immediately recognizable to laboratory staff, who may take extra care with cleaning prior to running the material through a pulverizer.

Typically, a warning or a QC failure is triggered when the concentration of the element of interest in a field blank exceeds a set multiple of the LLD. This threshold value may vary over time if analytical methods change (Fig. 2). The multiple of LLD may vary from a factor of three to a factor of ten, depending on the commodity and the LLD value. Judgement is required in selecting warning and failure trigger values. A value too low may initiate an unrealistic number of interventions, and too high a value may mean that significant cross-contamination goes undetected. There are a couple of approaches in the case where the field blank contains small but measurable quantities of the element of interest. A multiple of the average content of the field blank could be used to define warning and failure trigger values. An alternative approach would be to calculate an average and standard deviation for the element(s) of interest in the field blank and assess cross-contamination in terms of exceeding two or three standard deviations (or Z-scores of 2 or 3) above the average value as warning and failure trigger values, respectively. Regardless of which approach is favoured, at least five to ten samples of the coarse field blank material should be analysed by the project laboratory in the absence of mineralized material to establish its composition and homogeneity. Failure to do so may lead to unnecessary excitement when assays are initially received!
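The Z-score approach described above can be sketched as follows. This is a minimal illustration assuming five to ten baseline assays of the blank material are available; the function name and the LLD-multiple floors (3× and 5×) are hypothetical choices within the factor-of-three-to-ten range mentioned above:

```python
import statistics

def blank_thresholds(baseline_assays, lld, k_warn=2, k_fail=3):
    """Derive warning/failure trigger values for a coarse field blank.

    baseline_assays: repeat assays of the blank material run at the
    project laboratory in the absence of mineralized samples.
    lld: lower limit of detection for the method (same units).
    """
    mean = statistics.mean(baseline_assays)
    sd = statistics.stdev(baseline_assays)
    # Z-score approach: warning at mean + 2*sd, failure at mean + 3*sd
    warn = mean + k_warn * sd
    fail = mean + k_fail * sd
    # Illustrative floors: never set a trigger below a multiple of the
    # LLD, or routine analytical noise will generate spurious alerts.
    return max(warn, 3 * lld), max(fail, 5 * lld)

# Example: hypothetical Au (ppm) baseline assays, LLD = 0.005 ppm
warn, fail = blank_thresholds([0.006, 0.005, 0.007, 0.006, 0.005], 0.005)
```

In this example the Z-score thresholds fall below the LLD floors, so the floors (0.015 and 0.025 ppm) apply; for a blank with higher or more variable background, the mean-plus-sigma values would govern instead.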

Follow up of failures for blanks

Follow-up investigations are recommended where the level of an analyte exceeds the threshold value determined for a field blank (Table 2). However, first it is necessary to confirm that the coarse blank has not been accidentally switched with a routine sample. Fortunately, the common use of multi-element analyses means that blank material normally has a distinctive composition that will allow it to be identified within the sample sequence. If the possibility of a sample switch can be ruled out, then the next step would normally consist of reviewing the analyte concentration in the immediately preceding samples to determine if the cause of the elevated value is the presence of high-grade material. Where cross-contamination is suspected, re-assays of the coarse reject material from the field blank are advised. Elevated analyte values in re-assays of the coarse reject would indicate that the entire sample has been contaminated during crushing and the sample must be re-collected. If the coarse rejects are free of contamination, then it can be concluded that contamination occurred during sample pulverization and either the batch or the samples on either side of the contaminated blank should be re-analysed using new splits of the coarse reject material. Any problems with incomplete flushing of analytical equipment should be evident in the method blank results provided by the laboratory. At this point it would also be beneficial to have a conversation with the laboratory about protocols for the preparation and analysis of high-grade samples, either by flagging individual samples in a submission, or by submitting high-grade samples or drill-core intervals containing high-grade samples as separate submissions with different preparation requirements. Many laboratories will have separate sample preparation areas for high-grade run-of-mine samples and exploration samples submitted for low-level analyses.
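The reject re-assay logic described above can be summarized in a short sketch. The function name and return strings are illustrative, not a prescribed workflow:

```python
def blank_followup(reject_reassay, fail_trigger):
    """Interpret a re-assay of the coarse reject from a failed field
    blank (values in the same units as the trigger threshold)."""
    if reject_reassay > fail_trigger:
        # Elevated values in the coarse reject mean contamination
        # occurred at the crushing stage: the whole sample is
        # compromised and must be re-collected.
        return "crushing contamination: re-collect the sample"
    # A clean reject means the carry-over happened during
    # pulverization: discard the master pulp and re-split
    # from the coarse reject.
    return "pulverizing contamination: re-split from the coarse reject"
```

A failed blank whose coarse reject still assays above the failure trigger points to crushing; one whose reject is clean points to pulverization, and only the pulp-stage material needs replacing.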

Whether samples are re-analysed when a field blank fails QC depends on several factors, including turnaround time at the laboratory, cost, the level of cross-contamination and the significance of the samples for decision-making. For example, a continuous run of mineralized drill-core samples through a deposit is likely to be composited for reporting and/or resource estimation purposes. Cross-contamination between samples in the middle of this interval is less relevant than samples collected on the edges of mineralization, particularly if these shoulder samples are close to the cut-off grade used for compositing or resource estimation. Conversely, cross-contamination of a stream sediment sample may lead to an unmineralized catchment basin being prioritized for follow-up exploration based on a single result. It may be necessary to re-analyse a significant portion of the samples on either side of the failed field blank where the cross-contamination is significant (i.e. approaching cut-off grade). Alternatively, minor levels of carry-over could be dealt with by requesting barren washes following high-grade samples or by asking the laboratory to review their equipment cleaning protocols.

Further uses for field blank insertions

Although field blanks are inserted to test for cross-contamination between samples, they also serve a further purpose, particularly when multi-element data are being collected. Typically, the field blanks will have a distinctive composition that allows them to be identified in multi-element results. This provides a check on the integrity of sample handling both in the field and at the laboratory, as a recognized field blank out of sequence indicates that a sample ordering or numbering error has occurred. Even when multi-element data are not collected, or where the main commodity element is analysed using a separate method, the insertion of field blanks can be tracked by using a consistent and distinctive weight of material. The use of coarse blanks is also useful for narrowing down a source of cross-contamination when two separate analytical streams are being used, such as an acid digestion ICP finish multi-element scan and fire-assay gold. If both methods show contamination, the contamination occurred during sample preparation, but if only one of the methods shows evidence of contamination, then the contamination source could be within that method process, for instance spill-over in fire assay, poor cleaning of the analytical instrument or sample mis-ordering errors.
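Tracking field blanks by a consistent, distinctive received weight, as suggested above, is easy to automate when the laboratory reports received weights. A minimal sketch (sample IDs, weights and tolerance are hypothetical):

```python
def flag_blanks_by_weight(received, blank_weight, tol, expected_blanks):
    """Flag samples whose received weight matches the distinctive
    field-blank weight, then compare against the planned blank
    positions to expose ordering or numbering errors.

    received: dict of sample_id -> received weight (kg)
    expected_blanks: set of sample_ids where blanks were inserted
    """
    looks_like_blank = {
        sid for sid, w in received.items()
        if abs(w - blank_weight) <= tol
    }
    return {
        # blank-weight samples not planned as blanks: possible mix-up
        "unexpected": sorted(looks_like_blank - expected_blanks),
        # planned blanks whose weight does not match: possible mix-up
        "missing": sorted(expected_blanks - looks_like_blank),
    }

# Blanks submitted at a consistent 1.2 kg; routine samples ~2-3 kg
result = flag_blanks_by_weight(
    {"S001": 2.4, "S002": 1.2, "S003": 2.8, "S004": 1.21},
    blank_weight=1.2, tol=0.05,
    expected_blanks={"S002"},
)
```

Here S004 would be flagged as an unexpected blank-weight sample, prompting a check for a sample ordering or numbering error.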

Summary of the use of field blanks

While seemingly one of the simpler aspects of an effective QA/QC programme, the use of coarse field blanks requires thought and planning. If done correctly, not only will the use of field blanks indicate problems with cross-contamination between samples from dirty sample preparation equipment, but it can also reveal errors in sample ordering or data management.


A CRM is defined as:

reference material (RM) characterized by a metrologically valid procedure for one or more specified properties, accompanied by an RM certificate that provides the value of the specified property, its associated uncertainty, and a statement of metrological traceability.

(ISO Guide 30:2015, Section 2.1.2).

CRMs are often informally referred to as ‘standards’.

Insertion rates of CRMs

There are no specific insertion rates required by regulators. Persons responsible for project QC are instructed to follow ‘best practices’. One possible indicator of industry best practice is to examine industry trends. To accomplish this, Analytical Solutions Ltd studied approximately 100 NI 43-101 technical reports in 2014 and then again in 2018 to document insertion rates used by the industry (Fig. 3). All the reports were filed with the Canadian regulators for projects worldwide. Over half of the technical reports showed that CRMs were inserted at a rate of at least 1 in 20.

Many QC programmes are designed around the analytical batch size such as an 84-unit fire-assay furnace or 40-place test-tube rack. As stated earlier, it is necessary to organize a laboratory tour or consult with the laboratory to determine analytical batch sizes. The laboratory will include approximately 5% of their own QC samples in each analytical batch, which typically includes at least one each of a blank, a duplicate pulp and a CRM. The blank used by the laboratory is most often a reagent blank and does not include sample material. The laboratory's internal QC can be considered when determining the appropriate rates of insertion.

The optimum insertion rate for CRMs is at least one per analytical batch. Although it may be preferable to include all of the project CRMs in a batch, this may not be practical when three or more different CRMs are used for a project. Blanks can also be inserted with each analytical batch and could be considered a low-concentration CRM in some cases (i.e. when appropriately certified).
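As an illustration only, the batch-oriented insertion logic described above can be sketched in Python. The batch size, CRM names and the one-CRM-plus-one-blank-per-batch rule are assumptions for the sketch, and real insertion plans must also allow for the laboratory's own QC samples shifting batch boundaries:

```python
def plan_insertions(n_samples, batch_size=40, crm_names=("CRM-A", "CRM-B", "CRM-C")):
    """Sketch: place one rotated CRM and one coarse blank in every nominal
    analytical batch of `batch_size` positions. Names are hypothetical.
    Note this counts field samples, so stream positions drift slightly
    from true laboratory batch positions as QC samples are added."""
    stream = []
    crm_index = 0
    for i in range(n_samples):
        if i % batch_size == 0:
            # Rotate through the project CRMs, one per batch.
            stream.append(("QC", crm_names[crm_index % len(crm_names)]))
            crm_index += 1
        if i % batch_size == batch_size // 2:
            # One coarse blank near the middle of each batch.
            stream.append(("QC", "BLANK"))
        stream.append(("SAMPLE", f"S{i + 1:05d}"))
    return stream

stream = plan_insertions(100, batch_size=40)
```

The returned list preserves submission order, so the QC positions remain blind to the laboratory while being fully recorded in the project database.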

Commercially available CRMs are packaged in different ways. Jars of up to 2 kg may be used by mine laboratories, but it is generally more convenient for field programmes to use pre-weighed foil sachets or pulp envelopes. It is especially important that sulfide-rich CRMs are packaged in nitrogen-flushed, sealed foil packets to prevent oxidation during storage. Oxidation or changes in moisture content can impact results by causing biases or systematic errors.

It is preferable to provide the laboratory with only the minimum amount of material required for several determinations. The laboratory should not have the opportunity to repeatedly assay the CRM to achieve an expected value, as the same attention will not be applied to samples.

Selecting CRMs: range of values

Usually, several CRMs are selected covering the expected grade range. It is often prudent to use just three or four CRMs at a time to avoid the confusion of inserting the wrong CRM or associating it incorrectly in a database.

Fundamentally, the element concentrations of the CRMs should be at levels where decisions are made. For an exploration-stage project, for example, the CRM could have concentration values near the expected threshold level for discriminating between background and anomalous values. A CRM with concentration values around 10–20 times the laboratory-defined lower detection limits can also help improve confidence at very low concentrations.

For advanced projects, it is important to use CRMs with values corresponding to key decision points, such as the lower mine cut-off grade (or that estimated for similar deposits), the average grade and a higher grade. Other concentrations to target are:

  1. median value;

  2. a concentration range that triggers a second analytical method (e.g. gravimetric finish for gold);

  3. concentration at or close to the upper cutting limit (‘capping’ grade for resource estimation).

As the industry expands the use of multi-element data, from AI exploration applications to geometallurgy for advanced projects, it is no longer adequate to monitor laboratory performance for commodity elements only. Selected CRMs should have a range of concentrations for pathfinder elements and for elements that are potentially deleterious to mill recovery or of environmental concern. It is critical that the CRM concentrations have been determined by the same analytical methods that are used on field samples.

Selecting CRMs: matrix matching

The minerals industry has generally accepted the importance of using matrix-matched CRMs. This may entail the selection of commercially available CRMs of similar composition and/or mineralization style or the production of a suite of custom CRMs from ore samples provided by a mining company from the deposit under evaluation. In an early-stage exploration project, sufficient material may not be available to develop custom CRMs, or lead times may prohibit this approach. In most mineralized systems, the proportions of mineral constituents, and hence whole-rock chemistry, vary substantially on a mesoscopic scale, so samples submitted for assay will reflect this heterogeneity. It is therefore not feasible to design a QC programme where reference material mineralogy or chemistry will match every sample submitted, nor is it necessary. The purpose of a CRM is to confirm that a laboratory is providing consistently accurate results.

To achieve accurate results, a laboratory needs to carefully control many steps in the analytical procedure. These steps are similar for many methods and include:

  1. were the samples weighed in the right order;

  2. was the sample weight correct for the method;

  3. were the correct reagents added in the appropriate volumes or weights;

  4. were digestion or fusion temperatures correct, and for the correct amount of time;

  5. if necessary, was the right solution added to adjust to the correct final volume;

  6. was the instrument calibration correct;

  7. were the necessary calculations (including dilutions or unit conversions) performed correctly.

When these steps are done consistently and correctly, results for CRMs should report within acceptable ranges. An exact match of the mineralogy of the CRM and samples is not critical to monitor most of these steps. The use of matrix-matched CRM, however, is important if the element of interest occurs in minerals that need specific digestion or fusion conditions. Some examples include:
  1. distinguishing between nickel in olivine v. sulfides in a magmatic deposit;

  2. tin occurring as cassiterite v. secondary minerals such as varlamoffite ((Sn, Fe)(O,OH)2) or sulfide minerals such as stannite;

  3. rare earth elements occurring as phosphates (monazite, xenotime) v. oxides or ion-absorption clays;

  4. barium occurring as barite which is not soluble in an aqua regia digestion and not always retained in solution with a four-acid digest.

It is essential to know the mineral host(s) for the elements of interest and whether those minerals are soluble in the chosen acid digestion or fusion. Differences between the minerals digested with a four-acid digest compared to lithium metaborate fusions, for example, can be important for lithogeochemical surveys. A useful discussion of this topic can be found in Zivkovic et al. (2023).

Some analyses are focused on major components of the sample such as major oxides. In this case, the laboratory is expected to determine the composition of elements mostly occurring as silicates or other rock-forming minerals. The dissolution methods are not adapted to specific minerals and are expected to be appropriate for a range of rock types. Commercial laboratories will not alter the analytical methods to optimize sample dissolution. In this case, a detailed match of the mineralogy and rock type is not required.

Gold is a special example when the fire-assay fusion method is used. An important step in the fire-assay method is the selection of the fluxing reagents used to melt the rock. The fire-assay method is prone to underestimating gold under certain conditions; it is a complicated method and at every stage of the process there are possible gold losses. Small losses throughout the process are cumulative and can result in significant low biases. It is therefore critical to use CRMs that share the characteristics of the samples, such as mineralogy, that may impact fire-assay performance.

Custom CRMs

ISO 17025-accredited commercial laboratories will have calibrated instruments such as ICP optical emission spectrometers (ICP-OES), ICP-MS and X-ray fluorescence (XRF) spectrometers for a broad range of element concentrations and matrices. There are special cases where very high concentrations of some elements (e.g. iron, phosphorus) may introduce interferences in determining trace-element concentrations. Laboratories should always be advised when samples with unusual matrices are being submitted for routine analytical packages. These are cases where site-specific CRMs are required.

Alternatively, project-specific mineralized material may be used to develop a matrix-matched set of CRMs. In fact, there are some companies whose QC protocols mandate this approach when suitable materials are available. Custom CRMs are particularly suited to operating mines for grade control and metallurgical plant QC. Some companies find that this approach provides a cost-effective solution when reasonably large batch sizes are produced, thus optimizing economies of scale, and a competent CRM producer undertakes the work. If custom reference materials cannot be prepared in large volumes, too few laboratories are used for a round robin or suitable homogeneity is not achieved, then commercially available CRMs are a preferred solution. Conversely, multi-tonne batch sizes for custom reference materials will provide a more cost-effective solution than purchasing individual units of commercially available reference materials.

Unless there are specific minerals, grade ranges or analytes that are not readily available, then it is not generally required to prepare custom reference materials for exploration projects. Fifteen years ago, when QC programmes were not universally employed, it was often a necessity, but there is now a greater range of commercially available, high-quality reference materials to choose from.

There are commercially available CRMs that are artificially produced from a mixture of albite or quartz with known amounts of mineralized material. These are not matrix-matched CRMs and may not be applicable for some deposit types. These artificial CRMs typically fuse easily for the fire-assay method and return acceptable gold values. If the flux is not appropriate for the sample being submitted, then the gold assays may not be correct. This cannot be determined by the performance of the artificial CRMs as they are not matrix-matched with the sample types.

Certification of CRMs

ISO 17034:2016 sets out the requirements to produce reference materials. It is intended to be used as part of the general QA procedures of the reference material producer. It covers the production of all reference materials, including CRMs.

ISO 17034 guidelines apply to mineral assaying and include:

  1. setting the parameters for how ‘round robins’ are performed to establish accepted values;

  2. the parameters that may be impacted by a difference in analytical technique and detection limits between ‘round robin’ and sample analyses.

If every result is assumed to be technically valid, then six to eight participating laboratories are recommended. However, because there can be statistically and technically invalid results (and there always are), at least 10, and preferably 15, participating laboratories need to be included. The ISO 17034 guidelines suggest submitting two units of the pre-packaged CRM to each participating laboratory and requesting six analyses over at least 2 days. A minimum of three to four units should be selected from the packaged materials to test for homogeneity. Nevertheless, there is currently no legal requirement to use a CRM supplier that is ISO 17034 accredited, although some of the international commercial suppliers have acquired this accreditation.

A challenge for all certification programmes is the relatively small number of determinations used. The statistics applied assume a normal distribution and this assumption may not be valid with only a few analyses. A statistical test, such as a Grubbs’ test or a Dixon test, can be applied to reject values outside 2.5 standard deviations; however, the ASTM E178 (2011) guideline for dealing with outlying observations states that statistical tests are used to identify outliers but not to reject them from the dataset. When outliers are included, the allowed standard deviation will be larger.

CRM producers struggle with when to reject outliers. If outliers are the result of laboratory error, then the values can be excluded from statistical calculations. If outliers are due to poor CRM homogeneity, then it is not acceptable to remove them. As a practical solution, the mining industry has adopted use of a pooled (inter-laboratory) standard deviation of all round-robin (or collaborative study) results. Twenty years ago, as CRMs started to be more widely used, the within-laboratory standard deviation was applied, but this was found to generate large numbers of QC failures.

Establishing expected values for CRMs

When reviewing CRM certificates, it is important to note the following:

  1. verify that the laboratory's precision of the analytical method(s) used for samples matches expectations based on round-robin statistics, especially within 20 times the detection limit of the analytical method;

  2. check that three times the relative standard deviation (RSD) is less than the project risk tolerance.

Tolerances for low concentrations based on round robins can be challenging. Analytical methods used by commercial laboratories may have a range of lower detection limits which impacts the precision of results for the certification. If the analytical method used has a higher detection limit than was used for the round robin, the quoted tolerances in the certificate will most likely not be achieved. It is important to know the quoted tolerances for different concentration levels to avoid confrontation with the service provider on what constitutes a QC failure, especially close to detection limits.
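The two certificate checks listed above amount to simple arithmetic. A minimal sketch, with hypothetical thresholds and example values (the 20× detection-limit rule follows the guidance above; the 0.5 kg-style cut-offs and risk tolerance are project choices):

```python
def certificate_check(expected_value, certified_sd, detection_limit, risk_tolerance_pct):
    """Sketch of two CRM certificate checks:
    (1) is the certified value comfortably (>20x) above the method
        detection limit, where precision degrades rapidly?
    (2) is 3 x RSD within the project's stated risk tolerance?
    All inputs are in the same concentration units."""
    rsd_pct = 100.0 * certified_sd / expected_value
    near_dl = expected_value < 20 * detection_limit
    within_tolerance = 3 * rsd_pct <= risk_tolerance_pct
    return {"rsd_pct": rsd_pct,
            "near_detection_limit": near_dl,
            "acceptable": within_tolerance}

# Hypothetical example: CRM certified at 1.20 ppm Au with 1 SD of 0.03 ppm,
# analysed by a method with a 0.01 ppm detection limit.
check = certificate_check(1.20, 0.03, 0.01, risk_tolerance_pct=10.0)
```

Running the same check against the round-robin statistics for each analytical method on the certificate makes it easy to see which methods can realistically meet the quoted tolerances.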

Many CRMs are certified using a variety of analytical methods. It is important to use the statistics that most closely match the method used to analyse field samples. Accepted values are generally reliable for analytical methods that produce total method concentrations. Analytical methods that generate total metal concentrations include fire assay for precious metals, XRF methods (pressed pellet and fused disc) and strong decompositions such as sodium peroxide or lithium borate fusions.

Other digestion procedures, such as aqua regia or four-acid, can vary between laboratories, thus creating a wide range of reported values. Some elements, such as copper and zinc in sulfides, are readily dissolved by most acid digestions and are not prone to re-precipitation after digestion. Certification is generally reliable for these elements.

Other elements, like barium, are sensitive to subtle differences in digestion procedures, such as the exact amount of acid used, temperature, digestion time and final volumes. Figure 4 is a box and whisker plot of barium values reported for certification of a commercial CRM (OREAS 608) for a rhyodacite matrix. The barium values range from 50 to 950 ppm, demonstrating that it is difficult to assign an accepted value.

It is prudent to review the round-robin values reported for the certification programme to determine if recommended values are reliable for specific analytical methods and elements.

Monitoring QC with CRMs

CRM results should fall between ±2 standard deviations 95.5% of the time and between ±3 standard deviations 99.7% of the time. This means that, statistically, about 1 in 20 results will fall outside ±2 standard deviations and roughly 1 in 300 outside ±3 standard deviations. Values reporting outside these limits are defined as QC failures and appropriate action is required to determine if analytical results are correct.

Shewhart control charts are used to monitor CRM results. The chart is a line graph of the analytical results with a centre line representing the accepted concentration of the CRM and sigma lines to determine process stability. Results are plotted against a time function (i.e. date, sample number, work order number, analytical order, etc.) and anomalous points and trends needing investigation are identified.

Figure 5 is an example of a control chart for zinc values reported over a 2-year period for a high-volume project. In general, the CRM has performed well, with one obvious exception that may represent a CRM data entry error. The graph in this example is accompanied by a table of useful statistics, such as the expected value, observed average value, expected standard deviation and measured standard deviation.

Common CRM failure rules are:

  1. any result falling outside ±3 standard deviations constitutes an accuracy failure;

  2. any two out of three consecutive results outside ±2 standard deviations on the same side of the mean signify a bias.

It is worth noting that situation (1) will occur randomly about 0.3% of the time where errors are normally distributed, and a single result will fall beyond 2 standard deviations on one side of the mean about 2.3% of the time. Thus, not all failures represent analytical problems and a few statistical failures should be expected in large datasets. Examples of common accuracy and bias failures using these rules are illustrated in Figure 6. Many more rules may be applied to CRM data, but they can become complicated and difficult to apply consistently.
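The two common failure rules can be applied programmatically to a run of CRM results. A sketch, screening results against the certified mean and standard deviation (the example values are hypothetical):

```python
def crm_failures(results, mean, sd):
    """Apply two common CRM failure rules to a sequence of results:
    (1) any single result outside +/-3 SD (accuracy failure);
    (2) two of three consecutive results beyond +/-2 SD on the
        same side of the mean (bias).
    Returns the indices flagged by each rule."""
    z = [(x - mean) / sd for x in results]
    rule1 = [i for i, zi in enumerate(z) if abs(zi) > 3]
    rule2 = []
    for i in range(len(z) - 2):
        window = z[i:i + 3]
        if sum(1 for zi in window if zi > 2) >= 2 or \
           sum(1 for zi in window if zi < -2) >= 2:
            rule2.append(i)  # start index of the flagged window
    return rule1, rule2

# Hypothetical CRM certified at 100 ppm with SD 5 ppm
results = [101, 96, 104, 135, 99, 111, 112, 103, 98]
r1, r2 = crm_failures(results, mean=100.0, sd=5.0)
```

Here the 135 ppm result trips rule (1), and the 111/112 ppm pair trips rule (2); both would prompt the investigations described below.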

Often CRMs are used in rotation. When this is the case, bias is not easily detected on the control graphs, and it is necessary to check CRMs in the order they were inserted in the sample stream. This issue will not be identified by control charts built for individual CRMs, and it may be necessary to plot CRM results in terms of per cent relative bias or Z-scores (i.e. the number of standard deviations above or below the expected value). This allows data from several CRMs to be plotted on the same chart, or data from several elements within one CRM to be plotted together. When data for several elements are plotted on the same control graph, it is often clear whether all of the elements for a CRM fail by approximately the same percentage; if so, there may have been an issue with the digestion. If the reported values for numerous elements are wildly different from those expected, an investigation should focus on whether the CRM designation was assigned incorrectly in the database, or whether the wrong CRM was inserted.
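The normalization that allows rotated CRMs (or several elements) to share one control chart is straightforward. A sketch with hypothetical certified values:

```python
def normalise(result, expected, sd):
    """Convert one CRM result into a Z-score and a percent relative bias
    so that results for several rotated CRMs, or several elements within
    one CRM, can be plotted together in insertion order."""
    z = (result - expected) / sd
    bias_pct = 100.0 * (result - expected) / expected
    return z, bias_pct

# Hypothetical certified (expected value, SD) pairs for two rotated CRMs
crms = {"CRM-A": (2.50, 0.10), "CRM-B": (0.80, 0.05)}
observed = [("CRM-A", 2.62), ("CRM-B", 0.71)]
points = [(name, *normalise(x, *crms[name])) for name, x in observed]
```

Plotting the `points` tuples in insertion order gives one chart on which a laboratory-wide drift or step change shows up across all CRMs at once.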

Many projects use sophisticated database programs that automatically identify QC failures using pre-assigned tolerances within three standard deviations. Alternatively, spreadsheets can be set up with conditional formatting to highlight 2 and 3 standard deviation QC failures. It is equally important to create control charts to look for step changes in results or drift over weeks or months. The critical point is that this review should be undertaken as soon as possible after the data arrive from the laboratory in case action is required.

Actions required for CRM QC failure

Prior to requesting re-analyses, it is prudent to look at the following:

  1. Confirm that the correct CRM was submitted and/or properly recorded in the database. If the reported values closely correspond to those expected for another CRM in use, it is acceptable to note the likely error and forego re-assays. The error can be corrected by changing the CRM identification in the database with appropriate comment.

  2. Determine if it is possible that the CRM has been switched with another sample. This can happen anywhere from the field to the laboratory, although such errors are most prone to happen in sample preparation or on the fire-assay floor. Re-assays will be required, and establishing where the switch occurred will determine whether existing pulps should be re-assayed or new pulps need to be prepared from the coarse rejects.

Many laboratories automatically report sample weights on receipt. Sample weights are an easy way to check for sample mix-ups, as CRMs typically weigh far less than the submitted samples. It is possible to use the length of drill-core intervals as an approximation of sample weight to further monitor sample mix-ups.
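A weight-based mix-up screen can be sketched as follows. The 0.5 kg threshold and sample IDs are hypothetical; actual CRM sachet and field sample weights vary by project and would need to be set accordingly:

```python
def flag_possible_mixups(received_weights_kg, crm_ids, threshold_kg=0.5):
    """Sketch: flag any nominal CRM position that weighs like a full field
    sample, or any nominal field sample that weighs like a small CRM
    sachet or pulp. Threshold is an assumed project-specific value."""
    flags = []
    for sample_id, weight in received_weights_kg.items():
        if sample_id in crm_ids and weight > threshold_kg:
            flags.append((sample_id, "CRM position weighs like a field sample"))
        elif sample_id not in crm_ids and weight < threshold_kg:
            flags.append((sample_id, "field sample weighs like a CRM/pulp"))
    return flags

# Hypothetical receipt weights: S00011 was submitted as a CRM but weighs
# 2.9 kg, and S00012 is a nominal field sample weighing only 70 g.
weights = {"S00010": 2.8, "S00011": 2.9, "S00012": 0.07}
flags = flag_possible_mixups(weights, crm_ids={"S00011"})
```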

Most projects use a rule that re-assays of the same pulp are requested for five to ten samples before and after the QC failure or, depending on the insertion rates, from the midpoint to the previous passed CRM through to the midpoint to the next passed CRM. In some cases, samples for re-assay are selected based on the size of the analytical batch. It is preferable to have the CRM re-assayed, provided sufficient material was submitted, or to have a new packet of CRM analysed with the re-assay batch.

When re-assays are received the results are compared against the original assays. A project should set rules for acceptance or rejection of original assays (e.g. if the majority of assays are biased high or low). If original assays are not acceptable, the laboratory should be requested to provide a replacement assay certificate and documentation explaining the QC failure.

It is necessary to ensure that every assay can be linked to a laboratory certificate so it is not appropriate to change assays for some samples in a database unless they can clearly be linked to the correct laboratory certificate. Furthermore, both the failed and the replacement certificates should be preserved in the database. Original failed data should never be overwritten with the new results.

Summary of CRM use

It is important to apply QC rules consistently. It follows, therefore, that written standard operating procedures with clear and precise procedures should be available to everyone on a project who is authorized to work with the assay database.

It is valuable to maintain a failure/action table to record QC failures, the actions taken to correct them and failure rates, as well as cases where no action was required or taken. Tracking the type of errors may provide opportunities to change procedures (e.g. improved training of field personnel, better management of CRMs for insertion into sample shipments) or highlight changes required at the laboratory.

CRMs are the key ingredient to monitor accuracy, but care must be taken to ensure that the CRM selected for a project is appropriate in terms of its matrix and grade. Accuracy is the most important component of QC programmes for resource estimation through to geochemical surveys, as biases have direct implications for the estimation of grade and thus value of a mineral resource.

Check sampling

The use of analyses from a second, arm's-length laboratory for a selection of rock or drill samples used to delineate a resource or confirm mining grades is standard practice in most exploration and mining programmes, but the methods to be used are virtually ignored in the regulatory framework.

The CIM Mineral Exploration Best Practice Guidelines simply state that ‘regular check sampling by a third-party analytical laboratory’ (CIM Mineral Resource and Mineral Reserve Committee 2018, Section 2.7.4, p. 12) should be done. JORC states in the Table 1 reporting template that ‘external laboratory checks’ (JORC 2012, p. 2) should be included in a QC programme. SAMREC does not mention check samples at all.

CRMs can confirm the accuracy of analyses at their specific grades and matrices but cannot confirm grades over the entire concentration range of the project. Therefore, the purpose of check samples is to confirm the grade of samples over the entire range of concentrations found in the resource or mine. This is particularly important when the range of concentrations requires laboratories to change methods, dilutions, instrument calibrations, sample weights, etc., to bring a particular target commodity into the measurement range. The methods used in the primary and check laboratories must be known to properly compare the results from the check samples.

The protocol for selecting check samples has been developed over the past 20 years based upon trial and error from large resource drilling programmes, such as those used at Oyu Tolgoi in Mongolia, Pascua Lama in Chile and Argentina, and Resolution in Arizona, where tens of thousands of samples were taken to outline the resource. Each of these programmes involved several commodities and differing matrices which required the laboratories to modify the analytical methods used. It was found that 5% of the samples, selected randomly from the entire programme, would adequately confirm, or not, the analyses from the primary laboratory or outline problems in one or both laboratories. In recent years, for a large resource drilling programme, 5% of the sample pulps only from within the resource envelope have been subject to checking.

To determine which of the laboratories may have a possible problem, all check samples must contain a range of CRM to act as the referee as there is usually a relative bias between laboratories. Preferably, the CRM should be the same for both the primary and check laboratory. This introduces a logistical challenge, as the stream of check samples must have the CRMs inserted throughout the submission to the second laboratory to be blind to the check laboratory. The most reliable method of submitting check samples is to have the primary laboratory bag 2 pulp samples from the pulverizer every 20 samples and compile these into sets of 90 samples to be returned to the client. The client receives these sets of 90 pulp bags and inserts the CRM appropriately. A pulverized blank can also be inserted if desired to monitor for possible contamination in the analytical process, thus making an analytical batch of close to 100 samples – the numbers should be adjusted to match the batch size used at the check laboratory, with allowance for insertion of laboratory QC samples. The same sample numbers as the original sample can be used so that the results from the secondary laboratory can be imported into the database, but care needs to be taken not to overwrite the original analyses. They should be stored in a separate table from the primary analyses as QC samples, or designated as check assays.
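The compilation of a blind check batch from returned pulps might be sketched as follows. The CRM names, blank label and fixed random seed are illustrative assumptions; the key point is that the QC insert positions are randomized by the client, not the laboratory:

```python
import random

def build_check_batch(pulp_ids, crm_names, include_blank=True, seed=42):
    """Sketch: take a set of returned pulps (e.g. 90) and insert the
    project CRMs, and optionally a pulverized blank, at random positions
    so they remain blind to the check laboratory."""
    rng = random.Random(seed)  # fixed seed only so the sketch is reproducible
    batch = list(pulp_ids)
    qc_ids = list(crm_names) + (["BLANK"] if include_blank else [])
    for qc_id in qc_ids:
        batch.insert(rng.randrange(len(batch) + 1), qc_id)
    return batch

# 90 returned pulps plus three CRMs and a blank -> a 94-item submission
pulps = [f"P{i:04d}" for i in range(90)]
batch = build_check_batch(pulps, ["CRM-A", "CRM-B", "CRM-C"])
```

The recorded insert positions stay in the project database so that, on receipt of results, each QC item can be matched back to its certified values.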

Interpretation of the check laboratory results can be simple but may become complicated if a large range of concentrations is involved and the data are skewed. Means, or averages, are not robust statistics for many geochemical datasets because populations are not normally distributed, even after a log transformation. Comparison of the averages for check assays must therefore be done with great caution and based on an understanding of the underlying distributions but, where appropriate, statistical tests such as Student's t-test can be used to compare means.

Some practitioners simply use an X–Y scatter plot and a correlation calculation to compare the two laboratories. This procedure does not reveal the details in the data and should be discouraged. It is possible to have a high correlation coefficient between the primary and check assays while a slope that deviates from one indicates a relative bias between the two datasets. The comparisons must be plotted in a quantile–quantile (Q–Q) format to ensure the relationships hold for the entire range of grades being examined.

A Q–Q plot for zinc is shown in Figure 7. The results from the check laboratory are in excellent agreement with the original (primary) laboratory. Figure 8 shows a Q–Q plot for fire-assay gold. The data are in good agreement up until c. 10 000 ppb (10 ppm), after which there is a negative bias in the check laboratory results. This bias could be traced back to a change from an instrumental to a gravimetric finish at 10 ppm. Figure 8 illustrates how sensitive Q–Q plots are for revealing subtle biases.
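A Q–Q comparison reduces to pairing the sorted values from each laboratory; the example values below are hypothetical:

```python
def qq_pairs(primary, check):
    """Pair quantile-ordered values from the primary and check
    laboratories; plotting these pairs gives a Q-Q plot."""
    return list(zip(sorted(primary), sorted(check)))

def relative_bias_by_quantile(primary, check):
    """Percent difference of each check quantile relative to the matching
    primary quantile, revealing grade-dependent biases that a single
    correlation coefficient hides."""
    return [100.0 * (c - p) / p for p, c in qq_pairs(primary, check) if p > 0]

primary = [1.2, 0.4, 5.0, 2.1, 9.8]   # hypothetical primary assays
check = [0.38, 1.25, 2.0, 4.6, 8.9]   # hypothetical check assays
bias = relative_bias_by_quantile(primary, check)
```

Inspecting the bias values quantile by quantile shows where in the grade range any divergence begins, analogous to the break at 10 ppm in Figure 8.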

It is important not to introduce selection bias by preferentially selecting high-grade samples for check assays. A selection of samples based only on grade introduces heteroskedasticity (i.e. the variability of an element varies across the concentration range), which will create an apparent bias that is not real. While the selection of samples for check assay may be biased to those within the resource shell, the actual selection of samples should be random within it. The stability of pulps used for check assays is also important. Oxidation of sulfides may occur between the time that original assays were reported and when the samples were resubmitted for check assays. In some clay-rich samples, changes in moisture will also impact the check assays. Care should also be taken if bulk pulps used for check assays are transported any distance by road. The vibration of fine-grained material may result in gravity settling of heavy minerals. Pulp samples transported in such a way should be re-mixed upon arrival at the check laboratory or a false-negative bias in data from the check laboratory may result.

Summary of check sampling

Check assays form an important verification of assay results obtained over the course of an advanced mineral project. Care must be exercised in selecting samples for check assay, ensuring that the check laboratory is using analytical methods comparable to the original assay laboratory, and sending the same CRM that were submitted to the original assay laboratory. It should be anticipated that the check laboratory will have a bias relative to the original laboratory.

The ability to demonstrate the relative precision (RP) of sampling and subsampling is essential for appropriate exploration decision-making, the calculation of mineral resources, grade control in mining operations and for the reporting of assay results to the public. The RP at each stage of sampling should be estimated. While RP can be estimated for any grade sampled, it becomes particularly important to understand the relative precision at cut-off grades used to estimate resources or undertake grade control in a mining operation.

Theoretical considerations

A basic concept to understand is that RP is a function of concentration in that it decreases with increasing concentration close to the detection limits for a given analytical method, after which it levels out (Fig. 9). A plot of precision v. concentration will therefore vary with the analytical method employed, as illustrated by pulp duplicate analyses in Figure 9 for four different fire-assay methods for gold, each of which has a different detection limit. The three standard fire-assay methods have similar method precisions, whereas the ore-grade fire-assay method has a high method RP. Note that the change in RP with increasing concentration is rapid within an order of magnitude of the detection limit and that the method RP is generally not defined until the concentration is at least two orders of magnitude above the detection limit. This has implications for method selection for different types of analyses because data within two orders of magnitude of the detection limit will be less precise than data more than two orders of magnitude above the detection limit.

As a practical example, if a fire-assay method with a gravimetric finish having a detection limit of 0.05 ppm were used routinely on a gold project for which the cut-off grade was 1 ppm, then data around the cut-off grade would be less precise than data generated by methods using an instrumental finish with a lower detection limit. Gravimetric finishes are usually reserved for re-assay of higher-grade samples above a certain threshold value for this reason. This may pertain to those samples for which the gold assay exceeds the upper limit of detection for the selected analytical method.
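To illustrate the shape of the precision–concentration curve, a simple two-component error model can be used: a constant absolute error near the detection limit plus a constant relative (method) error, added in quadrature. This model is an assumption for illustration only, not the relationship underlying Figure 9:

```python
def relative_precision_pct(concentration, detection_limit, method_rsd_pct=3.0):
    """Illustrative model of relative precision (RP) v. concentration:
    a constant absolute error term (assumed here to be half the detection
    limit) dominates near the DL, while the constant method RSD dominates
    well above it."""
    sd_abs = detection_limit / 2.0                      # assumed constant term
    sd_rel = method_rsd_pct / 100.0 * concentration     # constant relative term
    sd_total = (sd_abs ** 2 + sd_rel ** 2) ** 0.5
    return 100.0 * sd_total / concentration

# RP worsens rapidly within ~1 order of magnitude of the DL (0.05 here)
# and levels out towards the method RSD two orders of magnitude above it.
curve = [(c, relative_precision_pct(c, detection_limit=0.05))
         for c in (0.1, 0.5, 5.0, 50.0)]
```

With these assumed parameters, RP is above 20% at twice the detection limit but flattens towards the 3% method RSD at 1000 times the detection limit, reproducing the behaviour described for Figure 9.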

There are two fundamental approaches to estimating the relative precision associated with sampling and subsampling:

  1. theoretical, based on sampling theory (Gy 1982; Francois-Bongarcon 1998) and a detailed understanding of the mineralogy, grain-size distribution, liberation and grade of the material to be sampled;

  2. empirical, based on the analysis of duplicate samples.

Sampling theory and observation indicate that uncertainties increase with increasing grain size, decreasing sample mass (Stanley 2007) and decreasing grade. The first two variables are a function of the sampling and sample preparation process, and are therefore subject to control within limitations. The grade of the material being sampled is often unknown in an exploration context and can only be estimated in more advanced stages of resource development. For this reason, an empirical approach is preferred for practical QC.

The use of duplicate samples to estimate uncertainties associated with sampling and subsampling is well established within the resource industry (Thompson and Howarth 1978; Sinclair and Bentzen 1998; Stanley 2006; Stanley and Lawie 2007a, b; Stanley and Smee 2007; Abzalov 2008). The two main approaches to quantitative analysis of duplicate pairs involve regression analysis of the data, as initially proposed by Thompson and Howarth (1978) and modified by Stanley and Lawie (2007b), or the calculation of the average coefficient of variation (CVAVE) or relative standard deviation, as argued by Stanley and Lawie (2007a) and Abzalov (2008, 2011). Both approaches require a considerable amount of duplicate-pair data at relevant grades that are at least an order of magnitude above the LLD.

In general, poor precision is associated with assaying for gold, as gold may occur as discrete particles and concentrations of economic value are very low (e.g. Sketchley 1998). In contrast, precision for many base-metal projects is less problematic: economic concentrations are many orders of magnitude higher than for gold projects, and the host minerals, such as sulfides, are more evenly distributed in a prepared sample.

The regression analysis method of Thompson and Howarth (1978) involves grouping concentration-sorted data into sets of 11 duplicate pairs and then fitting a linear regression line to a plot of the group means v. the median of absolute differences (as a proxy for the average standard deviation) between the duplicate pairs. This approach assumes that the errors are normally distributed. Where the errors deviate from a Gaussian model, as in the case of coarse gold projects, this method gives a biased result that underestimates the RP compared to other approaches (Stanley and Lawie 2007b). Stanley (2006) and Stanley and Lawie (2007b) have described a modified approach that produces unbiased estimates of precision using the Thompson and Howarth (1978) regression framework. It uses the root mean square (RMS; Stanley 2006, equation 1) calculation of an average standard deviation for each individual group of 11 samples rather than the median absolute difference. However, Stanley and Lawie (2007b) preferred an approach that regresses all duplicate variances against duplicate means using a quadratic model, provided there is a sufficient number of duplicate pairs, thus avoiding the need to group the data. The square root of this model then defines a linear relationship between standard deviation and the mean of the duplicate pairs that does not require errors to be normally distributed.
σ = √[Σ(aᵢ − bᵢ)² / (2n)]

where σ represents the RMS group standard deviation, n is the number of duplicate pairs in the group, and aᵢ and bᵢ are the paired values.
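The grouped RMS procedure can be sketched in code. The following is an illustrative Python implementation, not part of the paper's Excel supplement; the function name, the use of NumPy and the ordinary least squares fit via `numpy.polyfit` are our own choices:

```python
import numpy as np

def rms_group_sd(a, b, group_size=11):
    """Modified Thompson-Howarth estimate (after Stanley 2006; Stanley and
    Lawie 2007b): sort duplicate pairs by their means, form groups of
    group_size pairs, compute the RMS standard deviation of each group from
    the pair differences, then fit an OLS line of group sd v. group mean.
    Requires at least two full groups. Returns (slope, intercept)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    means = (a + b) / 2.0
    order = np.argsort(means)
    a, b, means = a[order], b[order], means[order]

    group_means, group_sds = [], []
    for i in range(0, len(a) - group_size + 1, group_size):
        sl = slice(i, i + group_size)
        # RMS group standard deviation: sqrt(sum((a - b)^2) / (2n))
        sd = np.sqrt(np.sum((a[sl] - b[sl]) ** 2) / (2 * group_size))
        group_means.append(means[sl].mean())
        group_sds.append(sd)

    slope, intercept = np.polyfit(group_means, group_sds, 1)
    return slope, intercept
```

For synthetic duplicates with a 5% relative standard deviation, the fitted slope recovers a value close to 0.05, as expected for a purely multiplicative error model.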
The approach favoured here is the calculation of CVAVE, as recommended by Stanley and Lawie (2007a) and supported by Abzalov (2008); see Appendix C (Supplementary material).
CVAVE (%) = 100 × √[(2/n) × Σ(aᵢ − bᵢ)² / (aᵢ + bᵢ)²]

where n is the number of duplicate pairs, and aᵢ and bᵢ are duplicate values. Use of the CVAVE approach has the advantage of being related to other common measures of relative error, such as RP (RP = 2 × CVAVE), relative variance (CVAVE²), absolute relative difference (ARD = √2 × CVAVE) and half absolute relative difference (HARD = (√2/2) × CVAVE). The mathematical theory behind the use of duplicate sample pairs to estimate RP errors will not be repeated in this paper, other than to point out that the maximum possible CVAVE for duplicate pairs is 141% (√2 × 100%) without having to undertake more than a single replicate (i.e. duplicate) analysis (Stanley and Lawie 2007a). Instead, the reader is directed to the publications previously referred to for details.
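The CVAVE calculation itself is compact. The following Python sketch is an illustrative alternative to the Excel spreadsheet of Appendix C; the function names are ours:

```python
import numpy as np

def cv_ave(a, b):
    """Average coefficient of variation (%) from duplicate pairs
    (Stanley and Lawie 2007a):
        CVave = 100 * sqrt((2/n) * sum((ai - bi)^2 / (ai + bi)^2))
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    n = len(a)
    return 100.0 * np.sqrt((2.0 / n) * np.sum((a - b) ** 2 / (a + b) ** 2))

def related_measures(cv):
    """Common measures related to CVave: relative precision (RP), absolute
    relative difference (ARD) and half absolute relative difference (HARD)."""
    return {"RP": 2.0 * cv,
            "ARD": np.sqrt(2.0) * cv,
            "HARD": np.sqrt(2.0) / 2.0 * cv}
```

A pair such as (10, 0) yields the theoretical maximum CVAVE of √2 × 100% ≈ 141%, while identical pairs yield 0%.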

Practical considerations

Duplicate data are typically collected during sampling and at each stage of sample-mass reduction. Sample-mass and grain-size reduction occurs during coarse crushing of rock and drill-core samples, and pulverization of all rock samples, generally to a nominal grain size of less than 75 µm. Soil and stream-sediment samples are typically sieved, first in the field and possibly again at the laboratory. This results in a decrease in both grain size and sample mass.

Field duplicate sampling from drill core or drill cuttings, or field sampling, is the responsibility of the company or individual collecting the samples. Coarse crush duplicates (often referred to as preparation duplicates), pulp duplicates and analytical duplicates are the responsibility of the laboratory undertaking sample preparation and analysis, and their results should be provided to the client. Preparation duplicates are generally collected randomly at the rate of 1 in 40 samples, whereas pulp duplicates are generally collected at a higher rate of 1 in 25 or 1 in 20 samples.

Field duplicates in surficial geochemical sampling programmes should be collected at the same time as the primary, or original, sample, using the same sampling method. In the case of soil samples, a fresh pit or auger hole should be dug within c. 2 m of the original sample site. In the case of stream sediments, which may be composite samples themselves, the field duplicate should be collected over the same reach of the stream from different subsample locations. A similar sampling strategy can be applied to rock-chip samples as well. The purpose of field duplicates is to determine the uncertainties associated with collecting a sample within the field over a short distance. In other words, they estimate how representative a sample is of the material being sampled.

Drill-core duplicates consist of half-core duplicates, quarter-core duplicates (half of a half-core sample), or a quarter-core sample cut from the remaining half-core after the primary half-core sample has been collected. The use of half-core duplicates leaves no core for future re-assay or further analytical work. The interpretation of data from quarter-core duplicates, however, requires a mass correction (Stanley 2014), as discussed in more detail later in this section. The use of half-core duplicates avoids this issue and is the simplest and preferred option, unless there is a need to preserve a complete record of the drill core for future reference. A modified approach to the collection of half-core field duplicates is to remove a thin slab of rock from the remaining half-core for retention in the core tray, resulting in a duplicate with nearly the same mass as the primary half-core sample. By convention, the assay from the initial half-core sample is the primary sample in the database and the duplicate half-core result is retained as a QC result.

Drill-core duplicates are not necessarily QC samples for two main reasons. First, it is very difficult to assign a quality expectation and an action to be taken in the event of a defined QC failure. Second, poor agreement between core duplicates is as likely to be controlled by geology, such as the orientation of mineralized veins or sulfide accumulations, as by failures at the laboratory, and the two causes cannot be differentiated. QC systems are designed to take corrective action when a QC failure is identified and to take steps to improve quality. If sampling half-core is the issue, the solution is either to submit whole core for analysis for all samples or to increase the diameter of the core drilled.

Core duplicates are nevertheless useful for assessing the uncertainty associated with drill-core sampling, which is of interest to resource modellers. It is possible to collect representative suites of drill-core duplicates for different rock types and styles of mineralization. A programme of duplicate core samples may not be necessary for the life of the project once it has been demonstrated that data-precision expectations have been met, and such a programme is not necessarily required by all regulations. However, it may be desirable to implement audits and oversight to ensure that the more mineralized halves of the core are not preferentially selected as the primary core sample.

Duplicate reverse circulation (RC) rock-chip samples are generally collected from the cyclone at the same time as the original sample using either a rotary or riffle splitter. Duplicate samples from percussion or auger drill samples, or from sludge, are more difficult to collect. Samples collected over the same interval of drilling should be cut to create the primary and duplicate sample, otherwise the primary and duplicate samples will originate from different depths within the drill hole. Underground duplicate channel samples and chip samples should be collected over the same interval lengths as the primary samples. Duplicate muck samples should be collected from muck piles using the same sampling strategy as the primary sample, although it can be difficult to obtain representative samples from coarse gold mineralization.

Primary and duplicate samples should be analysed within the same analytical batch to eliminate any between-batch variations which may also affect the results. Duplicate samples must be linked to their primary or parent samples in databases to allow easy pairing of the data for interpretation.

Duplicate data examples

Figure 10 shows a variety of plots used to assess the reproducibility of copper for analyses of 1000 field duplicates (−80-mesh stream-sediment samples) from a regional geochemical survey in Yukon, Canada, re-analysed by ICP-MS following an aqua regia digestion. Figure 10a is a modified Thompson–Howarth plot in which the standard deviation for groups of 11 samples has been estimated by the RMS method described by Stanley and Lawie (2007b). The slope of the linear ordinary least squares (OLS) regression is 9.1%, which compares favourably with the calculated CVAVE of 10.8% but, as is typical, is slightly lower (Abzalov 2008). Also shown are relative difference v. average grade plots (Fig. 10b), scatter plots of the duplicate data (Fig. 10c) and the CV from individual duplicate pairs v. average grade (Fig. 10d). All show a reduction in relative uncertainty with increasing grade, typical of representative samples with values generally well above the LLD. Notable is a single value that lies off the trend defined by the bulk of the data, which may represent an incorrectly matched duplicate pair.

The plots in Figure 10 can be contrasted with Figure 11, which shows the field duplicate data for ICP-MS gold from the same samples: unrepresentative analyses of −80-mesh stream-sediment samples analysed with a 0.5 g aliquot mass. The data are presented to show what poor data look like and to illustrate some of the pitfalls of the Thompson–Howarth approach. A high proportion of the results (70%) lie within an order of magnitude of the LLD. The CVAVE calculated for the 30% of the gold data more than an order of magnitude above the LLD is 61%, significantly higher than the CVAVE for copper in the same samples. The slope of the modified Thompson–Howarth plot is much greater than one and exceeds the theoretical limit of 1.41, even when the regression line is forced through the origin, as the regression is strongly influenced by a few outliers containing nuggetty gold grains in the highest-grade group of 11 samples. Removing the group with the highest gold values reduces the slope of the regression line to below the theoretical limit of 1.41, but there is still a poor fit between the bulk of the data and the remaining groups with high mean gold grades. The plots are dominated by values within an order of magnitude of the LLD, where the data are inherently imprecise. Notably, the expected decrease in uncertainty with increasing grade, once method precision improves above the LLD, is absent in Figure 11b–d. Instead, uncertainties increase with increasing grade in some samples due to the nuggetty distribution of gold particles. It can be concluded that the sampling, sample preparation and analytical methods employed were unsuitable for gold in the Yukon RGS (Regional Geochemical Survey) re-analysis programme. The aliquot size was too small for the grain size to provide a representative subsample, and representative analyses would require either a significant increase in sample mass or a decrease in grain size.

Once a linear regression model has been fitted to the data, the precision (P) for a given grade is calculated from twice the slope of the OLS regression line plus twice the Y-intercept divided by the concentration:
P = 2m + 2b/c

where c is the concentration or grade, b is the Y-intercept of the regression line and m is the slope of the regression line.
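As a worked example of this relationship, the short Python sketch below uses hypothetical fit parameters (a slope of 0.045 and a Y-intercept of 0.02 ppm, which are our own illustrative values, not from the paper):

```python
def precision_at_grade(m, b, c):
    """Relative precision at concentration c from a fitted line
    sd = m * mean + b (Thompson-Howarth style):
        P = 2*m + 2*b/c
    With m dimensionless and b in concentration units, P is a fraction;
    multiply by 100 to express it as a percentage."""
    return 2.0 * m + 2.0 * b / c

# Hypothetical fit: slope 0.045, intercept 0.02 ppm.
# At a 2 ppm grade: P = 2*0.045 + 2*0.02/2 = 0.11, i.e. 11%.
p = precision_at_grade(0.045, 0.02, 2.0)
```

Note how the intercept term dominates at low grades, which is why precision deteriorates rapidly as concentrations approach the detection limit.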

Examples of RP estimates for different duplicate types at various grades for an advanced project characterized by coarse gold are illustrated in Figure 12. Invariably, the main source of uncertainty lies in the original sample collection (Stanley and Smee 2007), and this is a direct function of the collected sample mass (Stanley 2014). Subsequent rigour in sample preparation and analysis cannot compensate for large uncertainties associated with the collection of non-representative samples, but the overall uncertainty in field duplicates can be reduced somewhat by optimizing sample preparation to maximize sample homogeneity.

CVAVE calculated from field sample duplicates includes all the errors associated with the subsequent sample preparation and analytical steps (i.e. the errors are additive). The difference between CVAVE from the field duplicates and the preparation duplicates isolates the uncertainty associated with sampling (i.e. the sampling precision). However, calculating the sampling precision is not a simple case of subtracting the preparation CVAVE from the sample CVAVE as standard deviations are not subtractive (Stanley 2014). Instead, CVAVE data must be converted to variances before the difference can be calculated and sampling CVAVE recalculated by taking the square root of that difference (Stanley and Smee 2007).
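The variance arithmetic described above can be expressed as a short helper. This is an illustrative sketch; the function name and the guard against a nested CV exceeding the total are our own additions:

```python
import math

def isolate_cv(total_cv, nested_cv):
    """Isolate the CV (%) of one stage by subtracting relative variances
    (Stanley and Smee 2007): CV_stage = sqrt(CV_total^2 - CV_nested^2).
    Standard deviations are not subtractive, so both CVs are squared to
    variances, differenced, and the square root taken."""
    if nested_cv > total_cv:
        raise ValueError("nested CV exceeds total CV; estimates are too noisy")
    return math.sqrt(total_cv ** 2 - nested_cv ** 2)

# Hypothetical values: field duplicates 20%, preparation duplicates 12%
# -> sampling CV = sqrt(400 - 144) = sqrt(256) = 16%.
```

Note that subtracting the CVs directly (20 − 12 = 8%) would understate the sampling contribution by half in this example.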

The mass of samples collected also has implications for the collection of drill-core duplicates and the interpretation of the duplicate data. Drill-core duplicates should be collected during the initial sampling phase as either half- or quarter-core duplicates (i.e. half of a typical half-core sample; the a priori method of Stanley 2014) or, if collected at a later stage, may involve the collection of half of the remaining half-core (i.e. quarter-core repeat analysis; the a posteriori method of Stanley 2014). The latter is not a true field duplicate, as it is analysed separately from the original sample and a component of batch variation will have been introduced into the comparison of data. As a quarter-core duplicate has, on average, half the mass of a typical half-core sample, the CVAVE estimated from these duplicates must be corrected for the mass difference, as described by Stanley (2014). A correction of 0.5 is applied to the sampling variance and the CVAVE recalculated by adding the adjusted sampling relative variance to the relative variance calculated from the CVAVE for the preparation duplicates. The correction differs where a quarter-core duplicate is compared with an original half-core sample: a weighting of three-quarters, rather than half, of the sampling variance is used to recalculate the total CVAVE for the core duplicates. The reader is referred to Stanley (2014) for the theoretical justification and details of this correction.
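Our reading of this mass correction can be sketched as follows. The function and parameter names are hypothetical, the packaging of the weightings into a single `mass_factor` argument is our own, and readers should consult Stanley (2014) for the derivation:

```python
import math

def mass_corrected_core_cv(duplicate_cv, prep_cv, mass_factor=0.5):
    """Mass correction for quarter-core duplicate CVave (%), after Stanley
    (2014). The sampling relative variance is isolated from the duplicate
    and preparation CVs, scaled by mass_factor, then recombined with the
    preparation relative variance. Per the text, mass_factor is 0.5 for a
    pair of quarter-core duplicates and 0.75 when a quarter-core duplicate
    is compared with an original half-core sample."""
    sampling_var = duplicate_cv ** 2 - prep_cv ** 2
    return math.sqrt(mass_factor * sampling_var + prep_cv ** 2)
```

With hypothetical values of 25% for the quarter-core duplicate CVAVE and 15% for the preparation duplicate CVAVE, the 0.5 weighting gives √(0.5 × 400 + 225) ≈ 20.6%, and the 0.75 weighting gives √(0.75 × 400 + 225) ≈ 22.9%.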

The Thompson–Howarth method of estimating precision may be sensitive to outlying grouped data with high relative errors that distort the regression fit. For positively skewed datasets, such as those encountered in advanced-stage drilling projects where duplicate data should be plentiful, it is beneficial to state CVAVE for different grade ranges, particularly around the expected cut-off grade. For the 4500 laboratory internal pulp duplicate pairs plotted in Figure 13, the overall RP calculated using CVAVE is 28%, because the average is influenced by data close to the method LLD. However, for gold values over 2 g t−1 the RP is estimated at 9%, which is the expected method precision for routine 50 g fire assays.
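Reporting CVAVE by grade range, as suggested above, amounts to binning pairs by their means before applying the CVAVE formula. This sketch is illustrative only; the function name and the bin edges in the example are our own assumptions:

```python
import numpy as np

def cv_ave_by_grade(a, b, bins):
    """CVave (%) computed separately for grade ranges, where the grade of a
    pair is taken as its mean. `bins` is a sequence of (low, high) ranges;
    ranges with no pairs map to None."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    means = (a + b) / 2.0
    out = {}
    for lo, hi in bins:
        mask = (means >= lo) & (means < hi)
        n = int(mask.sum())
        if n == 0:
            out[(lo, hi)] = None
            continue
        out[(lo, hi)] = 100.0 * np.sqrt(
            (2.0 / n) * np.sum((a[mask] - b[mask]) ** 2
                               / (a[mask] + b[mask]) ** 2))
    return out
```

In practice the bins would be chosen to bracket the anticipated cut-off grade, so that the precision of the ore/waste decision is reported directly.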

Using the same approach taken with core duplicates, the uncertainties associated with subsampling the coarse-crush material can be isolated by subtracting the relative variance (squared CVAVE) calculated from pulp duplicates from the relative variance calculated from coarse-crush duplicates (Stanley and Smee 2007). The pulp duplicates are also typically used to calculate the method precision, which includes the uncertainties associated with subsampling, digestion and analysis. Note that all RP estimates decrease rapidly within the first order of magnitude above the method detection limit (Fig. 9).

It should also be recognized that large numbers of duplicate pairs are required to estimate CVAVE. The use of randomly selected samples for duplicate analysis avoids bias but also may produce many analyses either close to or below the method LLD for some commodities, such as gold- or platinum-group elements, or below the cut-off grade to be used in resource estimation. The proportion of samples with values more than an order of magnitude above the LLD may be as low as 20–30% for some gold projects. Data within an order of magnitude of the LLD have minimal information as the uncertainties associated with the final stages of preparation and analysis are inherently large. They would typically be excluded from the calculation of CVAVE or, where regression analysis is used, have minimal influence on the fit of the regression line compared to the higher-grade samples. Ultimately, resource geologists need to know the uncertainties associated with separating ore and waste (i.e. at the anticipated cut-off value).

Therefore, the selection of field duplicates should be biased towards mineralized zones where these can be identified visually but should occur at the time of the original sampling before any values have been determined. Alternatively, the frequency of duplicate collection could be increased for those projects where a significant proportion of samples are likely to yield results at or below the method LLD. The selection of preparation and pulp duplicates is generally done randomly by laboratories, but it is possible to designate additional samples for analysis using preparation or pulp duplicates, such as the selected field duplicates. This approach also has the added advantage of obtaining data from each stage of the sampling, preparation and subsampling of the same material. Where mineralized material is not readily identifiable, as in the case of most soil or stream-sediment surveys, a higher rate of duplicate samples should be collected to ensure sufficient data well above the LLD are available for analysis.

Once the CVAVE have been isolated for each subsampling stage, the contributions to the total CVAVE associated with analysis of those samples become apparent. There are practical limitations to the mass of samples which can be collected in the field, such that it may not be possible or economically viable to significantly lower the uncertainties associated with sampling (e.g. routinely drilling with a large diameter core). However, as the sample preparation and subsampling stages also contribute to the total uncertainty, it is possible to lower the total uncertainty somewhat by lowering the uncertainties associated with sample preparation by pulverizing a larger split of material following coarse crushing, by analysing a larger sample by using a bulk analytical method (Stanley 2007), or by crushing, pulverizing or sieving to a finer grain size. Note, however, that improvements to subsample precision do not compensate for poor field sampling RPs due to the nuggetty distribution of the phase of interest in the material being sampled. They serve only to minimize the uncertainties associated with the various stages of sample preparation and subsampling (i.e. they do not make a bad situation worse) but will not significantly decrease the total relative error where this is dominated by the sampling error.

Application of CVAVE

The collection of duplicate data is particularly important in the early stages of a project to determine where CVAVE might be unacceptably high so that steps can be taken to improve sampling and subsampling. The recognition of whether the collection of samples or subsamples is sufficiently precise may be based on the experience of the practitioners involved in the project, or it can be done by benchmarking project CVAVE estimates against published CVAVE data for different commodities and mineral deposit types (e.g. Abzalov 2008, 2011). These estimates are indicative only and provide a guide for the sorts of uncertainties which would be considered acceptable for different duplicate types and commodities. These published values also illustrate how data-quality expectations vary for different commodities and different mineral deposit types, such that a one-size-fits-all approach is not appropriate. Ultimately, each deposit is unique and estimates of CVAVE will reflect the distribution of minerals within the deposit at the level of sampling once optimized to minimize CVAVE.

Whether steps are undertaken to minimize CVAVE is an economic decision based on data-quality expectations for the project at a particular point in time. Ultimately, the data need to be representative of the material being sampled or subsampled to the satisfaction of the QP/CP responsible for the public release of data.

The previous discussion involves the collection of duplicate samples to estimate the CVAVE at various stages of a mineral programme. These are not QC samples per se, as no determination is made from an individual duplicate pair as to whether an acceptable CVAVE has been attained. Individual pairs can, however, be assessed against a tolerance: the percentage difference between the duplicate analyses, relative to the average of the pair, should be less than the tolerance value used by the laboratory for its internal QC of the specific analytical method. Assessed in this way, duplicate samples act as QC samples to indicate whether RP falls within acceptable bounds. Regardless of whether individual duplicate pairs are assessed or a large volume of duplicate data is used to estimate CVAVE at different stages of sampling, the data should be used to drive continuous improvement in data quality to meet the quality expectations of the project.


The relative error at various stages of field sampling and preparation is typically monitored using duplicate pairs that are analysed within the same sample batch. There are several graphical and numerical approaches to the interpretation of data from duplicate pairs, namely a modified Thompson–Howarth regression analysis using the RMS approach and the calculation of CVAVE. Both approaches provide a quantitative estimate of the RP at a particular sampling stage, but the regression approach tends to underestimate RP compared to CVAVE. CVAVE can be benchmarked against published data for similar commodities and styles of mineralization but should be used to drive improvements in the sampling, sample preparation and selection of analytical methods for a particular resource project.

Public reporting codes generally stipulate that modern mineral exploration and development programmes include a QA/QC programme, although they provide no guidance as to how this should be achieved. This paper provides practical advice on the design of QA/QC programmes and the interpretation of QC data for the assessment of data accuracy, RP and cross-contamination. Furthermore, a well-thought-out QA/QC programme identifies uncertainties in the data being collected and provides an opportunity to rectify deficiencies with respect to data-quality expectations. In this way, an effective QA/QC programme can be used to minimize uncertainties related to sampling, sample preparation and analysis, and to identify those uncertainties which cannot be practically reduced and therefore present a risk to decision-makers and investors. QA/QC programmes should be dynamic and flexible enough to evolve with the various stages of resource development and with the complexities of individual projects.

The authors have benefited from interactions with many mineral exploration and mining companies over several decades. We have all benefited from opportunities to review quality control (QC) data from a wide variety of projects and commodities that have informed the recommendations and discussion contained in this article. We would also like to acknowledge the Association of Applied Geochemists for approaching us to undertake this review. Credit is also due to a very thorough review by Clifford Stanley and an anonymous reviewer. However, the views expressed remain our own and are based on our collective practical experience with quality assurance and QC (QA/QC) programmes.

BWS: writing – original draft (lead); LB: writing – original draft (equal); DA: writing – original draft (equal); DH: writing – review & editing (supporting).

This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.

The authors declare they have no known competing financial interests or personal relationships which could have appeared to influence the work reported in this paper.

Not all the data used to generate the graphs presented in this paper are available due to their confidential nature. Please contact the corresponding author to obtain those data that are publicly available.

Appendix A: Glossary of QA/QC terminology

Accuracy: The degree to which an analysis of a substrate, or the mean of a set of analyses, approaches the ‘true’ concentration.

Barren wash: The use of a barren coarse quartz wash in crushing and pulverizing equipment immediately after samples known to be of high grade to minimize cross-contamination or carry-over to the next sample in the processing stream.

Bias: A systematic difference between results obtained from one laboratory compared to either results from a certified reference material (CRM) or a second laboratory because of differences in procedures, instrumentation and/or calibration. One goal of a quality assurance/quality control (QA/QC) programme is to identify and quantify biases.

Bulk density: The mass of an object or material divided by its volume, including the volume of its pore spaces. Units of measurement are mass per volume (e.g. g cm−3).

Carry-over: See cross-contamination.

Certified reference materials (CRMs): ‘Controls’ or standards with known concentrations within established limits used to validate analytical measurement methods, or to calibrate instruments. Accepted concentrations and standard deviations are established by a round-robin analysis involving multiple determinations in several laboratories. Concentration information is traceable to a certificate or other documentation.

Chain of custody: This refers to all steps in a sampling chain that take possession of the sample, including geologists, samplers, transporters and receivers (usually a laboratory). It provides a record of the sequence of entities that have custody of samples as they move through a shipping chain, providing the ability to trace a material back to its origin. Chain of custody will include documentation such as sample origin, transport documentation and receipt at a laboratory.

Check assay: A check assay is used to compare sample analyses between laboratories for consistency. Samples are prepared and analysed entirely at one laboratory, and the remaining pulps are resubmitted to a second laboratory for identical analysis; note that preparation is not performed at the second laboratory. Any two laboratories may have inherent biases between them: one may report consistently high values and the other consistently low values, yet both may well be within acceptable limits for bias.

Coefficient of variation (CV): A standard measure of variability in science based on the standard deviation of a dataset divided by the mean of that same dataset, represented as a percentage. Also referred to as relative standard deviation (RSD). An average coefficient of variation (CVAVE) can be calculated from a population of duplicate pairs from different stages of sampling and subsampling. A root mean square (RMS) method for calculation of CV is recommended over a simple RSD (Abzalov 2008).

Confidence limits: The limits within which the certified, or average, value of a certified reference material (CRM) is expected to lie with a stated probability, typically 95%. A 95% confidence level is equivalent to approximately two standard deviations if normality is assumed; in other words, a result would be expected to lie within these limits 19 times out of 20.

Control sample: Control samples are any type of well-characterized sample used to assure that analyses are properly performed so that results are reliable. The insertion rate of quality control samples should be fit for purpose (Bettany and Stanley 2001).

Cross-contamination: The contamination of the subsequent sample(s) in a sample collection or preparation stream by sample preparation or analytical equipment, either in the field or at the laboratory. It is synonymous with the term ‘carry-over’, typically used by assay laboratories. Note: it is not possible to have no cross-contamination or carry-over between samples in a commercial laboratory setting, so a limit of c. 1% is generally considered to be acceptable. Where the processing of high-grade samples would lead to ore-grade level cross-contamination, the use of coarse blanks or barren washes between samples should be used to minimize the problem.

Detection limit: The level at which an instrumental signal can be detected with statistical certainty by a given method. Typically, the lower limit of detection (LLD or LOD) is three times the standard deviation of the analysis of a blank. The detection limit can also be considered as the concentration below which values are not significantly different from zero, given measurement error. This must be distinguished from the lower limit of quantification (LLQ or LOQ), which is typically ten times the standard deviation of the measurement of an analyte in a blank. In practical terms, for most commercial assay laboratories, the LLQ is typically an order of magnitude above the LLD. Note that methods also have an upper limit of detection (ULD), above which no estimate of concentration can be obtained.

Duplicate sample: A sample collected and analysed concurrently, i.e. in the course of sample preparation, under comparable conditions to the primary sample. It is useful to distinguish the stage at which the duplicate was collected (i.e. field duplicate, preparation or coarse-crush duplicate, pulp duplicate). Synonyms include replicate, repeat or second cut.

Field or coarse blank sample: A sample that consistently has no measurable, or a negligible, concentration of the analytes of interest. Used to monitor contamination of specimens during field handling, transportation and sample preparation. Blank specimens can be inserted into a batch in the field or laboratory and are analysed in the same batch as the field samples.

Field duplicate sample: A second sample of equal size (mass) to the original, collected as close as possible to the location of the original sample under comparable conditions and using an identical sampling procedure. Field duplicates are typically collected at the same time as the original and analysed within the same analytical batch to ensure that the only difference in the results is related to the sampling variance.

Field repeat sample: A second sample of equal size (mass) to the original collected at a sample site at a later date and under possibly different conditions but using identical sampling and analytical procedures. Often designed to test environmental factors, such as seasonal effects on surficial samples, or to validate historical results for the purpose of due diligence.

JORC: The Joint Ore Reserves Committee is an initiative of the Australasian Institute of Mining and Metallurgy (AusIMM) and the Australian Institute of Geoscientists (AIG) formed to develop a code for the reporting of exploration results, mineral resources and mineral reserves for publicly listed companies in Australia.

Laboratory duplicate: A second weighing of a sample taken from the same container as the original under laboratory conditions and analysed in the same analytical batch.

Laboratory repeat or check sample: A second aliquot of sample from a pulp envelope. Laboratory repeats are analysed in different batches to the original aliquot.

Memory effect: Instrumentation lines may be contaminated by high levels of an analyte leading to cross-contamination of the following samples (e.g. an ore-grade sample analysed by inductively coupled plasma mass spectrometry (ICP-MS)). For this reason, analytical methods must be appropriate for the expected levels of the elements of interest within the samples. Many laboratories will pre-screen samples before presenting them to instrumentation with the lowest lower limits of detection (LLD). For example, samples for ICP-MS are routinely pre-screened by inductively coupled plasma atomic emission spectrometry (ICP-AES) to identify samples with high levels of elements.

NI 43-101: National Instrument 43-101 Standards of Disclosure for Mineral Projects provides guidelines and a structure for the reporting of technical information for mineral projects in Canada under applicable securities regulations.

Normal distribution: Also known as the Gaussian distribution, this is a probability distribution that is symmetric about the mean, showing that data near the mean are more frequent in occurrence than data far from the mean. In graph form, the normal distribution appears as a bell curve.

Performance gates: A term used by some certified reference material (CRM) manufacturers to represent control limits at two or three standard deviations above and below the certified, or average, value based on the pooled standard deviation of cleaned data from a round-robin certification process.

Precision: How close measurements of the same item are to each other. Often referred to as ‘repeatability’. In practice, relative precision (RP), presented as a percentage, is calculated from duplicate analyses, either by regression analysis or by calculating the average coefficient of variation (CVAVE) and multiplying it by two.
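As a sketch of the CVAVE approach, the short Python functions below implement one widely used root-mean-square formulation of duplicate-pair coefficients of variation (after Stanley and Lawie); the function names are illustrative and do not reproduce the supplementary spreadsheet:

```python
import math

def cv_ave(pairs):
    """Average coefficient of variation (CVAVE, %) from duplicate pairs.

    Each pair's squared CV is 2*(a - b)**2 / (a + b)**2, and the pair
    values are averaged in quadrature (root mean square).
    """
    total = sum(2.0 * (a - b) ** 2 / (a + b) ** 2 for a, b in pairs)
    return 100.0 * math.sqrt(total / len(pairs))

def relative_precision(pairs):
    """Relative precision (%) as defined above: twice CVAVE."""
    return 2.0 * cv_ave(pairs)
```

A set of identical duplicate pairs returns a CVAVE of 0%, while increasingly discordant pairs drive both CVAVE and the relative precision upwards.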

Preparation, coarse or crusher duplicate sample: A second split of a sample collected after coarse crushing. Also sometimes referred to as a second cut.

Pulp duplicate: A second sample collected from the pulverized field sample after the material has been reduced to a powder and analysed in the same analytical batch as the original. Many laboratories identify these as repeats, which is a common source of confusion.

Quality assurance (QA; exploration and mining): A programme for the systematic monitoring and evaluation of the various aspects of a project, service or facility to ensure that standards of quality are being met. Also, processes and procedures designed to avoid error. In an exploration setting this means minimizing the numbers of false-positive and false-negative responses.

Quality control (QC; exploration and mining): An aggregate of activities (such as monitoring of accuracy, precision, contamination and data errors) designed to ensure adequate data quality as dictated by regulatory authorities or quality expectations of the project.

Round robin: The act of developing a consensus set of statistics for a certified reference material (CRM) by compiling multiple analyses from multiple laboratories for a particular set of analytes. Laboratories are typically aware that they are involved in a round-robin process.

Sampling theory: A theoretical approach to estimating the precision associated with sampling various materials based on a detailed knowledge of grade, grain-size distribution, grain shape, liberation and mineralogy of commodity minerals. An empirical approach to estimating precision based on the collection of duplicate data is preferred as this detailed information is typically lacking in early-stage projects.

SAMREC: The South African Code for the Reporting of Mineral Resources and Mineral Reserves.

Shewhart chart: A standard graphical presentation of quality control (QC) analytical data in sequential order to assess variability, bias and analytical drift over time. Also referred to as control or run charts. Control limits, or performance gates, are often plotted with the data to allow the graphical detection of QC failures.
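The control-limit logic behind a Shewhart chart can be sketched in a few lines of Python. The warning (2 standard deviations) and failure (3 standard deviations) limits and the seven-point run rule used here are common conventions rather than a universal standard:

```python
def shewhart_flags(values, certified, sd):
    """Flag each CRM result against Shewhart control limits: beyond
    3 sd of the certified value = failure, beyond 2 sd = warning."""
    flags = []
    for v in values:
        z = abs(v - certified) / sd
        if z > 3:
            flags.append('failure')
        elif z > 2:
            flags.append('warning')
        else:
            flags.append('ok')
    return flags

def has_drift(values, certified, run_length=7):
    """Detect analytical drift as a run of `run_length` consecutive
    results on the same side of the certified value (a common run
    rule; the exact run length varies between programmes)."""
    run, last_side = 0, 0
    for v in values:
        side = 1 if v > certified else (-1 if v < certified else 0)
        run = run + 1 if side == last_side and side != 0 else (1 if side != 0 else 0)
        last_side = side
        if run >= run_length:
            return True
    return False
```

Plotting the values in batch order with the certified value and the 2 sd and 3 sd limits then gives the conventional chart; the functions above simply automate the visual checks.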

Specific gravity: The ratio of the density of a substance to the density of a standard, usually water for a liquid or solid. This measurement is a ratio and therefore does not have units.

Standard reference material (SRM): A material that has had one or more of its property values certified by a technically valid procedure and is accompanied by, or traceable to, a certificate or other documentation.

Tolerance limits: A measure of homogeneity within a certified reference material (CRM) in which a certain percentage of subsample analyses (typically 95%) of the material are expected to lie within the stated limits, or interval, more than 99% of the time. In other words, if the same number of subsamples of a CRM were analysed in the same manner repeatedly, 99% of the tolerance interval (the difference between the lower and upper limits) would cover at least 95% of the total population. As tolerance is a measure of precision, it is sensitive to how close the values are to the statistical limit of detection.

Traceability: The ability to follow the trail of minerals along the shipping chain by monitoring and tracking chain of custody. (Modified from

Umpire assay: Analysis at a third, or ‘umpire’, laboratory to resolve discrepancies between primary and secondary laboratories. Commonly used to resolve disputes between the shipper and the receiver of mineral concentrates.

Z-score chart: Quality control (QC) data from different certified reference materials (CRMs) can be represented on a single chart by converting the data to Z-scores: the difference between the measured and certified values divided by the standard deviation of the CRM.
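The Z-score conversion is simple enough to express directly. A minimal sketch (with an illustrative function name), where each result carries its CRM's certified value and standard deviation:

```python
def z_scores(results):
    """Convert CRM results to Z-scores so that data from several CRMs
    can be plotted on one control chart.

    `results` is a sequence of (measured, certified, sd) tuples, with
    `certified` and `sd` taken from each CRM's certificate.
    """
    return [(measured - certified) / sd for measured, certified, sd in results]
```

Because every CRM is reduced to the same scale, the usual 2 and 3 standard deviation control limits can be drawn once for all CRMs on the chart.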

Appendix B: Field observation checklists

All of the following items can be incorporated into a database or an Excel spreadsheet. The field audit report should be included in the overall quality control (QC) report for either a data room in the case of a sale or option, or an NI 43-101 or JORC report in the case of resource estimation or financial due diligence. Information provided by Barry Smee.

Drill site

  • Is the drill site clean?

  • Is the drill shack/room/area organized and uncluttered?

  • Are the core barrels or cuttings organized so they can be handled safely?

  • Is there a core barrel or cutting layout area?

  • Is there sufficient room to recover stuck core from the core barrel without loss of core?

  • Are there pieces of drill core or spilled cuttings on the floor or around the machine?

  • Is the water return and sludge taken away from the drill and placed in a sump?

  • Are the core or cutting boxes organized?

  • Are the core or cutting boxes numbered on the outside of the boxes, and the drill hole name shown?

  • Is the ‘from–to’ length shown on the outside of the core or cutting box?

  • In the case of core, is there a drillers’ block in each box?

  • Is the direction of down-hole marked on the outside of the box?

  • Are the full boxes of core or cuttings covered with a lid that cannot come off during transport?

  • Is the site safe? For example, are hard hats and hard boots mandatory, and are safety harnesses used when required?

  • Is there a first aid kit on site?

  • What is the communication protocol from rig to office or field site in the case of an emergency?

  • In the case of core, do drillers mark a ‘driller's break’ to differentiate it from a natural break?

  • Is there a ‘quick-log’ done on site?

  • Is there any rock quality designation (RQD) done on site?

  • Are drillers wearing any jewellery which could contaminate the samples?

  • Are the drill lubricants free of Mo, Cu, Zn, etc., which could contaminate the sample? If not, is the core cleaned with a solvent?

Core yard receiving area

  • Is the yard clean and large enough to handle and lay out a full core shipment?

  • Are the core boxes placed in order?

  • Are the lids removed and the core examined for loss?

  • If more than one drill hole, is each drill hole given a specified place in the layout yard?

  • Does the core yard contain pieces of spilled or lost core?

  • Are the RQD and recovery measurements done in the core yard before logging?

  • Are the RQD and recovery data logged directly into a computer system or placed onto forms? If onto computer, are the data checked by a second person before final upload to the database?

  • Are the drillers’ blocks fixed into position with staples or screws or marks?

  • Are the core boxes identified on the outside with a permanent label, along with the box number and ‘from–to’ length based on measurement rather than drillers’ blocks?

Core logging, photography and sample selection

  • Is the logging area well-lit and can the boxes easily be placed in order on the tables?

  • Do geologists generate a ‘quick log’ to summarize the main units and mineralization before selecting samples?

  • What is the order for selecting sample intervals: after the ‘quick log’ but before the detailed logging, or only after the detailed logging?

  • What is the basis for sample selection: lithology or mineralization, or sample length?

  • Is there a maximum and minimum sample length?

  • How is the sampling interval marked: on the core or on the box, or both?

  • Is the core oriented before placing a cut line on the core?

  • Are the sample tickets placed in the core box before cutting? Are the tickets fixed to the box or floating free?

  • How many sample tickets per sample? The laboratory should receive three tickets, one for each sub-sample split, and one for the core box, with one remaining in the ticket book. This requires a sample book with four removable tickets and one for the book.

  • Are the sample numbers alphanumeric or do they have leading zeros? Neither is preferred as they create sorting problems.

  • Are the quality control (QC) positions pre-marked in the sample book to avoid mistakes?

  • Is the core logging data placed into a geological database directly, or onto written forms or spreadsheets?

  • Are data in the format of written comments, or data that can be digitally captured and used?

  • Is there a core library displaying lithologies and alteration relevant to the project to teach new geologists?

  • How is the core photographed? Is there a fixed focal length to the camera? Is the lighting natural and even? Are the boxes well marked so that the identification may be seen from the photo?

  • How are the photos stored? Are they with each drill-hole dataset?

Core cutting or splitting

  • Is the cutting room well-lit and ventilated?

  • Is there a supply of fresh water or cleaned re-circulated water for the saws?

  • Are the core boxes positioned on a table beside the core saw so that the core may be easily removed by the cutter?

  • What happens to the sample tickets during core cutting?

  • Are the sample bags pre-numbered with the sample number?

  • Is the same side of core placed into the sample bag at all times?

  • Does anyone check the sample tickets in the bag against the sample number on the bag, and also against the sample number in the core box?

  • If more than one drill hole is being sampled, does a hole remain with the same saw and cutter until completed?

  • Where is the sample ticket in the bag? If using clear plastic bags, can the laboratory easily see the ticket without opening the bag?

  • How are the sample bags sealed?

  • How often is the saw bed cleaned or washed?

  • Are there many core pieces left behind in the saw bed, or around the saw or splitter?

  • Does the wash water travel into settling tanks, and do the cutters use flocculent to settle cutting fines?

  • How are the cuttings disposed of from the settling tanks?

  • Do the sample tickets get destroyed during shipment, or do they degrade when wet?

Quality control (QC) protocol

  • What is the frequency of a field blank insertion into the sample stream?

  • With drill core, what is the field blank? Does it have a similar matrix to the actual samples?

  • Is the field blank actually blank of the elements of interest, or has a suitable background value been established before use?

  • What weight of blank is used for a sample? Is the weight of sample similar to the weight of a normal sample?

  • Is the field blank stored to avoid contamination?

  • Is the field blank sample ticket placed in the core box sequence with the other sample tickets so as not to miss a sample number in the core box?

  • What is the frequency of a core or cuttings duplicate?

  • What is a core duplicate: two half-cores (leaving a hole in the core box), two quarter-cores (a different mass from a normal sample), or some other combination (usually duplicates of differing mass)?

  • Is the core duplicate ticket placed in the core box in sequence with the other sample tickets so as not to miss a sample number in the core box?

  • Are the CRMs produced by a reputable and recognized company, and of an appropriate size to cover required repeats?

  • What is the frequency of certified standard insertion into the sample stream?

  • Are the standards of appropriate grade and matrix to the samples being tested?

  • Do the standards contain any deleterious elements which could affect the assay procedure?

  • How are the standards packaged? Are they prone to oxidation before use?

  • How are the standards stored and can they be contaminated or opened before being bagged?

  • Is the number of standards being used sufficient to cover the expected grade ranges and element mixes?

  • Are standards selected by rote, at random, or according to the expected grade of the mineralization being encountered in the sample stream?

  • Who chooses the standards, the geologist or the technician?

  • How is the quality control (QC) entered into the database and how are errors avoided?
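Pre-marking QC positions, as asked about above, can be automated so that ticket books are prepared with the QC roles already assigned. The sketch below assumes an illustrative one-in-twenty rate for each QC type and arbitrary positions within each cycle; actual rates and positions should follow the project's QC protocol:

```python
def qc_plan(start, count, cycle=20):
    """Assign a QC role to pre-marked ticket positions within each
    `cycle` of sample numbers; all other tickets are routine samples.
    The positions within the cycle are assumptions for illustration."""
    roles = {5: 'blank', 10: 'CRM', 15: 'field duplicate'}
    plan = {}
    for i in range(count):
        ticket = start + i
        plan[ticket] = roles.get(i % cycle, 'sample')
    return plan
```

Under these assumptions, `qc_plan(1000, 40)` would mark tickets 1005 and 1025 as blanks, 1010 and 1030 as CRMs, and 1015 and 1035 as field duplicates, with every other ticket a routine sample.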

Core storage area

  • Are the core boxes clearly marked on the outside with hole numbers, box numbers and ‘from–to’ lengths?

  • Are the core boxes organized by drill hole and box number?

  • Are the boxes stored on a rack under cover to protect them from the weather and animals?

  • Is there enough room between racks to remove core boxes safely?

  • Is there a core layout area available if required?

Bulk density measurements

  • Are bulk density measurements taken in the field?

  • Is the method an accepted method?

  • Are the core samples identified so they can be checked?

  • Is the balance calibrated?

  • Are standard reference materials used?

  • Are duplicate analyses undertaken?

  • Are check analyses undertaken?

  • Are the data stored so that they may be checked?

  • Are the rock units identified on the database?
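Where the water-immersion (Archimedes) method is the accepted method in use, the calculation itself is a one-liner. The sketch below assumes competent, non-porous core; porous samples are commonly wax-coated first, which requires a correction not shown here:

```python
def specific_gravity(mass_in_air, mass_in_water):
    """Specific gravity by water immersion: the ratio of the sample's
    mass in air to the mass of water it displaces (mass in air minus
    apparent mass suspended in water). Dimensionless, consistent with
    the definition of specific gravity as a ratio."""
    return mass_in_air / (mass_in_air - mass_in_water)
```

For example, a core billet weighing 300 g in air and 180 g suspended in water has a specific gravity of 2.5.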

This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 License.