In the present study we investigated whether Na+ and K+ levels measured using different methods and equipment, namely an ABG and an AA, were equivalent. If so, the data could be employed interchangeably in routine practice.
To ensure the accuracy of test results, our central laboratory (employing an AA) participates in an external quality assessment (EQA) program; both electrolytes were assayed with reasonable accuracy during the study period. However, the accuracy of ABG data was not evaluated via any EQA program; this is an important limitation of the present study.
The between-day imprecision of both instruments (AA and ABG) was small and clinically insignificant when compared with analytical performance indicators based on biological variation or with the United States Clinical Laboratory Improvement Amendments of 1988 (US CLIA 1988) performance rules.
ABG data were correlated with AA results (r² = 0.88 for K+ and 0.90 for Na+); the strength of the relationship between the two methods was acceptable.
However, biological variation in electrolyte levels is so small that even a slight analytical error may cause patients to be misdiagnosed. The US CLIA 1988 rules accept a difference of 0.5 mmol/L in potassium level and 4 mmol/L in sodium level compared with target values. In our study, the mean difference between the two Na+ assays was 4.9 mmol/L, which exceeded the acceptable value of 4 mmol/L; the 95% limits of agreement of the difference were −0.97 and 10.05 mmol/L.
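The limits of agreement quoted above follow the standard Bland-Altman calculation: the mean between-method difference (bias) ± 1.96 times the standard deviation of the paired differences. A minimal sketch, using illustrative paired values rather than our study data:

```python
import numpy as np

# Illustrative paired Na+ measurements (mmol/L); not the study data
aa  = np.array([138.0, 142.0, 136.0, 140.0])  # central laboratory autoanalyzer
abg = np.array([143.0, 146.0, 142.0, 144.0])  # blood gas analyzer

diffs = abg - aa
bias = diffs.mean()                                 # mean between-method difference
sd = diffs.std(ddof=1)                              # sample SD of the differences
lower, upper = bias - 1.96 * sd, bias + 1.96 * sd   # 95% limits of agreement

print(f"bias = {bias:.2f} mmol/L, 95% LoA = ({lower:.2f}, {upper:.2f})")
```

With real data, roughly 95% of paired differences are expected to fall between the two limits; the clinical question is whether limits of that width are tolerable.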
Our data are in line with those of previous studies [11–14] showing that Na+ values obtained using two different types of measurement differ significantly, and to an extent that may affect therapeutic choices. Our patients were critically ill in the intensive care unit (ICU). Chow et al. reported that sodium and potassium values obtained using direct ISE were lower than those obtained using indirect ISE; this difference is associated with the low blood protein levels characteristic of critically ill patients. In such patients, direct ISE offers more accurate and consistent electrolyte results than does indirect ISE.
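The protein dependence of the direct/indirect discrepancy can be sketched via the electrolyte exclusion effect: indirect ISE effectively reports Na+ per litre of whole plasma, whereas direct ISE readings are tied to the conventionally assumed normal plasma water fraction of about 93%. The model below is a deliberately simplified illustration; the 0.93 fraction is the usual convention, but all concentrations and the 0.96 water fraction are assumed values, not study data:

```python
NORMAL_WATER_FRACTION = 0.93  # conventional plasma water fraction assumed in calibration

def direct_ise(na_per_l_water: float) -> float:
    # Direct ISE senses Na+ activity in plasma water; by convention readings are
    # scaled so that a sample with normal water content matches legacy methods.
    return na_per_l_water * NORMAL_WATER_FRACTION

def indirect_ise(na_per_l_water: float, water_fraction: float) -> float:
    # Indirect ISE dilutes the sample and effectively reports total Na+ per litre
    # of whole plasma, which scales with the sample's actual water fraction.
    return na_per_l_water * water_fraction

na_water = 150.0  # mmol per litre of plasma water (illustrative)
print(direct_ise(na_water))          # 139.5: unaffected by protein level
print(indirect_ise(na_water, 0.93))  # 139.5: normal protein, agrees with direct ISE
print(indirect_ise(na_water, 0.96))  # 144.0: hypoproteinaemia, reads above direct ISE
```

Under this model the indirect reading exceeds the direct one by roughly (actual water fraction − 0.93) × Na+ per litre of water, i.e., a few mmol/L in hypoproteinaemic patients, consistent in direction with the report of Chow et al.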
The mean between-assay difference in K+ levels was 0.25 mmol/L. Although this was within the range given by the US CLIA 1988 guidelines, a difference of 0.25 mmol/L may be clinically relevant when intra-individual variation is considered. Given that the intra-individual biological variation in K+ level has been reported to be 4.8%, the bias exhibited by either method did not exceed the acceptable level of inaccuracy; it is important to emphasize, however, that criteria derived from biological variation are very strict, the acceptable inaccuracy for potassium measurement being only 1.8%. The observed variation in the K+ values of paired samples is likely attributable to the difference in sample type (serum vs. whole blood). Potassium is released from platelets during clotting, so it is not surprising that serum potassium values are higher than whole-blood levels. The magnitude of the difference we observed was similar to that reported earlier (0.1–0.7 mmol/L). After obtaining analytical results similar to ours, Jain et al. suggested that it was safe to make clinical decisions based on serum K+ levels yielded by an ABG instrument. However, in 15% of our patients the difference exceeded 0.5 mmol/L; this may have implications in clinical practice.
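The strictness of the biological-variation criterion can be made concrete with simple arithmetic. At an illustrative mid-normal K+ concentration of 4.0 mmol/L (an assumed value, not a study result), a 0.25 mmol/L offset corresponds to a relative bias well above the 1.8% allowable inaccuracy:

```python
mean_difference = 0.25      # mmol/L, observed between-assay mean difference for K+
reference_k = 4.0           # mmol/L, illustrative mid-normal K+ level (assumption)
allowable_inaccuracy = 1.8  # %, criterion derived from biological variation

relative_bias = 100.0 * mean_difference / reference_k
print(f"relative bias = {relative_bias:.2f}%")  # 6.25%, above the 1.8% criterion
print(relative_bias > allowable_inaccuracy)     # True
```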
Although the differences in electrolyte levels obtained using the two methods are sufficiently small to not raise a risk of inappropriate therapy in most instances, Morimatsu et al.  calculated the anion gap and the strong ion difference in critically ill patients using results obtained from a central laboratory analyzer and a POCT device; the Stewart-Figge formula was employed. The cited authors showed that the values calculated using data obtained by different methods differed significantly; clinical interpretation and consequent therapeutic decision-making could be adversely affected.
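Derived quantities of this kind amplify individual measurement errors because several measured ions enter each calculation, and an offset in any one of them propagates directly into the result. A hedged sketch using textbook formulas (the apparent strong ion difference shown is one common form used in the Stewart-Figge approach; all input values are illustrative and given in mEq/L):

```python
def anion_gap(na, k, cl, hco3):
    # Conventional anion gap including K+ (all values in mEq/L)
    return (na + k) - (cl + hco3)

def apparent_sid(na, k, ca, mg, cl, lactate):
    # Apparent strong ion difference (SIDa), one common Stewart-Figge form
    return (na + k + ca + mg) - (cl + lactate)

# Illustrative values; a Na+ offset like our observed 4.9 mmol/L propagates
# one-for-one into both derived quantities, since Na+ enters with coefficient 1.
print(anion_gap(140.0, 4.0, 104.0, 24.0))            # 16.0
print(anion_gap(140.0 + 4.9, 4.0, 104.0, 24.0))      # ~20.9
print(apparent_sid(140.0, 4.0, 2.5, 1.5, 104.0, 1.0))  # 43.0
```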
The observed differences between electrolyte levels measured using an ABG and an AA may be explained by a combination of factors, including sample transport, dilution of serum samples prior to testing (i.e., the use of indirect vs. direct electrodes), and variations in instrument calibration [16, 17]. It is known that ISE-based instruments from different manufacturers yield Na+/K+ values that differ by 2–5%; calibration of an AA using a NIST standard lowers these figures. It has also recently been reported that the type of heparin used in blood gas syringes can introduce a pre-analytical bias in electrolyte concentrations: such syringes impart negative biases of differing magnitude when the levels of positively charged ions are measured, and the extent of the bias differs among syringe types [19, 20].
The wide intratest variability shown in the Bland-Altman plots, together with the statistically significant mean differences in measured ion levels between the two methods, suggests that the tests do not yield equivalent data. It is possible to compensate for variation caused by known factors using a correction factor, rendering data from different instruments comparable; the question is whether such compensation is appropriate. Although a correction factor based on differences in average values can minimize discrepancies between the data from two analyzers in some instances, we cannot recommend this approach for comparison of Na+ and K+ test results.
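The reason a simple mean-based correction is unsatisfactory follows directly from the Bland-Altman framework: subtracting the mean difference removes the bias but leaves the scatter of the paired differences, and hence the width of the limits of agreement, unchanged. A minimal sketch with illustrative values (not study data):

```python
import numpy as np

aa  = np.array([138.0, 142.0, 136.0, 140.0])  # illustrative central-laboratory values
abg = np.array([143.0, 146.0, 142.0, 144.0])  # illustrative blood gas analyzer values

bias = (abg - aa).mean()
abg_corrected = abg - bias                     # mean-offset "correction factor"

print((abg_corrected - aa).mean())             # 0.0: bias removed
print((abg - aa).std(ddof=1))                  # scatter before correction
print((abg_corrected - aa).std(ddof=1))        # identical scatter after correction
```

Because the residual scatter is untouched, any individual result can still differ from the comparison method by the full width of the limits of agreement after correction.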
Our present study illustrates the importance of determining, for each individual hospital, the concordance between electrolyte values obtained by ABG and those obtained in the central laboratory. As instrument types and calibration methods may differ among hospitals, it is important that each center conducts an in-house study. Ideally, the clinical significance of any difference between data yielded by central laboratory devices and POCT instruments would be carefully evaluated before an ABG is installed; this was unfortunately not the case in our hospital. Individual laboratories should use the external NIST standard SRM 956 to verify the calibrations conducted by manufacturers and to ensure that the results afforded by direct and indirect ISEs [18] do not differ to a clinically relevant extent.
A limitation of our work is that, in the absence of clinical review, we were unable to identify any dataset as containing erroneous values. It was not possible to establish whether the central laboratory or ABG values were closer to the true values for either analyte.