In this study we assessed the reporting quality of 23 RCTs used within an FNB meta-analysis. Most of these articles came from anesthesia specialty journals, with the exception of 3 RCTs. All RCTs were related to post-operative analgesia management after total knee arthroplasty. The overall quality of reporting of the RCTs assessed was poor to moderate, with only 4 of the 15 items reported in over 90% of the articles. We identified several areas where reporting of a particular item was insufficient in most of the studies assessed: Item 1) Title and Abstract, Item 4) Participants, Item 6) Outcomes, Item 8) Randomization: Sequence Generation, Item 9) Randomization: Implementation, Item 11) Participant Flow, Item 12) Recruitment and Item 14) Outcomes and Estimation. Areas where items were reported moderately well were: Item 7) Sample Size, Item 10) Statistical Methods and Item 15) Harms. Also noteworthy was that reporting of the key methodological items was poor, with fewer than 50% of the articles reporting any of the 3 key methodological items (Table 3). Our results agree with many studies that have assessed the quality of reporting of RCTs published in medical and subspecialty journals, including studies focused on journals specializing in anesthesia [3, 17, 21, 23–29, 44–46]. All of these studies showed poor quality of reporting, with the 3 key methodological items assessed in this study (appropriate allocation concealment mechanism, blinding and numbers analysed by ITT) highlighted as poorly reported [17, 21, 23–27, 35, 44].
Even though there is evidence to suggest that the quality of reporting has significantly improved over time, especially since the publication and adoption of the CONSORT statement, the overall poor quality of reporting suggests that these statistically significant improvements are not enough to be clinically important; higher-quality reporting is needed to reduce bias and properly support clinicians' decisions about treatment management [13, 20, 22, 35]. Moreover, the overall quality of reporting does not necessarily correspond with proper reporting of the 3 key methodological items assessed in this study, items deemed important in preventing bias [9, 35]. Allocation concealment is important in avoiding selection bias, proper blinding in avoiding performance and detection bias, and numbers analysed (the ITT principle) in avoiding attrition bias. Any method of randomization, allocation concealment or blinding that allows the investigator to control the treatment group to which a study participant is assigned should be avoided. Recent studies suggest that deficient and poor reporting of these 3 key methodological items is often associated with bias in the estimate of treatment effects, though the extent of that bias can be unpredictable [27, 31, 32]. The overall poor quality of reporting in the anesthesia literature is important to highlight, as it suggests a need to standardize the reporting of RCTs in anesthesia publications, where information is retrieved, disseminated and used for clinical practice. Although the CONSORT group was created to help improve reporting, there remains a need to underscore the urgency for completeness, clarity and transparency of reporting in order to make publications more accessible and easily comparable with one another.
This is apparent with the 23 RCTs assessed for this study as well, since all of these articles were published after the CONSORT statement was established in 1996. Introducing CONSORT among journals, and enforcing its use by requiring authors to submit a CONSORT checklist, has the potential to greatly improve the quality of reporting for readers and for researchers who need the important information that RCTs provide for future application and meta-analysis. This notion is supported by evidence from 3 recent observational studies demonstrating that the introduction of the CONSORT statement is associated with an improvement in the quality of reporting [20, 22, 51].
There are several limitations to this study. One key limitation is the small sample size of RCTs used to evaluate the quality of reporting and the key methodological items. The reason for this is that our intention was to evaluate the reporting quality of RCTs used in an actual meta-analysis. In hindsight, our study may not have had enough power to detect a significant association with our four predictor variables. To compensate, we looked at trends between our findings and those of two other studies [3, 25] that examined analogous predictor variables, using comparable methods of statistical analysis, and their association with the overall quality of reporting or the reporting of key methodological items in RCTs. We found that two of our predictor variables (sample size and impact factor) showed a significant association with reporting quality in these two studies. We were not able to compare our other two predictor variables (journal adoption of the CONSORT statement at the time of data collection, and funding reported) with these two comparator studies or with other studies reviewed, for one of three reasons: 1) the variable was not used as a predictor variable, 2) the findings were not reported in a numerical and statistical manner that could be used for trend comparison, or 3) the variable was not reported at all in the publication. For the OQRS there were a few noted trends. Although the direction and magnitude of the effect of sample size did not match those of the two comparator studies, our 95% CI for sample size encompassed the IRRs of both comparator studies, suggesting that our results might have shown an association had the sample size been larger. However, for the predictor variable of impact factor, which was measured in our study on a continuous scale, no trends were observed in the one study we could use as a comparison.
It should be noted that the comparator paper we could use for impact factor measured the impact factor association in a categorical manner. Most of the impact factors of the RCT journals in our study fell into only one of those categories, suggesting that our CI range might have encompassed their estimate had we been able to compare on a continuous scale. We also had 4 RCTs from journals with no impact factor evaluation for 2010. This decreased our RCT sample size from 23 to 19, further reducing our power to detect a difference.
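The trend comparison described above can be illustrated with a small sketch. All numbers here are hypothetical (not taken from our data or from the comparator studies): we compute a 95% Wald confidence interval for an IRR on the log scale, the standard approach for ratio measures, and check whether a comparator study's point estimate falls within it.

```python
import math

def irr_wald_ci(irr, se_log, z=1.96):
    """95% Wald CI for an incidence rate ratio, computed on the
    log scale and back-transformed to the ratio scale."""
    return (math.exp(math.log(irr) - z * se_log),
            math.exp(math.log(irr) + z * se_log))

def encompasses(ci, comparator_irr):
    """True if a comparator study's point estimate lies inside our CI."""
    lo, hi = ci
    return lo <= comparator_irr <= hi

# Hypothetical values: our IRR 1.20 with log-scale SE 0.40 (a wide CI,
# reflecting a small sample), and comparator estimates 1.50 and 0.85.
ci = irr_wald_ci(1.20, 0.40)
print(ci)                      # a wide interval straddling 1
print(encompasses(ci, 1.50))
print(encompasses(ci, 0.85))
```

A wide interval that encompasses both comparator estimates is consistent with the interpretation in the text: the data do not contradict the comparator studies' effects, but the sample is too small to demonstrate an association.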
For the KMIS there were also a few noted trends with one of the comparator studies. Both the sample size and impact factor results matched the direction and range found in the study by Lai et al., suggesting that our results might have reached significance had the sample size been larger. This trend is also more likely to be apparent with the KMIS, even at our sample size, given studies suggesting that key methodological items, such as the three used in this study, are associated with improved quality of reporting [3, 7, 25, 44–46], and given the importance of the 3 key methodological items assessed (blinding, allocation concealment and numbers analysed) in reducing bias [6, 31, 32, 36, 52]. In the future it might be of interest to assess the quality of RCTs from more than one meta-analysis on a related topic, or even a random sample of recent RCTs in primary journals. Another limitation of our study is that we cannot verify the degree to which the quality of reporting reflects the true methodological quality of the RCTs we assessed. The lack of reporting of a particular item in an RCT does not mean that the study in truth had poor methodology. It is possible that the protocols of many RCTs include important aspects of methodology, but that important methodological details were left out of the published report [53, 54]. One study reported that the proportion of double-blinded trials with a clear description of the blinding of participants increased from 19%, with reliance on the publication alone, to 67% when the RCT protocol was reviewed alongside the publication. Although reporting on the key methodological item of blinding was found to be inadequate in both trial protocols and publications, these results show that the methodological and reporting quality of an RCT may be greater than what can be assessed from the published report [55, 56]. Soares et al.
also showed that adequate concealment of allocation was achieved in all protocols but was reported in only 42% of the published papers. Reviewing protocols may improve the assessment of reporting quality; however, the published report is an important proxy in determining the validity and the clinically relevant effect estimate, as it is the source most accessible to clinicians and researchers. The quality of the report itself is important for this purpose. An additional limitation is that there is no standard instrument for evaluating the quality of reporting of RCTs. The majority of scales have not been thoroughly developed or tested for validity and reliability, as a gold standard (criterion validity) is needed to compare them against. No such gold standard exists for assessing the quality of RCTs, so checklists demarcating important features of quality, and scales based on a quality-of-reporting checklist, are assessed on face validity and content validity from a theoretical model founded on accepted criteria [57, 58]. Quality scores based on checklists are also unreliable and may introduce bias, as scores have been found to differ depending on the scoring system used [59, 60]. Our results may also not entirely represent the quality of reporting of RCTs used in meta-analyses related to the intervention of the meta-analysis we used. Therefore, assessing the quality of reporting of RCTs used in other anesthesia-related meta-analyses may help in compiling a pool of studies large enough to provide the power to detect a significant difference. Even with these limitations, our results have good internal validity, as we had two qualified reviewers and good inter-rater correlation despite a lack of clarity in the reporting of the evaluated items. In the future, one important item to add to the OQRS is trial registration.
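Inter-rater agreement of the kind mentioned above is commonly quantified with a chance-corrected statistic such as Cohen's kappa. A minimal sketch for two reviewers scoring each checklist item as reported (1) or not reported (0); the ratings below are hypothetical, not our actual data:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters with binary (0/1) item scores:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
    pa = sum(rater_a) / n   # rater A's rate of scoring "reported"
    pb = sum(rater_b) / n   # rater B's rate of scoring "reported"
    pe = pa * pb + (1 - pa) * (1 - pb)                       # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical scores for 10 checklist items from two reviewers.
a = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]
b = [1, 1, 0, 0, 1, 0, 1, 1, 1, 1]
print(round(cohens_kappa(a, b), 2))
```

Because kappa corrects for agreement expected by chance, it is a more defensible summary of reviewer concordance than raw percent agreement, particularly when most items are scored the same way.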
Trial registration is another recently enforced requirement for any published RCT article, intended to ensure the accessibility of all research. Studies have shown that registration with a trial registry is suboptimal [61, 62] and that trial registration is associated with improved reporting in RCTs within the highest-ranked journals. It is also important to note that the year of publication plays a role in the reporting quality of the RCTs chosen for our study. The CONSORT statement was established in 1996, and the speed of implementation of the CONSORT group's guideline has been substandard, as observational studies continue to conclude that the quality of reporting of many RCTs dated from 1996 onward is inadequate [1, 17, 21, 23–25]. Indeed, it has been shown that the total quality of reporting score is significantly associated with factors such as trial size, publication year, and the impact factor of the journal in which the RCT is published [44, 47, 48]. With that said, the medical research community should also take notice that impact factor has been shown to be a statistically significant predictor of better reporting quality on several occasions [25, 47, 48] and may be a feature to look for when seeking RCT publications of high reporting quality. It is possible that a large proportion of higher-impact-factor journals have made initiatives requiring authors who submit RCT manuscripts to adhere to the CONSORT statement. This may be worth investigating in the future to see whether such a relationship exists.