
A look at the potential association between PICOT framing of a research question and the quality of reporting of analgesia RCTs

Abstract

Background

Methodologists have proposed that forming a good research question should initiate the process of developing a research protocol, which in turn guides the design, conduct and analysis of randomized controlled trials (RCTs) and helps improve the quality of reporting of such studies. PICOT framing specifies five constituents of a good research question: Population, Intervention, Comparator, Outcome, and Time-frame of outcome assessment. The aim of this study was to assess whether the presence of a structured research question, in PICOT format, in the RCTs used within a 2010 meta-analysis investigating the effectiveness of femoral nerve blocks after total knee arthroplasty is independently associated with improved quality of reporting.

Methods

Twenty-three RCT reports were assessed for quality of reporting and then examined for the presence of the five constituents of a structured research question based on PICOT framing. We created a PICOT score (the predictor variable) with a possible range of 0 to 5, one point for each constituent included. Our outcome variables were a 14-point overall quality of reporting score (OQRS) and a 3-point key methodological items score (KMIS) based on the proper reporting of allocation concealment, blinding and numbers analysed according to the intention-to-treat principle. Both scores, OQRS and KMIS, are based on the Consolidated Standards of Reporting Trials (CONSORT) statement. A multivariable regression analysis was conducted to determine whether the PICOT score was independently associated with OQRS and KMIS.

Results

A completely structured PICOT question was found in 2 of the 23 RCTs evaluated. Although not statistically significant, a higher PICOT score was associated with a higher OQRS [IRR: 1.267; 95% confidence interval (CI): 0.984, 1.630; p = 0.066] but not with a higher KMIS [IRR: 1.061; 95% CI: 0.515, 2.188; p = 0.872]. These results are comparable to those from a similar study in terms of the direction and range of the IRR estimates. The results need to be interpreted cautiously because of the small sample size.

Conclusions

This study showed that PICOT framing of a research question in anesthesia-related RCTs is not often followed. Even though a statistically significant association with higher OQRS was not found, PICOT framing of a research question remains an important attribute of any RCT.


Background

In a previous study we recently completed, we found poor overall quality of reporting [1] among the randomized controlled trials (RCTs) used in a femoral nerve block meta-analysis [2]. We also identified considerable shortcomings in the reporting of three key methodological items in these RCTs. Most of the articles used in the meta-analysis came from journals specializing in anesthesia. Poor quality of reporting of RCTs is not exclusive to journals specializing in anesthesiology [3]; similar findings have been reported in major general medical journals and subspecialty journals [4–15].

The lack of transparency in RCT reporting greatly reduces readers’ ability to judge the quality, validity and reliability of the findings. It is difficult for the reader to find information in a study when items, especially those specified by the CONSORT 2010 statement [16], are reported vaguely. Inadequate reporting and design of RCTs are associated with bias, especially exaggerated intervention or treatment effects, which distorts the interpretation of RCT findings used to develop clinical guidelines and to conduct meta-analyses [16–18]. Formal critical appraisal of trials is much more feasible when the quality of reporting is high, because the published paper is typically the primary gateway through which most readers review RCTs. Although it is acceptable practice to contact trial authors to obtain data and other study details, the published article is the most accessible dissemination of the research that can be evaluated. The Consolidated Standards of Reporting Trials (CONSORT) group has published updated guidelines in the CONSORT 2010 statement that provide guidance on the reporting of RCTs for authors and for the medical publishing community at large [16–18]. This information is available at http://www.consort-statement.org [18]. Although there have been some significant improvements in the quality of reporting since the CONSORT statement publication in 2001, the quality of reporting remains well below an acceptable level [10].

It is possible that certain predictor variables are associated with better quality of reporting. Journal impact factor, sample size and declared funding have been associated with better quality of RCT reporting [3, 6, 11, 15, 19, 20]. Additionally, journal adoption of the CONSORT statement has been associated with improved reporting of RCTs [6, 16, 21–23]. Identifying variables that can act as predictors of RCT reporting quality could help medical practitioners review the current literature, including the literature on anesthesia and pain management. One observational study found that journal of publication, type of funding (particularly complete industry funding) and larger sample size were significantly associated with a slightly better quality of reporting score [3]. It is also possible that better framing of a research question within an RCT is independently associated with better overall quality of reporting and better reporting of key methodological items, as has been reported [24].

Methodologists have proposed that forming a good research question facilitates the process of developing a research protocol, the study design and the analytical assessments needed for the study being undertaken [25, 26]. A good question can aid an investigator tremendously in choosing the correct methodology and analysis strategy, which in turn helps to appropriately answer the primary research question [25–27]. A good research question is structured and requires the specification of five constituents according to the PICOT approach [28]. These five constituents are: Population, Intervention, Comparator, Outcome, and Time-frame of outcome assessment (hence the acronym PICOT) [25, 26, 28, 29]. The PICOT approach is clear, concise and easy to use for framing all the components of a research question [25]. The use of a properly structured research question has been proposed to guide the process of developing the research design and protocol [25–27]. Our hypothesis is that a clear and complete research question containing all the PICOT elements would spur better quality of reporting. The rationale behind this hypothesis is that a more complete PICOT question would drive the formation of a thorough research methodology to fully address the specifics of the research question and hence motivate detailed reporting of what was done in the study. More evidence is needed to help the medical research community understand whether adherence to the PICOT approach to developing a research question is associated with better quality of reporting in research publications. The aim of this observational study was to determine whether the presence of a structured research question in PICOT format is independently associated with improved quality of reporting of RCTs. We quantified quality of reporting using a scoring system described in the Methods.

Methods

Study design

This is an observational study based on the 23 RCTs used in a meta-analysis comparing the analgesia outcomes of femoral nerve block (FNB), with or without sciatic nerve block, against epidural or patient controlled analgesia (PCA) after total knee arthroplasty [2]. The 23 RCTs used in that meta-analysis were assessed in a previous study we conducted [1], which assessed two things: 1) the quality of reporting, using a 15-point overall quality of reporting score (OQRS) [1]; and 2) the reporting of key methodological items, using a 3-point key methodological item score (KMIS) [1]. In the present study, we went on to assess these same 23 RCTs from the FNB meta-analysis [2] for a structured research question using the PICOT format, with a 5-point score. The first two quality assessments, OQRS and KMIS, were, as mentioned above, done in the prior study, which evaluated the quality of reporting and the reporting of methodological items as the outcome variables, together with four predictor variables hypothesized to correlate with better outcomes (OQRS and KMIS) [1]. The data collected in that previous study were used to examine whether a structured research question in PICOT format is associated with better OQRS and KMIS in the same 23 RCTs [1]. The new data collected for this study concerned the variables needed to assess the presence of a structured research question within the 23 RCTs, using the PICOT format (Population, Intervention, Comparator, Outcome, Time-frame) with a 5-point score (possible range 0 to 5, inclusive). For the purpose of this study, our original 15-point OQRS was reduced to a 14-point score because one of the 15 items we initially assessed in our previous study [1] was “objectives”, which was in part assessed as part of our PICOT score (item 6 in Table 1). Much of our study’s method and protocol design is modeled on another study that also looked, separately, at the association between a PICOT-structured research question and the quality of reporting and the reporting of key methodological items in RCTs [24]. We found this study by Rios et al. [24] to have a sound methodology for evaluating the variables needed to assess the quality of reporting, key methodological items in RCTs and a structured research question using the PICOT format.

Table 1 Description of overall quality of reporting items

Rating of overall reporting quality and key methodological items

In our previous study [1] we defined the quality of reporting as the extent to which the rationale, design, conduct and analysis of each evaluated RCT were reported. We did this by using 15 items from the 2010 CONSORT statement, listed in Table 1. These 15 items were selected in our previous study [1] because the literature indicates that failure to report them is associated with a higher level of bias [16]. CONSORT items under “discussion” were excluded, as we deemed them too subjective to assess. An OQRS with 15 items in total (possible score from 0 to 15) was established. For scoring purposes, each of the 15 items was scored 1 if it was reported appropriately and 0 if it was vaguely stated or not stated at all. This score was reduced to 14 for the present study for the reason mentioned above.

Three methodological items, which are part of the CONSORT statement, were also excluded from our 14-point OQRS and assessed separately to create the KMIS. These three methodological items are: 1) appropriate concealment of allocation, 2) blinding, and 3) numbers analysed (according to the intention-to-treat principle). They were assessed separately because they are highly important in avoiding bias and misrepresentation of the treatment effect estimates [30–35]. Each of the three methodological items was given a score of 1 if the method was appropriate and 0 if it was inappropriate or vaguely reported. A combined KMIS was calculated for each RCT by adding the scores of the three items, giving a possible KMIS from 0 to 3 [1].
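To make the scoring scheme concrete, here is a minimal sketch in Python (the function and variable names are hypothetical; the actual ratings were recorded on a data abstraction form, not computed in code):

```python
# Sketch of the binary scoring described above: each item is rated 1 if reported
# appropriately and 0 if vague or absent; OQRS and KMIS are sums of those ratings.

KMIS_ITEMS = ("allocation_concealment", "blinding", "numbers_analysed_itt")

def oqrs(item_ratings):
    """14-point overall quality of reporting score: sum of 14 binary CONSORT items."""
    assert len(item_ratings) == 14 and all(r in (0, 1) for r in item_ratings)
    return sum(item_ratings)

def kmis(ratings):
    """3-point key methodological item score (possible range 0-3)."""
    return sum(ratings[item] for item in KMIS_ITEMS)

# Example: adequate concealment and blinding, but no intention-to-treat analysis.
print(kmis({"allocation_concealment": 1, "blinding": 1, "numbers_analysed_itt": 0}))  # 2
```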

We considered allocation concealment to have been done appropriately if centralized randomization, coded numbered vehicles, or sealed, opaque and sequentially numbered envelopes were reported [36]. For blinding, in trials where there were no barriers to blinding groups, at least two groups had to be explicitly reported as blinded to fulfil this item’s criteria; where blinding was not feasible, we considered blinding appropriate if at least one group was unambiguously reported as blinded [3, 37, 38]. In our previous study [1], numbers analysed based on intention-to-treat (ITT) analysis was defined as including all randomized participants in the trial’s statistical analysis regardless of whether: 1) the intervention was administered, 2) patients fulfilled the study entry criteria, or 3) patients withdrew from the trial or deviated from the protocol [37, 39–41]. The meaning of ITT has been interpreted differently across RCTs; however, the Cochrane Handbook has compiled evidence that the definition we used averts a biased treatment effect and is a practical way to understand the effects of a clinical intervention [41, 42].

As described previously, a number of predictor variables have been shown to have a positive association with the reporting quality of RCTs [3, 6, 11, 15, 16, 19–23]. The four predictor variables we used were sample size, impact factor, funding reported and journal adoption of the CONSORT statement at the time of our data abstraction. This step, as mentioned above, was done in our previous study of the same 23 RCTs [1]. Sample size was defined as the number of patients randomized in each trial. The impact factor of a journal is an index number assigned to journals catalogued in Thomson Reuters’ Journal Citation Reports; it is calculated by Thomson Reuters [43] as a ratio describing the frequency with which an “average article” in a particular journal has been cited in a specific year or period [43]. Funding reported was defined as funding that was mentioned and provided for the undertaking of the study. Journal adoption of the CONSORT statement at the time of our data abstraction was defined as the journal in which the RCT was published endorsing, enforcing or encouraging the use of the CONSORT statement for manuscript submissions.

The rating of the research question in PICOT format

We identified the one paragraph from the introduction or methods section that we found best stated the primary research question, hypothesis or objective. To accomplish this, the reviewers read both the introduction and methods sections carefully to identify the paragraph that best described the aims of the study; that paragraph was then reread to see which PICOT components were mentioned. We chose one paragraph because, according to the PICOT framework, a research question should be cohesive and expressed in a single statement [25]. One paragraph should be sufficient to include all the components of a PICOT question; this enables the reader to clearly understand the intention behind the study without having to sift repeatedly through different sections of the paper to piece together the entire research question. In those paragraphs, we evaluated the framing of the research question regardless of whether it took the form of a question, objective or hypothesis. We then evaluated whether the 5 elements of a research question were present in that paragraph. The 5 elements were: the type of population or patient pertinent to the question, the intervention, the comparative intervention, the outcome of interest, and the time allocated for measuring the outcome or outcomes of interest (P, I, C, O and T, respectively). We scored each PICOT element 1 if it was explicitly stated and 0 if it was absent, yielding a PICOT score with a potential range of 0 to 5 inclusive. This scoring system was used in the study by Rios et al. [24]. The score represents how complete the primary research question of the evaluated RCT is. We qualified a study as providing a structured research question only if all five elements (Complete PICOT) were present in the description of the primary research question, the study objectives or a declared research hypothesis. RCTs that did not describe all 5 elements (Incomplete PICOT) were deemed not to have included a structured research question.
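The PICOT rating rule can be summarized in a few lines; this sketch (again with hypothetical names) mirrors the binary scheme used for the OQRS and KMIS:

```python
# PICOT rating rule: one binary indicator per element, summed to a 0-5 score;
# a question counts as "Complete PICOT" only when all five elements are present.

ELEMENTS = ("P", "I", "C", "O", "T")

def picot_score(present):
    return sum(int(present[e]) for e in ELEMENTS)

def is_complete_picot(present):
    return picot_score(present) == 5

# Example: population, intervention, comparator and outcome stated, but no
# time-frame -> score 4, classified as Incomplete PICOT.
question = {"P": 1, "I": 1, "C": 1, "O": 1, "T": 0}
print(picot_score(question), is_complete_picot(question))  # 4 False
```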

Hypotheses

We hypothesized that higher PICOT scores would be associated with better quality of reporting, as judged by CONSORT-based completeness of reporting using the OQRS and KMIS scoring systems of this study.

Data abstraction

We used a standardized Excel data abstraction form to extract data from each article. Two reviewers (VJBD and SZ) were blinded to each other’s ratings and extracted data independently. When the research question was rated, the reviewers were blinded to the OQRS and KMIS assigned to each article in the previous study [1]. Any disagreements were resolved through consensus. Kappa statistics were used to measure the inter-rater agreement for each of the 5 PICOT elements of the research question; the Kappa statistics for the OQRS and the KMIS were reported in our previous study [1].

Statistical analysis

Categorical data were reported as counts. Each PICOT element was coded as a binary variable (1 = the element was clearly addressed in the research question, 0 = it was not). The number of RCTs that explicitly stated each PICOT element and the associated 95% confidence interval (CI) were calculated. The PICOT score was computed as the sum of the 5 individual elements and ranged from 0 to 5 inclusive. For elements with a “zero” or “full” count, that is, when none or all of the included trials reported that PICOT element, the 95% CI was calculated by adopting the rule of three [44]: if none of n trials showed the event of interest for a PICOT element (P, I, C, O or T), we could be 95% confident that the chance of this event occurring is at most 3 in n [45]. For the other PICOT elements, the 95% CI of the count was calculated by assuming that the number of RCTs clearly stating the element followed a binomial distribution, with the probability that an RCT clearly stated the element set to the observed probability in the sample. Cohen’s Kappa (κ) statistics were used to calculate the chance-adjusted agreement between the 2 raters for every PICOT element. Agreement was interpreted as poor if κ ≤ 0.2, fair if 0.21 ≤ κ ≤ 0.4, moderate if 0.41 ≤ κ ≤ 0.6, substantial if 0.61 ≤ κ ≤ 0.8 and good if κ > 0.8 [45].
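For illustration, these interval estimates can be reproduced as follows; this is a sketch using Python’s statsmodels and scikit-learn rather than the SPSS used in the study, and the ratings are invented for illustration:

```python
from statsmodels.stats.proportion import proportion_confint
from sklearn.metrics import cohen_kappa_score

N_TRIALS = 23

def element_ci(count, n=N_TRIALS):
    """95% CI (on the proportion scale) for the count of trials stating an element."""
    if count == 0:            # rule of three: zero events in n trials means the
        return (0.0, 3 / n)   # event rate is at most ~3/n with 95% confidence
    if count == n:            # mirror image for a "full" count
        return (1 - 3 / n, 1.0)
    return proportion_confint(count, n, alpha=0.05, method="normal")  # binomial CI

print(element_ci(0))    # no trial stated the element
print(element_ci(12))   # 12 of 23 trials stated the element

# Chance-adjusted agreement between the two raters for one PICOT element:
rater1 = [1, 0, 1, 1, 0] * 4 + [1, 0, 1]   # hypothetical ratings, 23 trials each
rater2 = [1, 0, 1, 0, 0] * 4 + [1, 0, 1]
print(cohen_kappa_score(rater1, rater2))
```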

A multivariable regression analysis, with the OQRS as the outcome variable, was done to determine whether a higher PICOT score was independently associated with a better OQRS. Funding reported, journal adoption of the CONSORT statement at the time of data abstraction, sample size and the impact factor of the journal in which each RCT appeared were adjusted for as covariates in the model. Sample size was transformed by the logarithm with base 10 to improve interpretability. The OQRS (discrete, ranging from 0 to 14) was assumed to follow a Poisson distribution, and the incidence rate ratio (IRR) was used to express the results of the analysis. The same method was used for the regression analysis with the KMIS as the outcome variable. Variables were considered statistically significant at alpha = 0.05. Regression analyses were also conducted to explore which individual PICOT elements were most strongly associated with a better OQRS, with adjustment for confounding variables. We did not adjust the overall level of significance for multiple testing because these analyses were primarily exploratory [46]. SPSS© 19.0.0.0 (IBM Corporation, 2010) was used to perform the statistical analyses.
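A minimal sketch of this Poisson model in Python with statsmodels follows, using synthetic data in place of the study dataset (the analysis itself was run in SPSS; all variable names here are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 23  # number of RCTs
df = pd.DataFrame({
    "oqrs":    rng.integers(4, 15, n),              # outcome: 0-14 count of items
    "picot":   rng.integers(0, 6, n),               # predictor: 0-5 PICOT score
    "log10_n": np.log10(rng.integers(20, 200, n)),  # log10 of trial sample size
    "impact":  rng.uniform(1, 6, n),                # journal impact factor
    "funding": rng.integers(0, 2, n),               # funding reported (0/1)
    "consort": rng.integers(0, 2, n),               # journal adopted CONSORT (0/1)
})

# Poisson regression of OQRS on PICOT score, adjusted for the four covariates.
model = smf.glm("oqrs ~ picot + log10_n + impact + funding + consort",
                data=df, family=sm.families.Poisson()).fit()

irr = np.exp(model.params)         # exponentiated coefficients = incidence rate ratios
irr_ci = np.exp(model.conf_int())  # 95% CIs on the IRR scale
print(irr["picot"], irr_ci.loc["picot"].values)
```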

Results

Rating of the research question using the PICOT framework

For the rating of the individual elements of the PICOT research question, the inter-rater agreement estimates using Cohen’s Kappa varied from 0.623 to 1. The median PICOT score was 3 (IQR = 1). The percentage of articles that described each PICOT element of a research question is shown in Table 2. Patients, intervention and comparators were often described well. A minority of the articles described the study’s time-frame, and fewer than 55% of the studies adequately described outcomes within the research question, in the sense of mentioning the study’s primary and secondary outcomes. A complete PICOT-structured research question was present in 2 of the 23 RCTs evaluated.

Table 2 Frequency of description of each PICOT element

Association between PICOT framing of a research question and quality of reporting and key methodological item reporting

Tables 3 and 4 display the results of the multivariable analysis for the factors associated with the OQRS and the KMIS, respectively. After adjusting for sample size, impact factor, journal adoption of the CONSORT statement at the time of our data abstraction and funding reported, a higher PICOT score was not significantly associated with a higher OQRS (Table 3) or KMIS (Table 4) in the multivariable analysis. The model did, however, show a trend for the PICOT score and OQRS, but not the KMIS: for each one-unit increase in the PICOT score there was, on average, a 26.7% increase in the number of quality of reporting items reported. We did not find a significant association between any individual PICOT item and the outcomes OQRS and KMIS (Table 5). In addition, none of the four covariates (sample size, impact factor, journal adoption of the CONSORT statement, and funding reported) was significantly associated with the outcomes OQRS and KMIS in the univariate analyses; the results of the univariate analyses are also reported in Tables 3 and 4.
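The 26.7% figure follows directly from the reported IRR: under the Poisson model, each one-unit increase in PICOT score multiplies the expected number of reported items by the IRR, so the average percent change per unit is

\[
(\mathrm{IRR} - 1) \times 100\% = (1.267 - 1) \times 100\% = 26.7\%.
\]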

Table 3 Association between PICOT score and Overall Quality of Reporting Score (OQRS) a
Table 4 Association between PICOT score and Key Methodological Items Score (KMIS) a
Table 5 Association between individual PICOT items and the outcomes (Overall Quality of Reporting Score (OQRS) a and the Key Methodological Items Score (KMIS) b )

Discussion

We assessed the use of PICOT framing of a research question in the 23 RCTs used within a 2010 meta-analysis of FNB for improving analgesia outcomes after total knee arthroplasty [2]. We found that the framing of the research question was usually incomplete, ambiguous and inconsistently written in either the introduction or methods section of the study; sometimes parts of the research question were split between the introduction and methods sections. Only 2 of the 23 RCTs had a completely structured research question in the PICOT format. This finding is not uncommon, as two studies have previously shown poor PICOT framing of research questions among RCTs [24, 25]; in one of them, a structured research question was present in only 33.7% of the reports assessed [24].

Based on our literature search, this is the second study to look at the association between PICOT framing of a research question and the overall quality of reporting and the reporting of key methodological items of RCTs. This paper is unique in that it reviews RCT literature in anesthesia, specifically studies used in a meta-analysis. Our results did not show a significant association between the completeness of the PICOT framing of a research question and OQRS or KMIS at the significance level we set (p < 0.05). Although we attribute this largely to our small sample size, the results should be interpreted with caution. We did, however, uncover trends and sought comparisons with other studies that used PICOT framing or an analogous predictor variable for OQRS or KMIS. Our review of the literature found one study that showed an association between PICOT framing of a research question and overall reporting quality and reporting of key methodological items, and that reported its findings numerically in a way that could be used for a trend comparison [24]. We were also able to compare the sample size predictor variable with this study.

For the OQRS outcome variable, a few trends were noted. For PICOT framing of a research question (the predictor variable), the direction, magnitude and range of the effect were similar to the comparator study [24]. For the predictor variable of sample size, the direction and magnitude of our effect differed, although the range of our 95% confidence interval (CI) encompassed the incidence rate ratio (IRR) of the comparator study. Again, as mentioned previously, this is likely attributable to our small sample size.

For the KMIS outcome variable we also found some trends. For both predictor variables, PICOT framing of a research question and sample size, the direction, magnitude and range of the effect were similar to the comparator study. These results are promising because, in spite of our limited RCT sample size, we could still show trends indicating a positive association between proper and complete PICOT framing of a research question and OQRS, though not KMIS.

The noted association between the completeness of the PICOT framing of a research question and the quality of reporting is important [24], as it suggests that the theoretical reasoning behind systematically structuring a research question has practical and tangible applications in improving study design and transparency in the actual reporting. A desirable research question is one framed to constructively aid knowledge acquisition [47]. The question’s framing itself plays an important role in methodically and systematically clarifying the thought process needed to assess the required outcomes and the purpose and benefit of the question itself [47, 48]. A structured research question is focused; questions that are too broad are likely to lack methodological rigor [47, 48]. The PICOT method is a good approach because its five constituents help to focus a question and incite thought about rigorous methodological design and the feasibility of answering the question [25, 48, 49].

Limitations

It is important to note the various limitations of our study, their effects, and possible future improvements. One important limitation is that there is no standard instrument to evaluate the quality of reporting of RCTs, and the quality scores obtained from our evaluation instrument are not validated. Most scales used to evaluate the quality of reporting have not been thoroughly developed or tested, because a gold standard (external criterion) is needed to assess the validity and reliability of a scale [50]. Because no gold standard currently exists for the quality of reporting of RCTs, scales used for this purpose can only be assessed against an accepted theoretical model [50, 51]. No single scale has been shown to be superior at measuring the quality of reporting, suggesting that dissimilar attributes of reporting are probably being measured [52, 53] or, as we infer, that there is an element of subjectivity in the way such scales allow assessment. Quality scores based on checklists, such as the ones in our study, may also be unreliable and could introduce bias [52, 53]; scores have been shown to differ depending on the scoring system used [52, 53]. The benefit of the evaluation instrument we used is that it is based on the checklist criteria of the 2010 CONSORT statement, which is widely recognized by journals and editorial groups [18], and the items we assessed are founded on well recognized features expected in the reporting and conduct of RCTs [18].

Another potential limitation is that some PICOT items may be directly related to the CONSORT statement items used to create the OQRS. For instance, the PICOT items “Population”, “Intervention” and “Outcome” may be directly related to the CONSORT items “Participants”, “Intervention” and “Outcomes”, which could affect the analysis. To address this, we examined the associations between the individual PICOT items and the outcomes used in this study (OQRS and KMIS); the results are shown in Table 5. None of the associations with individual PICOT items was statistically significant, which may also reflect low statistical power. In the future it might be best to assess each individual item in our OQRS and KMIS checklists against the completeness of the PICOT-structured research question [17] and to substantially increase the sample size of RCTs to improve the power to detect a difference.

Lastly, the RCTs we evaluated mostly come from specialized anesthesia journals, which may appear to reduce the generalizability of our findings. However, we do not expect generalizability to be affected, because the relationship between PICOT framing and the quality of reporting is not expected to differ across fields. The aim of our study was to assess the association between PICOT framing of a research question and OQRS and KMIS within a meta-analysis that has the potential to inform clinical practice. To increase generalizability, a study of similar design with a much larger sample of RCTs would be needed. Despite these limitations, this study brings value because it shows that proper and clear PICOT framing of a research question is still uncommon.
The study also has internal validity, reflected in the good inter-rater reliability between the two reviewers who independently assessed the quality of reporting, the key methodological items and the PICOT framing of the research question. Our results should nevertheless be interpreted with caution, given the small sample size and the fact that we did not adjust for multiple testing, as our analyses were only exploratory.

Conclusions

This study showed that PICOT framing of research questions in anesthesia-related RCTs is incomplete: only two of the 23 RCTs assessed had a completely structured research question. Although we did not find a statistically significant association between the completeness of the PICOT-structured research question and OQRS or KMIS, our comparison with another study [24] showed clear trends between PICOT and OQRS, but not KMIS. We are aware that our sample size is small and that the results should therefore be interpreted cautiously. Poor overall quality of reporting does not necessarily mean poor methodological rigor within an RCT, as some, or even many, features of a trial may not be adequately reported unless the protocol is assessed alongside it. However, protocols are not as easily accessible, and researchers should be aware when publishing that the published report serves as a surrogate when study quality is assessed. The message of this study for researchers is that a proper and complete PICOT structure of a research question is an important attribute of any study, as it helps to indirectly improve reporting by initiating the thought process needed for thorough study design and management.

Abbreviations

CI:

Confidence interval

CONSORT:

Consolidated Standards of Reporting Trials

FNB:

Femoral Nerve Block

IRR:

Incidence Rate Ratio

IQR:

Inter-Quartile Range

KMIS:

Key Methodological Item Score

RCT:

Randomized Controlled Trial

OQRS:

Overall Quality of Reporting Score

PICOT:

Population, Intervention, Comparator, Outcome, Timeframe

SPSS:

Statistical Package for the Social Sciences.

References

  1. Borg Debono V, Zhang S, Ye C, Paul J, Arya A, Hurlburt L, Murthy Y, Thabane L: The quality of reporting of RCTs used within a postoperative pain management meta-analysis, using the CONSORT statement. BMC Anesthesiol. 2012, 12 (1): 13-10.1186/1471-2253-12-13.

  2. Paul JE, Arya A, Hurlburt L, Cheng J, Thabane L, Tidy A, Murthy Y: Femoral nerve block improves analgesia outcomes after total knee arthroplasty: a meta-analysis of randomized controlled trials. Anesthesiology. 2010, 113 (5): 1144-1162. 10.1097/ALN.0b013e3181f4b18.

  3. Rios LP, Odueyungbo A, Moitri MO, Rahman MO, Thabane L: Quality of reporting of randomized controlled trials in general endocrinology literature. J Clin Endocrinol Metab. 2008, 93 (10): 3810-3816. 10.1210/jc.2008-0817.

  4. Altman DG: The scandal of poor medical research. BMJ. 1994, 308 (6924): 283-284. 10.1136/bmj.308.6924.283.

  5. Scales CD, Norris RD, Keitz SA, Peterson BL, Preminger GM, Vieweg J, Dahm P: A critical assessment of the quality of reporting of randomized, controlled trials in the urology literature. J Urol. 2007, 177 (3): 1090-1094. 10.1016/j.juro.2006.10.027. discussion 1094–5

  6. Can OS, Yilmaz AA, Hasdogan M, Alkaya F, Turhan SC, Can MF, Alanoglu Z: Has the quality of abstracts for randomised controlled trials improved since the release of Consolidated Standards of Reporting Trial guideline for abstract reporting? A survey of four high-profile anaesthesia journals. Eur J Anaesthesiol. 2011, 28 (7): 485-492. 10.1097/EJA.0b013e32833fb96f.

  7. Chan AW, Altman DG: Epidemiology and reporting of randomised trials published in PubMed journals. Lancet. 2005, 365 (9465): 1159-1162. 10.1016/S0140-6736(05)71879-1.

  8. Greenfield ML, Mhyre JM, Mashour GA, Blum JM, Yen EC, Rosenberg AL: Improvement in the quality of randomized controlled trials among general anesthesiology journals 2000 to 2006: a 6-year follow-up. Anesth Analg. 2009, 108 (6): 1916-1921. 10.1213/ane.0b013e31819fe6d7.

  9. Greenfield ML, Rosenberg AL, O’Reilly M, Shanks AM, Sliwinski MJ, Nauss MD: The quality of randomized controlled trials in major anesthesiology journals. Anesth Analg. 2005, 100 (6): 1759-1764. 10.1213/01.ANE.0000150612.71007.A3.

  10. Hopewell S, Dutton S, Yu LM, Chan AW, Altman DG: The quality of reports of randomised trials in 2000 and 2006: comparative study of articles indexed in PubMed. BMJ. 2010, 340: c723-10.1136/bmj.c723.

  11. Lai R, Chu R, Fraumeni M, Thabane L: Quality of randomized controlled trials reporting in the primary treatment of brain tumors. J Clin Oncol. 2006, 24 (7): 1136-1144. 10.1200/JCO.2005.03.1179.

  12. Lai TY, Wong VW, Lam RF, Cheng AC, Lam DS, Leung GM: Quality of reporting of key methodological items of randomized controlled trials in clinical ophthalmic journals. Ophthalmic Epidemiol. 2007, 14 (6): 390-398. 10.1080/09286580701344399.

  13. Mills EJ, Wu P, Gagnier J, Devereaux PJ: The quality of randomized trial reporting in leading medical journals since the revised CONSORT statement. Contemp Clin Trials. 2005, 26 (4): 480-487. 10.1016/j.cct.2005.02.008.

  14. Moher D, Altman DG, Schulz KF, Elbourne DR: Opportunities and challenges for improving the quality of reporting clinical research: CONSORT and beyond. CMAJ. 2004, 171 (4): 349-350. 10.1503/cmaj.1040031.

  15. Balasubramanian SP, Wiener M, Alshameeri Z, Tiruvoipati R, Elbourne D, Reed MW: Standards of reporting of randomized controlled trials in general surgery: can we do better?. Ann Surg. 2006, 244 (5): 663-667. 10.1097/01.sla.0000217640.11224.05.

  16. Moher D, Hopewell S, Schulz KF, Montori V, Gotzsche PC, Devereaux PJ, Elbourne D, Egger M, Altman DG: CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ. 2010, 340: c869-10.1136/bmj.c869.

  17. Juni P, Altman DG, Egger M: Systematic reviews in health care: Assessing the quality of controlled clinical trials. BMJ. 2001, 323 (7303): 42-46. 10.1136/bmj.323.7303.42.

  18. Schulz KF, Altman DG, Moher D: CONSORT Group: CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials. BMC Med. 2010, 8: 18-10.1186/1741-7015-8-18.

  19. Farrokhyar F, Chu R, Whitlock R, Thabane L: A systematic review of the quality of publications reporting coronary artery bypass grafting trials. Can J Surg. 2007, 50 (4): 266-277.

  20. Thomas O, Thabane L, Douketis J, Chu R, Westfall AO, Allison DB: Industry funding and the reporting quality of large long-term weight loss trials. Int J Obes (Lond). 2008, 32 (10): 1531-1536. 10.1038/ijo.2008.137.

  21. Plint AC, Moher D, Morrison A, Schulz K, Altman DG, Hill C, Gaboury I: Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Med J Aust. 2006, 185 (5): 263-267.

  22. Moher D, Schulz KF, Altman DG: The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001, 357 (9263): 1191-1194. 10.1016/S0140-6736(00)04337-3.

  23. Moher D, Jones A, Lepage L: CONSORT Group (Consolidated Standards for Reporting of Trials): Use of the CONSORT statement and quality of reports of randomized trials: a comparative before-and-after evaluation. JAMA. 2001, 285 (15): 1992-1995. 10.1001/jama.285.15.1992.

  24. Rios LP, Ye C, Thabane L: Association between framing of the research question using the PICOT format and reporting quality of randomized controlled trials. BMC Med Res Methodol. 2010, 10: 11-10.1186/1471-2288-10-11.

  25. Thabane L, Thomas T, Ye C, Paul J: Posing the research question: not so simple. Can J Anaesth. 2009, 56 (1): 71-79. 10.1007/s12630-008-9007-4.

  26. Stone P: Deciding upon and refining a research question. Palliat Med. 2002, 16 (3): 265-267. 10.1191/0269216302pm562xx.

  27. Sackett DL, Wennberg JE: Choosing the best research design for each question. BMJ. 1997, 315 (7123): 1636-10.1136/bmj.315.7123.1636.

  28. Heddle NM: The research question. Transfusion. 2007, 47 (1): 15-17. 10.1111/j.1537-2995.2007.01084.x.

  29. Hulley SB: Designing clinical research. 2007, Philadelphia, PA: Wolters Kluwer: Lippincott Williams & Wilkins, 3

  30. Schulz KF, Chalmers I, Hayes RJ, Altman DG: Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA. 1995, 273 (5): 408-412. 10.1001/jama.1995.03520290060030.

  31. Nuesch E, Reichenbach S, Trelle S, Rutjes AW, Liewald K, Sterchi R, Altman DG, Juni P: The importance of allocation concealment and patient blinding in osteoarthritis trials: a meta-epidemiologic study. Arthritis Rheum. 2009, 61 (12): 1633-1641. 10.1002/art.24894.

  32. Nuesch E, Trelle S, Reichenbach S, Rutjes AW, Burgi E, Scherer M, Altman DG, Juni P: The effects of excluding patients from the analysis in randomised controlled trials: meta-epidemiological study. BMJ. 2009, 339: b3244-10.1136/bmj.b3244.

  33. Wood L, Egger M, Gluud LL, Schulz KF, Juni P, Altman DG, Gluud C, Martin RM, Wood AJ, Sterne JA: Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes: meta-epidemiological study. BMJ. 2008, 336 (7644): 601-605. 10.1136/bmj.39465.451748.AD.

  34. Schulz KF, Grimes DA, Altman DG, Hayes RJ: Blinding and exclusions after allocation in randomised controlled trials: survey of published parallel group trials in obstetrics and gynaecology. BMJ. 1996, 312 (7033): 742-744. 10.1136/bmj.312.7033.742.

  35. Cochrane Handbook for Systematic Reviews of Interventions. http://www.cochrane-handbook.org/

  36. Higgins JPT, Altman DG, Sterne JAC: Chapter 8: Assessing risk of bias in included studies. Cochrane Handbook for Systematic Reviews of Interventions. Edited by: Higgins JPT, Green S. 2011, The Cochrane Collaboration. http://www.cochrane-handbook.org/

  37. Miller LE, Stewart ME: The blind leading the blind: use and misuse of blinding in randomized controlled trials. Contemp Clin Trials. 2011, 32 (2): 240-243. 10.1016/j.cct.2010.11.004.

  38. Newell DJ: Intention-to-treat analysis: implications for quantitative and qualitative research. Int J Epidemiol. 1992, 21 (5): 837-841. 10.1093/ije/21.5.837.

  39. Lewis JA, Machin D: Intention to treat–who should use ITT?. Br J Cancer. 1993, 68 (4): 647-650. 10.1038/bjc.1993.402.

  40. Higgins JPT, Deeks JJ, Altman DG: Chapter 16: Special topics in statistics. Cochrane Handbook for Systematic Reviews of Interventions. Edited by: Higgins JPT, Green S. 2011, The Cochrane Collaboration. http://www.cochrane-handbook.org/

  41. Hollis S, Campbell F: What is meant by intention to treat analysis? Survey of published randomised controlled trials. BMJ. 1999, 319 (7211): 670-674. 10.1136/bmj.319.7211.670.

  42. Higgins JPT, Green S, Cochrane Collaboration: Cochrane handbook for systematic reviews of interventions. 2008, Chichester, West Sussex; Hoboken NJ: Wiley-Blackwell

  43. Thomson Reuters Impact Factor. http://thomsonreuters.com/products_services/science/free/essays/impact_factor/

  44. Eypasch E, Lefering R, Kum CK, Troidl H: Probability of adverse events that have not yet occurred: a statistical reminder. BMJ. 1995, 311 (7005): 619-620. 10.1136/bmj.311.7005.619.

  45. Landis JR, Koch GG: The measurement of observer agreement for categorical data. Biometrics. 1977, 33 (1): 159-174. 10.2307/2529310.

  46. Bender R, Lange S: Adjusting for multiple testing–when and how?. J Clin Epidemiol. 2001, 54 (4): 343-349. 10.1016/S0895-4356(00)00314-0.

  47. Bragge P: Asking good clinical research questions and choosing the right study design. Injury. 2010, 41 (Suppl 1): S3-S6.

  48. Clouse RE: Proposing a good research question: a simple formula for success. Gastrointest Endosc. 2005, 61 (2): 279-280. 10.1016/S0016-5107(04)02579-9.

  49. Straus SE: Evidence-based medicine: how to practice and teach EBM. 2005, Edinburgh; New York: Elsevier/Churchill Livingstone, 3

  50. Verhagen AP, de Vet HC, de Bie RA, Boers M, van den Brandt PA: The art of quality assessment of RCTs included in systematic reviews. J Clin Epidemiol. 2001, 54 (7): 651-654. 10.1016/S0895-4356(00)00360-7.

  51. Olivo SA, Macedo LG, Gadotti IC, Fuentes J, Stanton T, Magee DJ: Scales to assess the quality of randomized controlled trials: a systematic review. Phys Ther. 2008, 88 (2): 156-175. 10.2522/ptj.20070147.

  52. Juni P, Witschi A, Bloch R, Egger M: The hazards of scoring the quality of clinical trials for meta-analysis. JAMA. 1999, 282 (11): 1054-1060. 10.1001/jama.282.11.1054.

  53. Herbison P, Hay-Smith J, Gillespie WJ: Adjustment of meta-analyses on the basis of quality scores should be abandoned. J Clin Epidemiol. 2006, 59 (12): 1249-1256.


Acknowledgements

LT is the clinical trials mentor for the Canadian Institutes of Health Research. The study was funded in part by the CANNeCTIN programme and by the Drug Safety and Effectiveness Cross-Disciplinary Training (DSECT) Program.

Author information


Corresponding author

Correspondence to Lehana Thabane.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

VBD was involved with the conception of the study, participated in the study design, the acquisition of data, helped to perform the statistical analysis, made substantial contributions to the interpretation of the data, and drafted the manuscript. SZ was substantially involved with data acquisition, and critical revisions to the paper. CY made substantial contributions to the statistical design, statistical analysis and interpretation of data and helped critically revise the manuscript. LT was involved with the conception of the study and made substantial contributions to the statistical analysis, interpretation of data and critically revised the manuscript. JP, LH, AA and YM were involved with contributing data and helped draft and edit the final manuscript. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is published under license to BioMed Central Ltd. This Open Access article is distributed under the terms of the Creative Commons Attribution License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Borg Debono, V., Zhang, S., Ye, C. et al. A look at the potential association between PICOT framing of a research question and the quality of reporting of analgesia RCTs. BMC Anesthesiol 13, 44 (2013). https://doi.org/10.1186/1471-2253-13-44
