
PreAnaesThesia computerized health (PATCH) assessment: development and validation

Abstract

Background

Technological advances in healthcare have enabled patients to participate in digital self-assessment, with reported benefits of enhanced healthcare efficiency and self-efficacy. This report describes the design and validation of a patient-administered digital application for gathering medical history relevant to preanaesthesia assessment. Effective preoperative evaluation allows for timely optimization of medical conditions and reduces case cancellations on the day of surgery.

Methods

Using an iterative mixed-methods approach of literature review, surveys and panel consensus, the study sought to develop a digitized preanaesthesia health assessment questionnaire and to evaluate its face and criterion validity. A total of 228 patients were enrolled at the preoperative evaluation clinic of a tertiary women’s hospital. Inclusion criteria were: age ≥ 21 years, scheduled for same-day-admission surgery, literacy in English, and willingness to use a digital device. Patient perception of the digitized application was also evaluated using the QQ-10 questionnaire. Reliability of the health assessment questionnaire was evaluated by comparing the percentage agreement of patient responses with nurse assessment.

Results

Moderate to good criterion validity was obtained for 81.1% and 83.8% of questions in the paper and digital questionnaires, respectively. Of the 3626 response-pairs obtained for the digital questionnaire, 3405 (93.9%) were concordant and 221 (6.1%) were discrepant. Contradictory response-pairs, such as “no/yes”, constituted only 3.7% of total response-pairs. Patient acceptability of the digitized assessment was high, with QQ-10 value and burden scores of 76% and 30%, respectively.

Conclusions

Self-administration of digitized preanaesthesia health assessment is acceptable to patients and reliable in eliciting medical history. Further iterations should focus on improving the reliability of the digital tool, adapting it for use in other languages, and incorporating clinical decision support tools.


Background

Current practice guidelines mandate that patients undergo preanaesthesia assessment prior to surgery and anaesthesia, defined as the process of clinical assessment that precedes the delivery of anaesthesia care for surgery and non-surgical procedures [1]. Its goal is to allow for timely identification and optimization of medical conditions, thereby reducing perioperative morbidity and mortality. Effective preoperative evaluation can also decrease case delays and cancellations on the day of surgery [2].

Traditionally, preanaesthesia assessment is conducted by a health care provider via a face-to-face interview with the patient. Studies suggest that self-administration of digital assessment questionnaires is a feasible means of gathering medical information for preanaesthesia assessment [3,4,5,6,7,8,9,10,11,12]. Compared with in-person interviews, these digital self-assessment tools are associated with good patient acceptance and satisfaction, reliable information capture and improved efficiency of assessment [4, 6, 9].

At the preoperative evaluation clinic of our hospital, a 33-item preanaesthesia health assessment paper questionnaire is currently administered to elective surgical patients by nurses to gather medical information pertinent to preanaesthesia assessment [13]. Based on pre-determined criteria, responses help to identify patients with medical issues who require outpatient anaesthetic review 2 to 4 weeks in advance of surgery. Relatively healthy patients are allowed to bypass outpatient referral and undergo standard anaesthetic review on the day of surgery. The questionnaire has served our purpose well but is not designed for patient self-administration, as it contains technical language and medical terms.

In line with global advances in information technology, healthcare institutions are increasingly leveraging digital health technologies for care delivery. Local hospital statistics indicate that an average of 900 patients are scheduled for elective surgery every month, and this number is expected to increase as the disease burden grows with an aging population. To cope with this demand, we postulated that a patient self-administered digital health assessment tool could be developed and implemented for gathering medical history relevant to preanaesthesia assessment. A virtual tool allows remote access, so that assessment questionnaires can be completed at a time, place and pace convenient to the patient. The present study describes our experience in the development and validation of a patient-administered digital preanaesthesia health assessment questionnaire on a tablet device at a tertiary hospital.

Methods

The study was conducted at a preoperative evaluation clinic that provides care for women scheduled for elective surgery at a tertiary hospital. A working group comprising three consultant anaesthetists, six clinic nurses and five digital health researchers from a local medical school sought to develop a patient self-administered digital preanaesthesia health assessment application through an iterative process. Ethics approval was granted by the Nanyang Technological University Institutional Review Board (Ref: IRB-2017-12-011) and the SingHealth Institutional Review Board (Ref: 2017/3002).

A mixed-methods approach was adopted. A paper version of the questionnaire was first designed and validated before conversion to a digital prototype. We refer to the paper versions as Forms 1 and 2 and the digital version as Form 3. All versions of the questionnaire were developed in English, and each version iteratively improved on the previous one. Figure 1 describes the phases of development and validation of the questionnaires from paper to digital formats.

Fig. 1 Mixed-methods approach to the development of PATCH

Phase 1: development and assessment of Form 1

The self-administered paper questionnaire, Form 1, was designed after an extensive review of relevant literature via PubMed and Google Scholar. Search terms included (preanaesthesia or preanesthesia or pre-anaesthetic or pre-anesthetic or preoperative or pre-operative) and (health assessment or screening or questionnaire) and/or (validation). Shortlisted questionnaires were further examined for the scope and relevance of domains and items, response types (i.e. binary/non-binary/free-text), and the design format of questions. Through consensus, the working group also determined the clinically relevant domains and corresponding items to be included in Form 1.

The first draft of Form 1 was then presented to twelve attending anaesthetists of the hospital for evaluation of its face validity. While the domains were deemed adequate, the anaesthetists suggested adding follow-up questions to qualify some items, e.g. the number of pack-years as a follow-up to a positive history of cigarette smoking.

The draft of Form 1 was also evaluated at a workshop, where multidisciplinary staff of a local medical school provided feedback on its readability, clarity and contextualization. Participants suggested terms to replace technical jargon and ways to reduce ambiguity in the questions. Questions were structured according to domains of the body systems, and each question was verified to assess only one domain or concept to the extent permissible (Supplementary Box 1). A glossary of terms (Supplementary Box 2) was also appended to explain medical terms.

With the collective feedback obtained, Form 1 was finalised and administered as a pen-and-paper survey to a convenience sample of 33 patients in a pilot study, hereafter referred to as “Study 1”. The aim of this survey was to identify problems that had not been addressed or considered during the design of the questionnaire. Inclusion criteria for patient recruitment were: age ≥ 21 years, ability to read and write in English, and scheduled for same-day-admission surgery. After written informed consent, all patients were given instructions on the completion of Form 1, with emphasis on unassisted self-assessment. Following that, each patient underwent a semi-structured interview using questions adapted from the QQ-10 questionnaire [14], an established instrument for measuring the face validity, feasibility and utility of healthcare questionnaires. For our purpose, the original QQ-10 was modified by amending the options “mostly disagree” to “disagree” and “mostly agree” to “agree”. Upon completion of the interview, each patient underwent a nurse-led assessment as per standard of care. Demographic data and time taken to complete Form 1 were recorded. Data from Form 1 were analyzed using IBM SPSS version 25 (IBM Corp., Armonk, NY, USA). Data from the QQ-10 questionnaire were analyzed quantitatively and qualitatively, the latter using thematic analysis.

Phase 2: iteration of Form 1 to Form 2, with validation

Form 2 was an iteration of Form 1, based on the feedback received from participants of Study 1. Improvements included the rephrasing of questions to improve clarity and the insertion of visual illustrations. The explanations of eight terms in the glossary section were also edited to improve ease of understanding (Supplementary Box 3).

A validation study, hereafter referred to as “Study 2”, was conducted with a larger convenience sample of 104 patients. Patients scheduled for same-day-admission surgery were recruited on presentation to the preoperative evaluation clinic during the designated study period. The primary aim of the study was to evaluate the criterion validity of Form 2 before its conversion to a digital prototype. The inclusion criteria were similar to those of Study 1.

The sample size was chosen based on a similar study reported in the literature [15]. Consenting patients first completed Form 2 independently, after which their responses were verified by a nurse via a structured face-to-face interview guided by Form 2. If a discrepancy in responses was noted, the nurse made annotations after verification with the patient. Each patient was also interviewed using the modified QQ-10 questionnaire, as described for Study 1.

Criterion validity of the questionnaire was assessed by measuring the agreement between patient responses and those obtained during nurse assessment. To account for questions with a prevalence of < 5% or > 95%, we opted to report percentage agreement (PA) instead of the Kappa coefficient. PA is defined as the number of questions with concurring responses divided by the total number of questions. Criterion validity was considered good if PA ≥ 95%, moderate if PA was 90% to < 95%, and poor if PA < 90% [10]. The frequencies of identical (“yes/yes”, “no/no”, “unsure/unsure”), contradictory (“yes/no”, “no/yes”), and non-contradictory (“unsure/yes”, “unsure/no”, “yes/unsure”, “no/unsure”) response pairs were also analysed. The sum of the contradictory and non-contradictory response rates gives the total discrepancy error rate. Data were analyzed using IBM SPSS version 25, as described for Study 1.
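To illustrate how these metrics are derived, the sketch below (in TypeScript) computes the percentage agreement for a set of patient-nurse response pairs and classifies each pair. The data structures and function names are our own illustrations; they do not reflect the study's actual analysis, which was performed in SPSS.

type Response = "yes" | "no" | "unsure";
interface ResponsePair { patient: Response; nurse: Response; }

// PA = number of concurring response pairs / total number of pairs, expressed as a percentage.
function percentageAgreement(pairs: ResponsePair[]): number {
  const concordant = pairs.filter(p => p.patient === p.nurse).length;
  return (concordant / pairs.length) * 100;
}

// Criterion validity bands as defined above.
function criterionValidity(pa: number): "good" | "moderate" | "poor" {
  if (pa >= 95) return "good";      // good: PA >= 95%
  if (pa >= 90) return "moderate";  // moderate: 90% <= PA < 95%
  return "poor";                    // poor: PA < 90%
}

// Pair classification: identical, contradictory ("yes/no" or "no/yes"),
// or non-contradictory (any discrepancy involving "unsure").
function classifyPair({ patient, nurse }: ResponsePair): "identical" | "contradictory" | "non-contradictory" {
  if (patient === nurse) return "identical";
  if (patient !== "unsure" && nurse !== "unsure") return "contradictory";
  return "non-contradictory";
}

// Example: one concordant and one contradictory pair gives PA = 50%, i.e. poor agreement.
const example: ResponsePair[] = [
  { patient: "yes", nurse: "yes" },
  { patient: "no", nurse: "yes" },
];
console.log(percentageAgreement(example), criterionValidity(percentageAgreement(example)));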

Phase 3: development and validation of Form 3

The iteration of Form 2 to Form 3 (the first digital prototype) was based on findings obtained from Phase 2 and renewed input from the working group (Supplementary Box 4). The digital application, called PreAnaesThesia Computerized Health assessment, or PATCH, was developed for iOS tablets using React Native, a JavaScript framework. The server was built with Node.js, a JavaScript runtime, and data were stored in a MongoDB database. For the purpose of the study, the server program and database were hosted on a secure server at the Nanyang Technological University.
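The application's source code is not part of this report. As a rough sketch only, assuming a conventional Express and Mongoose setup with illustrative collection and field names, the server side of such a stack could persist submitted questionnaire responses as follows (TypeScript).

import express from "express";
import mongoose from "mongoose";

// Illustrative schema for a submitted answer; the actual PATCH data model is not published.
const responseSchema = new mongoose.Schema({
  patientId: { type: String, required: true },
  questionId: { type: String, required: true },
  answer: { type: String, enum: ["yes", "no", "unsure"] },
  freeText: String, // e.g. medication names or previous surgeries
  submittedAt: { type: Date, default: Date.now },
});
const QuestionnaireResponse = mongoose.model("QuestionnaireResponse", responseSchema);

const app = express();
app.use(express.json());

// The tablet application would POST the completed self-assessment to an endpoint like this.
app.post("/responses", async (req, res) => {
  try {
    const saved = await QuestionnaireResponse.insertMany(req.body.responses);
    res.status(201).json({ count: saved.length });
  } catch {
    res.status(400).json({ error: "Invalid submission" });
  }
});

async function main() {
  await mongoose.connect("mongodb://localhost:27017/patch"); // placeholder connection string
  app.listen(3000);
}
main();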

Improvements adopted for Form 3 included further rephrasing of questions to reduce ambiguity and deletion of questions deemed irrelevant. To help patients list their medications and previous surgeries, a drop-down list of common medications and surgeries was developed using data gathered from participants in Phases 1 and 2. The glossary of terms was configured to appear as pop-up explanation boxes activated by screen touch. In addition, the application was designed to provide a summary page for review and final edits before submission. Screenshots of the digital prototype are shown in Supplementary Figure 1.
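As an illustration of the pop-up glossary described above, the following React Native sketch shows a term that opens an explanation box when touched. The component and its props are hypothetical, written for this description rather than taken from the PATCH source.

import React, { useState } from "react";
import { Modal, Pressable, Text, View } from "react-native";

// Hypothetical component: tapping the underlined term opens a pop-up explanation box.
export function GlossaryTerm({ term, explanation }: { term: string; explanation: string }) {
  const [visible, setVisible] = useState(false);
  return (
    <View>
      <Pressable onPress={() => setVisible(true)}>
        <Text style={{ textDecorationLine: "underline" }}>{term}</Text>
      </Pressable>
      <Modal transparent visible={visible} onRequestClose={() => setVisible(false)}>
        <View style={{ margin: 40, padding: 16, backgroundColor: "white", borderRadius: 8 }}>
          <Text>{explanation}</Text>
          <Pressable onPress={() => setVisible(false)}>
            <Text>Close</Text>
          </Pressable>
        </View>
      </Modal>
    </View>
  );
}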

As the criterion validity of a paper questionnaire does not necessarily extend to its electronic format [16], validation of the digital prototype, Form 3, was conducted in Study 3. In addition to the inclusion criteria described in the earlier phases, the ability to use a tablet device was added as a criterion for recruitment. One hundred and six patients were recruited at the preoperative evaluation clinic over 8 weeks. Consenting participants completed the digital self-assessment on a tablet unaided, then underwent nurse assessment via a provider interface of the digital tool, with the nurse blinded to the patient’s responses. PA was measured for each response pair. Time to completion of self-assessment was captured automatically by the application. Data were analysed using IBM SPSS version 25.

Results

Study 1 (survey)

Of 33 patients recruited, 32 completed the study. One patient was excluded when her admission was converted from same-day admission to inpatient admission. Patients identified themselves as Chinese (23, 71.9%), Malay (4, 12.5%), Indian (1, 3.1%) and others (4, 12.5%), consistent with the ethnic distribution of the local general population. Median (IQR) age was 37 (32.2, 43) years. Median (IQR) time to complete self-assessment was 4 (3, 5) minutes. None of the patients identified any question as uncomfortable to answer (Table 1). Table 2 describes patient perception of selected statements from the QQ-10. Overall, patient perception of self-assessment was favourable.

Table 1 Patients’ assessment of Form 1 (n = 32)
Table 2 Patient feedback on Form 1, based on modified QQ-10 questionnaire (n = 32)

A total of 48 feedback comments were obtained from 21 patients. They pertained mostly to the clarification of medical terms (13, 61.9%) and the availability of options to guide entry of medications and past surgeries (8, 38.1%). These comments were taken into consideration in the iteration of Form 1 to Form 2.

Study 2

Of 104 patients recruited, 98 (94.2%) completed the study; 6 were excluded due to incomplete paperwork. Patients identified themselves as Chinese (50, 51%), Malay (24, 24.5%), Indian (10, 10.2%) and others (14, 14.3%), with a median (IQR) age of 38.5 (33, 44) years. Median (IQR) time to complete preanaesthesia self-assessment was 7.3 (5.6, 9.4) minutes, and patients generally responded favourably to statements measuring value in the QQ-10 questionnaire (Table 3). Among negative perceptions, the length of the questions emerged as the most frequent reason. Of note, 82 (83.7%) participants were willing to use a digital version of the questionnaire in the future.

Table 3 Patient feedback on Form 2, based on modified QQ-10 questionnaire (n = 98)

Analysis of patient feedback on the design of Form 2 revealed a total of 56 comments from 32 (32.7%) patients. The majority of comments referred to the need for clarification of medical terms (23, 71.9%), and there were requests to shorten the questionnaire (3, 9.4%). Overall QQ-10 value and burden scores were 76% (SD 13%) and 30% (SD 12.5%), respectively. The mean score for value questions ranged from 2.9 to 3.3, while the mean score for burden questions ranged from 0.9 to 1.64.
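The percentage scores and the mean item scores are consistent if one assumes the usual QQ-10 convention of rating each item from 0 to 4 and expressing the result relative to the maximum attainable score; this scoring rule is our assumption for illustration, as it is not stated explicitly in the text.

// Assumed scoring (items rated 0-4); percentage score = mean item score / 4 * 100.
// This is our reading for illustration only, not a rule reported by the authors.
function percentScore(itemScores: number[]): number {
  const mean = itemScores.reduce((sum, s) => sum + s, 0) / itemScores.length;
  return (mean / 4) * 100;
}
// e.g. value items averaging about 3.0 of 4 correspond to roughly 76%,
// and burden items averaging about 1.2 of 4 correspond to roughly 30%.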

Table 4 shows the inter-rater reliability of Form 2. Good criterion validity was attained for 24 of 37 (65%) questions. Six (16%) questions were classified as having moderate criterion validity, while seven (19%) had poor criterion validity. The total number of response pairs was 3626. Of these, 3432 were identical, giving a concordance rate of 94.6%. Sixty-seven (1.8%) were discrepant contradictory responses and 127 (3.5%) were discrepant non-contradictory responses, giving a total of 194 (5.4%) discrepant responses. The most common discrepant response pair was “unsure/no” (94, 2.6%), followed by “yes/no” (63, 1.7%) and “unsure/yes” (32, 0.9%).

Table 4 Inter-rater Reliability Testing of Form 2

Study 3

Of 104 patients recruited, 98 (94.2%) completed the study. They were predominantly of Chinese (55, 55.1%), Malay (18, 18.4%) and Indian (6, 6.1%) ethnicity. Notably, 88 (89.8%) patients were below 50 years of age. Median (IQR) time to complete self-assessment on the digital application was 6.4 (4.8, 8.6) minutes. Table 5 shows the results of reliability testing of Form 3. Good criterion validity was obtained for 23 of 37 (62%) questions. Eight (22%) questions had moderate criterion validity, while six (16%) had poor criterion validity. The total number of response pairs was 3626. Of these, 3405 were identical, giving a concordance rate of 93.9%. There were 133 (3.7%) discrepant contradictory responses and 88 (2.4%) discrepant non-contradictory responses, giving a total of 221 (6.1%) discrepant responses. The most common discrepant response pair was “yes/no” (89, 2.5%), followed by “unsure/no” (76, 2.1%), “no/yes” (44, 1.2%), “unsure/yes” (11, 0.3%) and “yes/unsure” (1, 0.03%).

Table 5 Inter-rater Reliability Testing of Form 3

Based on these findings, the working group made further enhancements to the digital application (Supplementary Box 6). In summary, the “unsure” option was deleted to encourage commitment to a definitive response. Probing follow-up questions were also added to specific domains to better qualify symptoms. Drop-down options for past surgeries, medications and allergies were updated to include more choices. These amendments led to the development of an improved digital version; its implementation feasibility was reported in a recently published study [17].

Discussion

Using a robust, mixed-methods approach, the present study describes the development and validation of a patient-administered digital assessment application on a tablet device for the purpose of gathering medical history relevant to preanaesthesia assessment. The PreAnaesThesia Computerized Health (PATCH) assessment application was well accepted by patients and reliable when compared with nurse-led assessment.

For health assessment instruments to have practical value, they should be both reliable and valid. Face validity, a component of content validity, is the extent to which items are perceived to be relevant to the intended construct, while criterion validity reflects how closely responses agree with a reference standard, in this case the nurse-led assessment. Compared with published studies [10], the present study achieved > 90% criterion validity in 84% of questions in the digital prototype. The difference could be related to differences in subject characteristics, such as literacy and social factors, which result in different perceptions and interpretations of the questions [15].

In the analysis of responses between patient self-assessment and nurse assessment, discrepant response-pairs such as “no/yes” and “unsure/yes” are concerning, as they suggest failure of the digital tool to detect an issue that is eventually uncovered by nurse assessment. Fortunately, contradictory response-pairs constituted only 3.7% of total responses in the validation of the digital prototype. The fact that 93.9% of total response-pairs were concordant strongly supports its reliability in gathering preoperative medical information.

We observed that participants in the present study were mainly English-literate women with a median age below 40 years. Studies suggest that age and literacy can affect a patient’s perception of, and willingness to adopt, mobile health technology. In a systematic review of barriers to adopting telemedicine, age, level of education and computer literacy emerged as key patient-related determinants [18]. The authors speculated that a preference for personalised care and a lack of training in new technology among older patients could have contributed to this observation. In another study that examined the usage patterns of virtual health services, younger and predominantly female patients were more likely to be early adopters of virtual medical consultations [19]. In driving digital strategies for patient care, healthcare organisations must address the needs of patients and tailor the engagement platform to patients’ preferences and technology know-how. In the present study, 83.7% of participants in Phase 2 expressed receptiveness to the use of a digital self-assessment tool. This is not surprising, given the young age of our patients and the high internet penetration rate in the local population [20]. Concerns about data breaches should be addressed through strict regulation and compliance with Health Level 7 (HL7) standards, with secure networks, data encryption, login controls and auditing.

Positive patient acceptance of digital health assessment has motivated us to re-design our clinical pathways, leveraging telemedicine to achieve greater value. PATCH could serve as an online triage tool to determine whether patients undergo a tele-consultation or an in-person consultation for anaesthetic referral. With an average patient wait-time of 24 minutes at the preoperative evaluation clinic (unpublished data from an internal audit), conversion of physical consultations to tele-consultations could improve patient experience and clinic efficiency. The clinic could, in turn, focus its resources on optimizing care for medically complex patients who present in person for consultation. Reducing physical visits to healthcare facilities could also confer the benefit of reducing the transmission of infectious diseases [21].

There are limitations to the present study. Recruitment of a larger sample would have allowed the use of the Kappa coefficient to measure criterion validity. As the patients’ socio-economic characteristics were not reported, we could not control for bias due to socio-economic factors. The study was conducted in young adult female patients of a single local healthcare facility and may yield different results in a mixed-gender population or another clinical setting. As the application was developed in English, the results may not extrapolate to questionnaires translated into other languages. Further research is directed at improving the validity of the digital application and adapting it for use in other languages. There is also a plan to incorporate decision support tools to aid risk prediction and clinical decision-making. To maintain the human touch, questions would be developed to simulate human-to-human conversation, incorporating elements of empathy [22], a technique demonstrated to elicit responses more effectively from subjects during a computerised interview.

Conclusion

Self-administration of digitized preanaesthesia health assessment is acceptable to patients and reliable in eliciting medical history. Further iterations should focus on improving the reliability of the digital tool, adapting it for use in other languages, and incorporating clinical decision support tools.

Availability of data and materials

Data collected and analyzed for the study are available from the corresponding author upon reasonable request.

Abbreviations

PATCH:

Preanaesthesia computerized health (assessment)

PHA:

Preanaesthesia health assessment

SDA:

Same-day-admission

PA:

Percentage agreement

References

  1. Apfelbaum JL, Connis RT, Nickinovich DG, Pasternak LR, Arens JF, Caplan RA, et al.; Committee on Standards and Practice Parameters; American Society of Anesthesiologists Task Force on Preanesthesia Evaluation. Practice advisory for preanesthesia evaluation: an updated report by the American Society of Anesthesiologists Task Force on Preanesthesia Evaluation. Anesthesiology. 2012;116(3):522–38.

  2. Ferschl MB, Tung A, Sweitzer BJ, Huo D, Glick DB. Preoperative clinic visits reduce operating room cancellations and delays. Anesthesiology. 2005;103(4):855–9.

  3. Vitkun SA, Gage JS, Anderson DH, Williams SA, Halpern-Lewis JG, Poppers PJ. Computerization of the preoperative anesthesia interview. Int J Clin Monit Comput. 1995;12(2):71–6.

  4. VanDenKerkhof EG, Goldstein DH, Blaine WC, Rimmer MJ. A comparison of paper with electronic patient-completed questionnaires in a preoperative clinic. Anesth Analg. 2005;101(4):1075–80.

  5. Tompkins BM, Tompkins WJ, Loder E, Noonan AF. A computer-assisted preanesthesia interview: value of a computer-generated summary of patient's historical information in the preanesthesia visit. Anesth Analg. 1980;59(1):3–10.

  6. Essin DJ, Dishakjian R, Essin CD, Steen SN. Development and assessment of a computer-based preanesthetic patient evaluation system for obstetrical anesthesia. J Clin Monit Comput. 1998;14(2):95–100.

  7. Goodhart IM, Andrzejowski JC, Jones GL, Berthoud M, Dennis A, Mills GH, Radley SC. Patient-completed, preoperative web-based anaesthetic assessment questionnaire (electronic personal assessment questionnaire PreOperative): development and validation. Eur J Anaesthesiol. 2017;34(4):221–8.

  8. Beers RA, O'Leary CE, Franklin PD. Comparing the history-taking methods used during a preanesthesia visit: the HealthQuiz versus the written questionnaire. Anesth Analg. 1998;86(1):134–7.

  9. Howell M, Hood AJ, Jayne DG. Use of a patient completed iPad questionnaire to improve pre-operative assessment. J Clin Monit Comput. 2017;31(1):221–5.

  10. Zuidema X, Leuverink T, Houweling P. Validation of a patient self-administered pre-anaesthetic screening questionnaire. Int J Cover Surg Anaesthesiol Nurs Manag Issues Day Surg. 2014;31:181–5.

  11. Vitkun SA, Halpern-Lewis JG, Williams SA, Gage JS, Poppers PJ. Patient's perceptions of an anesthesia preoperative computerized patient interview. J Clin Monit Comput. 1999;15(7):503–7.

  12. Zuidema X, Tromp Meesters RC, Siccama I, Houweling PL. Computerized model for preoperative risk assessment. Br J Anaesth. 2011;107(2):180–5.

  13. Lew E, Pavlin DJ, Amundsen L. Outpatient preanaesthesia evaluation clinics. Singap Med J. 2004;45(11):509–16.

  14. Moores KL, Jones GL, Radley SC. Development of an instrument to measure face validity, feasibility and utility of patient questionnaire use during health care: the QQ-10. Int J Qual Health Care. 2012;24(5):517–24.

  15. Hilditch W, Asbury A, Jack E, McGrane S. Validation of a pre-anaesthetic screening questionnaire. Anaesthesia. 2003;58(9):876–7.

  16. Juniper EF, Langlands JM, Juniper BA. Patients may respond differently to paper and electronic versions of the same questionnaires. Respir Med. 2009;103(6):932–4.

  17. Osman T, Lew E, Lum E, Chew J, Dabas R, Sng BL, Car J. Effect of PreAnaesThesia computerized health (PATCH) assessment on duration of nurse—patient consultation and patient experience: a pilot trial. Int J Environ Res Pub Health. 2020;17(14):4972.

  18. Kruse CS, Karem P, Shifflett K, Vegi L, Ravi K, Brooks M. Evaluating barriers to adopting telemedicine worldwide: a systematic review. J Telemed Telecare. 2018;24(1):4–12.

  19. Jung C, Padman R. Virtualized healthcare delivery: understanding users and their usage patterns of online medical consultations. Int J Med Inform. 2014;83(12):901–14.

  20. Digital Influence Lab Pte Ltd. Singapore Digital Marketing Statistics. Availabe online: https://digitalinfluencelab.com/singapore-digital-marketing-stats/ (Accessed on 21 January 2020).

  21. Wosik J, Fudim M, Cameron B, Gellad ZF, Cho A, et al. Telehealth transformation: COVID-19 and the rise of virtual care. J Am Med Inform Assoc. 2020;27(6):957–62.

  22. Peiris DR, Gregor P, Alm N. The effects of simulating human conversational style in a computer-based interview. Interact Comput. 2000;12(6):635–50.

Acknowledgments

The authors would like to thank: Choy Jin Xiang and Lee King Chuan for helping to develop the software; Teng Han Hong April and Liu Juan for research administrative support; Xu Xuelian, Kee Hwei Min, Noor Haslinda Binte Khamis, Noraidah Binte Mansoor, Yip SeokCheng and Liew Chow Fong for the excellent nursing support at the preoperative clinic; Dr. Thach Thuan Quoc and Dr. Leong Wan Ling for reviewing the draft manuscripts.

Funding

None.

Author information

Contributions

TO helped with the planning of the study, data collection, compilation of statistics and preparation of the manuscript. EL helped with the planning of the study and preparation of the manuscript. EPML helped with the planning of the study, the feedback workshop for Form 1, research mentoring of TO, and preparation of the manuscript. LG helped with the planning of the study. RD and BLS helped with the planning of the study and preparation of the manuscript. JC helped with the planning of the study and preparation of the manuscript. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Eileen Lew.

Ethics declarations

Ethics approval and consent to participate

Ethics approval was granted by the Nanyang Technological University Institutional Review Board (Ref: IRB-2017-12-011) and the SingHealth Institutional Review Board (Ref: 2017/3002). As required by the institutional review boards, written informed consent was obtained from all patients.

Consent for publication

Not applicable.

Competing interests

None.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1:

Supplementary Material. (DOCX 270 kb)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Osman, T., Lew, E., Lum, E.PM. et al. PreAnaesThesia computerized health (PATCH) assessment: development and validation. BMC Anesthesiol 20, 286 (2020). https://doi.org/10.1186/s12871-020-01202-8

  • DOI: https://doi.org/10.1186/s12871-020-01202-8
