We read with interest the discussion by Nadarzynski et al of the acceptability of artificial intelligence (AI)-embedded chatbots, video consultations and live webchats in sexual and reproductive health (SRH).1 However, we think the broad conclusion that among patients "there is currently little support for SRH chatbots" overlooks potentially important uses of this technology.
A recent systematic review on AI in primary care published in BMJ Health and Care Informatics highlighted the importance of testing AI in a scenario similar to that in which it would be used.2 In Nadarzynski et al’s study, opinions were collected from patients who had already come to a sexual health clinic and registered for an appointment. It is understandable that, having already made their way to clinic, these patients would prefer to see a doctor rather than an AI alternative. We suggest instead that AI could be better utilised in SRH as a tool to triage patients who face barriers to attending clinic. This could be particularly useful for adolescent patients, who made up just 3% of Nadarzynski et al’s population and who are likely to be more accepting of technological alternatives.
For many adolescents and young people, a major barrier to attending clinic is embarrassment.3 As this is a very high-risk group, finding ways to overcome this barrier is vital.4 A study by Aicken et al in 2016 recruited 16–24-year-olds from a general UK college population and undertook detailed interviews investigating their opinions on online sexual healthcare. Participants described avoiding sexual health clinics because of embarrassment and worries about being recognised, as well as concerns about friends and family finding out that they had attended a clinic. The young people approved of the ‘faceless’ nature of online consultations and valued being able to conceal their contact with sexual healthcare.5
We believe this demonstrates that a young population would be receptive to an anonymous online tool that could help to triage their symptoms. For this reason, we do not believe that the AI was tested in a scenario similar to that in which it could be used, and consequently Nadarzynski et al’s conclusion, namely that AI chatbots are not acceptable to all patients, should be reconsidered.
Footnotes
Twitter @_lizzywasson
Contributors EJW wrote the bulk of the letter, while the other authors aided with the formulation of ideas and proofreading.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Patient consent for publication Not required.
Provenance and peer review Not commissioned; internally peer reviewed.