Using community preferences to inform policy: Why we shouldn’t rush to use AI in breast cancer screening.

I’m very excited to share our new paper, showing that Australian women are divided on the use of artificial intelligence (AI) in breast cancer screening. While AI has the potential to enhance the accuracy of mammogram reviews and reduce healthcare costs, many women remain sceptical.

Our study, which used a discrete choice experiment to survey over 800 Australian women, sought to understand their feelings about this technology and whether it would affect their participation in breast cancer screening.

Reactions to using AI in breast cancer screening were mixed: 40% of respondents were open to using AI if it proves more accurate than human radiologists, 42% were strongly opposed, and 18% had reservations that would need addressing.

If it is going to be implemented, women want the AI to be accurate, Australian-owned, representative of Australian women, and faster than human radiologists. We saw that up to 22% of respondents might reduce their participation in breast cancer screening if AI is implemented in a way that makes them uncomfortable.

This supports what we see in other countries. In particular, people expect AI systems to have strong evidence they perform better than current systems before implementation. This evidence is currently not available, suggesting that implementing AI now could undermine trust in breast cancer screening programs.

This study was funded through a Sydney Cancer Institute Seed Grant, and was done with a great group of collaborators: Stacy Carter, Helen Frazer, Nehmat Houssami, Mary Macheras-Magias, Genevieve Webb, and Luke Marinovich. I want to give a special shout out to our consumer and lived experience collaborators who were instrumental in developing and interpreting the results: Genevieve Webb (nominated through Health Consumers NSW) and Mary Macheras-Magias (nominated through BCNA Seat at the Table program).

Practical resources for analysing your first DCE

I’m relatively new to discrete choice experiments (DCEs) and have really enjoyed learning about the different analysis approaches and techniques used. It is such a rapidly evolving field and there is always something new to learn. While there is a lot happening to push the boundaries, I’ve recently been helping a couple of people with the analysis of their first DCE. Although most of your analysis approach should be worked out before you begin the DCE, when you actually sit down to do the analysis for the first time there is still a whole lot of practical detail, like which commands to use, that you might need help with. I realised there are some references I just keep recommending and coming back to, so I’ve shared them here; maybe you’ll find them helpful too. [Note: this post is updated as I come across new resources.]

General guidance

It often helps to know at the start what you are aiming to achieve at the end. I think this is a nice example of describing the methods and assumptions of a DCE around parental preferences for vaccination programs really clearly and succinctly. The other general information I refer people to is the ISPOR Analysis of DCE guidelines, which include the ESTIMATE checklist of things to consider when justifying your choice of approach.

Analysis approach

When I did the DCE course run through HERU in Aberdeen it was suggested that the typical approach to analysing DCEs was to start with a simple model and then use more complex models to address specific issues that arise with your data or relate to your research question. This commonly means starting with a conditional logit model, and then considering options such as mixed logit and latent class analysis. The ISPOR Analysis of DCE guidelines have clear descriptions of the theory and assumptions of these approaches, and I found this paper interesting in comparing mixed logit and latent class approaches.
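Whichever package you end up in, the model behind that "simple" starting point is the same: the conditional logit probability of choosing alternative j from a choice set is exp(x_j′β) / Σ_k exp(x_k′β). A minimal sketch in Python, with made-up attribute values and coefficients just to show the mechanics:

```python
import numpy as np

def conditional_logit_probs(X, beta):
    """Conditional logit choice probabilities for one choice set.

    X    : (n_alternatives, n_attributes) attribute levels
    beta : (n_attributes,) part-worth coefficients
    """
    v = X @ beta            # systematic utility of each alternative
    v = v - v.max()         # subtract the max to stabilise the exponentials
    expv = np.exp(v)
    return expv / expv.sum()

# Toy choice set: 3 alternatives described by 2 (invented) attributes
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
beta = np.array([0.5, 1.0])
p = conditional_logit_probs(X, beta)   # probabilities sum to 1
```

The same probability is what PROC MDC, Stata’s clogit and R’s mlogit all maximise the likelihood of; the packages differ in data layout and options, not in the model.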

Analysis code

I am originally a SAS user, and so when I first started analysing DCE data I assumed I would do so in SAS. However, after much investigation I’ve realised this is easier said than done, and have now moved to using STATA for the DCE analysis, although I’m still much more comfortable doing the data management and preparation in SAS. Using two different packages is time consuming, clunky and the opposite of “reproducible research”, so my next step is to move both my DCE data management AND analysis into R. I haven’t got very far, so if anyone knows any good packages then please pass them on! I promise to update this page if I find something useful.

  • SAS

It is straightforward to run a conditional logit in SAS using PROC MDC (user guide). Some resources I found helpful for implementing PROC MDC are this example code for conditional logit with PROC MDC and this SAS user group paper “Discrete choice modelling with PROC MDC”. The error message I’ve had most often in doing this analysis is “CHOICE=variable contains redundant alternatives”, which relates to the data looking like people have chosen more than one option in a choice set. If you get this, check the cleaning and the sorting of your data!
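That check is easy to run yourself before the software complains. A hypothetical sketch in Python/pandas (the column names id, cset and chosen are invented for illustration): count the chosen rows per respondent per choice set and flag anything other than exactly one.

```python
import pandas as pd

# Toy long-format DCE data: one row per alternative per choice set.
df = pd.DataFrame({
    "id":     [1, 1, 1, 1, 2, 2],
    "cset":   [1, 1, 2, 2, 1, 1],
    "chosen": [1, 0, 1, 1, 1, 0],   # id 1, cset 2 has TWO chosen rows
})

# Each (respondent, choice set) should have exactly one chosen alternative
counts = df.groupby(["id", "cset"])["chosen"].sum()
bad = counts[counts != 1]           # flags the problem sets to inspect
```

Sorting and de-duplication problems that cause the "redundant alternatives" error will show up in `bad` long before PROC MDC sees the data.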

You can do effectively the same analysis using PROC PHREG, as described by this technote, plus there is a suite of marketing research guides that describe various ways to analyse discrete choice data.

Moving on from conditional logit to mixed logit or latent class analysis is more difficult in SAS. There is a guide in this video to running conditional logit models and mixed logit models (using PROC MDC, starts at 5:30 minutes), although I could never get their mixed logit method to work (entirely possible due to user error!). I did also contact the SAS helpdesk and they said it would be difficult, but recommended using PROC BCHOICE (Bayesian Choice) for mixed logit analysis with DCE data that has multiple choice sets per participant. There is some documentation here and a worked example here.  Again, I never really got this to work but it could be my mistake.

  • STATA

Having faffed around in SAS for long enough, I caved in and transitioned to using STATA like everyone else in my research group! I found this a really nice introductory, step-by-step guide to analysis in STATA, including data set-up and conditional logit and mixed logit options. There is also this article, which is a guide to analysing DCE data and model selection, and includes STATA code (as well as NLOGIT and Biogeme) in the supplementary material. Finally, this working paper is useful for describing the theory and code for more advanced models, like mixed logit and latent class analysis in STATA, although the code isn’t annotated, which I found frustrating as a new STATA user. I haven’t used it yet, but there was a STATA newsletter article about using the margins option to interpret MIXL choice model results, which could be useful.
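A practical point those guides all start with: the estimation commands expect "long" format data, one row per alternative per choice task, with an indicator for the row that was chosen. If your survey tool exports one row per task instead, the reshape looks something like this sketch in Python/pandas (the column names are invented; your export will differ):

```python
import pandas as pd

# Hypothetical wide export: one row per choice task, with the cost of
# alternative 1 in cost1, alternative 2 in cost2, and the picked option
wide = pd.DataFrame({
    "id":     [1, 1],
    "task":   [1, 2],
    "cost1":  [10, 20],
    "cost2":  [15, 5],
    "choice": [1, 2],
})

# Wide -> long: one row per (respondent, task, alternative)
long = pd.wide_to_long(wide, stubnames="cost",
                       i=["id", "task"], j="alt").reset_index()
long["chosen"] = (long["alt"] == long["choice"]).astype(int)
long = long.sort_values(["id", "task", "alt"])
```

The same reshape is `reshape long` in Stata or PROC TRANSPOSE in SAS; the point is just that every alternative gets its own row before you estimate anything.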

For latent class analysis in STATA I found this article in the STATA Journal a useful description of the command, and this was a nice example of a paper that used mixed logit and latent class models and wrote them up clearly. Finally, these three articles (one, two, three) seemed like good examples of calculating and displaying relative importance graphs.
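For reference, the usual relative importance calculation is simple enough to do by hand once you have the model coefficients: take the range of the part-worth utilities for each attribute (best level minus worst level) and express it as a percentage of the sum of ranges across all attributes. A sketch with invented attribute names and coefficients:

```python
# Part-worth utilities per attribute level (made-up numbers; the
# reference level of each attribute is coded as utility 0)
part_worths = {
    "cost":     [0.0, -0.8, -1.6],
    "accuracy": [0.0,  0.5,  1.2],
    "wait":     [0.0, -0.3],
}

# Range of utilities within each attribute
ranges = {attr: max(u) - min(u) for attr, u in part_worths.items()}
total = sum(ranges.values())

# Relative importance: each attribute's share of the total range, in %
importance = {attr: 100 * r / total for attr, r in ranges.items()}
```

The percentages sum to 100 by construction, which makes them easy to display as the stacked or ranked bar charts those example papers use.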

  • R

I’m keen to analyse my next DCE in R, so have started looking at how I might do this. I have found the following resources, but if anyone has any experience with DCEs in R then please get in touch!

  • Two papers by Aizaki and Aizaki & Nishimura on designing DCEs in R, including analysis using conditional logit models
  • Example R code and case study of a mixed logit model with multiple choices per respondent, including analysis and helpful tips, written by Kenneth Train and Yves Croissant
  • The mlogit package for analysing DCE data in R, as described in Kenneth Train (2009)
  • Thanks to Nikita Khanna for pointing me to this paper & code for doing sample size calculations for a DCE in R.
  • There is also the Apollo package in R, developed by the group at the Choice Modelling Centre at the University of Leeds, with a website & manual available.
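The "simulated likelihood" idea behind the mixed logit estimators in mlogit and Apollo can be sketched in a few lines: draw coefficients from the assumed mixing distribution, compute conditional logit probabilities for each draw, and average over the draws. A toy Python sketch (the attributes, means and standard deviations are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixed_logit_prob(X, mean, sd, n_draws=2000):
    """Simulated mixed logit probabilities for one choice set.

    Coefficients vary across respondents, beta_r ~ N(mean, sd^2); the
    choice probability is the average of conditional logit probabilities
    over random draws of beta.
    """
    draws = rng.normal(mean, sd, size=(n_draws, len(mean)))  # (R, K)
    v = draws @ X.T                          # (R, J) utilities per draw
    v = v - v.max(axis=1, keepdims=True)     # stabilise the exponentials
    expv = np.exp(v)
    probs = expv / expv.sum(axis=1, keepdims=True)
    return probs.mean(axis=0)                # average over the draws

# Two alternatives, two invented attributes
X = np.array([[1.0, 0.0],
              [0.0, 1.0]])
p = mixed_logit_prob(X, mean=np.array([0.5, 1.0]), sd=np.array([0.5, 0.5]))
```

The estimation packages do the reverse of this, searching for the mean and sd that maximise the simulated likelihood of the observed choices, usually with quasi-random (Halton) draws rather than the plain random draws used here.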

Our respondents didn’t understand these questions – do you?

Dr Alison Pearce has won a Best Poster Presentation Award at the Health Economics Study Group Winter Meeting 2016 (HESG) held in Manchester in January 2016. The award was given for Alison’s poster “Our respondents didn’t understand these questions – do you? Cognitive interviewing highlights unanticipated decision making in a discrete choice experiment.”

The poster described 17 interviews Alison conducted with cancer survivors about their care after finishing cancer treatment. During the interviews each survivor completed a survey about their care, but many found it very difficult. Some of the problems with the survey are explained on the poster, but the poster was also interactive – conference attendees were asked to vote and comment on the survey questions. The poster received a great response, with many conference attendees voting and leaving comments about the research.

The National Cancer Registry is leading this research into cancer survivorship with a group of collaborators from Aberdeen, Dublin and Newcastle, with the aim of informing policy about the best way to structure follow-up services for survivors who have completed their cancer treatment. The Health Economics Study Group supports and promotes the work of health economists, and is the oldest and one of the largest groups of its kind.

This news article was originally posted on the 26th of January 2016 on the National Cancer Registry Ireland website: http://www.ncri.ie/news/article/registry-health-economist-wins-best-poster-presentation-award-recent-conference