Using community preferences to inform policy: Why we shouldn’t rush to use AI in breast cancer screening.

I’m very excited to share our new paper, showing Australian women are divided on the use of artificial intelligence (AI) in breast cancer screening. While AI has the potential to enhance the accuracy of mammogram reviews and reduce healthcare costs, many women remain sceptical.

Our study, which used a discrete choice experiment to survey over 800 Australian women, sought to understand their feelings about this technology and whether it would affect their participation in breast cancer screening.

There were mixed reactions to using AI in breast cancer screening. 40% of respondents were open to using AI if it proves more accurate than human radiologists. However, 42% were strongly opposed, and 18% had reservations that would need addressing.

If it is going to be implemented, women want the AI to be accurate, Australian-owned, representative of Australian women, and faster than human radiologists. We saw that up to 22% of respondents might reduce their participation in breast cancer screening if AI is implemented in a way that makes them uncomfortable.

This supports what we see in other countries. In particular, people expect AI systems to have strong evidence they perform better than current systems before implementation. This evidence is currently not available, suggesting that implementing AI now could undermine trust in breast cancer screening programs.

This study was funded through a Sydney Cancer Institute Seed Grant, and was done with a great group of collaborators: Stacy Carter, Helen Frazer, Nehmat Houssami, Mary Macheras-Magias, Genevieve Webb, and Luke Marinovich. I want to give a special shout out to our consumer and lived experience collaborators who were instrumental in developing and interpreting the results: Genevieve Webb (nominated through Health Consumers NSW) and Mary Macheras-Magias (nominated through BCNA Seat at the Table program).

Work, daily activities and leisure after cancer

As well as being less likely to work, cancer survivors of working age were more likely to be limited in their daily activities and leisure compared to people who had not had cancer, our latest research finds. Similarly, older cancer survivors (aged over 65) were also more likely to be limited in their leisure pursuits compared with people without cancer.

We know that cancer can impact on people’s paid work status and participation. The percentage of people who return to work after cancer varies from 24% to 94%, and depends on several factors such as health status, socio-demographics, work characteristics and the availability of support from others.

However, there has been far less research on whether it also changes people’s daily activities and leisure. What is available suggests nearly half of people with cancer experience trouble with daily activities such as meal preparation and grocery shopping. There is also evidence that participation in leisure is reduced after cancer, although this work has largely focussed on physical activity and exercise, with very little examination of cultural activities, hobbies or socialising.

Our latest paper, published this week, addresses this gap by looking at whether people with cancer report more limitations in their daily activities and leisure compared to people without cancer. We used the PROFILES Registry, a population-based registry of short and long-term cancer survivors in the Netherlands, collected through a series of cohort studies conducted between 2004 and 2015. Our sample included nearly 2000 cancer survivors, across five cancer types: Hodgkin’s lymphoma, non-Hodgkin’s lymphoma, multiple myeloma, thyroid cancer and prostate cancer. We also had a sample of over 1600 people who had not had cancer.

Among those of working age, 55% of those in the cancer cohort reported participating in paid work, 41% experienced limitations in daily activities and 41% reported limitations in leisure. This was significantly worse than in the working age non-cancer control group, where 66% reported participating in paid work, 22% reported limitations with daily activities and 20% reported limitations with leisure. Among those of retirement age, the non-cancer control group were significantly less likely to have limitations in leisure (30%) than the cancer cohort (39% limited in leisure) but there was no difference in daily activities (40% and 36% respectively).

In particular, Hodgkin’s lymphoma survivors had much lower rates of paid work and more difficulty with daily activities and leisure, likely due to the impact of the disease symptoms, extensive treatment and associated side effects, and the probability of relapse. Conversely, thyroid cancer survivors tend to be younger at diagnosis and have less intensive treatment, perhaps explaining why they were more likely in our sample, and equally likely in previous studies, to have a paid job when compared to people who had not had cancer.

While previous research suggests some people reassess their life roles and choose to reduce their work to spend more time in unpaid daily activities and leisure after cancer, our results suggest those who are limited in paid work are also limited in their daily activities. This may simply reflect that many who do reassess their roles still rely on work for financial reasons and continue despite limitations in their physical or psychosocial abilities.

Further research to examine how cancer survivors can best be supported to participate in unpaid work and leisure activities is required, given the previous focus of the cancer survivorship literature on return to paid work. Factors which have been shown to improve rates of return to paid work, such as support from family and friends, support of employers and participation in specific rehabilitation programs may also apply to unpaid daily activities and leisure activities.

Participation and limitations in daily activities such as (un)paid work and leisure form significant parts of a person’s identity and are therefore an important component of survivorship care. However, the limited evidence in this area includes very few studies with larger samples or with a range of cancer types. Understanding the impact of cancer on unpaid work, daily activities and leisure may encourage clinicians and health services to take a more holistic view of cancer survivorship.

Co-authors: Marjon Faaij (University Utrecht visiting scholar to the Centre for Health Economics Research and Evaluation at UTS, funded by the Cancer Research Economics Support Team) & Dounya Schoormans (Department of Medical and Clinical Psychology, Tilburg University, Tilburg, The Netherlands).

Moving my health economics teaching online during COVID19

HPOL5000 is a core unit in the Master of Public Health program at the University of Sydney. Anne Marie Thow and I co-coordinate the unit, which covers introductory health policy and health economics.

Semester 1 2020 started on the 17th of February and we were excited to have a large cohort of nearly 300 students. The unit runs with two concurrent modes of study:

  • online (remote) learning, where students watch online lecture material, access reading material online and then participate in asynchronous tutorial activities via discussion boards, or
  • block mode (face-to-face) learning, where students access reading materials and some pre-recorded lectures online, but also attend two full day workshops of lectures and activities, and 6 x 1.5 hour face-to-face tutorial groups.

The first few weeks of semester went well, with great participation in online introductory activities, and the first face-to-face workshop day for block mode students running smoothly. We had a small cohort of international students who couldn’t travel to Australia to start the semester due to COVID19, so we set up some special online (asynchronous) tutorial groups for them to attend in the meantime.

In week 3 we were advised to prepare, just in case teaching needed to move online. In week 4 this was confirmed – due to COVID19 pandemic restrictions, all teaching activities now had to be online. This gave us one week to move the 2nd workshop day (held in week 5, and focussed on health economics content) online, as well as work out how to manage the rest of the semester.

Overall, I think the 2nd workshop ran well online, although it was a lot of work to set up. I learned a lot that I will use to improve future workshops, whether they are held online or face to face (or a combination) and thought it might be worth documenting what I did and how it worked.

We decided to run the workshop on the day it was scheduled, but with some tweaks for online delivery. We arranged a mixture of pre-recorded lectures and interactive Zoom sessions, and scheduled them all in a timetable similar to what students would have followed for the face to face workshop (see timetable at bottom of post).

The day started with a live Zoom meeting to introduce myself, the material and how the day would run. I used Mentimeter to do some quick polls and word cloud activities to find out a bit more about the students who were participating.

The three planned lectures were pre-recorded and uploaded for students to access a week before the workshop. This allowed students to choose if they wanted to do the full workshop day as programmed, or access the lectures in the week before and just attend the live sessions on the day.  Using pre-recorded lectures instead of doing them all live also gave me time on the workshop day to prepare for (and recover from) the more interactive sessions during the day.

Each lecture was allocated a time during the day when students could go off and watch it (if they hadn’t already) and then a Zoom meeting was held afterwards for discussion, questions and some interactive activities. For the interactive activities I used Mentimeter tasks as well as Zoom breakout rooms to encourage student interactions with each other. One of these sessions worked well and one didn’t – it would have really helped to be more organised in making sure students had access to the small group discussion material outside of Zoom (I ended up telling students to take a screenshot or photo of the exercise on the screen so they could refer to it in their groups!)

We also had a panel discussion session. When run in the face-to-face workshop this is usually very popular with students, and I was really pleased with how it ran online. We used a Zoom meeting rather than a webinar and this worked fine. As with the rest of the day the students were really helpful with their cameras and microphones etc, and we had good interaction via the chat function with people asking questions.

In the last interactive session of the day I used a Mentimeter quiz to check concept understanding. I had feedback that this was one of the best bits of the day. There were 5 questions about each of the 3 main topics we had covered, and the questions were designed to be relatively easy, but students only had 10 seconds to answer each one. A leader board was shown at the end of each set of 5 questions to generate a feeling of competition, and it was simple to set up.

Student feedback:

We had a lot of positive feedback about the workshop. A quick evaluation (done via Mentimeter) at the end of the day showed the Panel Discussion session and Quiz were both very popular. When asked to name one thing they found confusing or unclear, many people mentioned that Zoom was unstable at times, and in particular that the breakout room activities were rushed. So next time I will allow much more time for those, and make sure I have a second person on hand to help manage the logistics. Overall the comments were positive, and made the whole experience worthwhile. Some examples:

  • “It was actually a really effective alternative to a face to face day. The timetable with spaced out live webinars kept me on track with time”
  • “The panel was really great to see the concepts we’ve gone through in the lectures and readings from a professional perspective. I’ve really enjoyed the health economics side of this course more than anticipated so thank you for this lovely teaching”
  • “The panel discussion… the experts we had onboard really enriched and contributed to the learning process”
  • “Quiz time is really useful to review”
  • “Being able to snack the entire time while listening to everyone!”

Overall, using a mixture of tools and activities was helpful to keep students (and myself!) interested and engaged. A whole day of Zoom was a lot, and I think multi-day workshops would need to be extra diligent about giving appropriate breaks, making pre-recorded material available beforehand, and mixing up the type of interaction. For a large group like this having a second person online to help with coordination and admin would be great. But, I would absolutely run a workshop like this again in the future, although hopefully with more than a week to prepare!

My top tips:

Zoom:

  • I am still not sure whether using one Zoom meeting for the whole day (which is what I did) is better than setting up a separate Zoom meeting for each interactive session. Different meetings would allow different settings for each session (e.g. a webinar for the panel discussion), but also mean students need to log into the right room at the right time.
  • I made a slide to display on the screen in between sessions, which was helpful.
  • I wish I’d recorded every session to share with students who couldn’t join on the day. I now know that you can record multiple sections of a Zoom meeting and each downloads as a separate file.
  • I made sure I had a clear place nominated on Canvas and mentioned first thing in the morning where students should go for information if something went wrong with the technology during the day (e.g. I’ll post here [LINK] on Canvas, and I’ll send an announcement)!

Break out rooms:

  • Using the random allocation setting was easy and meant students mixed
  • They take time for students to join and introduce themselves, so allow extra time
  • Need to ensure students know what they need to do and can still access materials while in the breakout room – either pre-send slides or use Mentimeter
  • It would have been great to have a second ‘admin’ person who could manage the logistics of putting people in rooms so I could circulate through the rooms contributing to the discussion, more like the face to face setting.
  • Err on the side of having slightly larger groups than you think, because some students sign in and then turn off their camera & mic and don’t participate. Suggest 4 as the minimum (likely then to get at least 2) and up to 6 or 7 still works ok.

Chat function:

  • It’s difficult to monitor while you’re presenting, but…
  • I’ve seen some really nice examples of students using it amongst themselves to share links and clarify content during a lecture.

Mentimeter:

  • Was a great way to get engagement from a large class – much more flexible than ‘raising hands’ in class or polls within Zoom
  • The quiz with the leaderboard was fun! The only problem was not being able to give away small prizes (e.g. chocolate frogs etc) that would usually happen in a face-to-face setting. I’ve been trying to think creatively about what might replace this – perhaps the winner gets a link to my favourite health economics GIF?!

Security:

  • I didn’t have any problems with security or inappropriate behaviour, although in one lecture I’ve given subsequently a student started sharing their screen of themselves playing a computer game during one of the breaks. But I now add a password to most Zoom meetings by default, and for any larger group meetings I think I would always try to have an administrative person online who could handle things like that while I’m teaching.


Timetable

What I learned during #AcWriMo 2019

I started #AcWriMo 2019 with all the best of intentions, but life really did get in the way this year. If you don’t know about Academic Writing Month (AcWriMo), it was started by PhD2Published, but is now a worldwide gathering in November each year of academics wanting a writing boost. There’s more detail in this post by the Thesis Whisperer, but the basic components are to set some serious writing goals for the month (such as 10,000 words total or half an hour a day), make yourself publicly accountable by posting them online, and then start writing! Throughout the month you update your online spreadsheet to stay accountable, and also get the support of a whole community of #AcWriMo participants, particularly on twitter.

I’ve participated a few times in the past with mixed success, but I was sure this year was going to be amazing. I had some really clear goals, time available and had even organised a 3-day writing retreat! But despite all this it was hard and I didn’t get as much done as I’d hoped. Overall I only did 47% of my scheduled writing time (and gave up on the 26th of November), although I managed to write 81% of my target words. These are some of the things I learned as I struggled through:

  1. Public accountability is good! Having a small group of twitter people I touched base with on a Friday really helped me to stay on track. Thanks @LaurenJchristie @AnnieMcCluskey2 @Lisa_Beatty & @lnraines1!
  2. I thought I’d love filling in my google spreadsheet, but actually much preferred colouring squares on my manual whiteboard tracker (see image above). Being able to visualise the progress was really motivational.
  3. Tracking both time and words written was important – sometimes I had put in the time but hadn’t written words because I was thinking about structure or re-arranging text. This time was still valuable, but if I’d only been tracking words it would have been disheartening.
  4. I love the concept of ‘low hanging fruit’ to start and finish a writing session. The concept was referred to in either the Acadames podcast or the Good, Bad & Ugly of Writing in Academia podcast (sorry, I can’t remember or find which, but both are worth listening to!) and means leaving a note to myself about where I am up to, or leaving a sentence half finished. This gave me a quick and easy way to get straight back into writing in the next session, rather than feeling like I needed to reread the whole thing to remember what I was talking about.
  5. I can accomplish a LOT in one pomodoro (25 minutes). I quickly realised that, for me, the challenge of writing is not coming up with the words, but is just sitting down, sitting still and sticking to it.
  6. I can only do about 2 or 3 pomodoros in a block. Doing one by itself often felt a bit short, but 4 in a row fried my brain. Doing 2 blocks of 3 pomodoros (so about 3 hours of writing) in a day was super productive and felt very achievable.
  7. Having a physical break, such as stretching, going for a quick walk or making a cup of tea, between pomodoros was much more refreshing than just checking email or looking at my phone.
  8. The writing retreat was cancelled in the end, but I had a great few days doing writing sessions in various cafes and libraries on campus with my writing buddy @CStatsAU. Changing the location and writing with someone else was fun and I’m thrilled that we now have a writing accountability partnership in my office.
  9. @LaurenJchristie gave me the great idea to also track self-care activities in my tracking spreadsheet, not as yet another ‘thing to do’ but to remind me that I can support myself to write productively in ways that don’t involve sitting in front of the computer.
  10. Life gets in the way sometimes. Although this was disappointing at times, tracking how much I had been able to do and accomplish actually meant that I was less frustrated, because I could see that I was making progress despite having so many competing demands. It also helped that I had prioritised writing for the month, so even though I had to be realistic about what I could do in a day, I also had clear priorities of what to do with the time available.

Overall, #AcWriMo 2019 was hard, but I think writing a lot in a short time is always going to be difficult. But keeping track, being accountable and having support meant that it was absolutely worthwhile, and I am already looking forward to #AcWriMo2020! The real challenge now is to implement what I learned in my daily work life, so I can maintain my momentum.

Asking about understanding in choice surveys

Half of health researchers doing choice surveys (known as discrete choice experiments) ask respondents if they understood the survey. However, only around half of these go on to analyse the answers or use the results. This variation in practice was identified in a survey of health researchers, published recently in the journal Value in Health.

Choice surveys are increasingly used as a quantitative way to measure people’s preferences for health and healthcare. They are an exciting method that generates important insights for patient-centred care, but they are not without their drawbacks. Choice surveys often use medical terminology or assume an understanding of risks and probability, meaning they might be difficult for people to fill in accurately.

Our study also found that the questions researchers asked participants to assess their understanding varied widely. This suggests researchers aren’t sure of the best way to ask whether people found their survey hard. It also makes it difficult to compare the results across different surveys. There is a need for researchers to have a set of questions specifically designed to ask about difficulty and understanding of choice surveys that they can all use consistently.

Overall, our results suggest that many researchers who use choice surveys to answer important health questions think it is valuable to make sure respondents understand the survey. But, they are not clear what questions to ask, or how to use the information. Our next step is to develop and test a series of questions to include in choice surveys that we are confident can give researchers useful information about participant understanding.

Transitioning from Early-Career Researcher to Mid-Career Researcher

Recently I’ve been thinking about the transition from being an early-career researcher (ECR) to a mid-career researcher (MCR). Six months ago I finished my UTS Chancellor’s Postdoctoral Research Fellowship, which funded me for three years at the Centre for Health Economics Research and Evaluation. The idea of these fellowships is to transition early career researchers into independent, mid-career researchers. The CPDRF has a number of objectives, two of which are particularly relevant to this idea of shifting from ECR to MCR:

  • To attract and retain talented and high-achieving postdoctoral research fellows, within 5 years of the award of their PhD, who have an outstanding track record or who show evidence of excellent research potential.
  • To develop a broad range of research, engagement and communication skills in the Fellows that will equip them to become the next generation of excellent early career and mid-career researchers at UTS

When I started the fellowship this seemed like the perfect fit for my career, but it was still quite daunting. Having completed my PhD and a post-doc in Ireland, I had been lucky enough to work with some great senior health economists, but now I needed to step up to become an independent researcher in my own right. I felt pressure to live up to the potential that had been seen in me, and a need to start demonstrating achievements.

Now that I’ve finished the fellowship, have I become a mid-career researcher? And if so, what should this look like as I plan my research and professional development in my new position, a continuing academic position with a mix of research and teaching?

I have been trying to understand what an MCR looks like, and have realised that it is a somewhat nebulous concept.

The definitions of ECR and MCR in grant schemes and professional organisations are not very helpful, as they focus on time since PhD, rather than performance (e.g. ARC Future Fellows, Victorian Cancer Agency MCR Fellowships and the Australian Academy of Science). To make it even more confusing, different organisations have different definitions, so although I’m no longer an ECR if you use the 5-year post-PhD cut-off, for some schemes I am still an ECR as I’m less than 10 years post-PhD.

Perhaps it is not so much time, as what an MCR does that is different to an ECR? Is there something fundamentally different about an MCR’s research or role, or does an MCR simply do the same things as an ECR, just to a higher standard? The typical aspects on which an ECR is judged include undertaking research which is both excellent and original, having strong networks, and undertaking service roles for the University and the broader community. Perhaps an MCR simply does better research, has broader networks and contributes more to the community.

One of my mentors had a nice suggestion – he thought the main things an MCR should do that are not required of an ECR are demonstrating impact and leadership. So an MCR should be able to demonstrate that their work has relevance and can change practice, whether that be clinical, policy, or research methods. In relation to leadership, an MCR should be starting to lead teams, which might include Masters or PhD students, research assistants, or a group of peers on a research project.

Another mentor proposed that as you become more senior your research ideas must grow larger: as an MCR it is no longer enough to work alone on a small project, and your research ideas should be large enough to require a team to implement them and ensure impact.

Overall, I feel like I’m still evolving from an early-career researcher to a mid-career researcher. I’ve realised there is no blueprint for what an excellent mid-career researcher does to differentiate themselves from an early-career researcher, but it probably isn’t a strict cut-off based on time since PhD. As I continue to evolve into an MCR, I will start to be able to demonstrate that my research has an impact and that I can build and lead a team, but will also develop other skills and abilities that I can use to demonstrate my achievements.

Disseminating my research

Publication in a peer-reviewed journal is no longer sufficient – research findings need to be disseminated more broadly to ensure (and demonstrate) that they have impact. This means that once I’ve submitted an article for publication I immediately start working on the dissemination plan (if I haven’t already done it as a form of ‘productive procrastination’!)

There is no one-size-fits-all approach. However, I do have a standard list of dissemination options and a general process that I use. Here it is, in case it is useful for you:

Step 1: Write different versions of your article (during article writing/immediately after submission)

  • Blog post – I usually start by writing a blog post, and this is an excellent article about how to turn your journal article into a blog post, but I’ve also found this one useful.
  • Press release – The University press office has been really helpful in structuring the story and using appropriate language for my press releases in the past (although they sometimes need help making sure the essential message isn’t lost).
  • Talking points – talking points are a great way to prepare for a media interview. In addition, the process of identifying and refining my talking points helps to identify and refine the message, audience and purpose for my dissemination strategy. I usually come up with about 5 talking points, for example: a short sentence and a short paragraph about the main result(s), a short sentence and a short paragraph about the implications, and a short sentence about what might come next.

Step 2: Circulate your pitch (before acceptance)

You may need to modify your pitch for each of the sources below, but you can base all of them on your press release. You need to circulate your pitch to these sources before your article is accepted, because often things move quite quickly after acceptance and you want to have time to work with these people to craft the best piece, and to coordinate the release dates with them.

  • Send a pitch to The Conversation (to do this you need to log in, and use the link on the left hand side of the dashboard)
  • Send a pitch to podcasts that might be interested. Podcasts usually have a longer lead time than the general media, so better to contact them early. There are some health-specific ones (e.g. 2SER Think:Health, the Research Roundup podcast by PC4) or more general ones, such as the University of Sydney podcast ‘Open for Discussion’.
  • Send a pitch to any other magazine, website, etc that might be relevant. For example, in the past I’ve published summaries in Cancer Professional and have flagged oncologynews.com.au and Croakey as a possible media to approach in the future.

Step 3: Prepare for release (once accepted)

Once you know your article is accepted you should get a timeline for when it will be released. At this point you should let anyone who you’ve worked with on an article (e.g. the Conversation, etc) know the date and coordinate the release. You can also:

  • Contact relevant journalists with your press release. The press office can do this for you, and/or you can use informal approaches such as twitter (list of tweeting journalists below)
  • Contact relevant professional associations about circulating a short article about your research in their newsletter etc. I usually approach groups like the HSRAANZ, AHES, ESA.
  • Finalise your talking points for any media interviews. This includes the talking points drafted earlier, as well as notes on the different ways journalists or readers could misunderstand my research, and any sticky questions I’m nervous about. Then I draft responses to these (which I usually never need, but it makes me feel less nervous knowing I’m prepared).

Step 4: Disseminate (once published)

At last! Today is the day to…

  • Publish your blogpost on your blog
  • Publish your blogpost on LinkedIn
  • Post a link to your blogpost (on your blog or LinkedIn) on Facebook
  • Tweet about your research – over the day or two after publication I usually tweet a link to the original article (with a sentence summarising the main finding), tweet a link to my blog post, tweet a link to any companion pieces (e.g. an article in The Conversation), and retweet any press coverage I get. I haven’t tried this yet, but I was recently told to tag relevant journalists in some of these tweets, and so I’ve compiled the following list of potential options:

Step 5: Tracking your dissemination

As we increasingly need to report our impact, it will become more important to be able to track how and to whom our research was disseminated. Tools like Google Alerts and Altmetrics can be very useful, but I’m also going to try and take screenshots/links/copies of any press coverage etc that I get and save them in the project folder, so that I can easily find them later.

Practical resources for analysing your first DCE


I’m relatively new to discrete choice experiments (DCEs) and have really enjoyed learning about the different analysis approaches and techniques used. It is such a rapidly evolving field and there is always something new to learn. While there is a lot happening to push the boundaries, I’ve recently been helping a couple of people with the analysis of their first DCE. While a lot of your analysis approach should be worked out before you begin the DCE, when you get to the point of actually doing the analysis for the first time there is a whole lot of stuff around which commands to use that you might still need help with. I realised there are some references I just keep recommending and coming back to, so I’ve shared them here – maybe you’ll find them helpful too. [Note: this post is updated as I come across new resources.]

General guidance

It often helps to know at the start what you are aiming to achieve at the end. I think this paper, on parental preferences for vaccination programs, is a nice example of describing the methods and assumptions of a DCE really clearly and succinctly. The other general information I refer people to is the ISPOR Analysis of DCE guidelines, which include the ESTIMATE checklist of things to consider when justifying your choice of approach.

Analysis approach

When I did the DCE course run through HERU in Aberdeen, the suggested approach to analysing DCEs was to start with a simple model and then use more complex models to address specific issues that arise with your data or relate to your research question. This commonly means starting with a conditional logit model, and then considering options such as mixed logit and latent class analysis. The ISPOR Analysis of DCE guidelines have clear descriptions of the theory and assumptions of these approaches, and I found this paper interesting in comparing mixed logit and latent class approaches.
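To make that starting point concrete, here is a minimal sketch of what a conditional logit actually estimates, written in Python rather than the packages discussed below. Everything here is invented for illustration: the number of choice sets, the two hypothetical attributes, and the "true" utility weights used to simulate the choices.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

# Simulated DCE: 4000 choice sets, 3 alternatives per set,
# 2 hypothetical attributes (think cost and waiting time).
# beta_true is made up for this sketch.
n_sets, n_alts, n_attrs = 4000, 3, 2
X = rng.uniform(0, 2, size=(n_sets, n_alts, n_attrs))
beta_true = np.array([-1.0, -0.5])

# Random-utility model: each respondent picks the alternative with the
# highest utility; Gumbel errors are exactly what conditional logit assumes.
utility = X @ beta_true + rng.gumbel(size=(n_sets, n_alts))
choice = utility.argmax(axis=1)

def neg_loglik(beta):
    v = X @ beta                          # systematic utility of each alternative
    v = v - v.max(axis=1, keepdims=True)  # subtract the max for numerical stability
    log_p = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -log_p[np.arange(n_sets), choice].sum()

fit = minimize(neg_loglik, np.zeros(n_attrs), method="BFGS")
print(fit.x)  # estimates should land close to beta_true
```

A real analysis would also track respondent IDs across their choice sets and adjust the standard errors accordingly, which is exactly what the packaged commands discussed below handle for you.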

Analysis code

I am originally a SAS user, and so when I first started analysing DCE data I assumed I would do so in SAS. However, after much investigation I’ve realised this is easier said than done, and have now moved to using STATA for the DCE analysis, although I’m still much more comfortable doing the data management and preparation in SAS. Using two different packages is time consuming, clunky and the opposite of “reproducible research”, so my next step is to move both my DCE data management AND analysis into R. I haven’t got very far, so if anyone knows any good packages then please pass them on! I promise to update this page if I find something useful.

  • SAS

It is straightforward to run a conditional logit in SAS using PROC MDC (user guide). Some resources I found helpful for implementing PROC MDC are this example code for conditional logit with PROC MDC and this SAS user group paper “Discrete choice modelling with PROC MDC”. The error message I’ve had most often in doing this analysis is “CHOICE=variable contains redundant alternatives”, which means the data look like people have chosen more than one option in a choice set. If you get this, check the cleaning and sorting of your data!

You can do effectively the same analysis using PROC PHREG, as described in this technote, plus there is a suite of marketing research guides that describe various ways to analyse discrete choice data.

Moving on from conditional logit to mixed logit or latent class analysis is more difficult in SAS. This video is a guide to running conditional logit and mixed logit models (using PROC MDC, starting at 5:30), although I could never get their mixed logit method to work (quite possibly due to user error!). I also contacted the SAS helpdesk; they said it would be difficult, but recommended using PROC BCHOICE (Bayesian Choice) for mixed logit analysis with DCE data that has multiple choice sets per participant. There is some documentation here and a worked example here. Again, I never really got this to work, but that could be my mistake.

  • STATA

Having faffed around in SAS for long enough, I caved in and transitioned to using STATA like everyone else in my research group! I found this a really nice introductory, step-by-step guide to analysis in STATA, including data set-up and conditional logit and mixed logit options. There is also this article, which is a guide to analysing DCE data and model selection, and includes STATA code (as well as Nlogit and Biogeme code) in the supplementary material. Finally, this working paper is useful for describing the theory and code for more advanced models, like mixed logit and latent class analysis, in STATA, although the code isn’t annotated, which I found frustrating as a new STATA user. I haven’t used it yet, but there was a STATA newsletter article about using the margins option to interpret MIXL choice model results, which could be useful.

For latent class analysis in STATA, I found this article in the STATA journal a useful description of the command, and this was a nice example of a paper that used mixed logit and latent class models and wrote them up clearly. Finally, these three articles (one, two, three) seemed like good examples of calculating and displaying relative importance graphs.
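As a quick illustration of the relative importance idea: one common approach is to take the range of each attribute’s part-worth utilities and express it as a share of the sum of ranges across attributes. Here is a sketch in Python, with entirely made-up attribute names and coefficients (not from any real fitted model):

```python
# Hypothetical part-worth utilities from a fitted conditional logit,
# one entry per attribute level. Attribute names, levels and values
# are invented for illustration only.
part_worths = {
    "cost":      [0.0, -0.4, -0.9],   # e.g. dummy-coded levels $0 / $50 / $100
    "wait_time": [0.0, -0.2, -0.3],
    "accuracy":  [0.0,  0.6,  1.1],
}

# Relative importance = range of each attribute's part-worths,
# expressed as a percentage of the total range across attributes.
ranges = {attr: max(v) - min(v) for attr, v in part_worths.items()}
total = sum(ranges.values())
importance = {attr: 100 * r / total for attr, r in ranges.items()}

for attr, pct in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(f"{attr}: {pct:.0f}%")
```

The resulting percentages are what gets plotted as a relative importance graph; the linked articles discuss refinements such as showing uncertainty around these shares.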

  • R

I’m keen to analyse my next DCE in R, so have started looking at how I might do this. I have found the following resources, but if anyone has any experience with DCEs in R then please get in touch!

  • Two papers by Aizaki and Aizaki & Nishimura on designing DCEs in R, which also cover analysis using conditional logit models
  • Example R code and case study of mixed logit model with multiple choices per respondent, including analysis and helpful tips, written by Kenneth Train and Yves Croissant
  • The mlogit package for analysing DCE data in R, as described in Kenneth Train (2009)
  • Thanks to Nikita Khanna for pointing me to this paper & code for doing sample size calculations for a DCE in R.
  • There is also the Apollo package in R, developed by the group at the Choice Modelling Centre at the University of Leeds, with a website & manual available.

Health economics and occupational therapy

I attended the Australian Occupational Therapy Conference last week, for the first time in nearly 15 years! I went to support some OTs I’ve been working with on an economic evaluation, but it was lovely to catch up with friends and colleagues from my OT life before health economics. I also realised there wasn’t much health economics at the conference, and I got a few requests for some introductory resources about health economics. So, I’ve put together a brief summary of what health economics is and how it could apply to occupational therapy.

In general, health economics is about how we allocate our scarce health resources to maximise our health outcomes. There can be a misconception that economics is about cutting costs. But health economics is really about value, and therefore the benefits that can be achieved are just as important to a health economist as the costs of achieving them.

Everyone uses economic thinking in their daily lives – I recently bought a new laptop and had to work out which aspects of performance I would prioritise (memory, touch screen, processing power) to get a laptop within my budget (my constrained resources). For some great examples of how economic theory plays out in real life, I highly recommend the Freakonomics podcast! There are some episodes specific to health, such as Are you ready for a glorious sunset, How many Doctors does it take to start a healthcare revolution and How do we know what really works in healthcare, but all the episodes will teach you to think like an economist.

For more formal reading, there is a paper by Kernick (2003), Introduction to health economics for the medical practitioner, that gives a nice introduction to health economics and the types of questions that health economists try to answer. If you want a bit more about some important economic concepts, such as opportunity costs and marginal costs, then Goodacre & McCabe’s (2002) An introduction to economic evaluation and this Sanofi factsheet (2009) on What is health economics are other good resources.

You will notice that these papers talk in general about health economics, and then go straight into a discussion of economic evaluation. Economic evaluation is probably the most common method associated with health economics and is used world-wide (including by the PBS and MBS in Australia) to evaluate the cost-effectiveness of new interventions. An economic evaluation compares two (or more) interventions in terms of both the costs and the benefits.  Economic evaluations are typically trial-based (meaning they are embedded in a clinical trial) or modelled (meaning they are based on research from the literature), or a combination of both.

The previously mentioned readings are good introductions to economic evaluations, and also explain the difference between a cost-benefit, cost-effectiveness and cost-utility analysis. These terms are often used interchangeably, but in health economics they have specific meanings based on the outcome measure you are using.
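For a sense of the arithmetic behind a cost-utility analysis, the headline result is usually an incremental cost-effectiveness ratio (ICER): the extra cost per extra unit of outcome, here QALYs. A sketch with entirely hypothetical numbers:

```python
# Hypothetical trial-based evaluation comparing a new intervention
# against usual care. All numbers are invented for illustration.
cost_new, cost_usual = 12_000.0, 9_500.0   # mean cost per patient ($)
qaly_new, qaly_usual = 1.45, 1.30          # mean QALYs per patient

delta_cost = cost_new - cost_usual
delta_qaly = qaly_new - qaly_usual

# ICER: extra dollars spent per extra QALY gained
icer = delta_cost / delta_qaly
print(f"ICER = ${icer:,.0f} per QALY gained")

# Compare against a willingness-to-pay threshold (e.g. $50,000/QALY)
threshold = 50_000
print("Cost-effective at this threshold:", icer < threshold)
```

In practice both deltas come with uncertainty, so real evaluations report intervals and cost-effectiveness acceptability curves rather than a single number.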

If you’re interested in how you actually incorporate an economic evaluation into a clinical trial, then the factsheet Step by step guide to economic evaluation in cancer trials gives a guide and walks through an example (it is designed for cancer clinical trials, but the same steps would apply to an occupational therapy trial). If you want more detail then I would suggest the textbooks by Gray et al Applied methods of cost effectiveness in health care or Drummond et al Methods for the economic evaluation of health care programmes.

But… health economics is much more than economic evaluations. Health economists are interested in questions like: what influences health (other than healthcare), what is ‘health’ and how do we value it, how can we arrange the health workforce most efficiently, how does the way we pay doctors change their performance, how can we make health more equitable, and many more (see Alan Williams’ famous ‘Plumbing Diagram‘). Some of the questions I am using economic approaches to answer include:

  • How do we quantitatively measure patient preferences for health and health care (using discrete choice experiments)?
  • What aspects of quality of life are people with cancer willing to give up to increase their survival?
  • How long does it take people to return to work after a cancer diagnosis and treatment, and what makes it easier for them to do so?
  • When people stop working because of illness or injury, how can we measure the impact this has on the broader economy?
  • How do the costs of cancer treatment impact people’s emotional and physical well-being?

There are many opportunities for health economics to be used in occupational therapy, and I’ve included a list of examples at the end of this article. But three obvious areas would be:

  • Economic evaluations – although a systematic review of economic evaluations in occupational therapy (Green & Lambert 2016) found only nine published economic evaluations (of varying quality), despite the increasing focus of health care systems on demonstrating cost effectiveness.
  • Medicare data – many occupational therapy interventions probably reduce future health resource use, so there are opportunities to use Medicare data (such as MBS and PBS payments) to examine the impact of occupational therapy (here is a good fact sheet on using Medicare data for research).
  • Discrete choice experiments – these quantitatively measure patient preferences, making them an ideal method to examine people’s preferences for their health (e.g. which occupational domains they value most) and how they want their treatment delivered (e.g. what aspects of a rehab program make people most likely to adhere to a practice schedule).

Please feel free to get in touch if you have ideas or an interest in incorporating health economics into occupational therapy, or if there are other resources you’d like or have found useful!

Examples of health economics in occupational therapy:

  • Hewitt et al (2018) An economic evaluation of the SUNBEAM programme: a falls-prevention randomized controlled trial in residential aged care [Link]
  • Kareem Brusco et al (2014) Are weekend inpatient rehabilitation services value for money? An economic evaluation alongside a randomized controlled trial with a 30 day follow up [Link]
  • Wales et al (2018). A trial based economic evaluation of occupational therapy discharge planning for older adults: the HOME randomized trial [Link]
  • Sampson et al (2014) An introduction to economic evaluation in occupational therapy: Cost-effectiveness of pre-discharge home visits after stroke [Link]
  • Laver et al (2012) Preferences for rehabilitation service delivery: A comparison of the views of patients, occupational therapists and other rehabilitation clinicians using a discrete choice experiment [Link]
  • Gallego et al (2018) Carers’ preferences for the delivery of therapy services for people with disability in rural Australia: evidence from a discrete choice experiment [Link]

11 questions to help you work with a health economist

As part of developing the ‘Integrating Health Economics In Clinical Research’ Workshop held in Vancouver in Feb 2019, we decided it would be useful to have a session on ‘how to work with a health economist’. This was because many of us had the experience of being contacted at the last minute to ‘add a paragraph about health economics’ to a grant application. This is frustrating because it undervalues the role of health economics, and doesn’t lead to good grant applications or happy health economists.

Many clinicians and researchers are hearing about the benefits of including health economics in their studies, particularly because it is something funders are increasingly looking for. However, many people may not know much about health economics or what it can offer. This is compounded in many places (including Vancouver and Sydney) by a shortage of health economists and/or limited health economist availability.

By developing a checklist or worksheet, we hoped to help people think about their research and how health economics might be part of it. Because it is important to talk to a health economist early in the research design process, we recognised that people might not have all the answers yet; but we hoped to avoid time wasted trying to squeeze a health economics question into a study that has already been designed.

Our initial draft had five or six questions we thought were important, and I decided to run them past the #healtheconomics community on Twitter. I asked “What are the top 3 things you wish clinical people coming to a health economist with a research idea had thought about before your first meeting”. The responses came flooding in, from the serious (‘Where does the clinical uncertainty lie’ and ‘What decision are they trying to inform’) to the hilarious (‘1. What is the comparator, 2. What is the comparator, 3. What is the comparator, precisely’ and ‘1) How long until submission?, followed closely by 2) Are you kidding me?’). Overall there were a couple of key themes. Health economists wanted clinicians to have thought critically about the intervention, comparator and health resource implications of both, but they also wanted them to have started the conversation early enough that there was still scope for the health economics to inform the study design.

So the final product is a crowd-sourced worksheet of 11 Questions to Help You Work With a Health Economist. The questions cover both study design (such as intervention and comparators) and logistics (such as time frame and budget for the project). It has been produced under a Creative Commons Attribution (CC BY) licence, so please feel free to use and share it as you wish. Edited 10th October 2019 to add: If you’d like a more detailed guide to commissioning economic evaluations, you might also find this NSW Ministry of Health guide useful.

11 Questions to Help You Work With a Health Economist

The Integrating Health Economics in Clinical Research Workshop was developed with a team of health economists from British Columbia, Canada including Nick Bansback, Nick Dragojlovic, William Hall, Mark Harrison, Stephanie Harvard, Dean Regier, David Whitehurst and Wei Zhang.