Using community preferences to inform policy: Why we shouldn’t rush to use AI in breast cancer screening.

I’m very excited to share our new paper, showing Australian women are divided on the use of artificial intelligence (AI) in breast cancer screening. While AI has the potential to enhance the accuracy of mammogram reviews and reduce healthcare costs, many women remain sceptical.

Our study, which used a discrete choice experiment to survey over 800 Australian women, sought to understand their feelings about this technology and whether it would affect their participation in breast cancer screening.

There were mixed reactions to using AI in breast cancer screening. 40% of respondents were open to using AI if it proves more accurate than human radiologists. However, 42% were strongly opposed, and 18% had reservations that would need to be addressed.

If it is going to be implemented, women want the AI to be accurate, Australian-owned, representative of Australian women, and faster than human radiologists. We saw that up to 22% of respondents might reduce their participation in breast cancer screening if AI is implemented in a way that makes them uncomfortable.

This is consistent with what we see in other countries. In particular, people expect AI systems to have strong evidence that they perform better than current systems before implementation. This evidence is not yet available, suggesting that implementing AI now could undermine trust in breast cancer screening programs.

This study was funded through a Sydney Cancer Institute Seed Grant, and was done with a great group of collaborators: Stacy Carter, Helen Frazer, Nehmat Houssami, Mary Macheras-Magias, Genevieve Webb, and Luke Marinovich. I want to give a special shout out to our consumer and lived experience collaborators who were instrumental in developing and interpreting the results: Genevieve Webb (nominated through Health Consumers NSW) and Mary Macheras-Magias (nominated through BCNA Seat at the Table program).

Moving my health economics teaching online during COVID19

HPOL5000 is a core unit in the Master of Public Health program at the University of Sydney. Anne Marie Thow and I co-coordinate the unit, which covers introductory health policy and health economics.

Semester 1 2020 started on the 17th of February and we were excited to have a large cohort of nearly 300 students. The unit runs with two concurrent modes of study:

  • online (remote) learning, where students watch online lecture material, access reading material online and then participate in asynchronous tutorial activities via discussion boards, or
  • block mode (face-to-face) learning, where students access reading materials and some pre-recorded lectures online, but also attend two full day workshops of lectures and activities, and 6 x 1.5 hour face-to-face tutorial groups.

The first few weeks of semester went well, with great participation in online introductory activities, and the first face-to-face workshop day for block mode students running smoothly. We had a small cohort of international students who couldn’t travel to Australia to start the semester due to COVID19, so we set up some special online (asynchronous) tutorial groups for them to attend in the meantime.

In week 3 we were advised to prepare, just in case teaching needed to move online. In week 4 this was confirmed – due to COVID19 pandemic restrictions, all teaching activities now had to be online. This gave us one week to move the 2nd workshop day (held in week 5, and focussed on health economics content) online, as well as work out how to manage the rest of the semester.

Overall, I think the 2nd workshop ran well online, although it was a lot of work to set up. I learned a lot that I will use to improve future workshops, whether they are held online or face to face (or a combination) and thought it might be worth documenting what I did and how it worked.

We decided to run the workshop on the day it was scheduled, but with some tweaks for online delivery. We arranged a mixture of pre-recorded lectures and interactive Zoom sessions, and scheduled them all in a timetable similar to what students would have followed for the face to face workshop (see timetable at bottom of post).

The day started with a live Zoom meeting to introduce myself, the material and how the day would run. I used Mentimeter to do some quick polls and word cloud activities to find out a bit more about the students who were participating.

The three planned lectures were pre-recorded and uploaded for students to access a week before the workshop. This allowed students to choose if they wanted to do the full workshop day as programmed, or access the lectures in the week before and just attend the live sessions on the day.  Using pre-recorded lectures instead of doing them all live also gave me time on the workshop day to prepare for (and recover from) the more interactive sessions during the day.

Each lecture was allocated a time during the day when students could go off and watch it (if they hadn’t already), and then a Zoom meeting was held afterwards for discussion, questions and some interactive activities. For the interactive activities I used Mentimeter tasks as well as Zoom breakout rooms to encourage student interactions with each other. One of these sessions worked well and one didn’t – being more organised, to make sure students had access to the material for the small group discussion outside of Zoom, would have been really helpful (I ended up telling students to take a screenshot or photo of the exercise on the screen so they could refer to it in their groups!)

We also had a panel discussion session. When run in the face-to-face workshop this is usually very popular with students, and I was really pleased with how it ran online. We used a Zoom meeting rather than a webinar, and this worked fine. As with the rest of the day the students were really helpful with their cameras and microphones etc, and we had good interaction via the chat function with people asking questions.

In the last interactive session of the day I used a Mentimeter quiz to check concept understanding. I had feedback that this was one of the best bits of the day. There were 5 questions about each of the 3 main topics we had covered, and the questions were designed to be relatively easy, but students only had 10 seconds to answer each one. A leader board was shown at the end of each set of 5 questions to generate a feeling of competition, and it was simple to set up.

Student feedback:

We had a lot of positive feedback about the workshop. A quick evaluation (done via Mentimeter) at the end of the day showed the Panel Discussion session and Quiz were both very popular. When asked for one thing they found confusing or unclear, many people mentioned that Zoom was sometimes unstable, and in particular that the breakout room activities felt rushed. So next time I will allow much more time for those, and make sure I have a second person on hand to help manage the logistics. Overall the comments were positive, and made the whole experience worthwhile. Some examples:

  • “It was actually a really effective alternative to a face to face day. The timetable with spaced out live webinars kept me on track with time”
  • “The panel was really great to see the concepts we’ve gone through in the lectures and readings from a professional perspective. I’ve really enjoyed the health economics side of this course more than anticipated so thank you for this lovely teaching”
  • “The panel discussion… the experts we had onboard really enriched and contributed to the learning process”
  • “Quiz time is really useful to review”
  • “Being able to snack the entire time while listening to everyone!”

Overall, using a mixture of tools and activities was helpful to keep students (and myself!) interested and engaged. A whole day of Zoom was a lot, and I think multi-day workshops would need to be extra diligent about giving appropriate breaks, making pre-recorded material available beforehand, and mixing up the type of interaction. For a large group like this having a second person online to help with coordination and admin would be great. But, I would absolutely run a workshop like this again in the future, although hopefully with more than a week to prepare!

My top tips:

Zoom:

  • I am still not sure whether using one Zoom meeting for the whole day (which is what I did) is better than setting up a separate Zoom meeting for each interactive session. Different meetings would allow different settings for each session (e.g. a webinar for the panel discussion), but also mean students need to log into the right room at the right time.
  • I made a slide to display on the screen in between sessions, which was helpful.
  • I wish I’d recorded every session to share with students who couldn’t join on the day. I now know that you can record multiple sections of a Zoom meeting and each downloads as a separate file.
  • I made sure there was a clear place nominated on Canvas, mentioned first thing in the morning, where students should go for information if something went wrong with the technology during the day (e.g. “I’ll post here [LINK] on Canvas, and I’ll send an announcement!”).

Breakout rooms:

  • Using the random allocation setting was easy and meant students mixed
  • They take time for students to join and introduce themselves, so allow extra time
  • Need to ensure students know what they need to do and can still access materials while in the breakout room – either pre-send slides or use Mentimeter
  • It would have been great to have a second ‘admin’ person who could manage the logistics of putting people in rooms so I could circulate through the rooms contributing to the discussion, more like the face to face setting.
  • Err on the side of having slightly larger groups than you think you need, because some students sign in and then turn off their camera & mic and don’t participate. Suggest 4 as the minimum (you’re then likely to get at least 2 active participants), and up to 6 or 7 still works ok.

Chat function:

  • It’s difficult to monitor while you’re presenting, but…
  • I’ve seen some really nice examples of students using it amongst themselves to share links and clarify content during a lecture.

Mentimeter:

  • Was a great way to get engagement from a large class – much more flexible than ‘raising hands’ in class or polls within Zoom
  • The quiz with the leaderboard was fun! The only problem was not being able to give away small prizes (e.g. chocolate frogs etc) that would usually happen in a face-to-face setting. I’ve been trying to think creatively about what might replace this – perhaps the winner gets a link to my favourite health economics GIF?!

Security:

  • I didn’t have any problems with security or inappropriate behaviour, although in one lecture I’ve given since, a student started sharing their screen showing a computer game they were playing during one of the breaks. But I now add a password to most Zoom meetings by default, and for any larger group meetings I think I would always try to have an administrative person online who could handle issues like that while I’m teaching.


Timetable

Disseminating my research

Publication in a peer-reviewed journal is no longer sufficient – research findings need to be disseminated more broadly to ensure (and demonstrate) that they have impact. This means that once I’ve submitted an article for publication I immediately start working on the dissemination plan (if I haven’t already done it as a form of ‘productive procrastination‘!)

There is no one-size-fits-all approach. However, I do have a standard list of dissemination options and a general process that I use. Here it is, in case it is useful for you:

Step 1: Write different versions of your article (during article writing/immediately after submission)

  • Blog post – I usually start by writing a blog post, and this is an excellent article about how to turn your journal article into a blog post, but I’ve also found this one useful.
  • Press release – The University press office has been really helpful in structuring the story and using appropriate language for my press releases in the past (although they sometimes need help making sure the essential message isn’t lost).
  • Talking points – talking points are a great way to prepare for a media interview. In addition, the process of identifying and refining my talking points helps to identify and refine the message, audience and purpose for my dissemination strategy. I usually come up with about 5 talking points, for example: a short sentence and a short paragraph about the main result(s), a short sentence and a short paragraph about the implications, and a short sentence about what might come next.

Step 2: Circulate your pitch (before acceptance)

You may need to modify your pitch for each of the sources below, but you can base all of them on your press release. You need to circulate your pitch to these sources before your article is accepted, because often things move quite quickly after acceptance and you want to have time to work with these people to craft the best piece, and to coordinate the release dates with them.

  • Send a pitch to The Conversation (to do this you need to log in, and use the link on the left hand side of the dashboard)
  • Send a pitch to podcasts that might be interested. Podcasts usually have a longer lead time than the general media, so better to contact them early. There are some health-specific ones (e.g. 2SER Think:Health, the Research Roundup podcast by PC4) or more general ones, such as the University of Sydney podcast ‘Open for Discussion‘.
  • Send a pitch to any other magazine, website, etc that might be relevant. For example, in the past I’ve published summaries in Cancer Professional and have flagged oncologynews.com.au and Croakey as possible outlets to approach in the future.

Step 3: Prepare for release (once accepted)

Once you know your article is accepted you should get a timeline for when it will be released. At this point you should let anyone who you’ve worked with on an article (e.g. the Conversation, etc) know the date and coordinate the release. You can also:

  • Contact relevant journalists with your press release. The press office can do this for you, and/or you can use informal approaches such as Twitter (list of tweeting journalists below).
  • Contact relevant professional associations about circulating a short article about your research in their newsletter etc. I usually approach groups like the HSRAANZ, AHES, ESA.
  • Finalise your talking points for any media interviews. This includes the talking points drafted earlier, as well as notes on the different ways journalists or readers could misunderstand my research, and any sticky questions I’m nervous about. Then I draft responses to these (which I usually never need, but it makes me feel less nervous knowing I’m prepared).

Step 4: Disseminate (once published)

At last! Today is the day to…

  • Publish your blogpost on your blog
  • Publish your blogpost on LinkedIn
  • Write a post with a link to your blogpost (on your blog or LinkedIn) to Facebook
  • Tweet about your research – over the day or two after publication I usually tweet a link to the original article (with a sentence summarising the main finding), tweet a link to my blog post, tweet a link to any companion pieces (e.g. an article in The Conversation), and retweet any press coverage I get. I haven’t tried this yet, but I was recently told to tag relevant journalists in some of these tweets, and so I’ve compiled the following list of potential options:

Step 5: Track your dissemination

As we increasingly need to report our impact, it will become more important to be able to track how and to whom our research was disseminated. Tools like Google Alerts and Altmetrics can be very useful, but I’m also going to try and take screenshots/links/copies of any press coverage etc that I get and save them in the project folder, so that I can easily find them later.

Practical resources for analysing your first DCE


I’m relatively new to discrete choice experiments and have really enjoyed learning about the different analysis approaches and techniques used. It is such a rapidly evolving field and there is always something new to learn. While there is a lot happening to push the boundaries, I’ve recently been helping a couple of people with the analysis of their first DCE. While a lot of your analysis approach should be worked out before you begin the DCE, when you get to the point of actually doing the analysis for the first time there is a whole lot of stuff around which commands to use that you might still need help with. I realised there are some references I just keep recommending and coming back to, so I’ve shared them here in case you find them helpful too. [Note: this post is updated as I come across new resources].

General guidance

It often helps to know at the start what you are aiming to achieve at the end. I think this is a nice example of describing the methods and assumptions of a DCE around parental preferences for vaccination programs really clearly and succinctly. The other general information I refer people to is the ISPOR Analysis of DCE guidelines, which include the ESTIMATE checklist of things to consider when justifying your choice of approach.

Analysis approach

When I did the DCE course run through HERU in Aberdeen, it was suggested that the typical approach to analysing DCEs is to start with a simple model and then use more complex models to address specific issues that arise from your data or relate to your research question. This commonly means starting with a conditional logit model, and then considering options such as mixed logit and latent class analysis. The ISPOR Analysis of DCE guidelines have clear descriptions of the theory and assumptions of these approaches, and I found this paper interesting in comparing mixed logit and latent class approaches.
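To make that progression concrete, here is a minimal sketch of the underlying model in standard choice modelling notation (my own summary, not taken from the guidelines above). The conditional logit assumes respondent i gets utility U_ij = β′x_ij + ε_ij from alternative j, where x_ij holds the attribute levels and ε_ij is an extreme value error term, which gives the familiar choice probability:

P(i chooses j) = exp(β′x_ij) / Σ_k exp(β′x_ik)

Mixed logit then relaxes this by letting some elements of β vary randomly across respondents, while latent class models allow β to differ across a small number of unobserved classes of respondents.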

Analysis code

I am originally a SAS user, and so when I first started analysing DCE data I assumed I would do so in SAS. However, after much investigation I’ve realised this is easier said than done, and have now moved to using STATA for the DCE analysis, although I’m still much more comfortable doing the data management and preparation in SAS. Using two different packages is time consuming, clunky and the opposite of “reproducible research”, so my next step is to move both my DCE data management AND analysis into R. I haven’t got very far, so if anyone knows any good packages then please pass them on! I promise to update this page if I find something useful.

  • SAS

It is straightforward to run a conditional logit in SAS using PROC MDC (user guide). Some resources I found helpful for implementing PROC MDC are this example code for conditional logit with PROC MDC and this SAS user group paper “Discrete choice modelling with PROC MDC”. The error message I’ve had most often in doing this analysis is “CHOICE=variable contains redundant alternatives”, which relates to the data looking like people have chosen more than one option in a choice set. If you get this, check the cleaning and the sorting of your data!

You can do effectively the same analysis using PROC PHREG, as described by this technote, plus there is a suite of marketing research guides that describe various ways to analyse discrete choice data.

Moving on from conditional logit to mixed logit or latent class analysis is more difficult in SAS. There is a guide in this video to running conditional logit models and mixed logit models (using PROC MDC, starts at 5:30 minutes), although I could never get their mixed logit method to work (entirely possible due to user error!). I did also contact the SAS helpdesk and they said it would be difficult, but recommended using PROC BCHOICE (Bayesian Choice) for mixed logit analysis with DCE data that has multiple choice sets per participant. There is some documentation here and a worked example here.  Again, I never really got this to work but it could be my mistake.

  • STATA

Having faffed around in SAS for long enough, I caved in and transitioned to using STATA like everyone else in my research group! I found this a really nice introductory, step by step guide to analysis in STATA, including data set up and conditional logit and mixed logit options. There is also this article which is a guide to analysing DCE data and model selection, and includes STATA code (as well as Nlogit and Biogeme) in the supplementary material. Finally, this working paper is useful for describing the theory and code for doing more advanced models, like mixed logit and latent class analysis in STATA, although the code isn’t annotated, which I found frustrating as a new STATA user. I haven’t used it yet, but there was a STATA newsletter article about using the margins option to interpret MIXL choice model results, which could be useful.

For latent class analysis in STATA I found this article in the STATA journal a useful description of the command, and this was a nice example of a paper that used mixed logit and latent class models and wrote them up clearly. Finally, these three articles (one, two, three) seemed like good examples of calculating and displaying relative importance graphs.

  • R

I’m keen to analyse my next DCE in R, so have started looking at how I might do this. I have found the following resources (and have sketched out, after the list below, what a first analysis might look like), but if anyone has any experience with DCEs in R then please get in touch!

  • Two papers by Aizaki and Aizaki & Nishimura on designing DCEs in R, including analysis using conditional logit models
  • Example R code and case study of mixed logit model with multiple choices per respondent, including analysis and helpful tips, written by Kenneth Train and Yves Croissant
  • The mlogit package for analysing DCE data in R, as described in Kenneth Train (2009)
  • Thanks to Nikita Khanna for pointing me to this paper & code for doing sample size calculations for a DCE in R.
  • There is also the Apollo package in R, developed by the group at the Choice Modelling Centre at the University of Leeds, with a website & manual available.
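In the meantime, here is a minimal sketch of what a first analysis might look like in R using the mlogit package. This is my own illustration rather than code from any of the resources above, and the data frame and variable names (dce, choice, alt, cset, id, cost, waiting_time) are hypothetical placeholders for your own long-format data, so treat it as a starting point to adapt rather than a definitive recipe.

library(mlogit)

# Hypothetical long-format data: one row per alternative per choice set, with a
# 0/1 'choice' column marking the chosen alternative and a column for each attribute.
dce_long <- mlogit.data(dce,
                        choice = "choice",   # chosen alternative indicator
                        shape = "long",
                        alt.var = "alt",     # alternative within the choice set
                        chid.var = "cset",   # choice set identifier
                        id.var = "id")       # respondent identifier

# 1. Conditional logit - the usual starting point ('| 0' suppresses alternative-specific constants)
clogit_fit <- mlogit(choice ~ cost + waiting_time | 0, data = dce_long)
summary(clogit_fit)

# 2. Mixed logit - let the waiting_time coefficient vary (normally) across respondents,
#    treating the repeated choice sets from each respondent as a panel
mixl_fit <- mlogit(choice ~ cost + waiting_time | 0, data = dce_long,
                   rpar = c(waiting_time = "n"),  # normally distributed random coefficient
                   panel = TRUE,
                   R = 500, halton = NA)          # number and type of simulation draws
summary(mixl_fit)

# Marginal willingness to pay for waiting_time (ratio of coefficients from the conditional logit)
-coef(clogit_fit)["waiting_time"] / coef(clogit_fit)["cost"]

The same long-format set-up also feeds into latent class extensions (e.g. the gmnl or Apollo packages), so getting the data structure and identifiers right is most of the work.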

Health economics and occupational therapy

I attended the Australian Occupational Therapy Conference last week, for the first time in nearly 15 years! I went to support some OTs I’ve been working with on an economic evaluation, but it was lovely to catch up with friends and colleagues from my OT life before health economics. I also realised there wasn’t much health economics at the conference, and I got a few requests for some introductory resources about health economics. So, I’ve put together a brief summary of what health economics is and how it could apply to occupational therapy.

In general, health economics is about how we allocate our scarce health resources to maximise our health outcomes. There can be a misconception that economics is about cutting costs. But health economics is really about value, and therefore the benefits that can be achieved are just as important to a health economist as the costs of achieving them.

Everyone uses economic thinking in their daily lives – I recently bought a new laptop and had to work out which aspects of performance I would prioritise (memory, touch screen, processing power) to get a laptop within my budget (my constrained resources). For some great examples of how economic theory plays out in real life, I highly recommend the Freakonomics podcast! There are some episodes specific to health, such as Are you ready for a glorious sunset, How many Doctors does it take to start a healthcare revolution and How do we know what really works in healthcare, but all the episodes will teach you to think like an economist.

For a more formal reading, there is a paper by Kernick (2003) Introduction to health economics for the medical practitioner that gives a nice introduction to health economics, and the types of questions that health economists try to answer. If you want a bit more about some important economic concepts such as opportunity costs and marginal costs, then Goodacre & McCabe’s (2002) An introduction to economic evaluation and this Sanofi factsheet (2009) on What is health economics are other good resources.

You will notice that these papers talk in general about health economics, and then go straight into a discussion of economic evaluation. Economic evaluation is probably the most common method associated with health economics and is used world-wide (including by the PBS and MBS in Australia) to evaluate the cost-effectiveness of new interventions. An economic evaluation compares two (or more) interventions in terms of both the costs and the benefits.  Economic evaluations are typically trial-based (meaning they are embedded in a clinical trial) or modelled (meaning they are based on research from the literature), or a combination of both.
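As a simple illustration (standard cost-effectiveness notation, not specific to any of the readings mentioned here), the usual summary measure from a comparison of intervention A against intervention B is the incremental cost-effectiveness ratio:

ICER = (Cost_A - Cost_B) / (Effect_A - Effect_B)

that is, the extra cost of A per extra unit of benefit gained (for example, per life year or quality-adjusted life year).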

The previously mentioned readings are good introductions to economic evaluations, and also explain the difference between a cost-benefit, cost-effectiveness and cost-utility analysis. These terms are often used interchangeably, but in health economics they have specific meanings based on the outcome measure you are using.

If you’re interested in how you actually incorporate an economic evaluation into a clinical trial, then the factsheet Step by step guide to economic evaluation in cancer trials gives a guide and walks through an example (it is designed for cancer clinical trials, but the same steps would apply to an occupational therapy trial). If you want more detail then I would suggest the textbooks by Gray et al Applied methods of cost effectiveness in health care or Drummond et al Methods for the economic evaluation of health care programmes.

But… health economics is much more than economic evaluations. Health economists are interested in questions like: what influences health (other than healthcare), what is ‘health’ and how do we value it, how can we arrange the health workforce most efficiently, how does the way we pay doctors change their performance, how can we make health more equitable, and many more (see Alan Williams’ famous ‘Plumbing Diagram‘). Some of the questions I am using economic approaches to answer include:

  • How do we quantitatively measure patient preferences for health and health care (using discrete choice experiments)?
  • What aspects of quality of life are people with cancer willing to give up to increase their survival?
  • How long does it take people to return to work after a cancer diagnosis and treatment, and what makes it easier for them to do so?
  • When people stop working because of illness or injury, how can we measure the impact this has on the broader economy?
  • How do the costs of cancer treatment impact people’s emotional and physical well-being?

There are many opportunities for health economics to be used in occupational therapy, and I’ve included a list of examples at the end of this article. But three obvious areas would be:

  • Economic evaluations – although a systematic review of economic evaluations in occupational therapy (Green & Lambert 2016) found only nine published economic evaluations (of varying quality), despite the increasing focus of health care systems on demonstrating cost effectiveness;
  • Routinely collected data – many occupational therapy interventions probably reduce future health resource use, so there are opportunities to use Medicare data (such as MBS and PBS payments) to examine the impact of occupational therapy (here is a good fact sheet on using Medicare data for research); and
  • Discrete choice experiments – these quantitatively measure patient preferences, making them an ideal method to examine people’s preferences for their health (e.g. which occupational domains they value most) and how they want their treatment delivered (e.g. what aspects of a rehab program make people most likely to adhere to a practice schedule).

Please feel free to get in touch if you have ideas or an interest in incorporating health economics into occupational therapy, or if there are other resources you’d like, or have found useful!

Examples of health economics in occupational therapy:

  • Hewitt et al (2018) An economic evaluation of the SUNBEAM programme: a falls-prevention randomized controlled trial in residential aged care [Link]
  • Kareem Brusco et al (2014) Are weekend inpatient rehabilitation services value for money? An economic evaluation alongside a randomized controlled trial with a 30 day follow up [Link]
  • Wales et al (2018). A trial based economic evaluation of occupational therapy discharge planning for older adults: the HOME randomized trial [Link]
  • Sampson et al (2014) An introduction to economic evaluation in occupational therapy: Cost-effectiveness of pre-discharge home visits after stroke [Link]
  • Laver et al (2012) Preferences for rehabilitation service delivery: A comparison of the views of patients, occupational therapists and other rehabilitation clinicians using a discrete choice experiment [Link]
  • Gallego et al (2018) Carers’ preferences for the delivery of therapy services for people with disability in rural Australia: evidence from a discrete choice experiment [Link]

Best Health Services and Policy Research Papers – 2018 Award winner

I was thrilled to be awarded the Overall winner of the 2018 HSRAANZ Best Health Services and Policy Research Paper last night. These awards recognise the best scientific works in the field of health services and policy research. The award was for my paper on cancer-related lost productivity in the developing countries Brazil, Russia, India, China and South Africa (see my blog post for more details).

The article impressed the judges in the scope of research undertaken and the value it will contribute to the research field, including its potential to guide local prevention and treatment strategies. (HSRAANZ)

For the paper I was responsible for leading a large, international team of researchers to conduct an analysis of productivity loss due to cancer in rapidly developing countries. I had a leading role in the conceptualisation of both the research question and the project methodology, and applied for and received funding through an EU CANWON fellowship to undertake the project. I gathered the necessary data with assistance from the international authors, and was solely responsible for the formal data analysis. As the lead author, I was also responsible for the project administration and preparation of the manuscript.

Following publication of the paper, I led the promotion of the publication through various media channels, including The Conversation (~6,000 readers) and 44 radio, print and TV news articles (including The Guardian, Lancet Oncology News, UN News, 2SER ThinkHealth podcast, etc.). As a result, the article has gone on to be in the top 5% of all research outputs scored by Altmetrics, and the number 1 article of similar age published in Cancer Epidemiology. More importantly, I have worked with each of the international authors to ensure that the results have been disseminated to the appropriate policy and health service planning agencies and individuals in each of the BRICS countries. This has included developing country-specific results and graphs, assisting with presentation slides, and encouraging broad dissemination led by the other authors.

The above two paragraphs are a summary of the application I submitted to HSRAANZ for the award, and while it is all true, it skips over the importance of this paper for my professional development. I was so lucky to be supported by Linda Sharp, Isabelle Soerjomataram and Paul Hanly to lead the research, and to apply for and take up funding to visit IARC and get the project started. The team we pulled together were really engaged with the project, and instrumental in assembling and then interpreting the local and international data. I now count them as ongoing collaborators, and we already have a few papers and grant applications in the works.

But perhaps the most important lesson from this paper was resilience. I was so proud of this work once it was finished, but it took more than 12 months, an international relocation and 7 journal rejections before it was published. During that year I learnt perseverance and the value of a few days ‘cooling off’ before commencing the reformatting process, as well as how wonderful it is to have co-authors who will keep the faith in the manuscript alive when you (temporarily) run out! So thank you to everyone who helped out on the paper in whatever way – from digging out local data to offering supportive glasses of wine after another rejection! It was all worth it.

Cancer is about more than health: work and leisure after cancer

This is a guest blogpost by Marjon Faaij, who I was delighted to supervise for her Master of Pharmacy research project. We made a great team – Marjon had a personal interest in the impact of cancer on daily life, and I had access to some data about cancer survivorship through the PROFILES registry. Even better, because Marjon was from Utrecht University, she could translate the Dutch PROFILES data much more easily than I could! Marjon presented the results of her research at the NCRI conference in the UK, and we are now writing them up as a publication. In the meantime, Marjon put together this summary, and was kind enough to let me share it here.

In 2005, I lost my mother to cancer. Before she died, she was sick for almost three years. During this period, cancer had a big impact on her daily life. Shortly after the diagnosis she could still do everything she liked: working in the hospital as a nurse, taking care of her family, cleaning our house, giving music lessons and swim lessons, and socialising with friends and family. But as time went on she became sicker, had more pain and was more tired. She did not have the energy to do all the things she liked. She decided to work fewer hours, until she stopped working completely. She used this time to spend more time with us and to rest more.

A lot of different factors influenced her decisions about doing work, unpaid work and leisure. One of the most important factors for her was the support from family and friends, but I can imagine that it will be different for each cancer patient.

Therefore, for my Master of Pharmacy research project, I decided to investigate the different factors that influence cancer survivors’ participation in daily activities. For this research I used surveys of Dutch cancer survivors, including people with Hodgkin lymphoma, non-Hodgkin lymphoma, multiple myeloma, thyroid or prostate cancer.

Factors of influence

From my results it is clear that cancer survivors are less likely to do paid work, and those who do work are likely to work fewer hours. Cancer survivors are also more limited in their unpaid work and leisure. However, how much cancer influences each activity depends on the cancer type. Each cancer type has different symptoms and different treatments, which leads to different effects on daily activities.

Like my mother, most cancer survivors try to keep working and to participate fully in leisure and unpaid work activities. However, as they become sicker it is harder to fully participate in these activities. When they are limited in one area, they appear to be limited in all activities.

There are a lot of factors that influence participation in daily activities. For example:

  • People were less likely to have a paid job if they were female, older, widowed, had had surgery, or had multiple comorbidities.
  • People were more likely to be limited in their unpaid work if they had non-Hodgkin lymphoma or multiple myeloma, had multiple comorbidities, were female, or were never married.
  • People were more limited in their leisure activities if they had a medium level of education or multiple comorbidities.

It was interesting that people who received more follow-up services were no more or less likely to report difficulty with paid work, unpaid work or leisure. But people who felt satisfied with the follow-up care they received had an increased chance of participating in daily activities.

What does this mean?

These results show that there are many factors of influence on daily activities. The factors are unique for each cancer survivor, and so are the impacts. It is important for patients to know that changes can take place across all of their daily activities during cancer, so they can prepare for and react to these changes.

Doctors need to know that cancer and its treatment can influence patients’ daily activities, and that these changes can be important for quality of life. They should discuss these changes with patients and provide support and referral to services that can assist patients (and their families) during this difficult time. These referrals are not possible if there is nowhere to refer patients to, and so health care systems need to ensure that services like work rehabilitation, occupational therapy and palliative care are available and appropriately funded.

Finally, the results are important for health economics. Economic evaluations using a societal perspective account for changes in paid work due to illness (known as lost productivity), but the contribution of unpaid work usually goes unaccounted for. From these results it is clear that cancer has a big impact on both paid and unpaid work, and thus both should be considered in economic evaluations taking a societal perspective.

This research, ‘Cancer is about more than health: work and leisure after cancer’, is based on data from the PROFILES Registry. The research project was carried out by Marjon Faaij, a Dutch Master of Pharmacy student from Utrecht University. The project was done at the Centre for Health Economics Research and Evaluation at the University of Technology Sydney, under the supervision of Alison Pearce and in collaboration with Dounya Schoormans of the PROFILES Registry.

Treating anxiety in people with cancer could save the health system money

It is normal to experience distress after a cancer diagnosis, but for some people the distress can become so severe that it affects their mental health. We found that people who have anxiety as well as cancer often cost the health system more, particularly when the anxiety is undiagnosed and untreated.

Cancer patients with clinical levels of anxiety often cost the health care system more. This is because they often use more health resources, such as having more tests, or staying longer in hospital. While some of these extra resources might be to treat their anxiety, many are for physical complaints, and are incurred even when the anxiety has not been formally diagnosed.

For example, one study found that men with prostate cancer tended to opt for more intensive treatment if they had anxiety. Similarly, women with breast cancer were likely to stay in hospital longer, and have more complications, if they had a psychiatric disorder like anxiety. These extra days in hospital and treatment for complications meant their care cost the health care system an extra 13%. Similar results were seen for colon, cervical, head and neck cancers.

Given that more than 1 in 3 Australians will be diagnosed with cancer during their lifetime, finding ways to make cancer treatment more effective and more efficient is critical. Health services that improve the psychological support available to cancer patients could have long term cost savings, as well as providing improved patient care.

Despite how common clinical anxiety is, this review identified only five studies that looked at the costs of anxiety in people with cancer. Four were from Canada and the USA, and one was done in Germany.

Unsurprisingly, the studies that looked at treatments for anxiety found that health care use increased in the short term – often because patients accessed mental health support services. This meant that providing support for anxiety looked more expensive than standard care. However, no-one has looked at whether reducing anxiety among cancer survivors could reduce costs in the long term.

While most people diagnosed with cancer will experience distress, this is a normal reaction to a traumatic life event. However, at least 10% of people with cancer will have clinical levels of anxiety after their diagnosis. Young people, women and people with advanced cancer are particularly at risk. This anxiety can lead to feelings of fear, loss of control and avoidance, as well as physical symptoms such as poor sleep, headaches, and fatigue.

Even though there are effective treatments for anxiety, in the midst of cancer care anxiety often goes undetected and untreated. One reason for this is that the physical symptoms of anxiety, such as headache or stomach upsets, are often attributed to non-mental health causes. Without appropriate treatment, anxiety can impair decision making and coping. It can also lead to poorer psychological and medical outcomes, including more side effects and poor treatment compliance.

The management of anxiety disorders in the cancer context is a clear example of where the evidence base for relatively low cost and effective methods for identification and treatment is available, yet not systematically implemented.

The full paper is available here: Joanne Shaw, Alison Pearce, Anna-Lena Lopez, Melanie Price. Clinical anxiety disorders in the context of cancer: A scoping review of impact on resource use and healthcare costs. European Journal of Cancer Care.  https://onlinelibrary.wiley.com/doi/abs/10.1111/ecc.12893. Thanks to Jo Shaw for help writing this blog post.

$46 billion in productivity lost to cancer in developing countries

Premature – and potentially avoidable – death from cancer is costing tens of billions of dollars in lost productivity in a group of key developing economies that includes China, India and South Africa.

Over two-thirds of the world’s cancer deaths occur in economically developing countries, but the societal costs of cancer have rarely been assessed in these settings.

In a paper to be published in the journal Cancer Epidemiology we show that the total cost of lost productivity due to premature cancer mortality in Brazil, Russia, India, China and South Africa, collectively known as the BRICS countries, was $46.3 billion in 2012 (the most recent year for which cancer data was available for all these nations).

The largest loss was in China ($28 billion), while South Africa had the highest cost per cancer death ($101,000).

The BRICS countries are diverse but have been grouped by economists and others because of their particularly rapid demographic and economic growth. Currently the five countries combined comprise over 40% of the world’s population and 25% of global gross domestic product.

Liver and lung cancers had the largest impact on total lost productivity across the BRICS countries due to their high incidence, our research found.

But in South Africa there were high productivity losses per death from AIDS-related Kaposi sarcoma – an indication of the magnitude of the HIV/AIDS epidemic in Sub-Saharan Africa – while in India, lip and oral cancers dominated due to the prevalence of chewing tobacco there.

Many cancers which result in high lost productivity in the BRICS countries are amenable to prevention, early detection or treatment. Sadly, and in contrast to developed countries, most developing countries do not have such programs.

In particular, tobacco- and infection-related cancers (such as liver, cervical, stomach cancers and Kaposi sarcoma) were major contributors to productivity losses across BRICS countries.

Beyond the evident public health impact, cancer also imposes economic costs on individuals and society. These costs include lost productivity — where society loses the contribution of an individual to the market economy because they died prematurely from cancer.
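As a rough sketch of how such losses are commonly valued (a generic human capital style calculation; the exact assumptions used in our paper may differ in their detail), the productivity lost to premature cancer mortality can be approximated as:

Lost productivity ≈ Σ over age, sex and cancer site of [cancer deaths × remaining working years to retirement × annual earnings × workforce participation rate]

with earnings in future years usually discounted back to present values.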

Valuing this lost production gives policy- and decision-makers an additional perspective when identifying priorities for cancer prevention and control. This is particularly important in developing economies, where workforce and productivity are key resources in ensuring sustained economic growth.

Developing economies often have different demography, exposure to cancer risk factors, and economic environments than developed countries – all of which could modify the economic impact of cancer.

Locally tailored strategies are required to reduce the economic burden of cancer in developing economies. Focussing on tobacco control, vaccination programs and cancer screening, combined with access to adequate treatment, could yield significant gains for both public health and economic performance of the BRICS countries.

Country specific results

Brazil:

  • In Brazil, lung cancer resulted in the greatest productivity losses ($0.5 billion in 2012), with $402 million in lost productivity each year due to tobacco smoking, although Brazil has recently implemented successful tobacco use reduction policies.
  • Rapidly growing rates of obesity in Brazil result in up to $126 million in lost productivity due to cancer each year.

Russian Federation:

  • Total productivity lost due to cancer in the Russian Federation was $5 billion in 2012, and it had the second highest cost per death of the BRICS countries.
  • Both liver and head and neck cancers contribute to the high number of excess alcohol-related deaths in the Russian Federation, with a likely considerable economic impact.

India:

  • Lip and oral cancers dominate lost productivity in India due to the relatively high prevalence of chewing tobacco. The use of smokeless tobacco, often combined with betel quid, may account for lost productivity of $486 million each year.
  • In India, the lost productivity costs per death from leukaemia are relatively high, perhaps because the advanced, multi-modality treatments required are not available, or are difficult to access.

China:

  • Productivity lost due to cancer in China was $26 billion in 2012, more than all the other BRICS countries combined.
  • Two-thirds of total lost productivity costs in China were in urban areas (66%), considerably more than the proportion of people who reside in urban areas (52%).
  • In China, dietary aflatoxins in many staple foods are a major risk factor for liver cancer, and our results suggest this costs the economy $972 million annually.

South Africa:

  • In South Africa there are high productivity losses per death due to AIDS-related Kaposi sarcoma – an indication of the magnitude of the HIV/AIDS epidemic in Sub-Saharan Africa.
  • Cervical cancer represents a particularly large economic impact in South Africa. While there are now vaccines available to prevent HPV infection, one of the precursors to cervical cancer, the effects of vaccination will take a few decades to show. In the meantime, cervical cancer screening can offer an effective solution to reduce both the public health and economic burden of cervical cancer.

Our respondents didn’t understand these questions – do you?

Dr Alison Pearce has won a Best Poster Presentation Award at the Health Economics Study Group Winter Meeting 2016 (HESG) held in Manchester in January 2016. The award was given for Alison’s poster “Our respondents didn’t understand these questions – do you? Cognitive interviewing highlights unanticipated decision making in a discrete choice experiment.”

The poster described 17 interviews Alison conducted with cancer survivors about their care after finishing cancer treatment. During the interviews each survivor completed a survey about their care, but many found it very difficult.  Some of the problems with the survey are explained on the poster, but the poster was also interactive – conference attendees were asked to vote and comment on the survey questions. The poster received a great response, with many conference attendees voting and leaving comments about the research.

The National Cancer Registry is leading this research into cancer survivorship with a group of collaborators from Aberdeen, Dublin and Newcastle, with the aim of informing policy about the best way to structure follow-up services for survivors who have completed their cancer treatment. The Health Economics Study Group supports and promotes the work of health economists, and is the oldest and one of the largest groups of its type.

This news article was originally posted on the 26th of January 2016 on the National Cancer Registry Ireland website: http://www.ncri.ie/news/article/registry-health-economist-wins-best-poster-presentation-award-recent-conference