Ms Wan-Jou She, a researcher in medical informatics and mHealth technology, specialises in creating tools that promote self-management for Prolonged Grief Disorder (PGD). With over six years of experience co-designing medical applications, she emphasises user-centric, accessible design. Her research, conducted in collaboration with experts on bereavement and suicide survivors, has led to the development of various applications for bereaved survivors that use artificial intelligence (AI) techniques such as natural language processing (NLP). A standout project, the Living Memory Home, nurtures bonds between users and their deceased loved ones. With a robust background in human-computer interaction (HCI), Wan-Jou She's research in medical contexts enhances the efficacy of diagnosis and treatment for heart disease and mental disorders.
In her presentation, "AI Augmented Medical Technology", she discussed AI-augmented projects in medical technology in which she has actively participated: one concerning grief care technology, the other a medical chatbot.
The first case study, Living Memory Home (LMH), focuses on helping people with a recently recognised mental disorder, Prolonged Grief Disorder. Here, an e-therapy pilot project was conducted: one hundred participants used the LMH for one month and completed mandatory seven-day journals in which they were prompted to reflect on their deceased loved ones. The participants were also asked to complete three rounds of surveys, at the start, after one week, and after one month, to assess their grief level and suicidal ideation. Among other results, the project revealed that, in this setting, current AI systems are not yet well developed enough to reliably detect suicidal ideation. However, the speaker added that generous funding has been provided to continue developing AI systems that help people cope with this newly recognised disorder.
Another example is also related to grief but takes a different approach. A monitoring and feedback application was developed to gather information and predict whether a mourner is at risk of developing Prolonged Grief Disorder or suicidal ideation. The speaker explained that, in essence, users answer questions and receive suggestions in return. Ms She also noted that one valuable lesson she learned is that a black-box AI is not the best solution here, since patients need a more transparent, interpretable, and accountable system before they will accept the suggestions they are given. Hence, the team decided to add an explanation component to the AI that provides more information on how the model reached a given suggestion.
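The talk did not describe the system's internals, but the interpretability idea can be illustrated with a minimal sketch: a linear risk model whose per-item contributions can be shown to the user alongside the suggestion. The questionnaire items, data, and labels below are entirely invented for illustration.

```python
# Minimal sketch of an explainable risk predictor (hypothetical data):
# a logistic-regression model over questionnaire responses, plus a
# helper that reports each item's additive contribution to the log-odds,
# so a user can see *why* the model produced its suggestion.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy questionnaire responses (rows = mourners, columns = invented items
# such as sleep disruption, avoidance, yearning) and invented risk labels.
X = np.array([[1, 0, 2], [3, 2, 4], [0, 1, 1], [4, 3, 5]], dtype=float)
y = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

def explain(x, feature_names):
    """Return each item's contribution to the risk score (log-odds),
    sorted by magnitude, as a simple transparent explanation."""
    contributions = model.coef_[0] * x
    return sorted(zip(feature_names, contributions),
                  key=lambda pair: abs(pair[1]), reverse=True)

names = ["sleep disruption", "avoidance", "yearning"]
for name, contribution in explain(X[1], names):
    print(f"{name}: {contribution:+.3f}")
```

Because the model is linear, the listed contributions plus the intercept exactly reconstruct the score behind the prediction; black-box models would instead need a post-hoc explanation method layered on top.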
The second case study is a medical chatbot for grief support. This chatbot, which uses LLMs, serves as a companion to people who have lost a loved one, the presenter described: it can either represent the deceased loved one or become a new companion or friend for the mourner. Ms She showed several quotes from people who had found support from the bot and felt as if they had formed a real relationship, whatever its form, with a real person rather than a machine.
To close the topic, the speaker pointed out a few takeaways from the presentation. One of the most important is the question of whether LLMs can actually be applied in the medical field. The presenter stated that there will always be issues, such as what happens when AI makes a mistake and the patient, because of that mistake, receives an unwanted outcome or dies: will the AI be able to take accountability, or will that still fall on the shoulders of the accompanying doctor? AI therefore functions more as a "rule of thumb" that can offer suggestions, but it is humans who must decide whether to act on them. We have the power to change the future, Ms She stated.
You can find the full discussion on our YouTube channel. More information is available on the conference homepage.