One of the big standbys of cyberpunk literature is the notion of the “auto-doc”: a system that can diagnose all of a body’s ailments and then cure them. My favorite example of this is Elysium, but you can see it all over the place (even in Star Wars and Star Trek). But I’m quite curious as to how this could possibly come to be.
As I see it, a fully-functional Auto-doc has two main parts:
An AI (could be a limited expert system) that could scan a body and diagnose any ailments and recommend treatment
The ability to apply the treatment on its own to repair any damage (which could include printing and “installing” new tissue, bones, and organs to remedy a particular problem)
So, in this post, I’d like to start up a discussion on what might be required to get that first part functional. Just something that a patient could lie on, get scanned (quickly), and be given a list of all the ailments discovered and possible treatments for them.
Anybody game for thinking through this with me?
I’ll start with this: I think the core part of the system would need to be a medical expert system that could do the following:
recognize bodily anomalies
factor all anomalies found together to determine if there are any interactions or dependencies (such that multiple issues could be solved by fixing one root cause, for example)
know all available treatments and recommend every valid treatment for each issue while taking all anomalies into consideration
I think this basically comes down to an expert system specifically designed to solve the Configurator problem, except that rather than configuring a Boeing 767 aircraft with all compatible options, the goal is to configure an ‘optimal’ human body.
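To make the configurator idea concrete, here’s a minimal sketch in Python. Everything in it is invented for illustration: the condition names, the treatment names, and the conflict rules. A real system would be a proper constraint solver, not a greedy loop like this one.

```python
# Toy configurator sketch: each treatment lists the anomalies it
# resolves and the treatments it conflicts with. All names and rules
# here are invented for illustration.

TREATMENTS = {
    "statin":       {"resolves": {"high_cholesterol"}, "conflicts": set()},
    "beta_blocker": {"resolves": {"hypertension", "arrhythmia"}, "conflicts": set()},
    "stimulant":    {"resolves": {"fatigue"}, "conflicts": {"beta_blocker"}},
}

def recommend(anomalies):
    """Greedily pick treatments that cover the most unresolved anomalies,
    skipping anything that conflicts with a treatment already chosen."""
    chosen, remaining = [], set(anomalies)
    while remaining:
        blocked = {c for name in chosen for c in TREATMENTS[name]["conflicts"]}
        candidates = [
            (len(t["resolves"] & remaining), name)
            for name, t in TREATMENTS.items()
            if t["resolves"] & remaining
            and name not in blocked
            and not (t["conflicts"] & set(chosen))
        ]
        if not candidates:
            break  # some anomalies have no compatible treatment left
        _, best = max(candidates)
        chosen.append(best)
        remaining -= TREATMENTS[best]["resolves"]
    return chosen, remaining
```

Note how the conflict rules capture the “factor all anomalies together” requirement: picking one treatment can rule out the only treatment for another anomaly, which is exactly why this is a configuration problem and not a per-symptom lookup.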
What you’re suggesting sounds like a stateless system in a way. It doesn’t have prior knowledge of the patient’s medical history and doesn’t remember the patient on subsequent treatments.
Some medical conditions can only be accurately diagnosed with the medical history in mind. Another thing is ambiguous symptom combinations that could originate from various causes. If the auto-doc looks at the symptoms and comes to the conclusion that it’s Condition A with a probability of 87%, Condition B with a probability of 75% or Condition C with a probability of 72%, it has to consider the severity of treating the wrong one and leaving the correct one untreated. And it might have to pick a trial-and-error approach, treating the symptoms as Condition A first and if that doesn’t help it treats Condition B and so on.
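That trade-off can be made explicit with a tiny expected-harm calculation. The probabilities below are the ones from the example; the “harm if left untreated” scores are numbers I made up to show the mechanics (the probabilities need not sum to one if the conditions aren’t mutually exclusive):

```python
# Expected-harm sketch: for each candidate we might treat first, sum
# the harm of every *other* condition weighted by the chance it's the
# real culprit. Probabilities from the example above; harm scores are
# invented for illustration.

candidates = {
    "Condition A": {"p": 0.87, "untreated_harm": 3},
    "Condition B": {"p": 0.75, "untreated_harm": 9},  # less likely but severe
    "Condition C": {"p": 0.72, "untreated_harm": 2},
}

def expected_harm(treated, candidates):
    return sum(c["p"] * c["untreated_harm"]
               for name, c in candidates.items() if name != treated)

# Least-regret first attempt in a trial-and-error sequence:
best_first_try = min(candidates, key=lambda n: expected_harm(n, candidates))
```

With these numbers the machine would try Condition B first, despite its lower probability, because leaving it untreated while treating something else is far more harmful.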
Also, what happens if it makes wrong decisions? Who is responsible, who can be sued, who can make sure it won’t happen again? Another thing is consent. I want to be able to decide whether I want to receive a treatment once the machine recommends one based on its diagnosis.
Ah, this is a good catch. I hadn’t considered this before. I guess in order for this to function in the best possible way, the AI would need access to the patient’s full medical history, yeah? I would think that this would be necessary to support the trial-and-error approach you mentioned, at least until the sample size of conditions and interactions grows large enough that recommended treatments are accurate enough to make trial and error far less necessary.
I guess that also means that all patient data would need to be anonymized and merged into a massive data set from which inferences, analysis, and treatment efficacy could be modeled and used for predictions and recommendations. Which, of course, opens a whole can of worms regarding patient privacy and assurances that the anonymization is bullet-proof.
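For the merging step, a first cut would be keyed pseudonymization rather than true anonymization; here’s a sketch, assuming each data holder keeps its own secret key. Worth stressing that this alone is not bullet-proof: quasi-identifiers like age, zip code, and rare conditions can still re-identify a record.

```python
import hashlib
import hmac
import os

# Pseudonymization sketch: replace the patient identifier with a keyed
# hash before records enter the pooled research data set. The key stays
# with the data holder; without it, tokens can't be reversed by simply
# hashing guessed IDs. NOTE: pseudonymization, not full anonymization.

SECRET_KEY = os.urandom(32)  # in practice: managed, rotated, audited

def pseudonymize(patient_id: str) -> str:
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()
```

The same patient always maps to the same token, so their records still link up across visits inside the pooled data set, which is exactly what the history-aware diagnosis needs.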
As to questions of privacy, culpability, and consent, I’m not trying to be glib here but I think that I’d like to ignore those for the time being as problems to solve once an actual proof-of-concept auto-doc has been produced. This is not to say that these questions are not important, but for now I think I’d like to focus on the “CAN it be done?” question and leave the “SHOULD it be done?” and “how can it be done RESPONSIBLY?” questions for a subsequent discussion. If it is not even possible to do, there’s not much point to discussing responsibility for doing it.
There is an app called “Ada” that pretends to be some kind of AI doctor by asking you about symptoms and then matching them with illnesses you could have.
From my experience it works quite well as long as the symptoms belong to a single illness, but if one has, let’s say, the flu and a broken arm, it will try to combine the two into one illness.
It does ask about medical history to some extent, but the input you can give is limited and therefore not always precise.
I’ve heard about “Ada”. It’s an interesting start but as you pointed out, it’s a long way from what is required because its algorithm and training set seem to be pretty limited.
I think one thing that could make this more useful would be to integrate the “medical history” reporting with full-body scans across the available spectrum (CAT scan, MRI, X-ray, PET scan, ECG, ultrasound, etc.). I think Ada suffers the most from having incomplete information. If someone complains of headaches, nausea, and fatigue, having an MRI available might reveal that a brain tumor is the root cause rather than just dehydration or the flu. Being able to see the complete body from inside and out, plus the medical history, would probably make things a lot more accurate.
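Something like this combined input record is what the diagnostic model would see: reported symptoms, prior history, and per-modality scan findings in one place. The field layout and all names are made up for illustration; a real record would be far richer (DICOM images, lab values, time series).

```python
from dataclasses import dataclass, field

# Hypothetical combined input record for one diagnostic pass:
# symptoms + history + findings from each imaging/signal modality.

@dataclass
class PatientSnapshot:
    symptoms: list[str]
    history: list[str]  # prior diagnoses and treatments
    findings: dict[str, list[str]] = field(default_factory=dict)  # modality -> findings

    def add_finding(self, modality: str, finding: str) -> None:
        self.findings.setdefault(modality, []).append(finding)

    def all_evidence(self) -> list[str]:
        """Flatten everything into one evidence list for the model."""
        return self.symptoms + self.history + [
            f for fs in self.findings.values() for f in fs
        ]
```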
And then, after a treatment is applied and is successful or not, that would need to be noted in the training data so that this particular combination of symptoms and suggested treatment efficacy is recorded properly for the next time this specific set of variables is encountered.
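The feedback loop could be as simple as appending one outcome record per treatment attempt to the training log; again, the field names here are invented for illustration.

```python
import json
import time

# Outcome-logging sketch: one JSON object per line ("JSON lines"), so
# records can be streamed straight into retraining. Field names are
# invented for illustration.

def outcome_record(symptoms, scan_findings, treatment, resolved):
    return {
        "timestamp": time.time(),
        "symptoms": sorted(symptoms),
        "scan_findings": sorted(scan_findings),
        "treatment": treatment,
        "resolved": resolved,  # did this treatment actually work?
    }

def log_outcome(record, path="outcomes.jsonl"):
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

Logging failures is as important as logging successes here: the “it didn’t work, try Condition B next” signal is what makes the trial-and-error approach shrink over time.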
It seems like the real limiting factor we’ve discussed thus far is input and training-data size. And right now, the training data needed to cover all variable and treatment combinations seems almost incomprehensibly large.
There is a LLaMA LoRA model trained on medical data called MedLLama-lora-13b. It has a good textual understanding of symptoms, diagnoses, and remedies. It’s easily downloaded and run with ollama.ai, which makes running AI models and putting a quick API on them very easy.
Using ollama, you can also enable visual input, but as I don’t have a qualifying GPU in my machine, that’s not something I’ve tried. It’s also not trained on MRI/CAT/X-ray imagery.
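If you want to poke at it yourself, querying a locally running ollama server looks roughly like this. The server listens on port 11434 by default; I’m reusing the model name from above as a placeholder, so adjust the tag to whatever `ollama pull` actually installed on your machine.

```python
import json
import urllib.request

# Sketch of querying a local ollama server (default port 11434).
# Assumes the server is running; the model tag below is a placeholder
# taken from the post -- replace it with your locally installed tag.

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "MedLLama-lora-13b"

def build_request(prompt: str, model: str = MODEL) -> urllib.request.Request:
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

def ask(prompt: str) -> str:
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())["response"]
```

Then something like `ask("Patient reports headaches, nausea and fatigue. Differential diagnosis?")` gives you a quick feel for how far the textual understanding goes.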
That definitely seems like a good start. I would think that the next step would be to train it on visual diagnosis tools (CAT/PET/MRI/X-ray/Ultrasound/visible spectrum) to correlate them with the textual diagnoses and then integrate a way for it to take all of those types of inputs and match them with the trained data to present an overall diagnosis of the current condition of the patient.
I think in order for it to be truly functional though, it would need a major jump beyond just an LLM or standard ML model, where it could actually take the combination of all these inputs plus the patient’s medical history and reason its way to a proper course of treatment as a whole, with a high degree of accuracy. It would need to combine all the symptoms being experienced with the detailed scan of the body to find the root cause, I think.
How big is the corpus of training data behind that MedLLama model? Do you happen to know?