Enshittification, also known as crapification and platform decay, is a process in which two-sided online products and services decline in quality over time.
As some of you may be aware, I was an Infectious Disease (ID) physician for almost 40 years, retiring 3 years ago. My practice was almost entirely concerned with taking care of patients in several acute care hospitals. So everyone whose care I was involved in was sick. ID is unusual in that most of my consults were the odd and unusual infections and diseases that others could not manage. I once counted 1300 or so pathogens I needed to know for work. Infections can also involve any organ system. It is said the ID doctor has to be the second-best cardiologist, second-best pulmonologist, etc. There was a lot of variety in the cases I saw. If you want an idea of the scope of ID, check out the Puswhisperer, 1300 blog entries I wrote for Medscape. Available on Amazon.
What follows is how I did things, and I do not know how my routine maps onto any other physician.
So I get a consult. Usually a page to a phone number. Here I was kind of a butthead, but I usually asked if they could summarize the need for the consult in 5 words or less. Why fever? Best therapy for Staph bacteremia. Often they could not, which I took as a sign they didn’t really understand the patient’s need for a consult. I did this, as I learned long ago that I did not want the bias of others accompanying me into the patient’s room. I wanted, no, needed, to collect the information my way. Plus, over the years, the ability of residents to give a good presentation had faded. Yeah, I am one of those old geezers who talks about what it was like back in my day, but back in my day we had to be able to summarize the complete patient history in 6 minutes and were only allowed a 3×5 card for reference.
I think the advantage of the 3×5 card limitations was that it really imprinted the patient in the mind, and the mind is where all the thinking occurs. That is going to be the theme of this entry BTW: the more you offload your brain and thinking, the more enshittified the doctor becomes.
Besides, I told the referring doctor, I am going to read the chart and do a history. Why waste time duplicating effort?
In the old days, I would write the name and room on a 3×5 card and go see the patient. With the advent of the electronic medical record (EMR) I would enter the patient into my database.
Before seeing the patient, I would skim the problem list, the admission history and labs. Emphasis on skim. I just wanted an outline of the patient. I wanted to paint the portrait myself. After checking the microbiology (my job was best defined as me find bug, me kill bug, me go home), I would see the patient and do my routine history and physical (H&P), taking written notes.
The process of writing notes helped imprint what the patient was telling me. Then I would go through the chart carefully, because the information therein always made more sense after doing the H&P.
I would always look at all of a patient’s labs and X-rays myself. In the old days, that would entail walking down to radiology. With the EMR, I could pull the films right to the screen, but if there was any significant pathology, I would review the films with the radiologist. Always. That way I really understood what was going on. Towards the end of my career, I was shocked to discover that most people never looked at the films, much less went over them with the radiologist. They just read the report. One aspect of understanding a patient down the drain.
Then I would write up my consult. In the old days, that was by hand, pen and paper, for the assessment and plan. Then dictate the full H&P, call the referring physician, and finally go back to the patient to tell them the plan.
It was a slow, tedious process, usually taking 90 minutes from start to finish. Kind of a pain in the neck to dot the i’s and cross the t’s, but I was always of the opinion that if the day was easy, if the work day was not a pain in the neck, I was not paying attention or doing my job. But the routine was how I acquired information and, in the process, thought about the patient.
With the EMR, I would type the assessment and plan and dictate the H&P using voice recognition. The EMR has features that make the job easier: cut and paste and boilerplate. I never used either. Why? Both stopped my thinking about the case and switched it to thinking about how to use the EMR. And, being a crutch for the lazy, both would lead either to slop in the chart or to the perpetuation of errors. I saw plenty of both in the EMR. By good doctors. But in a busy, very time-pressured day, a quick copy and paste or use of boilerplate gets the work done that much faster.
But I found that active thinking is a limited resource. And for me, if I was focused on the EMR, I was pissing away resources on trying to remember the keyboard shortcut or modifying my boilerplate to fit the patient.
I found interface changes an increasing problem in my final years. I sometimes wondered if I was spending my mental energy on trying to remember Windows, macOS, Linux, Android, all of which were constantly changing, as was the interface of the numerous apps I needed for work. Another nice thing about retirement: fewer computer interfaces to worry about. And last week I changed my gaming laptop from Windows to Linux. Just feel the tension go away.
Now I see that Epic, the EMR I used, is adding AI. Of course it is. And I do not see this as a good thing.
I have been skeptical of AI for a while and rarely use it. As best I can tell, AI is not intelligent, but a fancy-schmancy word-guessing program, albeit an amazing if unreliable one. The few times I have used AI, I received bad information, aka hallucinations, the current euphemism for made-up bullshit. Besides, I have a bias towards reading the source material, not some LLM’s idea of what I am interested in. Too often I found that the summary of a medical paper was in error when compared to the source material. As I will probably note many times, the process of learning and thinking is as important as the outcome.
The Epic website has a chirpy video describing how AI is being integrated into the EMR. It looks dreadful.
One task: summarize the medical record “like a trusted colleague.” Would you trust any colleague whose career is based on the unethical widespread theft of intellectual property and is prone to hallucinations, aka bullshit? Not me. Hell, I never trusted my human colleagues. For consults, I always assumed that everyone else got it wrong, a good attitude to have as a consulting physician. Keeps the mind open for errors. Unfortunately for others, as the only ID doc, I had to be the trusted colleague. Who ICE’s ICE?
Reviewing the chart can be tedious. But the process is important. As you go through the chart, you start to process what is going on with the patient. You might find anomalies. You generate questions. The process helps you to understand, especially in the context of the current medical issues, what is going on. If you hand off the work to an LLM, you have stopped participating in the cognitive processes of being a doctor.
And even worse: AI will listen to the conversation in the exam room then generate the note. AI “distills the conversation” and will soon include orders and diagnosis. AI can also ignore superfluous information. Says who? It is one of the oddities of medicine that the so-called superfluous information may be important.
It just may be me, but the process of putting pen to paper or keyboard to screen has always been critical for finally deciding what I think the patient had and what to do about it. The process of writing the assessment and plan was key for finalizing my thoughts about the patient. Many times I would change my initial ideas or come up with new questions about the case while writing the note. I always found it curious how the process of writing the note (or this blog) crystallizes my understanding. Nothing quite like writing to turn vague thoughts into concrete ideas.
What this AI is going to do is offload all the cognitive processes that make one a doctor, the very processes that improve and grow with use over time. People will always take the path of least resistance. And the result will be doctors who cannot doctor.
You don’t become a better golfer by playing Tiger Woods Golf on the PlayStation, and you are not going to become a better doctor by letting AI do the grunt work.
Because these AI capabilities are built directly into Epic, they draw from information captured throughout the medical record—providing answers and insights grounded in a comprehensive understanding of the patient.
It is the physician’s job to provide answers and insights grounded in a comprehensive understanding of the patient. If you let AI do all the work, you are going to become an enshittified doctor.
As an aside, towards the end of my career there was a push by the housestaff for work-life balance. No such thing in medicine if you want to be more than a mediocre physician. Work is life. Fortunately, it is also immensely fun and satisfying work. At least for me.
The EMR is already a time suck
Using bedside or point-of-care systems increased documentation time of physicians by 17.5%. In comparison, the use of central station desktops for computerized provider order entry (CPOE) was found to be inefficient, increasing the work time from 98.1% to 328.6% of physicians’ time per working shift (weighted average of CPOE-oriented studies, 238.4%).
AI coding? AI makes coders work longer.
We find that when developers use AI tools, they take 19% longer than without—AI makes them slower.
So add 19% to the above? Ohh, glad I retired.
For cognitive skills? Makes people dumber.
The effects of AI on cognitive development are already being identified in schools across the United States. In a report titled, “Generative AI Can Harm Learning,” researchers at the University of Pennsylvania found that students who relied on AI for practice problems performed worse on tests compared to students who completed assignments without AI assistance. This suggests that the use of AI in academic settings is not just an issue of convenience, but may be contributing to a decline in critical thinking skills.
Furthermore, educational experts argue that AI’s increasing role in learning environments risks undermining the development of problem-solving abilities. Students are increasingly being taught to accept AI-generated answers without fully understanding the underlying processes or concepts. As AI becomes more ingrained in education, there is a concern that future generations may lack the capacity to engage in deeper intellectual exercises, relying on algorithms instead of their own analytical skills.
Just what we want for our health care providers: AI in the workplace.
Of course, why do this?
AI Charting in Epic helps reduce time spent on documentation, administrative work, and chart navigation—so you can focus more on patient care.
So what is patient care if not understanding their issues by going through the slow process of reviewing their problems and thinking about them? Certainly not going to get there by offloading all the work on to AI. What will this focus on patient care actually be? Referring the patient to another AI?
For me, the cognition required almost daily use or it faded rapidly. More than two or three days off, and I was cognitively out of shape. Just like not exercising. If you offload your thinking to AI, it will not be long before the mind is morbidly obese, unable to get out of bed, covered in oozing bedsores, lying in its own enshittification. There is a mental picture for you.
I really think we are heading into a world where doctors are going to be mediocre at best. There are many articles about how kids today (remember, I am an old geezer) are unable to read books or even sit through a whole movie. You wonder how they are going to master, yes, master, the immense amount of information, all in dense, dry, complicated, technical textbooks, required just to get through medical school, much less become proficient in whatever medical specialty field they choose.
Now? All the cognitive processing will be offloaded into AI. And people will take the path of least resistance, in the process acquiring only the facade of understanding. And if the AI hallucinates? They are not going to know enough to recognize it.
So there will be three kinds of doctors, slowly evolving into one.
A few old geezers like me who will mostly avoid AI. I say mostly: I wouldn’t mind if AI generated my billing codes. But that is it. Their medical minds will remain sharp and be wielded like a samurai blade.
Most current MDs will take the path of least resistance and use AI, and their minds will grow increasingly rusty and dull, at some point becoming useless.
And those brought up on AI? Their minds will be plastic sporks. Enshittified from the start.
As mentioned above, AI is trained on the theft of others’ work. I searched a database of works used to train AI, and several of my books were there. When I was in practice, I had an ID app I sold on Android and iPhone. The information was also online for free; still there, in fact. Some a-hole took the online material, put a wrapper around it, and sold my app as their own. I do not see how AI is any different. If you are a content creator, if you have intellectual property you made, you should do nothing to support AI.
And AI is an environmental disaster.
And AI is likely a scam that is going to fry the economy. Thanks for the links, David. I’m soooo optimistic.
EMR Enshittification will lead to doctor enshittification. Physician as AI slop, coming soon to an exam room near you.
And get off my lawn punk.
Addendum. After posting I found the following:
We evaluate several frontier AI agent frameworks on RLI, utilizing a rigorous manual evaluation process to compare AI outputs against the human gold standard. The results indicate that performance on the benchmark is currently near the floor. The best-performing current AI agents achieve an automation rate of 2.5%, failing to complete most projects at a level that would be accepted as commissioned work in a realistic freelancing environment. This demonstrates that despite rapid progress on knowledge and reasoning benchmarks, contemporary AI systems are far from capable of autonomously performing the diverse demands of remote labor.
with
Common failure modes. Our qualitative analysis across roughly 400 evaluations shows that rejections predominantly cluster around the following primary categories of failure:
Technical and File Integrity Issues: Many failures were due to basic technical problems, such as producing corrupt or empty files, or delivering work in incorrect or unusable formats.
Incomplete or Malformed Deliverables: Agents frequently submitted incomplete work, characterized by missing components, truncated videos, or absent source assets.
Quality Issues: Even when agents produce a complete deliverable, the quality of the work is frequently poor and does not meet professional standards.
Inconsistencies: Especially when using AI generation tools, the AI work often shows inconsistencies between deliverable files.
Or a 97.5% failure rate. Great if you are a malpractice lawyer. Not so great if you are a patient of an enshittified doctor.




