Originally posted by nastle:
As a radiologist... FINALLY someone understands that we aren't the only ones susceptible to being taken out.
If primary care of all kinds could bring back the physical exam, that would be what sets it apart from APPs and AI. Sadly, hospital metrics don't reward good care, just 5-star reviews and throughput.
-
Originally posted by Sigrid:
Very few, but that's because it's not their job. A person can be taught how to read an EKG. AI has been trying to read EKGs for twenty years and still tells me that third-degree heart block is "normal sinus rhythm".
The point is, nobody is saying a cardiologist will be replaced by AI.
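To make the EKG complaint concrete, here is a minimal sketch (hypothetical logic, not any vendor's actual algorithm) of how rate-and-regularity-only rhythm classification calls third-degree heart block "normal sinus rhythm", and how checking AV association catches it:

```python
# Minimal sketch, hypothetical logic only -- not any vendor's actual algorithm.
# A classifier that looks only at ventricular rate and regularity will call
# third-degree (complete) heart block "normal sinus rhythm" whenever the
# escape rhythm happens to be steady and in range, because it never checks
# whether the P waves and QRS complexes are actually associated.

def naive_label(qrs_times):
    """Label a strip from QRS timing alone (the failure mode)."""
    rr = [b - a for a, b in zip(qrs_times, qrs_times[1:])]
    rate = 60 / (sum(rr) / len(rr))
    regular = max(rr) - min(rr) < 0.12          # near-constant R-R intervals
    return "normal sinus rhythm" if regular and 60 <= rate <= 100 else "abnormal"

def av_aware_label(p_times, qrs_times, tol=0.08):
    """Also require a P wave at a consistent PR interval before each QRS."""
    pr_intervals = []
    for q in qrs_times:
        preceding = [p for p in p_times if p < q]
        if not preceding:
            return "abnormal"
        pr_intervals.append(q - preceding[-1])
    if max(pr_intervals) - min(pr_intervals) > tol:  # PR wanders: AV dissociation
        return "possible third-degree AV block"
    return naive_label(qrs_times)

# Complete heart block: sinus P waves at ~75/min, dissociated escape QRS at ~63/min.
p = [i * 0.80 for i in range(12)]
q = [0.31 + i * 0.95 for i in range(8)]
print(naive_label(q))        # -> normal sinus rhythm  (wrong)
print(av_aware_label(p, q))  # -> possible third-degree AV block
```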
-
Originally posted by Brains428:
As a radiologist... FINALLY someone understands that we aren't the only ones susceptible to being taken out.
If primary care of all kinds could bring back the physical exam, that would be what sets it apart from APPs and AI. Sadly, hospital metrics don't reward good care, just 5-star reviews and throughput.
Subjective tests for wellness, not relying on scores. Reimbursement based on time.
-
The biggest threat from AI is not that it will replace physicians in treating patients. It's that companies are trying to turn them into automated billing machines to drain all remaining healthcare dollars. It doesn't matter whether the output is accurate if the billing code is the same.
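As a toy illustration of that worry (hypothetical scoring rules, not real CPT logic): an automated coder that counts documentation elements will bill an inaccurate, template-padded note at the same level as an accurate one, because it never measures clinical correctness:

```python
# Toy illustration, hypothetical coding rules -- not real CPT/E&M logic.
# The coder scores countable documentation features, not clinical accuracy,
# so a wrong-but-well-padded note bills the same as a correct one.

def em_level(note: str) -> str:
    """Pick an E/M-style level from countable documentation features only."""
    elements = sum(kw in note.lower()
                   for kw in ("history", "exam", "labs", "imaging",
                              "differential", "plan", "risk"))
    if elements >= 6:
        return "99215"   # highest-complexity established-patient visit
    if elements >= 4:
        return "99214"
    return "99213"

accurate_note = ("History and exam consistent with CHF exacerbation; labs and "
                 "imaging reviewed; differential considered; plan: diuresis; "
                 "moderate risk.")
inaccurate_note = ("History and exam template auto-inserted; labs and imaging "
                   "auto-cited; differential boilerplate; plan unchanged; "
                   "risk phrase included.")

print(em_level(accurate_note), em_level(inaccurate_note))  # same code either way
```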
-
After all, the term "artificial intelligence" doesn’t delineate specific technological advances. A term like “nanotechnology” classifies technologies by referencing an objective measure of scale, while AI only references a subjective measure of tasks that we classify as intelligent. For instance, the adornment and “deepfake” transformation of the human face, now common on social media platforms like Snapchat and Instagram, was introduced in a startup sold to Google by one of the authors; such capabilities were called image processing 15 years ago, but are routinely termed AI today. The reason is, in part, marketing. Software benefits from an air of magic, lately, when it is called AI. If “AI” is more than marketing, then it might be best understood as one of a number of competing philosophies that can direct our thinking about the nature and use of computation.
A clear alternative to “AI” is to focus on the people present in the system. If a program is able to distinguish cats from dogs, don’t talk about how a machine is learning to see. Instead talk about how people contributed examples in order to define the visual qualities distinguishing “cats” from “dogs” in a rigorous way for the first time. There's always a second way to conceive of any situation in which AI is purported. This matters, because the AI way of thinking can distract from the responsibility of humans.
AI might be achieving unprecedented results in diverse fields, including medicine, robotic control, and language/image processing, or a certain way of talking about software might be in play as a way to not fully celebrate the people working together through improving information systems who are achieving those results. “AI” might be a threat to the human future, as is often imagined in science fiction, or it might be a way of thinking about technology that makes it harder to design technology so it can be used effectively and responsibly. The very idea of AI might create a diversion that makes it easier for a small group of technologists and investors to claim all rewards from a widely distributed effort. Computation is an essential technology, but the AI way of thinking about it can be murky and dysfunctional. At its core, "artificial intelligence" is a perilous belief that fails to recognize the agency of humans.
Jaron Lanier in Wired. I'd urge you to read any of his books instead of getting caught up in false techbro ideology.
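A minimal sketch of the cats-versus-dogs point in the quote above: assuming toy feature vectors standing in for image features, the "machine that learns to see" is just a vote over human-labeled examples:

```python
# Minimal sketch of Lanier's point: the "learning" system is a lookup over
# human-labeled examples. Feature vectors here are toy stand-ins for image
# features; the labels are the human contribution.

from sklearn.neighbors import KNeighborsClassifier

# Each row is a (hypothetical) image's features; each label came from a person.
human_labeled_features = [
    [0.9, 0.1, 0.3],   # pointy ears, short snout, retractable claws...
    [0.8, 0.2, 0.4],
    [0.2, 0.9, 0.8],   # floppy ears, long snout...
    [0.1, 0.8, 0.9],
]
human_labels = ["cat", "cat", "dog", "dog"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(human_labeled_features, human_labels)

# The "AI" answer for a new image is a vote among the nearest human judgments.
print(model.predict([[0.85, 0.15, 0.35]]))  # -> ['cat']
```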
-
Originally posted by AR:
You're probably referring to old technology. All of the most positive studies I've read about this are talking about AI that is still in research stages (i.e., unless you are part of a study, you haven't used it) and is not widely distributed.
Well, then, when it gets widely distributed, I'll start listening to people about AI in medicine.
Look, I'm an academic doc. I do research. I'm NIH funded. And I know quite well that most research never produces anything beyond the "that looks promising" phase. (Mine included.) Getting things to work in medicine is hard. There is undoubtedly a role for machine learning in medicine -- but it's not there yet, and the people who keep talking to me about machine learning a) are mostly spouting hype; and b) have been talking about machine learning in medicine for a long time, and so far all of their predictions about how close we are have been inaccurate. I don't know how many more five year blocks we're going to have where "AI in medicine is five years away", but so far it's been an eternally moving target.
I get that a lot of this thread boils down to, "Primary care APPs are bad at their job, so maybe they can be replaced by machines", but the thing is, machines are also bad at their jobs, and it's proving a lot harder to teach a machine than to teach a person.
-
Originally posted by legobikes:
https://www.wired.com/story/opinion-...-a-technology/
Jaron Lanier in Wired. I'd urge you to read any of his books instead of getting caught up in false techbro ideology.
Seems like a “Saul trying to become Paul” case.
-
Originally posted by Sigrid:
Well, then, when it gets widely distributed, I'll start listening to people about AI in medicine.
Look, I'm an academic doc. I do research. I'm NIH funded. And I know quite well that most research never produces anything beyond the "that looks promising" phase. (Mine included.) Getting things to work in medicine is hard. There is undoubtedly a role for machine learning in medicine -- but it's not there yet, and the people who keep talking to me about machine learning a) are mostly spouting hype; and b) have been talking about machine learning in medicine for a long time, and so far all of their predictions about how close we are have been inaccurate. I don't know how many more five year blocks we're going to have where "AI in medicine is five years away", but so far it's been an eternally moving target.
I get that a lot of this thread boils down to, "Primary care APPs are bad at their job, so maybe they can be replaced by machines", but the thing is, machines are also bad at their jobs, and it's proving a lot harder to teach a machine than to teach a person.
If both (APNs and AI) are bad, then it's better to have machines; they can run longer, without huge salaries and no PTO.
Quantity has a quality of its own.
-
AI and machines are productivity tools. Just like typewriters and copying machines were replaced by PCs. Not to mention that spreadsheets replaced a ton of punch cards and adding machines; now it's all on your phone. No more operators and no more real secretaries taking shorthand. Tools to make your judgments easier.