AI-based mid-level provider


  • #16
    Originally posted by nastle View Post

    How many primary care midlevels can read an EKG?
    Very few, but that's because it's not their job. A person can be taught how to read an EKG. AI has been trying to read EKGs for twenty years and still tells me that third-degree heart block is "normal sinus rhythm".



    • #17
      As a radiologist.... FINALLY someone understands that we aren't the only ones susceptible to being taken out.

      If primary care of all kinds could bring back the physical exam, that would be what sets it apart from APPs and AI. Sadly, hospital metrics don't reward good care, just 5-star reviews and throughput.



      • #18
        Originally posted by Sigrid View Post

        Very few, but that's because it's not their job. A person can be taught how to read an EKG. AI has been trying to read EKGs for twenty years and still tells me that third-degree heart block is "normal sinus rhythm".
        I can show you many primary care MDs (20 years in practice) who can't properly read an EKG. Don't even get me started on APNs.
        The point is, nobody is saying a cardiologist will be replaced by AI.



        • #19
          Originally posted by Brains428 View Post
          As a radiologist.... FINALLY someone understands that we aren't the only ones susceptible to being taken out.

          If primary care of all kinds could bring back the physical exam, that would be what sets it apart from APPs and AI. Sadly, hospital metrics don't reward good care, just 5-star reviews and throughput.
          Physical exam
          Subjective tests for wellness, not relying on scores
          Reimbursement based on time


          • #20

            The biggest threat from AI is not that it will replace physicians in treating patients; it’s that companies are trying to turn them into automated billing machines to drain all remaining healthcare dollars. It doesn’t matter whether the output is accurate if the billing code is the same.



            • #21
              Originally posted by Sigrid View Post

              Very few, but that's because it's not their job. A person can be taught how to read an EKG. AI has been trying to read EKGs for twenty years and still tells me that third-degree heart block is "normal sinus rhythm".
              You're probably referring to old technology. All of the most positive studies I've read about this are talking about AI that is still in research stages (i.e., unless you are part of a study, then you haven't used it) and is not widely distributed.



              • #22
                Originally posted by Dusn View Post
                The biggest threat from AI is not that it will replace physicians in treating patients; it’s that companies are trying to turn them into automated billing machines to drain all remaining healthcare dollars. It doesn’t matter whether the output is accurate if the billing code is the same.
                Remote patient monitoring is like that too.



                • #23
                  After all, the term "artificial intelligence" doesn’t delineate specific technological advances. A term like “nanotechnology” classifies technologies by referencing an objective measure of scale, while AI only references a subjective measure of tasks that we classify as intelligent. For instance, the adornment and “deepfake” transformation of the human face, now common on social media platforms like Snapchat and Instagram, was introduced in a startup sold to Google by one of the authors; such capabilities were called image processing 15 years ago, but are routinely termed AI today. The reason is, in part, marketing. Software benefits from an air of magic, lately, when it is called AI. If “AI” is more than marketing, then it might be best understood as one of a number of competing philosophies that can direct our thinking about the nature and use of computation.

                  A clear alternative to “AI” is to focus on the people present in the system. If a program is able to distinguish cats from dogs, don’t talk about how a machine is learning to see. Instead talk about how people contributed examples in order to define the visual qualities distinguishing “cats” from “dogs” in a rigorous way for the first time. There's always a second way to conceive of any situation in which AI is purported. This matters, because the AI way of thinking can distract from the responsibility of humans.

                  AI might be achieving unprecedented results in diverse fields, including medicine, robotic control, and language/image processing, or a certain way of talking about software might be in play as a way to not fully celebrate the people working together through improving information systems who are achieving those results. “AI” might be a threat to the human future, as is often imagined in science fiction, or it might be a way of thinking about technology that makes it harder to design technology so it can be used effectively and responsibly. The very idea of AI might create a diversion that makes it easier for a small group of technologists and investors to claim all rewards from a widely distributed effort. Computation is an essential technology, but the AI way of thinking about it can be murky and dysfunctional.
                  https://www.wired.com/story/opinion-...-a-technology/

                  Jaron Lanier in Wired. I'd urge you to read any of his books instead of getting caught up in false techbro ideology.



                  • #24
                    Originally posted by AR View Post

                    You're probably referring to old technology. All of the most positive studies I've read about this are talking about AI that is still in research stages (i.e., unless you are part of a study, then you haven't used it) and is not widely distributed.
                    Well, then, when it gets widely distributed, I'll start listening to people about AI in medicine.

                    Look, I'm an academic doc. I do research. I'm NIH funded. And I know quite well that most research never produces anything beyond the "that looks promising" phase. (Mine included.) Getting things to work in medicine is hard. There is undoubtedly a role for machine learning in medicine -- but it's not there yet, and the people who keep talking to me about machine learning a) are mostly spouting hype; and b) have been talking about machine learning in medicine for a long time, and so far all of their predictions about how close we are have been inaccurate. I don't know how many more five year blocks we're going to have where "AI in medicine is five years away", but so far it's been an eternally moving target.

                    I get that a lot of this thread boils down to, "Primary care APPs are bad at their job, so maybe they can be replaced by machines," but the thing is, machines are also bad at their jobs, and it's proving a lot harder to teach a machine than to teach a person.



                    • #25
                      Originally posted by legobikes View Post

                      https://www.wired.com/story/opinion-...-a-technology/

                      Jaron Lanier in Wired. I'd urge you to read any of his books instead of getting caught up in false techbro ideology.
                      https://www.smithsonianmag.com/innov...web-165260940/
                      seems like a “Saul trying to become Paul” case



                      • #26
                        Originally posted by Dusn View Post
                        The biggest threat from AI is not that it will replace physicians in treating patients; it’s that companies are trying to turn them into automated billing machines to drain all remaining healthcare dollars. It doesn’t matter whether the output is accurate if the billing code is the same.
                        Now that is an argument I actually agree with.



                        • #27
                          Originally posted by Sigrid View Post

                          Well, then, when it gets widely distributed, I'll start listening to people about AI in medicine.

                          Look, I'm an academic doc. I do research. I'm NIH funded. And I know quite well that most research never produces anything beyond the "that looks promising" phase. (Mine included.) Getting things to work in medicine is hard. There is undoubtedly a role for machine learning in medicine -- but it's not there yet, and the people who keep talking to me about machine learning a) are mostly spouting hype; and b) have been talking about machine learning in medicine for a long time, and so far all of their predictions about how close we are have been inaccurate. I don't know how many more five year blocks we're going to have where "AI in medicine is five years away", but so far it's been an eternally moving target.

                          I get that a lot of this thread boils down to, "Primary care APPs are bad at their job, so maybe they can be replaced by machines," but the thing is, machines are also bad at their jobs, and it's proving a lot harder to teach a machine than to teach a person.
                          I agree that there is too much inertia in the medical field and that machines are overhyped because they are considered sexy or futuristic. But if both (APNs and AI) are bad, then it's better to have machines: they can run longer, without huge salaries and with no PTO.
                          Quantity has a quality of its own.



                          • #28
                            Originally posted by nastle View Post

                            https://www.smithsonianmag.com/innov...web-165260940/
                            seems like a Saul trying to become Paul case
                            When encountering an idea, the expedient thing to do is to dismiss it with a snap judgment of the person communicating it.



                            • #29
                              Originally posted by nastle View Post
                              If both (APNs and AI) are bad, then it's better to have machines: they can run longer, without huge salaries and with no PTO.
                              Quantity has a quality of its own.
                              We are going to have to disagree on this.



                              • #30
                                AI and machines are productivity tools, just as PCs replaced typewriters and copying machines. Spreadsheets replaced a ton of punch cards and adding machines, and now all of it is on your phone. No more operators, and no more real secretaries taking shorthand. They are tools to make your judgments easier.

