• deliriousdreams@fedia.io · 8 hours ago

    I used that as a single example of how AI is not actually doing as good a job with medical diagnostics as articles portray, but you should probably read the link I posted earlier as well as the one at the bottom of this comment.

    In using AI to augment medical diagnostics, we are literally seeing a decline in the abilities of diagnosticians. That means doctors are becoming worse at the job they are trained to do, which is dangerous: they are the people most likely to be able to quality-assure the results the AI spits out, and they are becoming less able to act as a check and balance against AI when it’s being used.

    This isn’t meant to be an attack on the tool, just to point out that the use cases for these AI tools in medical fields are also being exaggerated or misrepresented, and nobody seems to be paying attention to that part.

    I would also caution you to ask yourself whether screening everyone in this way would be a detriment, generating a lot of false positives and causing more work for doctors whose workloads are already astronomical.

    I understand that it may seem like a better result in the long run, because more people may have their medical conditions caught earlier, which leads to better treatment outcomes. But that isn’t a guarantee, and it may also lead to worse outcomes, especially if the decline in doctors’ diagnostic ability continues or accelerates.

    What happens when the AI and the doctor both get it wrong?

    https://hms.harvard.edu/news/researchers-discover-bias-ai-models-analyze-pathology-samples

    • Sterile_Technique@lemmy.world · 6 hours ago

      Recent nursing school graduate here! We had a lot of assignments to find and present data on some disease process, drug, intervention, etc. Actually finding credible sources, picking out the data we need, and putting it on paper is a super tedious process, and my classmates LOVED zipping through that stuff with some AI shit. And they’d get 100s on their assignments, and everything was just rainbows and unicorn farts… up until test day, when they’d fail or barely pass. Now several of them are struggling to pass the NCLEX.

      Drives me insane. Like, you mother fuckers aren’t here to get a grade, you’re here to learn this shit so you know what to do when you see it in whatever hospital hires your dumb ass.

      Definitely doesn’t paint a pretty picture about the future of medicine.

       


      • faythofdragons@slrpnk.net · 4 hours ago

        Drives me insane. Like, you mother fuckers aren’t here to get a grade, you’re here to learn this shit so you know what to do when you see it in whatever hospital hires your dumb ass.

        This is happening because the job market is absolutely fucked. Students are under the impression that grades are what will drive job prospects, because nobody is hiring on merit any more.

        My SIL has been a nurse in the cardiac surgery department for nearly a decade, and even her hospital is now using AI to screen potential new hires.

        We’re so cooked.

      • deliriousdreams@fedia.io · 6 hours ago

        My hope is that the ones who don’t build the skills to work in medicine don’t pass, because at least then they don’t get to make decisions that affect a person’s health (even in non-life-or-death situations).

        But my trust in schools is waning as more and more of them sign up for ChatGPT and other LLMs, essentially forcing them on students.

        The entire schooling system, including post-secondary education, is handling this pretty poorly from what I can see.

        Using LLMs to detect plagiarism, to detect whether something was written by an LLM, to detect cheating, to write lesson plans, to offload pretty significant portions of your own job onto, all while encouraging students to use them without safeguards to make sure they do their own work and their own thinking.

        I can’t imagine going to school in this day and age and having so many adults speak out of both sides of their mouths about LLMs this way.

        How can you be a teacher or professor who assigns classwork written entirely by an AI and at the same time tells students to use it “responsibly”?

        We don’t even teach students its pitfalls. We don’t show them how to use it responsibly. We don’t explain how to spot it, or which tools can keep us from falling victim to the worst parts of it.

    • MissesAutumnRains@lemmy.blahaj.zone · 6 hours ago

      I read both articles you linked, but I’m not really seeing how they support your point. The first seemed to support the idea that healthcare staff would welcome more seamless, user-friendly AI tools in the field, and the second discussed biases in the tools selected for cancer diagnoses and a tool used to reduce those biases. Am I misunderstanding what you’re saying somewhere?

      Also, with regard to the reduction in diagnosticians’ accuracy when they use AI, I would need to see the specific article to be sure, but if it’s the one that was posted across reddit a few months back, I read through that one as well. It seemed to agree with a similar article about students writing papers with and without ChatGPT (group A writes with it, group B writes without it, and afterwards both groups are asked to write without the LLM; group B’s essays were shown to be better. This is a hugely reductive description of the experiment, but it gets the idea across).

      Again, it makes sense that if you use a tool to facilitate an action, that tool replaces the skill and you get “rusty”. It does not mean that the mere existence of the tool would reduce skill in those who do not use it, though. My suggestion of using it as a screening tool wouldn’t affect the diagnostician’s skill unless they also used it, which sorta defeats the purpose of having them as a human check on the process after a screening flag.

      I can’t speak to your other points, as they’re hypothetical. Obviously, I wouldn’t advocate for an inaccurate tool that causes an already overworked field to take on more work. I’m only suggesting that ML is a tool with real use cases that can supplement current processes to improve outcomes. These tools can be, and are being, improved constantly. If they’re employed thoughtfully, I just think they can be a huge benefit.

      • deliriousdreams@fedia.io · 6 hours ago

        First question: what happens when the older cohort who don’t use AI die out? We are seeing an increase, not a decrease, in AI adoption in these fields. And that increase is compounded by people who never learn the underlying skills in the first place, because they use AI to do the work that gets them through the schooling that would have taught them those skills.

        Second question: did you read the parts about how news media are portraying the studies, or the parts about studies using minuscule (entirely too small) sample sizes, or the parts where the studies aren’t being peer reviewed before the articles about them spread misinformation?

        The tools aren’t ready for prime time use, but they are being used in medicine.

        You seem to have glossed right over the detriments that doctors and researchers are already experiencing with generative AI LLMs (you keep saying ML, and that’s not exactly the subject we’re talking about here), and over the fact that fixing their output takes extensive experience and a knowledgeable expert, in a world where those same LLMs are contributing to a significant decline in the number of people who can do that. Correcting LLM outputs will happen less and less over time, because it requires people to correct them, people to create the data sets, and people with expert knowledge of those data sets and subjects to verify the outputs and fix them.

        I can appreciate you not wanting to speak on a hypothetical, but that doesn’t ring true to me either: it means either you haven’t thought about the implications of this tech and its effect on the industry being discussed, or you have and you’re ignoring them.

        Not weighing the huge benefits of a tech against its detriments is dangerous and a very naive way to look at the world.

        • MissesAutumnRains@lemmy.blahaj.zone · 5 hours ago

          For your first question, what you’re describing is a problem with education and staffing, not a problem with the tool itself. I’m not suggesting you keep around ‘one old man who hates AI’; my pitch is that you bar the use of AI for the human-level checks.

          For your second, yes, I saw the part about how the news and media are representing AI in healthcare, but I don’t really see how the news or media are relevant here. Could you explain this a bit for me?

          I don’t intend to gloss over the issues with generative AI/LLMs; I tried to be specific in separating ML from them in my original comment, where I said LLMs in their public-facing versions (ChatGPT, Claude, whatever) aren’t very useful.

          The original comment I replied to asked “is ‘AI’ even useful” (etc.) but also mentioned LLMs. I was trying to make the point that LLMs aren’t the only type of AI, and that other kinds can be employed to great effect. If that was unclear, that’s my bad, but that was my intention.

          The reason I don’t want to engage with a hypothetical is that I could just as easily counter with “what if it diagnoses at a 100% success rate? What if fear of losing skills results in doctors never wanting to use AI, resulting in more deaths?” Neither hypothetical is really very helpful to the discussion. I promise you I’ve thought about this a lot (though again, I’m not an expert, nor am I in the field); more importantly, I have friends finishing doctorates in bioinformatics from whom I get some insight, and I’m, at least at this point, convinced of the benefits.