Skimming the article, this seems like another case of the explainability problem, no? The conversation with the LLM makes the results "easier to understand" (which is a requirement for real use cases) but loses accuracy. Still, it's good to have more studies confirming this tradeoff.
Link to the study: https://www.nature.com/articles/s41591-025-04074-y
Co-author here and happy to answer questions!