Wo ich schon mal da bin

JuDerm BVDD
Since 09/2024, 4 episodes

03 Wo ich schon mal da bin - Prof. Dr. Kerstin Prechel

AI, dystopias and cute little seals

01.11.2024 54 min

Summary & Show Notes

In our third episode, we talk to AI and ethics expert Prof. Dr. Kerstin Prechel about the new challenges that the advance of AI technologies holds in store for physicians, and indeed for all of us. The episode also covers little robots with knitted hats, cute seals and Hollywood - somehow.
An episode that definitely leaves you with more questions than you had before ...

Relevant studies on AI and medicine:
(Non-scientific book recommendation: Marc-Uwe Kling, "Views")
Abdul-Kader, S. A., & Woods, J. (2015).
Survey on chatbot design techniques in speech conversation systems. International Journal of Advanced Computer Science and Applications, 6(7), 72-80.
Alvarado, R. (2022).
What kind of trust does AI deserve, if any? AI and Ethics. DOI: 10.1007/s43681-022-00224-x.
Amann, J., Vetter, D., Blomberg, S.N., Christensen, H.C., Coffee, M., Gerke, S., Gilbert, T.K., Hagendorff, T., Holm, S., Livne, M., Spezzatti, A., Strümke, I., Zicari, R.V., & Madai, V.I. (2022).
To explain or not to explain? Artificial intelligence explainability in clinical decision support systems. PLOS Digital Health, 1(2), e0000016. DOI: 10.1371/journal.pdig.0000016 [Open Access].
Arbelaez Ossa, L., Starke, G., Lorenzini, G., Vogt, J.E., Shaw, D.M., & Elger, B.S. (2022).
Re-focusing explainability in medicine. Digital Health, 8. DOI: 10.1177/20552076221074488 [Open Access].
Babushkina, D. (2022).
Are we justified attributing a mistake in diagnosis to an AI diagnostic system? AI and Ethics. DOI: 10.1007/s43681-022-00189-x [Open Access].
Baile, W. F., Buckman, R., Lenzi, R., Glober, G., Beale, E. A., & Kudelka, A. P. (2000).
SPIKES—A six-step protocol for delivering bad news: Application to the patient with cancer. The Oncologist, 5(4), 302-311.
Benrimoh, D., Hawco, C., & Fratila, R. (2020).
Using artificial intelligence to support patients facing cancer: From chatbot to clinical decision-making tools. Current Oncology Reports, 22(11), 1-8.
Bickmore, T. W., & Giorgino, T. (2006).
Health dialog systems for patients and consumers. Journal of Biomedical Informatics, 39(5), 556-571.
Bickmore, T. W., & Schulman, D. (2011).
Practical approaches to comforting patients with relational agents. Interacting with Computers, 23(3), 279-288.
Bleher, H., & Braun, M. (2022).
Diffused responsibility: Attributions of responsibility in the use of AI-driven clinical decision support systems. AI and Ethics, 2(4), 747-761. DOI: 10.1007/s43681-022-00135-x [Open Access].
Chen, H., Gomez, C., Huang, C.-M., & Unberath, M. (2022).
Explainable medical imaging AI needs human-centered design: Guidelines and evidence from a systematic review. npj Digital Medicine, 5, 156. DOI: 10.1038/s41746-022-00699-2 [Open Access].
Combi, C., Amico, B., Bellazzi, R., Holzinger, A., Moore, J.H., Zitnik, M., & Holmes, J.H. (2022).
A manifesto on explainability for artificial intelligence in medicine. Artificial Intelligence in Medicine, 133, 102423. DOI: 10.1016/j.artmed.2022.102423 [Open Access].
Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017).
Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115-118.
Floridi, L., & Cowls, J. (2019).
A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1), 1-15.
Friedrich, A.B., Mason, J., & Malone, J.R. (2022).
Rethinking explainability: Toward a postphenomenology of black-box artificial intelligence in medicine. Ethics and Information Technology, 24, 8. DOI: 10.1007/s10676-022-09631-4.
Funer, F. (2022).
Accuracy and interpretability: Struggling with the epistemic foundations of machine learning-generated medical information and their practical implications for the doctor-patient relationship. Philosophy & Technology, 35(5). DOI: 10.1007/s13347-022-00505-7 [Open Access].
Funer, F. (2022).
The deception of certainty: How non-interpretable machine learning outcomes challenge the epistemic authority of physicians. A deliberative-relational approach. Medicine, Health Care and Philosophy, 25, 167–178. DOI: 10.1007/s11019-022-10076-1 [Open Access].
Gardner, A., Smith, A.L., Steventon, A., Coughlan, E., & Oldfield, M. (2022).
Ethical funding for trustworthy AI: Proposals to address the responsibility of funders to ensure that projects adhere to trustworthy AI practice. AI and Ethics, 2, 277–291.
Grote, T., & Berens, P. (2020).
On the ethics of algorithmic decision-making in healthcare. Journal of Medical Ethics, 46(3), 205-211.
Hallowell, N., Badger, S., Sauerbrei, A., Nellåker, C., & Kerasidou, A. (2022).
“I don’t think people are ready to trust these algorithms at face value”: Trust and the use of machine learning algorithms in the diagnosis of rare disease. BMC Medical Ethics, 23, 112. DOI: 10.1186/s12910-022-00842-4 [Open Access].
Hasani, N., Morris, M.A., Rahmim, A., Summers, R.M., Jones, E., Siegel, E., & Saboury, B. (2022).
Trustworthy artificial intelligence in medical imaging. PET Clinics, 17(1), 1–12. DOI: 10.1016/j.cpet.2021.09.007.
Hatherley, J., Sparrow, R., & Howard, M. (2022).
The virtues of interpretable medical artificial intelligence. Cambridge Quarterly of Healthcare Ethics. DOI: 10.1017/S0963180122000305 [Open Access].
Herzog, C. (2022).
On the risk of confusing interpretability with explicability. AI and Ethics, 2, 219–225.
Herzog, C. (2022).
On the ethical and epistemological utility of explicable AI in medicine. Philosophy & Technology, 35, 50. DOI: 10.1007/s13347-022-00546-y [Open Access].
Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, H., Ma, S., ... & Wang, Y. (2017).
Artificial intelligence in healthcare: Past, present and future. Stroke and Vascular Neurology, 2(4), 230-243.
Jobin, A., Ienca, M., & Vayena, E. (2019).
The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399.
Kawamleh, S. (2022).
Against explainability requirements for ethical artificial intelligence in health care. AI and Ethics. DOI: 10.1007/s43681-022-00212-1.
Kemp, H., Freyer, N., & Nagel, S.K. (2022).
Justice and the normative standards of explainability in healthcare. Philosophy & Technology, 35, 100. DOI: 10.1007/s13347-022-00598-0 [Open Access].
Kerasidou, C., Kerasidou, A., Buscher, M., & Wilkinson, S. (2022).
Before and beyond trust: Reliance in medical AI. Journal of Medical Ethics, 48(11), 852–856. DOI: 10.113
Kiseleva, A., Kotzinos, D., & De Hert, P. (2022).
Transparency of AI in healthcare as a multilayered system of accountabilities: Between legal requirements and technical limitations. Frontiers in Artificial Intelligence, 5, 879603. DOI: 10.3389/frai.2022.879603.
Lütge, C. (2020).
Ethik der Künstlichen Intelligenz. Springer.
Lütge, C., & Maas, J. (2021).
The Ethics of AI and Robotics: A German Perspective. In Ethics of Artificial Intelligence and Robotics: Fundamentals and Applications (pp. 33-51). Springer.
McDougall, R. J. (2019).
Computer knows best? The need for value-flexibility in medical AI. Journal of Medical Ethics, 45(3), 156-160.
McTear, M. F., Callejas, Z., & Griol, D. (2016).
The role of conversational agents in healthcare: A literature review. Journal of Medical Systems, 40(7), 1-12.
Milne-Ives, M., de Cock, C., Lim, E., Shehadeh, M. H., de Pennington, N., Mole, G., & Meinert, E. (2020).
The effectiveness of artificial intelligence conversational agents in health care: Systematic review. Journal of Medical Internet Research, 22(10), e20346.
Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2020).
From what to how: An initial review of publicly available AI ethics tools, methods, and research to translate principles into practices. Science and Engineering Ethics, 26(4), 2141-2168.
Ott, T., & Dabrock, P. (2022).
Transparent human – (non-)transparent technology? The Janus-faced call for transparency in AI-based health care technologies. Frontiers in Genetics, 13, 902960. DOI: 10.3389/fgene.2022.902960 [Open Access].
Petch, J., Di, S., & Nelson, W. (2022).
Opening the black box: The promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology, 38(2), 204–213. DOI: 10.1016/j.cjca.2021.09.004 [Open Access].
Rajpurkar, P., Irvin, J., Zhu, K., Yang, B., Mehta, H., Duan, T., ... & Ng, A. Y. (2017).
CheXNet: Radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv preprint arXiv:1711.05225.
Salahuddin, Z., Woodruff, H.C., Chatterjee, A., & Lambin, P. (2022).
Transparency of deep neural networks for medical image analysis: A review of interpretability methods. Computers in Biology and Medicine, 140, 105111. DOI: 10.1016/j.compbiomed.2021.105111 [Open Access].
Sand, M., Durán, J.M., & Jongsma, K.R. (2022).
Responsibility beyond design: Physicians’ requirements for ethical medical AI. Bioethics, 36(2), 162–169. DOI: 10.1111/bioe.12887 [Open Access].
Schmitz, R., Werner, R., Repici, A., Bisschops, R., Meining, A., Zornow, M., Messmann, H., Hassan, C., Sharma, P., & Rösch, T. (2022).
Artificial intelligence in GI endoscopy: Stumbling blocks, gold standards and the role of endoscopy societies. Gut, 71(3), 451–454. DOI: 10.1136/gutjnl-2020-323115.
Shickel, B., Tighe, P. J., Bihorac, A., & Rashidi, P. (2018).
Deep EHR: A survey of recent advances in deep learning techniques for electronic health record (EHR) analysis. IEEE Journal of Biomedical and Health Informatics, 22(5), 1589-1604.
Starke, G., van den Brule, R., Elger, B.S., & Haselager, P. (2022).
Intentional machines: A defence of trust in medical artificial intelligence. Bioethics, 36, 154–161.
Starke, G., & Ienca, M. (2022).
Misplaced trust and distrust: How not to engage with medical artificial intelligence. Cambridge Quarterly of Healthcare Ethics. DOI: 10.1017/S0963180122000445 [Open Access].
Topol, E. J. (2019).
High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25(1), 44-56.
Ursin, F., Timmermann, C., & Steger, F. (2022).
Explicability of artificial intelligence in radiology: Is a fifth bioethical principle conceptually necessary? Bioethics, 36(2), 143–153. DOI: 10.1111/bioe.12918 [Open Access].
Verdicchio, M., & Perin, A. (2022).
When doctors and AI interact: On human responsibility for artificial risks. Philosophy & Technology, 35, 11. DOI: 10.1007/s13347-022-00506-6 [Open Access].
Wadden, J.J. (2022).
Defining the undefinable: The black box problem in healthcare artificial intelligence. Journal of Medical Ethics, 48(10), 764–768. DOI: 10.1136/medethics-2021-107529.
Winter, P., & Carusi, A. (2022).
‘If you’re going to trust the machine, then that trust has got to be based on something’: Validation and the co-constitution of trust in developing artificial intelligence (AI) for the early diagnosis of pulmonary hypertension (PH). Science & Technology Studies, 35(4), 58–77. DOI: 10.23987/sts.102198 [Open Access].
Winter, P.D., & Carusi, A. (2022).
(De)troubling transparency: Artificial intelligence (AI) for clinical applications. Medical Humanities. DOI: 10.1136/medhum-2021-012318.
Yoon, C.H., Torrance, R., & Scheinerman, N. (2022).
Machine learning in medicine: Should the pursuit of enhanced interpretability be abandoned? Journal of Medical Ethics, 48(9), 581–585. DOI: 10.1136/medethics-2020-107102 [Open Access].
Yu, K. H., Beam, A. L., & Kohane, I. S. (2018).
Artificial intelligence in healthcare. Nature Biomedical Engineering, 2(10), 719-731.