EMPATHY IN THE ERA OF DIGITAL MEDICINE: CAN ALGORITHMS REPLACE HUMAN CONTACT?

Cite as:
EMPATHY IN THE ERA OF DIGITAL MEDICINE: CAN ALGORITHMS REPLACE HUMAN CONTACT? // Universum: медицина и фармакология : электрон. научн. журн. Jarkenbekova D. [и др.]. 2026. 3(132). URL: https://7universum.com/ru/med/archive/item/22142 (accessed: 12.03.2026).

 

ABSTRACT

Background: The rapid growth of Artificial Intelligence (AI) and digital health technologies has significantly transformed modern healthcare systems. While AI demonstrates superior capabilities in data-intensive tasks like medical imaging and predictive analytics, its integration into the humanistic dimension of medicine—specifically clinical empathy—remains a subject of intense debate.

Objective: This review aims to evaluate the multidimensional relevance of empathy in medicine and determine whether algorithmic systems can authentically replicate or replace human empathy in clinical practice.

Methods: The paper examines the conceptual framework of empathy as a multidimensional construct, incorporating psychological and neurobiological perspectives. It synthesizes recent empirical evidence, including cross-sectional studies and systematic reviews, regarding AI-driven communication, natural language processing, and the clinical outcomes associated with empathic care.

Key Findings: Research indicates that AI can successfully simulate cognitive and linguistic markers of empathy, often producing responses rated higher in quality and empathy than those of physicians in structured, text-based settings. However, AI lacks the neurobiological foundations, "embodied neural processes," and subjective emotional resonance essential for genuine affective empathy. Furthermore, AI systems carry zero legal or ethical responsibility for clinical outcomes and risk creating "echo chambers" or "digital safety behaviors" in mental health contexts.

Conclusion: Algorithms can provide a "functional simulation" of empathy but cannot replace the "genuine emotional process" and authentic human presence required for a therapeutic alliance. The future role of AI in healthcare should be to augment human empathy by reducing administrative burdens, thereby allowing clinicians more time for direct patient engagement.


 

Keywords: Digital medicine, Artificial intelligence, Clinical empathy, Patient-centered care, Telemedicine, Cognitive empathy, Affective empathy, Chatbot, Clinical Decision Support Systems, AI-generated communication


 

Introduction. Empathy is recognized as a fundamental component of effective medical practice. Defined as the ability to understand and share the feelings and perspectives of another individual while maintaining professional objectivity, it represents a cornerstone of patient-centered care. In clinical settings, empathy facilitates meaningful communication and fosters mutual trust between physicians and patients. As healthcare systems evolve and integrate advanced technologies, the humanistic dimension of medicine remains critically important. Beyond its ethical significance, empathy plays a measurable role in improving clinical outcomes, strengthening the doctor–patient relationship, enhancing diagnostic accuracy, and reducing professional burnout.

This paper examines the multidimensional relevance of empathy in medicine and its importance in contemporary healthcare systems increasingly influenced by technological advancements.

The rapid growth of Artificial Intelligence and digital health technologies has significantly changed modern healthcare systems. Advances in computational power, big data analytics, and connectivity have enabled the integration of intelligent systems into clinical practice, research, and public health management.

Artificial Intelligence refers to computer systems capable of performing tasks that typically require human intelligence, such as learning, reasoning, pattern recognition, and decision-making. In healthcare, AI applications include medical imaging and diagnostics, predictive analytics, personalized medicine, and drug discovery. For example, AI algorithms enhance the interpretation of radiological images (e.g., X-rays, CT, MRI), improving early detection of diseases such as cancer and tuberculosis, while machine learning models analyze large datasets to predict disease risk, treatment outcomes, and hospital readmissions. AI also accelerates pharmaceutical research by identifying potential drug targets and predicting molecular interactions. In addition, digital medicine encompasses a broad range of technologies designed to improve healthcare delivery and patient outcomes, including telemedicine and telehealth, which enable remote clinical consultations and increase access to healthcare, especially in rural or underserved areas.

Conceptual Framework and Theoretical Background

Empathy is widely regarded as a foundational element of human interaction, particularly in healthcare. Rather than being a single ability, empathy is better understood as a multidimensional construct that includes both cognitive and affective components.

Cognitive empathy refers to the capacity to understand another person’s internal state — their thoughts, emotions, and perspective — without necessarily sharing those feelings. In medicine, this dimension enables clinicians to interpret patients’ concerns accurately, recognize emotional cues, and contextualize symptoms within the patient’s lived experience.

Affective empathy, in contrast, involves an emotional response to another person’s feelings. It reflects the ability to resonate emotionally with another individual’s suffering or distress. While affective attunement can strengthen the therapeutic bond, excessive emotional identification may lead to emotional exhaustion. Therefore, clinical empathy is often conceptualized as emotionally informed but cognitively regulated understanding.

This distinction is particularly relevant when discussing digital medicine. Algorithms may simulate perspective-taking through data processing, but whether they can replicate affective resonance remains an open question.

Psychological and Neurobiological Foundations of Empathy

From a psychological standpoint, empathy emerges through the interaction of emotional processing, perspective-taking, and self–other differentiation. Effective empathy requires the ability to recognize another person’s state while maintaining awareness of one’s own boundaries. Without this regulatory capacity, emotional contagion may replace constructive empathic engagement.

Neuroscientific research suggests that empathy is supported by partially distinct but interconnected neural systems. Emotional sharing is associated with activation in regions such as the anterior insula and anterior cingulate cortex, which are involved in processing pain and affective states. Meanwhile, cognitive perspective-taking engages higher-order cortical areas, including the medial prefrontal cortex and temporoparietal junction.

Importantly, these systems are dynamically regulated. Clinicians, for example, may show neural activation related to patients’ pain while simultaneously engaging regulatory circuits that prevent overwhelming distress. This neurobiological balance allows professionals to remain compassionate without becoming emotionally incapacitated.

In contrast, artificial intelligence systems operate through pattern recognition and probabilistic modeling. They lack embodied neural processes, affective experience, and self–other representation. Thus, while AI may approximate certain cognitive aspects of empathy (e.g., identifying emotional language), it does not possess neurobiological mechanisms of emotional resonance.

Impact of Empathy on Clinical Outcomes

Empathy in healthcare extends beyond ethical ideals; it has measurable clinical consequences. Research consistently demonstrates associations between physician empathy and improved patient outcomes.

Empathic communication has been linked to greater patient satisfaction, improved adherence to treatment recommendations, and reduced levels of anxiety. In chronic disease management, higher physician empathy has been associated with improved metabolic control and fewer acute complications. These findings suggest that empathy influences not only patient experience but also physiological and behavioral outcomes.

Several mechanisms may explain this relationship. First, empathy enhances trust, which strengthens the therapeutic alliance. Second, patients who feel understood are more likely to disclose relevant information, improving diagnostic accuracy. Third, emotional validation may reduce stress responses, potentially influencing biological pathways.

In the context of digital medicine, these findings raise a critical question: if empathy contributes directly to health outcomes, can algorithmic systems — even highly accurate ones — replicate the relational mechanisms through which empathy exerts its therapeutic effect?

Definition and Types of Medical Algorithms in Digital Healthcare

A medical algorithm can be defined as a structured computational procedure designed to support clinical reasoning, diagnosis, or treatment decisions. Historically, such algorithms were rule-based systems derived from clinical guidelines and decision trees.

With the advancement of digital medicine, medical algorithms now include several categories:

1. Rule-based decision systems

These operate on predefined logical pathways and are commonly integrated into early clinical decision-support tools.

2. Clinical Decision Support Systems (CDSS)

CDSS integrate patient-specific data with evidence-based recommendations to assist clinicians in diagnostic and therapeutic decisions. They function as supportive tools rather than autonomous agents.

3. Artificial Intelligence and Machine Learning models

These systems rely on large datasets to detect patterns, generate predictions, and continuously improve performance. Applications include image interpretation, risk stratification, natural language processing, and predictive analytics.
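The rule-based category above can be illustrated with a minimal sketch. The rules and thresholds below are invented for illustration (loosely resembling SIRS-style screening criteria) and are not clinical guidance; real clinical decision-support tools encode far richer guideline logic.

```python
# Hypothetical illustration: a minimal rule-based decision pathway,
# similar in spirit to early clinical decision-support tools.
# Thresholds and rules are invented examples, NOT clinical guidance.

def sepsis_screen(temp_c: float, heart_rate: int, resp_rate: int) -> str:
    """Apply fixed, predefined logical rules (a simple decision tree)."""
    criteria = 0
    if temp_c > 38.0 or temp_c < 36.0:   # abnormal temperature
        criteria += 1
    if heart_rate > 90:                   # tachycardia
        criteria += 1
    if resp_rate > 20:                    # tachypnoea
        criteria += 1
    # Two or more positive criteria trigger a flag for human review.
    return "flag for review" if criteria >= 2 else "no flag"

print(sepsis_screen(38.6, 104, 22))  # → flag for review
print(sepsis_screen(36.8, 72, 14))   # → no flag
```

The key property of this category is that every decision path is fixed in advance by its authors; the system cannot learn from data, which is precisely what distinguishes it from the machine-learning models in category 3.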

Unlike human clinicians, however, AI systems do not possess subjective awareness, emotional processing, or lived experience. Their operations are computational rather than experiential. While they may simulate empathic language or identify emotional indicators, this simulation does not equate to genuine empathic engagement.

Therefore, the conceptual tension at the heart of digital medicine lies not in diagnostic accuracy alone, but in whether relational dimensions of care — particularly empathy — can be authentically reproduced by algorithmic systems.

Digital Medicine and AI in Clinical Practice

The integration of artificial intelligence (AI) into healthcare systems has significantly reshaped contemporary clinical practice. Digital medicine increasingly influences diagnostic interpretation, patient communication, and risk prediction models. Rather than functioning as isolated technological tools, AI systems are progressively embedded within clinical workflows, affecting how medical knowledge is processed and applied (Topol, 2019).

Artificial intelligence demonstrates particular effectiveness in data-intensive medical specialties. In radiology and pathology, deep learning models analyze complex imaging datasets and identify subtle abnormalities with high sensitivity (Esteva et al., 2017; Rajpurkar et al., 2017). These systems rely on pattern recognition and large training datasets rather than intuitive reasoning.

Beyond imaging, predictive models applied to electronic health records enable early detection of clinical deterioration, including sepsis and organ failure (Rajkomar et al., 2019). Such systems enhance efficiency and support clinical decision-making; however, their outputs remain probabilistic. Algorithms do not contextualize diagnostic findings within a patient’s lived experience, social background, or ethical preferences. Clinical judgment therefore remains dependent on human interpretation and responsibility (Topol, 2019).

Advances in natural language processing have enabled the development of AI-driven chatbots and virtual health assistants capable of symptom triage, appointment coordination, and structured psychological support. In mental health contexts, conversational agents have demonstrated potential in delivering components of cognitive behavioral therapy and monitoring mood patterns (Fitzpatrick, Darcy, & Vierhile, 2017).

These systems simulate empathic language by identifying emotional cues and generating contextually appropriate responses. However, such responses are produced through computational modeling rather than subjective emotional awareness. The distinction between simulated empathy and genuine emotional engagement is therefore ethically significant (Bickmore & Picard, 2005). While some patients may perceive AI-mediated communication as non-judgmental and accessible, concerns remain regarding its limitations in complex emotional or crisis situations.
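As a toy illustration of the distinction drawn above between simulated and felt empathy, the following sketch shows surface-level emotional-cue detection followed by a templated supportive response. Production conversational agents use large language models rather than keyword lists; the cue words and response templates here are assumptions made purely for illustration.

```python
# Deliberately simplified sketch of "simulated empathy": detect
# distress-related words, then select a supportive template.
# The word list and templates are illustrative assumptions only.

DISTRESS_CUES = {"worried", "scared", "anxious", "hopeless", "pain"}

def detect_cues(message: str) -> set:
    """Return the distress-related words found in the message."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return words & DISTRESS_CUES

def respond(message: str) -> str:
    cues = detect_cues(message)
    if cues:
        # Acknowledge the detected emotion before offering help,
        # mimicking the form (not the experience) of empathy.
        return (f"It sounds like you are feeling {sorted(cues)[0]}. "
                "That is understandable.")
    return "Thank you for your message. How can I help?"

print(respond("I am really worried about my test results"))
```

The point of the sketch is that the program recognizes the *word* "worried" and emits an appropriate sentence without any internal emotional state, which is exactly the gap between computational modeling and subjective emotional awareness discussed above.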

Telemedicine platforms have expanded access to healthcare services, particularly in remote or underserved regions. Virtual consultations and remote monitoring technologies enhance continuity of care and reduce logistical barriers (WHO, 2019). During public health emergencies, telehealth has played a critical role in maintaining healthcare delivery.

However, digital interfaces modify the traditional dynamics of physician–patient interaction. Empathy and trust are often facilitated by embodied presence, nonverbal communication, and physical examination. The shift to screen-mediated consultations may attenuate certain relational aspects of care. Additionally, disparities in digital access contribute to what has been described as the “digital divide,” potentially exacerbating existing health inequities (WHO, 2019).

At the systemic level, predictive analytics enables healthcare providers to identify patients at elevated risk for adverse outcomes. By integrating demographic data, clinical history, and behavioral indicators, AI systems support preventive strategies and resource allocation (Rajkomar et al., 2019).

Nevertheless, algorithmic models may inherit biases present in historical healthcare data. Research has demonstrated that certain widely used risk assessment tools underestimated the healthcare needs of marginalized populations due to biased training variables (Obermeyer et al., 2019). These findings highlight the importance of transparency, fairness, and ethical oversight in the deployment of predictive algorithms.
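The proxy-label problem described by Obermeyer et al. can be shown with a toy example: if a model ranks patients by predicted healthcare *cost* rather than clinical *need*, a group with equal need but poorer access to care (and therefore lower historical cost) is systematically under-prioritized. All numbers below are invented for illustration.

```python
# Toy illustration of proxy-label bias (cf. Obermeyer et al., 2019):
# ranking by past cost instead of clinical need under-prioritizes a
# group with equal need but less access to care. Data are invented.

patients = [
    {"id": 1, "group": "A", "need": 8, "past_cost": 8},
    {"id": 2, "group": "B", "need": 8, "past_cost": 4},  # same need, less access
    {"id": 3, "group": "A", "need": 3, "past_cost": 5},
]

# "Model" that uses past cost as a proxy target for need.
ranked_by_cost = sorted(patients, key=lambda p: p["past_cost"], reverse=True)
# Ground-truth ranking by actual clinical need.
ranked_by_need = sorted(patients, key=lambda p: p["need"], reverse=True)

top_by_cost = [p["id"] for p in ranked_by_cost[:2]]
top_by_need = [p["id"] for p in ranked_by_need[:2]]
print(top_by_cost)  # [1, 3] — patient 2 (high need, low past cost) is missed
print(top_by_need)  # [1, 2]
```

No malicious intent is needed to produce the disparity: the bias enters entirely through the choice of training target, which is why transparency about what an algorithm actually optimizes is central to the ethical oversight mentioned above.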

Can Algorithms Simulate Empathy?

The rapid integration of artificial intelligence (AI), particularly large language models and conversational agents, into healthcare has intensified discussion about whether algorithms can simulate empathy in clinically meaningful ways. Empathy has traditionally been regarded as a core component of effective clinical care, contributing to patient satisfaction, trust, treatment adherence, and improved health outcomes. With the emergence of AI systems capable of generating human-like communication, researchers have begun to examine whether empathic interaction can be replicated through computational means.

Recent empirical evidence suggests that AI systems can successfully reproduce linguistic and behavioral markers associated with empathic communication. In a cross-sectional study published in JAMA Internal Medicine, Ayers et al. (2023) compared physician responses with responses generated by an AI chatbot to patient health questions posted online. The study found that the AI responses were rated significantly higher in both quality and empathy than physician responses, with evaluators preferring AI responses in nearly 80% of cases. These findings suggest that AI systems can effectively generate language perceived as empathic by human evaluators. This capability is primarily driven by advances in natural language processing, which allow algorithms to analyze emotional tone, recognize distress-related language, and generate contextually appropriate responses.

Similarly, research has shown that conversational agents can provide emotional support and improve patient engagement in areas such as mental health and chronic disease management. A systematic review published in the Journal of Medical Internet Research found that healthcare chatbots were capable of providing emotional support, increasing patient engagement, and improving perceived communication quality, particularly when designed with empathic conversational frameworks (Blease et al., 2022). These findings indicate that algorithmic systems can simulate functional aspects of cognitive empathy, particularly emotional recognition and supportive communication.

Despite these capabilities, important distinctions remain between simulated empathy and human empathy. Human empathy involves subjective emotional experience, emotional resonance, and embodied awareness, which are supported by complex neurobiological processes. In contrast, AI systems generate responses through statistical modeling and pattern recognition without conscious awareness or emotional experience. As noted by Topol (2023) in Nature Medicine, AI systems do not possess emotional understanding but instead generate responses based on learned associations between language patterns and emotional contexts. Consequently, while AI can simulate empathic behavior, it does not experience empathy in a psychological or neurobiological sense.

The clinical significance of this distinction lies in the relational nature of empathy. Empathy contributes to therapeutic alliance, which plays a central role in patient outcomes. Patients who perceive their clinicians as empathic demonstrate greater adherence to treatment recommendations, improved psychological well-being, and higher satisfaction with care (Montori, 2022). These effects are mediated not only by verbal communication but also by patients’ perception of authentic human concern. Whether simulated empathy can produce equivalent therapeutic effects remains an open question.

At the same time, AI may indirectly enhance empathic care by addressing systemic barriers that limit empathic interaction. Administrative burden, time constraints, and clinician burnout have been identified as major obstacles to empathic clinical practice. AI systems can reduce documentation workload, automate routine tasks, and assist with clinical decision-making, thereby allowing clinicians to spend more time engaging directly with patients (Topol, 2023).

In this way, AI may function as a facilitator of empathy rather than a replacement for empathic clinicians.

Ethical considerations also play an important role in evaluating simulated empathy. The World Health Organization (2021) emphasizes that AI systems must be used in ways that respect human dignity, autonomy, and transparency. If patients are unaware that empathic communication originates from an algorithm rather than a human, this may raise ethical concerns related to trust and informed consent. Furthermore, overreliance on algorithmic communication may risk reducing opportunities for authentic human interaction, which remains central to patient-centered care.

Patient perceptions of AI empathy also vary. A study in Patient Education and Counseling found that while many patients were open to receiving information from AI systems, they expressed greater trust in human clinicians for emotionally sensitive interactions (Nadarzynski et al., 2021). This suggests that while AI can simulate empathic communication, patients may still value human empathy as uniquely meaningful.

In conclusion, current evidence demonstrates that algorithms can simulate observable features of empathic communication, particularly through advanced language modeling and conversational design. These systems can recognize emotional cues, generate supportive responses, and improve patient engagement. However, they do not possess emotional awareness, subjective experience, or neurobiological mechanisms of empathy. Algorithmic empathy should therefore be understood as a functional simulation rather than a genuine emotional process. The future role of AI in healthcare is likely to involve augmenting human empathy rather than replacing it, supporting clinicians in delivering more effective and compassionate care.

Ethical and Clinical Implications

As AI-based assistive technologies become increasingly integrated into clinical practice, ranging from diagnostic systems to clinical decision support, they offer the potential to decrease medical errors, streamline processes, and ease clinicians' cognitive load. However, these anticipated benefits are accompanied by significant drawbacks.

AI-generated communication, no matter how cleverly and thoughtfully constructed, lacks the fundamental human connection and empathy that underpin all relationships. This depersonalization can have a profound psychological impact on the recipient. AI lacks the metacognitive, psychological, and ethical foundation needed to question patients' attributions or provide corrective feedback. Instead, its responses may unintentionally support or strengthen some users' projections, including delusional ones. (1)

From a cognitive behavioral perspective, the therapeutic alliance serves as a vehicle for cognitive change. It is the therapist's responsibility to balance gentle empiricism with empathy, using Socratic questioning to challenge maladaptive solutions and ideas while maintaining the patient's confidence. By contrast, AI systems, which are typically designed for user engagement and non-confrontational communication, often default to agreement and can give unreliable answers. This implies that when a user communicates grandiose, persecutory, or referential content, the AI may implicitly support rather than contradict the narrative. Over time, this validation loop can act as a form of digital safety behavior, meeting short-term emotional demands while impeding the learning of corrective actions. A potentially helpful alliance could thus become a reinforcing echo chamber in the absence of behavioral exploration or therapist-driven guided discovery. Moreover, a physician is personally accountable to the patient for every word, action, and method of treatment and can perform each procedure in person, whereas current AI can only offer a text-based treatment option, has no accumulated experience of treating patients, and carries no responsibility.

Current regulatory and ethical models do not formally recognize AI systems as bearers of shared responsibility, even when their outputs materially influence patient care. (2) In addition, even when datasets appear anonymized, generative models may reconstruct, share, or infer deeply personal attributes and user-provided medical data, effectively producing secondary privacy harms. For instance, large language models may unintentionally retain sensitive textual information, such as personal addresses or clinical identifiers, included in their training data, which can later be exposed to other users in response to carefully crafted prompts. (3)

Discussion

The integration of Artificial Intelligence (AI) into clinical practice presents a paradox: while algorithms can outperform humans in specific cognitive and linguistic tasks, they remain fundamentally devoid of the biological and emotional essence of empathy.

AI demonstrates superior capabilities in data-intensive tasks such as radiological interpretation and predictive analytics for sepsis or cancer. Furthermore, AI can simulate empathic communication that human evaluators sometimes rate more highly than physician responses due to its consistent, non-judgmental, and structured nature. By automating administrative burdens, AI may actually "rescue" human empathy by allowing clinicians more face-to-face time with patients.

Unlike humans, AI lacks "embodied neural processes" and the ability to resonate emotionally with a patient’s suffering. The "digital divide" and algorithmic biases also pose risks of exacerbating health inequities. Most critically, AI carries zero legal or ethical responsibility for its "decisions," whereas a physician is personally accountable for every aspect of a patient's care.

Current research, such as the study by Ayers et al. (2023), highlights that AI can successfully mirror the language of empathy. However, this "functional simulation" is often mistaken for "genuine emotional engagement". Literature suggests that while patients find AI useful for information, they still prefer humans for emotionally sensitive or complex crisis situations.

Most studies on AI empathy are cross-sectional or involve online text-based interactions rather than long-term clinical relationships. There is a lack of longitudinal data on whether "simulated empathy" can produce the same physiological and behavioral benefits—such as improved metabolic control in chronic disease—as authentic human empathy.

Conclusion

Empathy is a multidimensional construct involving both cognitive understanding and affective resonance. While AI can effectively simulate the cognitive and linguistic markers of empathy—often exceeding human performance in structured communication—it lacks the neurobiological foundations and subjective awareness required for true emotional connection.

Algorithms cannot replace human contact in the fullest sense. While AI can serve as a powerful tool for diagnostic accuracy and administrative support, it functions as a "functional simulation" rather than a "genuine emotional process". The therapeutic alliance relies on an authentic human presence and shared responsibility that algorithms cannot replicate.

Future studies should investigate the long-term psychological impact of "simulated empathy" on patient trust and the potential for "digital safety behaviors" or "echo chambers" in AI-mediated mental health care. Additionally, research is needed to establish regulatory frameworks that address the "responsibility gap" when AI-generated outputs influence clinical outcomes.

 

References:

  1. Ayers, J. W., Poliak, A., Dredze, M., Leas, E. C., Zhu, Z., Kelley, J. B., Faix, D. J., Goodman, A. M., Longhurst, C. A., Hogarth, M., & Smith, D. M. (2023). Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Internal Medicine, 183(6), 589–596. https://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2804309
  2. Blease, C., Bernstein, M. H., Gaab, J., Kaptchuk, T. J., Kossowsky, J., Mandl, K. D., & Halamka, J. (2022). Computer-mediated communication and empathy in healthcare: A systematic review. Journal of Medical Internet Research, 24(3), e32366. https://diabetes.jmir.org/2022/3/e32366
  3. Decety, J., & Jackson, P. L. (2004). The functional architecture of human empathy. Behavioral and Cognitive Neuroscience Reviews, 3(2), 71–100. https://doi.org/10.1177/1534582304267187
  4. Decety, J., & Lamm, C. (2006). Human empathy through the lens of social neuroscience. The Scientific World Journal, 6, 1146–1163. https://doi.org/10.1100/tsw.2006.221
  5. Derksen, F., Bensing, J., & Lagro-Janssen, A. (2013). Effectiveness of empathy in general practice: A systematic review. British Journal of General Practice, 63(606), e76–e84. https://doi.org/10.3399/bjgp13X660814
  6. Hojat, M. (2007). Empathy in patient care: Antecedents, development, measurement, and outcomes. Springer.
  7. Hojat, M., Mangione, S., Nasca, T. J., Cohen, M. J. M., Gonnella, J. S., Erdmann, J. B., Veloski, J., & Magee, M. (2002). Physician empathy: Definition, components, measurement, and relationship to gender and specialty. American Journal of Psychiatry, 159(9), 1563–1569.
  8. Hojat, M., Louis, D. Z., Markham, F. W., Wender, R., Rabinowitz, C., & Gonnella, J. S. (2011). Physicians’ empathy and clinical outcomes for diabetic patients. Academic Medicine, 86(3), 359–364. https://doi.org/10.1097/ACM.0b013e3182086fe1
  9. Montori, V. M. (2022). The patient-clinician relationship. BMJ, 376, o988. https://www.bmj.com/content/377/bmj.o988
  10. Nadarzynski, T., Miles, O., Cowie, A., & Ridge, D. (2021). Acceptability of artificial intelligence in healthcare: A systematic review. Patient Education and Counseling, 104(5), 1108–1115.
  11. Singer, T., Seymour, B., O’Doherty, J., Kaube, H., Dolan, R. J., & Frith, C. D. (2004). Empathy for pain involves the affective but not sensory components of pain. Science, 303(5661), 1157–1162. https://doi.org/10.1126/science.1093535
  12. Sutton, R. T., Pincock, D., Baumgart, D. C., Sadowski, D. C., Fedorak, R. N., & Kroeker, K. I. (2020). An overview of clinical decision support systems: Benefits, risks, and strategies for success. NPJ Digital Medicine, 3, 17. https://doi.org/10.1038/s41746-020-0221-y
  13. Topol, E. (2019). Deep medicine: How artificial intelligence can make healthcare human again. Basic Books.
  14. Topol, E. (2023). The convergence of human and artificial intelligence in medicine. Nature Medicine, 29, 44–56.
  15. World Health Organization. (2021). Ethics and governance of artificial intelligence for health. https://www.who.int/publications/i/item/9789240029200

 

Conflict of Interest Statement

The authors declare no conflict of interest.


Information about the authors

Assistant Professor, Kazakh National Medical University named after S.D. Asfendiyarov, Kazakhstan, Almaty


Student, Kazakh National Medical University named after S.D. Asfendiyarov, Kazakhstan, Almaty


Student, Kazakh National Medical University named after S.D. Asfendiyarov, Kazakhstan, Almaty


Student, Kazakh National Medical University named after S.D. Asfendiyarov, Kazakhstan, Almaty


Student, Kazakh National Medical University named after S.D. Asfendiyarov, Kazakhstan, Almaty


Student, Kazakh National Medical University named after S.D. Asfendiyarov, Kazakhstan, Almaty


Student, Kazakh National Medical University named after S.D. Asfendiyarov, Kazakhstan, Almaty


The journal is registered by the Federal Service for Supervision of Communications, Information Technology and Mass Media (Roskomnadzor), registration number ЭЛ №ФС77–64808 dated 02.02.2016.
Founder of the journal: OOO «МЦНО»
Editor-in-chief: Marat Ruslanovich Konorev.