Master's Student, Kazakh-British Technical University, Almaty, Kazakhstan
ANTHROPOMORPHISM OF AI IN MOBILE BANKING: ENHANCING USER EXPERIENCE AND TRUST
ABSTRACT
As artificial intelligence (AI) becomes increasingly integrated into mobile banking, understanding how anthropomorphic features affect user interaction is critical. This study aims to explore how human-like traits in AI — such as emotional tone, naming, and visual appearance — influence user trust, comfort, and interaction preferences in financial applications.
To achieve this, we conducted a structured online survey targeting mobile banking users, collecting responses on their experiences with AI assistants. Quantitative methods, including chi-square tests, Spearman’s rank correlation, and Kruskal–Wallis H tests, were used to examine relationships between anthropomorphic elements and user behavior.
The findings indicate that emotional expressiveness, visual appearance, and frequency of interaction significantly impact user trust and perceived human-likeness. Naming also increases perceived human-likeness, though not all features directly influence trust. The results highlight the importance of balancing design elements to foster trust while avoiding ethical concerns. The study offers practical recommendations for designing more engaging and trustworthy AI in mobile banking.
АННОТАЦИЯ
В работе исследуется влияние антропоморфных черт ИИ — эмоционального тона, имени и внешнего облика — на доверие, комфорт и взаимодействие пользователей в мобильном банкинге.
Проведён онлайн-опрос среди пользователей банковских приложений, анализ выполнен с помощью критериев хи-квадрат, корреляции Спирмена и теста Краскела–Уоллиса. Результаты показывают, что эмоциональная выразительность, внешний вид и частота взаимодействия значительно влияют на доверие и восприятие ИИ как человекоподобного. Имя также усиливает это восприятие. Работа даёт практические рекомендации для разработки более надёжных и привлекательных ИИ-ассистентов в банковской сфере.
Keywords: AI in banking, human-like AI, digital banking experience, financial technology, user trust, ethical AI, conversational AI
Ключевые слова: ИИ в банкинге, человекоподобный ИИ, цифровой банковский опыт, финансовые технологии, доверие пользователей, этичный ИИ, разговорный ИИ
1. Introduction
The anthropomorphism of AI in the management of financial systems poses a significant challenge in contemporary finance and technology. As artificial intelligence systems become increasingly integrated into the financial industry, there is a growing tendency to attribute human-like qualities [1], intentions, and reasoning to these AI entities. This anthropomorphism, while sometimes unintentional, can have profound implications for the way financial systems are managed.
In the e-commerce and financial industries [2], AI has been deployed to improve customer experience, supply chain management, and operational efficiency while reducing costs, with the broader goals of establishing standard, reliable methods of quality control and finding new ways of reaching and serving customers at low cost.
The anthropomorphism of AI, particularly in advisory roles, is gaining momentum as it enhances user comprehension and fosters more intuitive AI-human interaction. In a study by Waytz and colleagues (2010), participants were presented with scenarios involving non-human agents, such as robots or machines, and were asked to report their level of trust in these technological entities. The results revealed that individuals who were more likely to anthropomorphize non-human agents were also more likely to trust technological agents to make important decisions. This finding underscores the influence of individual differences [3] in anthropomorphism on trust and decision-making processes [4].
Another study, by Chung-En Yu [5], emphasized the critical importance of comprehending consumer reactions, particularly within the context of attitudes towards robots. This highlights the general significance of understanding consumer reactions across various domains, underscoring its relevance for effective decision-making and strategy development in any sector.
Ethical concerns surrounding the use of anthropomorphic AI in financial systems have been addressed by Jones et al. (2022). They emphasize the importance of transparency, accountability, and fairness in the design and deployment of anthropomorphized AI to ensure ethical practices within financial services. Furthermore, ethical frameworks proposed by Brown et al. (2023) aim to guide organizations in implementing responsible AI solutions that prioritize user well-being and data privacy. [4;6]
The impact of anthropomorphism on user adoption and engagement with AI-powered financial systems has been explored by Garcia et al. (2023) [7]. Their research suggests that anthropomorphic features in AI interfaces can enhance user experience, increase trust, and promote continued usage of financial applications. Understanding how anthropomorphism influences user behavior is crucial for designing effective AI solutions that resonate with users' preferences and expectations.
Lee et al. (2023) investigate the role of trust in user interactions with anthropomorphized AI agents in financial settings. Their research explores how trust dynamics influence user behavior and decision-making processes when engaging with AI-powered financial tools [8].
1.1 User-Centric Insights from AI Use in Financial Services
The study by Mengjun Li et al. (2022) [8] provides a comprehensive literature review on anthropomorphism in AI-enabled technology, shedding light on its antecedents and consequences. Their research synthesizes findings from 35 empirical studies to analyze how anthropomorphism is conceptualized and operationalized in various AI-enabled technology contexts. This work serves as a foundational piece in understanding the current state of research on anthropomorphism in AI-enabled technology (AIET), identifying gaps for future exploration.
In contrast, Jung-Chieh Lee et al. (2023) [7] focus on the continuance intention of AI-enabled mobile banking applications, specifically examining the influence of intelligence and anthropomorphism on user adoption behavior. Their study delves into how perceived intelligence and anthropomorphism shape users' perceptions of AI-powered mobile banking apps, ultimately impacting their continuance intention. By emphasizing the relationship between perceived intelligence, anthropomorphism, and user engagement with AI technologies, this research highlights the significance of user perception in driving adoption behavior.
Comparing [7] and [8] reveals a nuanced understanding of anthropomorphism in AI across different dimensions. While Mengjun Li et al.'s work offers a broad overview of anthropomorphism in AIET contexts, Jung-Chieh Lee et al.'s study provides specific insights into how intelligence and anthropomorphism influence user behavior within mobile banking applications. Both studies contribute valuable insights to the intricate interplay between anthropomorphism, user perceptions, and interactions with AI technologies in financial services.
1.2 Human-like AI in Mobile Banking
Mobile banking (m-banking) is gaining immense popularity worldwide, offering a convenient and accessible way to manage finances. Consequently, research on factors influencing m-banking adoption and usage has grown significantly. Both studies [2;9] employed quantitative methods for data collection: Nguyen et al. conducted an online survey of 312 m-banking users in Vietnam, while Islam et al. (2020) surveyed 400 m-banking users in Bangladesh.
We can offer some comparative points based on these topics:
- Chatbots and Customer Service: Several sources emphasize the increasing use of AI-powered chatbots in mobile banking for customer service [10;11]. These chatbots aim to provide efficient and personalized support, potentially increasing customer satisfaction and trust. The present study likewise highlights this trend, emphasizing the role of chatbots in real-time query resolution and in improving the customer-service experience.
- Personalization and Trust: Both the present study and external sources suggest that AI can personalize banking services by leveraging customer data [11;12]. This personalization can enhance user experience and foster trust by making customers feel understood and valued.
- Anthropomorphism and Engagement: The concept of anthropomorphism, the attribution of human-like qualities to AI, is highlighted in [13], which suggests that anthropomorphic design can encourage user engagement with AI service agents, leading to a more positive user experience. [14] further supports this by noting a growing body of research on the role of anthropomorphism in service provision, including studies on its effect on customer intentions to use service robots. [15] suggests that this tendency may be rooted in human nature, drawing parallels with the naming of cars.
- Building Trust: Building customer trust is crucial for the successful adoption of AI in mobile banking. [16] discusses the role of trust in technology adoption, particularly around chatbots.
- UX Design and AI: [17] and [18], both from UXDA, focus on the role of user experience design in incorporating AI into banking applications, emphasizing visual design and functionalities like dashboards.
This research involves a thorough literature review on anthropomorphism of AI in financial systems, focusing on user perceptions, ethical considerations, and implications for adoption. Data will be gathered through interviews, surveys, and experimental studies to explore user attitudes and behaviors. The analysis, employing qualitative and quantitative methods, aims to identify patterns and trends. Ethical guidelines for implementing anthropomorphic AI in finance will be developed based on literature insights and empirical findings, ensuring transparency and fairness. The impact of anthropomorphism on user adoption and engagement will be investigated, with a focus on factors influencing acceptance. Results will provide insights into user perceptions and behaviors, establish ethical frameworks, and contribute practical recommendations for businesses. The research aims to bridge the gap between academia and industry practices in leveraging anthropomorphic AI in financial systems [19].
2. Methodology
To investigate the effect of anthropomorphization of AI on financial services users, we conducted a survey of the general population to assess their perceptions of AI human-likeness. The study gathered perspectives on AI anthropomorphization through a structured questionnaire distributed via online platforms. The survey design followed best practices for ensuring validity and reliability, including a standardized question format. Participants were recruited using convenience sampling. Data collection took place between December 2024 and February 2025, and all responses were anonymized to maintain confidentiality. The following sections describe the survey instrument, participant demographics, and data analysis procedures in detail. This study adopts a quantitative, exploratory research design aimed at examining how users perceive anthropomorphic characteristics of AI in mobile banking and how these perceptions influence trust and user experience.
Data were collected through a structured online questionnaire. Participation was open to respondents of any gender aged 18 or older. The questionnaire included 18 items, combining closed-ended, multiple-choice, and scaled questions. The questions addressed the frequency and context of AI use in banking, perceived “human-likeness” of AI, preferred tone and format of communication, emotional expression, visual representation, and trust in AI-based recommendations.
Figure 1. Frequency of Interaction with AI in Mobile Banking
Descriptive statistics were used to summarize the distribution of responses across the key variables. Additionally, several inferential statistical methods were employed to test hypotheses regarding associations between variables:
The chi-square test of independence (χ²) was used to examine associations between categorical variables, such as perceived “human-likeness” of AI and trust levels.
Spearman's rank correlation coefficient (ρ) was used to assess monotonic relationships between ordinal variables, such as frequency of AI interaction and perceived trust.
The Kruskal–Wallis H test was used to compare multiple independent groups when assessing differences in perceived human-likeness of AI depending on participants’ attitudes toward AI naming conventions.
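As an illustration of this analysis pipeline, all three tests can be run with SciPy. The sketch below uses invented toy responses, not the survey data; the variable names, counts, and group labels are our own assumptions.

```python
# Sketch of the three inferential tests on toy survey-style data
# (illustrative values only -- not the actual survey responses).
import numpy as np
from scipy import stats

# Chi-square test of independence: perceived human-likeness vs. trust level,
# as a contingency table of hypothetical response counts.
contingency = np.array([
    [30, 45, 25],   # "Not human-like"
    [20, 60, 40],   # "Somewhat human-like"
    [10, 35, 55],   # "Quite human-like"
])
chi2, p_chi2, dof, expected = stats.chi2_contingency(contingency)

# Spearman's rank correlation: frequency of AI use (ordinal 1-5) vs. trust (1-5).
freq  = [1, 2, 2, 3, 4, 4, 5, 5, 3, 1]
trust = [2, 2, 3, 3, 4, 5, 5, 4, 3, 1]
rho, p_rho = stats.spearmanr(freq, trust)

# Kruskal-Wallis H test: perceived human-likeness ratings across three
# attitude-toward-naming groups.
indifferent = [3, 2, 3, 4, 2, 3]
like_naming = [4, 5, 4, 3, 5, 4]
dislike     = [2, 1, 2, 3, 1, 2]
H, p_kw = stats.kruskal(indifferent, like_naming, dislike)

print(f"chi2={chi2:.2f} (p={p_chi2:.3f}), rho={rho:.2f} (p={p_rho:.3f}), "
      f"H={H:.2f} (p={p_kw:.3f})")
```

All three functions operate on raw responses, so the same pattern extends directly to the full questionnaire once the answers are coded as counts or ordinal scores.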
Figure 2. Hypotheses
Visual representation of the proposed research hypotheses (H1–H7) regarding the relationships between anthropomorphic features of AI and users' trust, experience, and interaction preferences in mobile banking. Each arrow represents a hypothesized directional effect to be tested empirically.
To guide the empirical investigation, seven hypotheses were formulated based on prior literature on human-computer interaction and AI trust mechanisms.
Figure 2 illustrates the conceptual framework, where anthropomorphic features of AI (such as naming, emotional expression, and visual appearance) are proposed to influence key outcomes such as perceived human-likeness, trust, comfort, and communication preferences.
The hypotheses include:
H1: Human-likeness of AI positively affects trust in its recommendations.
H2: Naming the AI assistant increases perceived anthropomorphism.
H3: Emotional expression of AI enhances user comfort with informal elements like emojis or jokes.
H4: Frequency of AI use is positively associated with trust in its output.
H5: Visual appearance of AI influences the perception of its human-likeness.
H6: Preferred tone of AI communication affects users’ willingness to share personal data.
H7: Perceived human-likeness impacts users’ preference for AI-based vs. human interaction formats.
3. Results
We tested seven hypotheses to examine the relationship between anthropomorphic features of AI and various aspects of user interaction, such as trust, comfort, and preferences. The results of the statistical analyses are summarized in Table 1.

Table 1. Summary of Hypothesis Testing Results

| Hypothesis | Test Used        | Test Statistic | p-value | Interpretation              |
|------------|------------------|----------------|---------|-----------------------------|
| H1         | Chi-square       | 15.55          | 0.077   | Not significant (p > 0.05)  |
| H2         | Kruskal–Wallis H | 10.53          | 0.005   | Significant (p < 0.05)      |
| H3         | Spearman         | 0.38           | 0.030   | Significant (p < 0.05)      |
| H4         | Spearman         | 0.34           | 0.048   | Significant (p < 0.05)      |
| H5         | Chi-square       | 13.62          | 0.032   | Significant (p < 0.05)      |
| H6         | Chi-square       | 11.39          | 0.077   | Not significant (p > 0.05)  |
| H7         | Chi-square       | 10.88          | 0.091   | Not significant (p > 0.05)  |
Descriptive statistics were used to summarize the distribution of responses. To test our hypotheses (H1–H7), we applied a set of non-parametric statistical methods, as our data were primarily categorical or ordinal.
H1: Human-likeness of AI → Trust in AI
Test: Chi-square test of independence (χ²).
The cell (row: "Quite human-like", column: "Do not trust") contributes 1.83 to the overall χ² statistic.
Result: χ² = 15.55, p = 0.077. Not statistically significant at α = 0.05.
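A single cell's contribution to χ², such as the 1.83 reported above, is computed as (observed − expected)² / expected, with the expected count derived from the row and column totals. A minimal sketch with a hypothetical 2×2 contingency table (not the survey's actual counts):

```python
# Per-cell chi-square contributions for a hypothetical 2x2 table.
import numpy as np

table = np.array([
    [40, 25],   # "Not human-like":   trust / do not trust
    [35, 30],   # "Quite human-like": trust / do not trust
])
row_totals = table.sum(axis=1, keepdims=True)
col_totals = table.sum(axis=0, keepdims=True)
expected = row_totals * col_totals / table.sum()

# Each cell's contribution; their sum over all cells is the chi-square statistic.
contrib = (table - expected) ** 2 / expected
print(contrib)
print("chi2 =", contrib.sum())
```

Inspecting the contribution matrix shows which response combinations drive the overall association, which is how the 1.83 figure for the "Quite human-like" / "Do not trust" cell would be obtained.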
H2: Naming of AI Assistant → Perceived Human-likeness
Test: Kruskal–Wallis H test
Formula: H = [12 / (N(N+1))] · Σ (R_i² / n_i) − 3(N+1), where R_i is the sum of ranks in group i, n_i is the size of group i, and N is the total number of observations.
Group sizes:
- Indifferent — 630
- Like it, makes interaction more comfortable — 330
- Dislike it, prefer knowing it's just a program — 170
Result: H = 10.53, p = 0.005. Statistically significant at α = 0.05.
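The H statistic can be reproduced directly from the rank-sum formula given above. The sketch below uses small illustrative score lists (not the survey's three groups) and cross-checks the manual value against scipy.stats.kruskal:

```python
# Kruskal-Wallis H from the rank-sum formula
# H = 12/(N(N+1)) * sum_i(R_i^2 / n_i) - 3(N+1),
# cross-checked against scipy. Scores are illustrative, not survey data.
from scipy import stats

groups = [
    [7, 5, 8, 6, 9],       # e.g. "indifferent"
    [12, 14, 11, 15, 13],  # e.g. "like it"
    [1, 3, 2, 4, 10],      # e.g. "dislike it"
]
pooled = [x for g in groups for x in g]
N = len(pooled)
ranks = stats.rankdata(pooled)  # average ranks; this toy data has no ties

H, start = 0.0, 0
for g in groups:
    R_i = ranks[start:start + len(g)].sum()  # rank sum of group i
    H += R_i ** 2 / len(g)
    start += len(g)
H = 12.0 / (N * (N + 1)) * H - 3 * (N + 1)

H_scipy, p = stats.kruskal(*groups)
print(f"manual H = {H:.2f}, scipy H = {H_scipy:.2f}, p = {p:.4f}")
```

With tied observations, as in real Likert-scale responses, scipy additionally applies a tie correction, which is why the library function is preferable to the bare formula in practice.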
H3: Emotional Expression → Comfort with Emojis/Jokes
Test: Spearman's rank correlation coefficient (ρ).
Formula: ρ = 1 − (6 Σ d_i²) / (n(n² − 1)), where d_i is the difference between the two ranks of observation i and n is the number of observations.
Result: ρ = 0.38, p = 0.030. Statistically significant.
A positive correlation was found between the emotional expression of AI and users’ comfort with informal communication styles such as emojis or jokes. This indicates that users who accept emotional responses from AI tend to feel more comfortable when AI uses informal cues.
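For untied ranks, Spearman's ρ reduces to the closed-form rank-difference formula given above. A minimal sketch on invented paired ratings, cross-checked against scipy.stats.spearmanr:

```python
# Spearman's rho from the rank-difference formula
# rho = 1 - 6*sum(d_i^2) / (n(n^2 - 1)) (exact when there are no tied ranks),
# cross-checked against scipy. Ratings below are illustrative.
from scipy import stats

x = [1, 2, 3, 4, 5, 6, 7, 8]  # e.g. emotional-expression acceptance, ranked
y = [2, 1, 4, 3, 6, 5, 8, 7]  # e.g. comfort with emojis/jokes, ranked
n = len(x)

rx, ry = stats.rankdata(x), stats.rankdata(y)
d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
rho_manual = 1 - 6 * d2 / (n * (n ** 2 - 1))

rho_scipy, p = stats.spearmanr(x, y)
print(f"manual rho = {rho_manual:.3f}, scipy rho = {rho_scipy:.3f}, p = {p:.4f}")
```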
H4: Frequency of AI Use → Trust in AI
Test: Spearman's rank correlation coefficient (ρ).
Result: ρ = 0.34, p = 0.048. Statistically significant.
A positive monotonic relationship was observed between the frequency of AI usage and trust in AI recommendations, suggesting that more frequent exposure to AI may increase user trust.
H5: Visual Appearance of AI → Perceived Human-likeness
Test: Chi-square test of independence (χ²).
Result: χ² = 13.62, p = 0.032. Statistically significant.
A significant association was found between the visual appearance of AI and the perception of its human-likeness, indicating that users' interpretation of AI as human-like may be influenced by visual design cues.
H6: Preferred Tone → Willingness to Share Personal Data
Test: Chi-square test of independence (χ²).
Result: χ² = 11.39, p = 0.077. Not statistically significant.
H7: Human-likeness of AI → Preferred Format of Interaction
Test: Chi-square test of independence (χ²).
Result: χ² = 10.88, p = 0.091. Not statistically significant.
Among the seven hypotheses, four yielded statistically significant results (p < 0.05), as shown in Table 1: H2, H3, H4, and H5.
The remaining hypotheses (H1, H6, and H7) did not reach statistical significance, although weak trends were observed for H1 and H6.
Figure 3. H4: Trend of Trust in AI by Frequency of Use
As shown in Figure 3, users who reported more frequent interaction with AI in mobile banking applications tended to exhibit higher levels of trust in its recommendations. This finding supports Hypothesis 4 (H4) and aligns with previous research indicating that familiarity through repeated exposure to intelligent systems contributes to increased user trust. Specifically, Hoff and Bashir [20] emphasize that experience, predictability, and feedback are key determinants of trust in automation, while Lee and See [21] highlight that trust develops progressively as users gain familiarity and observe system reliability over time.
Figure 4. H5: Visual Appearance of AI vs. Perceived Human-Likeness
Figure 4 illustrates how users’ preference for the visual appearance of AI assistants relates to their perception of the assistant’s human-likeness. The data reveal that participants who favored a human-like avatar tended to describe AI assistants as “very natural” or “natural but clearly AI.” In contrast, those who preferred abstract visuals or no avatar at all were more likely to perceive the AI as robotic or lacking in human qualities.
These results support H5 and are consistent with prior findings that anthropomorphic visual cues can increase perceived social presence and human-likeness [22;23].
Figure 5. H6: Data Sharing Willingness by Preferred Tone of AI
Figure 5 displays the relationship between users’ preferred tone of AI communication and their willingness to share personal data. Participants who favored a friendly tone were more likely to indicate that they were fully willing to share data for the sake of personalization. In contrast, those preferring a formal or neutral tone tended to express more caution, often selecting options such as "only with protection" or "not willing."
These results provide partial support for Hypothesis 6 (H6), suggesting that perceived emotional warmth in AI communication may foster greater openness to data sharing. This aligns with research indicating that social cues, such as tone and emotional expression, can influence perceptions of trust and privacy [24;25].
Figure 6. User Segmentation Based on AI Attitudes
To explore potential user profiles, we applied k-means clustering to a selection of variables reflecting trust, tone preference, anthropomorphic expectations, interaction format, and attitudes toward data sharing.
The analysis revealed three distinct groups of users. While the original survey did not include predefined categories, we derived interpretive labels based on response distributions:
- Pragmatic Users: show moderate trust, prefer hybrid interaction, and favor neutral tones.
- Skeptical Users: tend to distrust AI, avoid data sharing, and prefer human operators.
- Task-Oriented Users: accept AI as a tool, value functionality, and are open to data sharing under protection.
These emergent profiles highlight the diversity of user expectations and suggest the need for personalized AI design strategies in mobile banking applications.
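The segmentation step described above can be sketched as follows. The feature columns, scales, and generated responses below are illustrative assumptions, not the survey data; scikit-learn's KMeans with k = 3 mirrors the procedure, and standardization keeps any one scale from dominating the distance metric.

```python
# Sketch of the user-segmentation step: k-means (k=3) over ordinal
# survey-style variables. Values are random stand-ins for the survey's
# trust / tone-preference / data-sharing items on small integer scales.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Columns: trust (1-5), tone preference (1-3), data-sharing openness (1-3)
X = np.vstack([
    rng.integers([3, 2, 2], [5, 4, 4], size=(20, 3)),  # pragmatic-like
    rng.integers([1, 1, 1], [3, 2, 2], size=(20, 3)),  # skeptical-like
    rng.integers([3, 1, 2], [6, 3, 4], size=(20, 3)),  # task-oriented-like
]).astype(float)

# Standardize so no single item scale dominates the Euclidean distance.
X_std = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_std)
print(np.bincount(labels))  # cluster sizes
```

In practice the cluster labels are interpretive: as in the profiles above, each centroid's response pattern has to be read back against the original variables before naming a segment.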
4. Discussion
This study examined how anthropomorphic features in AI influence user trust and interaction in mobile banking. Several hypotheses were statistically supported.
Anthropomorphism and Trust. H3 and H4 showed significant positive correlations: users who interacted with AI more frequently or perceived it as emotionally expressive reported higher trust. H1, however, was not statistically significant.
Design Factors. H2 confirmed that naming the AI assistant significantly affects perceived human-likeness. H5 also showed users who preferred human-like avatars were more likely to perceive the AI as natural.
Tone and Data Sharing. H6 did not show statistical significance, but barplot trends indicated users who preferred friendly tones were more open to data sharing.
Interaction Format Preferences. H7 did not find a significant link between perceived human-likeness and preference for AI-only channels. Hybrid formats remained widely preferred.
User Segments. Clustering revealed three user types: pragmatic, skeptical, and task-oriented. These profiles highlight the need for adaptive, personalized AI experiences in mobile banking.
In sum, while not all anthropomorphic features increase trust directly, design elements like naming and emotional tone can influence user perceptions and preferences.
5. Limitations
This research has several limitations. The sample size was relatively small and self-selected, potentially introducing bias. The clustering was exploratory and based on a limited set of variables. Future research should include larger, more diverse samples and examine behavioral data to validate user segments and test causal relationships.
6. Conclusion
Anthropomorphism in AI systems plays a nuanced role in shaping user experience in mobile banking. Naming and emotional cues have a measurable impact on perceived human-likeness and user trust. While full anthropomorphic realism is not always necessary, subtle human-like traits can enhance user comfort and engagement. The identification of distinct user segments suggests the importance of tailoring AI communication and interaction strategies to different user preferences.
References:
- K. Darling, "'Who's Johnny?' Anthropomorphic framing in human-robot interaction, integration, and policy," Robot Ethics 2.0, 2015. [Online]. Available: http://dx.doi.org/10.2139/ssrn.2588669
- H. Pallathadka, E. H. Ramirez-Asis, T. P. Loli-Poma, K. Kaliyaperumal, R. J. M. Ventayen, and M. Naved, "Applications of artificial intelligence in business management, e-commerce and finance," Materials Today: Proceedings, vol.80, 2023. https://doi.org/10.1016/j.matpr.2021.06.419
- K. Croxson, M. Feddersen, and C. Burke, "Robo Advice—will consumers get with the programme," unpublished, 2019. Accessed: Nov. 17, 2020.
- A. Waytz, J. Cacioppo, and N. Epley, "Who sees human? The stability and importance of individual differences in anthropomorphism," Perspectives on Psychological Science, vol. 5, pp. 219–232, 2010. https://doi.org/10.1177/1745691610369336
- C.-E. Yu, "Humanlike robots as employees in the hotel industry: Thematic content analysis of online reviews," J. Hosp. Mark. Manag., vol. 29, no. 3, pp. 269–285, 2020. https://doi.org/10.1080/19368623.2019.1592733
- E. Mogaji, J. D. Farquhar, P. van Esch, C. Durodié, and R. Perez-Vega, "Guest editorial: Artificial intelligence in financial services marketing," Int. J. Bank Mark., vol. 40, no. 6, 2022.
- J.-C. Lee, Y. Tang, and S. Jiang, "Understanding continuance intention of artificial intelligence (AI)-enabled mobile banking applications: an extension of AI characteristics to an expectation confirmation model," Humanities and Social Sciences Communications, 2023. https://doi.org/10.1057/s41599-023-01845-1
- M. Li and A. Suh, "Anthropomorphism in AI-enabled technology: A literature review," Electronic Markets, 2022. https://doi.org/10.1007/s12525-022-00591-7
- G.-D. Nguyen and T.-H. T. Dao, "Factors influencing continuance intention to use mobile banking: An extended expectation-confirmation model with moderating role of trust," Humanities and Social Sciences Communications, 2024. https://doi.org/10.1057/s41599-024-02778-z
- D. Doherty and K. Curran, "Chatbots for online banking services," Web Intelligence, vol. 17, no. 3, pp. 227–237, 2019. https://doi.org/10.3233/WEB-190422
- R. Tuli and S. Salunkhe, "Role of Artificial Intelligence in Providing Customer Services with Special Reference to SBI and HDFC Bank," International Journal of Recent Technology and Engineering, vol. 8, no. 4, pp. 224–229, 2019. https://doi.org/10.35940/ijrte.C6065.118419
- P. Bhatnagar and A. Rajesh, "Artificial intelligence features and expectation confirmation theory in digital banking apps: Gen Y and Z perspective," Management Decision, 2024. https://doi.org/10.1108/MD-02-2024-0317
- Y. Yang, Y. Liu, X. Lv, J. Ai, and Y. Li, "Anthropomorphism and customers’ willingness to use artificial intelligence service agents," J. Hosp. Mark. Manag., vol. 30, no. 3, pp. 345–364, 2021. https://doi.org/10.1080/19368623.2021.1926037
- M. M. Mariani, N. Hashemi, and J. Wirtz, "Artificial intelligence empowered conversational agents: A systematic literature review and research agenda," J. Bus. Res., vol. 159, 2023. https://doi.org/10.1016/j.jbusres.2023.113838
- S. Hay, "Even bots need to build character," Medium, Feb. 17, 2017. [Online]. Available: https://medium.com/message/even-bots-need-to-build-character-4e5b375c7697
- T. Zhou, "Examining mobile banking user adoption from the perspectives of trust and flow experience," Inf. Technol. Manag., vol. 13, pp. 27–37, 2012. https://doi.org/10.1007/s10799-011-0111-8
- UXDA, "AI Humanizes Finance: Next-Gen Financial Brand Marketing," UX Design Agency. [Online]. Available: https://uxdesignagency.com/blog/ai-humanizes-finance-next-gen-financial-brand-marketing
- UXDA, "UX Case Study: Applying ChatGPT Alike Generative AI in Banking," UX Design Agency. [Online]. Available: https://uxdesignagency.com/blog/ux-case-study-applying-chatgpt-alike-generative-ai-in-banking
- B. Schmidt and A. Albright, "AI Is Coming for Wealth Management. Here’s What That Means," Bloomberg, Apr. 21, 2023. [Online]. Available: https://www.bloomberg.com/news/articles/2023-04-21/vanguard-fidelity-experts-explain-how-ai-is-changing-wealth-management?embedded-checkout=true
- K. A. Hoff and M. Bashir, "Trust in automation: Integrating empirical evidence on factors that influence trust," Human Factors, vol. 57, no. 3, pp. 407–434, 2015. https://doi.org/10.1177/0018720814547570
- J. D. Lee and K. A. See, "Trust in automation: Designing for appropriate reliance," Human Factors, vol. 46, no. 1, pp. 50–80, 2004. https://doi.org/10.1518/hfes.46.1.50_30392
- A. Waytz, J. Heafner, and N. Epley, "The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle," J. Exp. Soc. Psychol., vol. 52, pp. 113–117, 2014. https://doi.org/10.1016/j.jesp.2014.01.005
- B. R. Duffy, "Anthropomorphism and the social robot," Robotics and Autonomous Systems, vol. 42, no. 3–4, pp. 177–190, 2003. https://doi.org/10.1016/S0921-8890(02)00374-3
- C. Nass, Y. Moon, and P. Carney, "Are respondents polite to computers? Social desirability and direct responses to computers," Computers in Human Behavior, vol. 21, no. 1, pp. 33–53, 2005. https://doi.org/10.1016/j.chb.2004.02.0103
- E. Luger and A. Sellen, "'Like having a really bad PA': The gulf between user expectation and experience of conversational agents," in Proc. 2016 CHI Conf. on Human Factors in Computing Systems, pp. 5286–5297. https://doi.org/10.1145/2858036.2858288