DEVELOPING HIGH-PERFORMANCE ENGINEERING TEAMS: A PRACTICAL FRAMEWORK FOR MENTORSHIP AND TECHNICAL GROWTH

Lvov E.
To cite:
Lvov E. DEVELOPING HIGH-PERFORMANCE ENGINEERING TEAMS: A PRACTICAL FRAMEWORK FOR MENTORSHIP AND TECHNICAL GROWTH // Universum: технические науки : электрон. научн. журн. 2026. 2(143). URL: https://7universum.com/ru/tech/archive/item/21931 (accessed: 08.03.2026).

ABSTRACT

This study analyzes mechanisms for forming high-performing teams based on a synthesis of empirical data from DORA 2024 reports, Gartner and McKinsey analytics, and a corpus of IEEE/ACM academic work. The purpose of the study is to formulate the Integrated Technical Growth Framework (ITGF), which conceptually and practically links operational DevOps metrics with institutionalized mentoring practices and hiring-for-growth strategies. The working hypothesis posits that, under conditions of automated routine programming, the critical predictor of organizational resilience is not the nominal speed of code production but the effectiveness of tacit knowledge transfer through formalized mentoring mechanisms and internal developer platforms (IDP). Analysis of the data demonstrates that companies that build hiring practices on candidate potential and reinforce them with structured mentoring achieve 1.9 times higher aggregate productivity and improve key employee retention metrics by approximately 50%. The study proposes an applied model for restructuring onboarding processes and the architecture of technical leadership, aimed at reducing the risks of the AI productivity paradox and ensuring the continuity of expert knowledge amid the transformation of the junior developer role.


 

Keywords: high-performing engineering teams, mentoring in software engineering, DORA and SPACE DevOps metrics, internal developer platforms, hiring for potential, generative AI, integrated technical growth framework, platform engineering and Developer Experience.


 

Introduction

The software engineering industry is entering a phase of structural transformation whose depth of consequences is comparable to the spread of the internet or the institutionalization of Agile approaches. Reports for 2024 record an ambivalent picture: on the one hand, generative AI tools (GenAI) demonstrate a sharp increase in the individual productivity of developers, while on the other hand the system-level efficiency of engineering organizations either does not change or deteriorates [1]. This phenomenon, referred to as the AI productivity paradox, is gradually becoming one of the key management challenges for CTOs and VPs of Engineering on a global scale. According to the State of DevOps Report 2024 by Google Cloud (DORA), the introduction of AI tools does indeed reduce the time required to write code, but this does not directly correlate with an increase in the frequency of successful deployments or a reduction in time to restore services after incidents [3]. Moreover, researchers observe a strengthening trend of degradation in software product quality, described by the term enshittification, that is, the systematic deterioration of user properties of platforms and services in favor of short-term growth of metrics or profit [2, 4]. In development practices this is expressed in the growing volume of syntactically correct but architecturally fragile code, which accelerates the accumulation of technical debt and increases the cognitive load on senior engineers who perform code review [5].

At the same time, technological changes are accompanied by a worsening personnel imbalance. A Gartner survey shows that 48% of HR leaders consider the rate of change in required skills to be higher than the ability of organizations to restructure their learning and hiring systems [6]. In these conditions, the traditional recruitment model focused on already-formed competencies becomes economically unsustainable and strategically risky. Given Gartner's forecast that by 2027 up to 80% of engineering staff will require significant reskilling under the impact of GenAI [7], a company's ability to form and develop talent from within is turning from a competitive advantage into a condition for survival.

Particular concern is caused by the transformation of the labor market for entry-level specialists. Empirical data indicate that tools such as GitHub Copilot and Claude Code are increasingly used to automate tasks of basic complexity that have historically served as a key training ground for junior developers [8]. Whereas in 2023 about 75% of developers were already using AI tools in their work, by 2025 usage is shifting from an augmentation mode (support and collaboration between humans and AI) toward full automation of typical, simple tasks [8]. This shift creates a risk of a break in the chain of professional socialization and knowledge transfer: deprived of the opportunity to accumulate experience on simple tasks according to the principle of learning by doing, future engineers are limited to superficial interaction with the system and do not form deep architectural and engineering competencies [9]. As a result, the industry may face a shortage of qualified senior engineers within 3–5 years, since the traditional career escalator, from solving routine tasks to participating in complex projects, is being dismantled or significantly deformed.

The existence of a substantial body of publications on DevOps practices, CI/CD, platform engineering, and related technological disciplines contrasts with the fact that these aspects are often analyzed in isolation from socio-organizational mechanisms such as mentoring, career management, and the development of Developer Experience (DevEx).

The novelty of the research approach considered here lies in an attempt to integrate these typically fragmented domains. It is proposed to view Internal Developer Platforms (IDP) not only as a tool for standardizing and automating engineering infrastructure, but also as a potential carrier of institutionalized mentoring, in which best practices, architectural principles, and guardrails are embedded into the platform in a "Mentorship as Code" format [11].

The objective is to develop a comprehensive framework that, firstly, synthesizes DORA and SPACE metrics to assess the genuine rather than nominal productivity of hybrid teams in which developers and AI agents interact; secondly, substantiates the economic feasibility of mentoring as a tool for increasing retention and reducing the total costs of hiring and onboarding personnel; thirdly, formulates strategies for adapting career ladders for Senior and Staff+ levels, taking into account the new configuration of requirements for technical leadership and the role of the engineer as a bearer of institutional memory.

Within this paradigm, the hypothesis is advanced that the integration of formalized mentoring practices and a hire for potential strategy, supported by the implementation of Internal Developer Platforms (IDP), constitutes a stronger predictor of teams achieving Elite performers status (in the DORA classification) than the introduction of generative AI tools by itself. It is assumed that mentoring acts as the critical catalyst that converts the growth of individually AI-augmented productivity into sustainable collective effectiveness, reduced operational risks, and increased stability of the engineering system as a whole.

Materials and Methods

To test the formulated hypothesis and subsequently develop the framework, a systematic review method with subsequent meta-analytic data synthesis was employed. The methodological design of the study is based on the principle of triangulation and combines three mutually complementary components: quantitative indicators from industry reports, qualitative case study analysis, and theoretical models presented in the academic literature on software engineering.

The data sources are structured around three categories of empirical and conceptual data, which makes it possible to ensure both comprehensive coverage and increased internal validity of the conclusions. The first category includes global industry reports, among which a central place is occupied by the Google Cloud DORA State of DevOps Report 2024 as a key source of information on the relationship between engineering practices and business outcomes on a data set including tens of thousands of respondents worldwide; the Gartner HR & Technology Research 2024–2025 studies, which provide numerical estimates of skill shortages, transformations in hiring strategies, and the impact of AI on workforce configuration; as well as the McKinsey Developer Velocity Index, which records the relationship between parameters of the development environment and tooling and companies’ financial performance. The second category is represented by academic studies from the IEEE Xplore and ACM Digital Library databases, devoted to mentoring in software engineering, the role of tacit knowledge transfer in ensuring the quality of software systems, and the educational aspects of Agile team work, as well as works analyzing the effectiveness of using such performance metrics as the SPACE Framework. The third category includes expert materials and qualitative case studies: technical documentation and engineering blogs of GitLab, Atlassian, Microsoft, and Cortex, which describe practices for implementing career ladders and Internal Developer Platforms (IDP), as well as materials from the StaffEng and LeadDev communities that reveal the specifics of activities and workload profiles of Staff+ level engineers in real organizational contexts.

Results and Discussion

Analysis of the DORA 2024 reports and Gartner materials indicates a profound transformation of the criteria for engineering-team success. Whereas delivery speed previously served as the dominant benchmark, in 2024 the key determinants of performance are resilience, the capacity for rapid recovery, and the relative independence of throughput and stability metrics. The empirical data for 2024 record a radical divergence between Elite and lagging teams; this gap is not linear but exponential, which is particularly evident in metrics related to quality and recovery.

Table 1 below presents the comparative characteristics of Elite and Low Performers.

Table 1.

Comparative characteristics of Elite and Low Performers (compiled by the author based on [1, 3, 4, 5]).

| Metric (DORA) | Description | Elite Performers | Low Performers | Efficiency multiplier (Elite vs Low) |
| --- | --- | --- | --- | --- |
| Deployment Frequency | Frequency of deploying code to production | On demand (several times per day) | Once a month or less often (up to once every 6 months) | 182x |
| Lead Time for Changes | Time from commit to deployment | Less than 1 hour | From 1 to 6 months | 127x |
| Time to Restore Service | Time to recover after a failure | Less than 1 hour | From 1 week to 1 month | 2293x |
| Change Failure Rate | Percentage of deployments requiring hotfixes | ~5% | 45–60% | 8x (reduction in failure frequency) |

The most indicative difference is observed in the Time to Restore Service metric, where the gap reaches approximately 2293x. This indicates that elite teams have invested not only in automation of the delivery pipeline (CI/CD), but also in advanced observability capabilities and, critically, in the systematic dissemination of knowledge about how the system operates. Rapid recovery is in principle unattainable when critical knowledge about how to fix failed components is localized in the heads of a few key specialists (the bus-factor problem). Thus, the hypothesis that institutionalized knowledge transfer (mentoring) has a direct impact on operational resilience is empirically confirmed. In addition, elite teams demonstrate a pronounced correlation between throughput and stability, whereas in groups with an intermediate level of maturity these metrics often diverge, indicating a degradation of quality when attempting to increase speed without relying on an appropriate engineering culture [5, 14].
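As a concrete illustration of the four DORA metrics compared above, the following sketch computes them from a log of deployment records. The record schema (commit and deploy timestamps, failure flag, restore time) is an assumption for illustration; real pipelines would derive these fields from CI/CD and incident-management tools.

```python
# Sketch: computing the four DORA metrics from a log of deployment records.
# The Deploy schema below is an illustrative assumption, not a cited tool.
from dataclasses import dataclass
from statistics import median

@dataclass
class Deploy:
    commit_ts: float      # hours since epoch of the latest commit
    deploy_ts: float      # hours since epoch of the deployment
    failed: bool          # did the deploy require a hotfix/rollback?
    restore_hours: float  # time to restore service if failed, else 0.0

def dora_metrics(deploys: list[Deploy], window_days: float) -> dict:
    failures = [d for d in deploys if d.failed]
    return {
        "deploy_frequency_per_day": len(deploys) / window_days,
        "lead_time_hours": median(d.deploy_ts - d.commit_ts for d in deploys),
        "change_failure_rate": len(failures) / len(deploys),
        "time_to_restore_hours": (
            median(d.restore_hours for d in failures) if failures else 0.0
        ),
    }

# Four deploys over one day, two of which failed and were restored quickly.
m = dora_metrics(
    [Deploy(0, 0.5, False, 0), Deploy(1, 1.5, True, 0.75),
     Deploy(2, 2.5, False, 0), Deploy(3, 3.5, True, 0.25)],
    window_days=1.0,
)
```

Medians are used deliberately: DORA reports median-like central tendencies, and medians are robust to the long-tailed outliers typical of lead-time and recovery data.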

Despite the practical significance of DORA metrics, the 2024 reports underscore the need for a more holistic, multidimensional perspective. The SPACE framework (Satisfaction, Performance, Activity, Communication, Efficiency) is de facto becoming the standard for assessing the health of engineering teams [21]. Research shows that the adoption of AI tools increases individual Activity (the volume of generated code) but can negatively affect Communication and Efficiency at the team level, owing to the increased effort required to review generated code and the accelerated accumulation of technical debt. The key insight is that teams that ignore Satisfaction metrics and focus exclusively on DORA indicators substantially increase the risk of burnout: instability and frequent shifts in priorities raise the likelihood of burnout by 40%, even in the presence of strong leadership [18, 19].

In a context where around 50% of organizations are unable to effectively leverage the existing skills of their employees, and the recruitment of fully trained specialists becomes economically burdensome [6], mentoring ceases to be merely an instrument of social responsibility and turns into one of the central business drivers. Gartner formulates the concept of Hiring for Promise: hiring candidates with basic foundational training and high learnability instead of seeking a perfect match to current requirements. According to these data, employees selected for potential are 1.9 times more likely to become high performers over the long term than those hired primarily for already-developed skills. However, only 28% of organizations systematically employ this approach. The key constraint is the absence of internal institutions (mentoring, training programs) capable of bringing such employees to proficiency within a short time frame [6, 17].

The accumulated quantitative data clearly demonstrate that the presence of a well-designed mentoring system statistically significantly improves employee retention indicators. For illustrative purposes, Table 2 reflects the impact of mentoring programs on retention and engagement metrics.

Table 2.

Impact of mentoring programs on retention and engagement metrics (compiled by the author based on [5, 6, 21, 23]).

| Metric | Group with mentoring | Control group (without mentoring) | Effect |
| --- | --- | --- | --- |
| Retention Rate | 72% (mentees), 69% (mentors) | 49% | +23 p.p. |
| Probability of promotion | 5x (mentees), 6x (mentors) | 1x (baseline) | 500–600% |
| Job satisfaction | 91% | < 60% | +30 p.p. |
| Turnover risk | 25% | > 40% | −15 p.p. |

The key conclusion is that mentoring has a bidirectional effect: participation in mentoring practices proves beneficial both for mentees and for mentors themselves. The latter demonstrate a retention rate of about 69%, which is almost comparable to the mentee indicator of 72%. This phenomenon is interpreted through the protégé effect: recognition of the mentor’s expert status and the opportunity to realize individual leadership potential create additional meaning for the mentor and increase the subjective value of remaining within the organization. Given that replacement of a senior developer may cost a company up to 200% of that employee’s annual income, the overall return on investment (ROI) in a well-designed mentoring program can reach approximately 600% [20, 22].
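The ROI arithmetic behind these figures can be sketched as a back-of-envelope model. All inputs below (team size, salary, attrition rates, program cost) are illustrative assumptions; only the 200% replacement-cost multiplier and the 40% vs. 25% turnover figures echo values cited above, and the resulting ROI lands near the ~600% order of magnitude mentioned in the text.

```python
# Back-of-envelope sketch of mentoring-program ROI. Every input value here is
# an illustrative assumption, not a figure from the cited studies.

def mentoring_roi(n_engineers: int, salary: float, replacement_cost_pct: float,
                  attrition_without: float, attrition_with: float,
                  program_cost_per_engineer: float) -> float:
    """ROI = (avoided replacement cost - program cost) / program cost."""
    avoided_exits = n_engineers * (attrition_without - attrition_with)
    savings = avoided_exits * salary * replacement_cost_pct
    cost = n_engineers * program_cost_per_engineer
    return (savings - cost) / cost

# 50 engineers, $120k salary, replacement at 200% of salary,
# turnover falling from 40% to 25% (the Table 2 turnover-risk row),
# and a hypothetical $5k per-engineer program cost.
roi = mentoring_roi(50, 120_000, 2.0, 0.40, 0.25, 5_000)
```

The point of the sketch is the structure of the argument, not the exact numbers: because replacement cost is a large multiple of program cost, even a modest reduction in attrition dominates the expense side.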

Against this background, the results of studies conducted in 2024–2025 record the emergence of a structural threat to engineering demography. An erosion effect at the entry level is forming: historically, junior developers mastered the profession through routine tasks such as writing unit tests, creating simple CRUD interfaces, and performing refactoring. At present, a significant share of such tasks is shifted to AI systems. It is indicative that 79% of interactions with models of the Claude Code class fall into the automation category (when AI performs the task in full), and only 21% into augmentation, that is, support for developer activity [8]. Companies, optimizing costs and timelines, reduce the volume of junior hiring, preferring to rely on seniors working in tandem with AI tools. As a result, a risk of a disappearing middle class of engineers is forming over a 3–5 year horizon [9].

In parallel, new development paradigms are emerging, such as Chat-Oriented Programming (CHOP) and so-called Vibe Coding, that is, coding based on intuitive impressions and dialog with the model, without deep command of language syntax and structure [10]. In the absence of meaningful mentoring support and architectural oversight, the results of such practice turn into a black box, that is, a codebase that is extremely difficult to maintain and evolve.

One of the key responses to the growth of cognitive load and the need for accelerated onboarding is the development of platform engineering and internal developer platforms (IDP). Their implementation makes it possible to significantly reduce the time required for a new employee to reach a productive level: the Time to First Meaningful PR indicator is reduced from several months to a matter of days. In one case study, the time required for initial environment setup was reduced from two weeks to two hours [15, 16]. The concept of best-fit approaches implemented within IDPs provides developers with standardized templates and tools; essentially, this is a form of mentorship through tooling, where the platform itself guides the engineer along the trajectory of best practices, reducing the need for constant participation of a live mentor [11]. An important consequence is the elimination of the Approval Hell phenomenon: up to 60% of traditional onboarding time is spent waiting for access provisioning and approvals, whereas IDPs automate these processes and free mentors to work on architectural and conceptual aspects rather than on bureaucratic ones.

To visualize the identified patterns and to verify the formulated hypotheses empirically, Figure 1 below demonstrates the productivity gap.

 

Figure 1. Performance Gap (Elite vs. Low Performers) (compiled by the author based on [5, 6, 21]).

 

The chart in Figure 1 illustrates the DORA 2024 data, demonstrating on a logarithmic scale the magnitude of elite teams' superiority. Figure 2 presents the impact of mentoring on personnel retention.

 

Figure 2. The impact of mentoring on personnel retention (compiled by the author based on [5, 6, 21]).

 

Thus, the study demonstrates that maintaining high-performing teams requires a radical reconsideration of the role of senior engineers. The model in which the Senior Engineer is perceived primarily as a fast and experienced producer of code has in fact ceased to correspond to reality. According to StaffEng data and analysis of career frameworks of leading technology companies (Dropbox, Stripe, GitLab), Staff+ and Principal level positions imply a shift of focus from individual execution to systemic influence and engineering enablement, that is, the creation of conditions in which other engineers work effectively. Within this logic, Staff engineers should act not merely as strong individual contributors, but as architects of sociotechnical systems. Their key function is not to write code in place of the team, but to design the environment (through the development of IDPs, mentoring practices, and architectural reviews) in which the entire team consistently produces high-quality code. This is confirmed by empirical data showing that elite teams deliberately reduce manifestations of Approval Hell and optimize the development flow, which are zones of direct responsibility of Staff engineers working on Developer Experience [12, 13].

Achieving high values of DORA metrics for Time to Restore (service restoration in less than 1 hour) is fundamentally impossible without a stable culture of psychological safety. In teams where errors are strictly sanctioned, engineers tend to conceal incidents and delay escalation of problems, which inevitably increases recovery time. Mentoring in this context acts as a key mechanism for forming Blameless Culture. When a mentor, as a figure with recognized authority, speaks openly about personal failures and conducts postmortems in a format focused on analysis of systemic causes rather than the search for those to blame, this normalizes learning from errors for the entire team.

Indiscriminate and uncontrolled introduction of AI tools without mentoring oversight leads to gradual degradation of codebase quality. Junior developers who rely heavily on AI can generate code that seems to work but remains conceptually opaque to them. In this reality, by 2025 mentoring must evolve into an AI-assisted code review format: the mentor's role is not to check syntax and trivial defects (this is delegated to linters and the AI models themselves), but to assess the depth of the mentee's understanding of what exactly the AI has generated, and why. This approach returns critical thinking to the development process and prevents the codebase from turning into an opaque artifact that is formally operable but architecturally unmanageable.

Conclusion

The conducted study empirically confirms the proposed hypothesis that, under the conditions of the AI revolution and a structural shortage of qualified personnel, mentoring should be regarded not as an auxiliary, but as a system-forming component that carries the function of resilience in high-performing teams.

Elite teams differ radically from lagging ones not so much in absolute development speed as in their ability to recover from failures: the gap in recovery speed amounts to 2293 times. This indicates that the key factor is an established culture of knowledge sharing and collective self-healing of the system, rather than exclusively the level of process automation.

A hiring strategy focused on potential, followed by targeted mentoring, demonstrates effectiveness 1.9 times higher than the model of attracting ready-made stars. Owing to increased retention and reduced loss of organizational memory, this model can provide ROI of up to 600%, leaving no economically viable alternative to investing in mentoring practices.

Internal platforms (IDPs) should be considered a key infrastructural mechanism for scaling mentoring. They make it possible to codify and replicate the best practices of the team, which leads to a 67% reduction in onboarding time and decreases dependence on individual knowledge holders.

Based on the conducted analysis, organizations are recommended to implement an Integrated Technical Growth Framework (ITGF) that includes:

– Formalization of mentoring as a mandatory element of the system of Objectives and Key Results (OKRs) for Senior-level roles and above, with explicit responsibility for the development of less experienced specialists.

– Implementation of internal IDP platforms with best-fit components that provide a standardized and partially automated onboarding trajectory, minimizing cognitive load and risk of errors.

– Revision of Code Review processes with an explicit shift of focus toward verification of AI-generated code, including checks of correctness, robustness, and compliance with the team’s architectural principles.

– Use of SPACE metrics alongside DORA to monitor signals of burnout and decreasing team resilience, which makes it possible to timely detect productivity degradation that is masked by short-term increases in speed.
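The last recommendation, monitoring SPACE signals alongside DORA, can be sketched as a simple rule: flag any team whose throughput trend is rising while its satisfaction trend is falling, the pattern the study identifies as masked productivity degradation. Field names and thresholds below are hypothetical assumptions for illustration.

```python
# Sketch of SPACE-alongside-DORA monitoring: flag teams whose deployment
# frequency is trending up while satisfaction trends down. The input schema
# and the -0.05 satisfaction threshold are hypothetical.

def resilience_flags(teams: dict[str, dict]) -> list[str]:
    """Return names of teams showing the 'fast but burning out' pattern."""
    flagged = []
    for name, t in teams.items():
        speeding_up = t["deploy_freq_trend"] > 0           # DORA-side signal
        morale_dropping = t["satisfaction_trend"] < -0.05  # SPACE-side signal
        if speeding_up and morale_dropping:
            flagged.append(name)
    return flagged

# One team is accelerating while morale falls; the other is healthy.
flags = resilience_flags({
    "payments": {"deploy_freq_trend": +0.3, "satisfaction_trend": -0.12},
    "search":   {"deploy_freq_trend": +0.1, "satisfaction_trend": +0.02},
})
```

In practice such a check would run on rolling survey and pipeline data; the design point is that neither metric family alone triggers the alert, only their divergence does.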

Only such a comprehensive and integrated approach makes it possible to overcome the paradox of AI productivity and to form teams that are capable not only of short-term acceleration, but also of maintaining sustainable development and adaptability in the long term.

 

References:

  1. DORA. Accelerate State of DevOps. Retrieved from: https://dora.dev/research/2024/dora-report/2024-dora-accelerate-state-of-devops-report.pdf (date accessed: October 10, 2025).
  2. Yes, you can measure software developer productivity. Retrieved from: https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/yes-you-can-measure-software-developer-productivity (date accessed: October 10, 2025).
  3. Announcing the 2024 DORA report. Retrieved from: https://cloud.google.com/blog/products/devops-sre/announcing-the-2024-dora-report (date accessed: October 10, 2025).
  4. Timpka, T. (2024). The “enshittification” of online information services obligates rigorous management of scientific journals. Journal of Science and Medicine in Sport, 27(10), 665-666.
  5. DORA Report 2024 – A Look at Throughput and Stability. Retrieved from: https://redmonk.com/rstephens/2024/11/26/dora2024/ (date accessed: October 12, 2025).
  6. Gartner HR Survey Reveals Hiring for Promise Instead of Proficiency is More Effective and Efficient for Closing Skills Gaps. Retrieved from: https://www.gartner.com/en/newsroom/press-releases/2025-03-11-closing-skills-gaps-at-scale (date accessed: October 13, 2025).
  7. Gartner Says Generative AI will Require 80% of Engineering Workforce to Upskill Through 2027. Retrieved from: https://www.gartner.com/en/newsroom/press-releases/2024-10-03-gartner-says-generative-ai-will-require-80-percent-of-engineering-workforce-to-upskill-through-2027 (date accessed: October 14, 2025).
  8. Impact of AI on the 2025 Software Engineering Job Market. Retrieved from: https://www.sundeepteki.org/advice/impact-of-ai-on-the-2025-software-engineering-job-market (date accessed: October 14, 2025).
  9. Kim, J. Y., & Heo, W. (2022). Artificial intelligence video interviewing for employment: perspectives from applicants, companies, developer and academicians. Information Technology & People, 35(3), 861-878. https://doi.org/10.1108/ITP-04-2019-0173.
  10. Acharya, V. (2025). Generative AI and the Transformation of Software Development Practices. arXiv preprint arXiv:2510.10819. https://doi.org/10.48550/arXiv.2510.10819.
  11. Allam, H. (2024). Developer Portals and Golden Paths: Standardizing DevOps with Internal Platforms. International Journal of AI, BigData, Computational and Management Studies, 5(3), 113-128. https://doi.org/10.63282/3050-9416.IJAIBDCMS-V5I3P112.
  12. Clements, Z., Parmar, R., & Thomas, L. D. (2022). Measuring platform return on participation. Business Horizons, 65(2), 193-204. https://doi.org/10.1016/j.bushor.2021.02.036.
  13. Naumov, V., Zagirova, D., Lin, S., Xie, Y., Gou, W., Urban, A., ... & Zhavoronkov, A. (2025). Dora ai scientist: Multi-agent virtual research team for scientific exploration discovery and automated report generation. bioRxiv. https://doi.org/10.1101/2025.03.06.641840.
  14. Gartner HR Research Finds Organizations’ Current Talent Management Efforts Inhibit Optimal Employee and Organizational Performance. Retrieved from: https://www.gartner.com/en/newsroom/press-releases/2024-09-18-gartner-hr-research-finds-orgs-current-talent-management-efforts-inhibit-optimal-employee-and-org-performance (date accessed: October 14, 2025).
  15. Re:think: Can software developer productivity really be measured? Retrieved from: https://www.mckinsey.com/~/media/mckinsey/email/rethink/2024/05/2024-05-01d.html (date accessed: October 14, 2025).
  16. Developer Velocity: How software excellence fuels business performance. Retrieved from: https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/developer-velocity-how-software-excellence-fuels-business-performance (date accessed: October 15, 2025).
  17. Cernuzzi, L. (2024, August). Autumn of Code UC: an Experience Teaching Software Engineering Contributing to Open Source Projects. In 2024 L Latin American Computer Conference (CLEI) (pp. 1-9). IEEE. https://doi.org/10.1109/CLEI64178.2024.10700211.
  18. Azanza, M., Pereira, J., Irastorza, A., & Galdos, A. (2024, July). Can LLMs facilitate onboarding software developers? an ongoing industrial case study. In 2024 36th International Conference on Software Engineering Education and Training (CSEE&T) (pp. 1-6). IEEE. https://doi.org/10.1109/CSEET62301.2024.10662989.
  19. Tacit Knowledge Transfer in Agile Software Development. Retrieved from: https://www.diva-portal.org/smash/get/diva2:1748945/FULLTEXT02.pdf (date accessed: October 16, 2025).
  20. Liu, Y., Abi Aad, A., Maalouf, J., & Abou Hamdan, O. (2021). Self-vs. other-focused mentoring motives in informal mentoring: conceptualizing the impact of motives on mentoring behaviours and beneficial mentoring outcomes. Human Resource Development International, 24(3), 279-303. https://doi.org/10.1080/13678868.2020.1789401.
  21. Becker, J., Rush, N., Barnes, E., & Rein, D. (2025). Measuring the impact of early-2025 AI on experienced open-source developer productivity. arXiv preprint arXiv:2507.09089. https://doi.org/10.48550/arXiv.2507.09089.
  22. Afroz, S., Feng, Z., Kimura, K., Trinkenreich, B., Steinmacher, I., & Sarma, A. (2025). Developer Productivity with GenAI. arXiv preprint arXiv:2510.24265. https://doi.org/10.48550/arXiv.2510.24265.
  23. Engineering Ladders: Introduction. Retrieved from: http://www.engineeringladders.com/ (date accessed: October 18, 2025).
Information about the author

Head of Engineering, Senior Full Stack Architect, Batumi, Georgia

