Competence Penalty is a Barrier to the Adoption of New Technology

Competence Penalty Is a Barrier to the Adoption of New Technology (Gai, Hou, Tu) reports a three-part study examining why generative AI coding tools remain under-adopted even in environments where traditional barriers such as access, training, and infrastructure have largely been removed.

Study 1 draws on digital trace data from 28,698 software engineers following the internal launch of a proprietary AI coding assistant. The logs captured every interaction with the tool, including prompts sent and AI-generated code copied, avoiding the distortions of self-report. Despite extensive company-wide promotion, adoption reached only 41% after twelve months.

In the first month, adoption was 9% among male engineers and 5% among female engineers: a 4-percentage-point gap. That gap widened by month twelve: 43% of male engineers had adopted the tool compared to 31% of female engineers. Age gaps were also present, with mature-age engineers (above the median age of 32) adopting at lower rates, though the disparity was smaller than the gender gap.

Even among adopters, female and mature-age engineers used the tool less intensively, sending fewer prompts and copying fewer lines of AI-generated code.

Study 2 introduces the competence penalty. In a pre-registered code-review experiment (N = 1,026), participants evaluated identical code snippets, with or without disclosed AI assistance, attributed to either a male or female engineer.

Purported AI use reduced perceived competence even though perceived code quality was unaffected. The penalty was larger for female engineers, and male non-adopters imposed the harshest penalties, particularly on female engineers. The implication is that using productivity-enhancing AI can signal reduced independent ability, especially for those already subject to heightened scrutiny.

Study 3 tests whether engineers anticipate this penalty and avoid AI use. Anticipated competence penalty strongly predicts non-adoption and explains the gender and age gaps more robustly than perceived learning cost or productivity beliefs do. Adoption falls from 61% among engineers anticipating minimal penalty to 33% among those anticipating a high penalty.

There are two implications.

First, AI will not distribute its productivity gains evenly. Some of those who stand to benefit may not adopt because they operate under stricter competence scrutiny.

Second, mandatory AI disclosure policies (now common across organisations and academic institutions) may inadvertently amplify inequities. Transparency increases visibility, and visibility increases evaluative bias. For some groups, disclosure may be professionally riskier than silence.

For anyone working on AI governance, evaluation systems, or organisational rollout, this paper is a reminder that adoption is not just a technology initiative, but a social one.

Link to paper