Interpersonal Trust Development in GenAI-augmented Organisations (Norkin) examines how interpersonal trust forms and shifts when GenAI becomes embedded in knowledge-intensive team workflows.
GenAI is reshaping work at a pace that often outstrips organisations’ ability to recalibrate processes, norms, and expectations. While productivity gains are evident, the uncertainty introduced by autonomous content generation raises questions about reliability, accountability, and the changing nature of interpersonal relationships at work. Norkin’s study brings structure to this emerging space by situating these dynamics within established organisational trust theory, in particular how colleagues’ ability, integrity, and benevolence are evaluated once GenAI is part of daily work.
Methodologically, the study takes a qualitative approach based on semi-structured interviews. A purposive sample of nine participants was recruited for this pilot study, drawn from roles within knowledge-intensive organisations that actively use GenAI. Interview data were analysed using inductive thematic analysis, with codes generated bottom-up from the data rather than imposed a priori.
One of the most interesting insights is the risk of uneven, and potentially inequitable, trust distribution. Employees who engage in critical, high-effort GenAI use tend to be trusted more, while uncritical or opaque use can erode trust and create additional burdens for colleagues. Left unmanaged, these disparities can contribute to workplace polarisation.
This research has since informed work within our User Centred Design function, acting as the catalyst for defining how GenAI is used across roles and teams. We have begun developing broad UCD usage principles alongside role-specific guidance that clarifies expectations around transparency and critical evaluation. By introducing a set of usage guidelines, we hope to establish more consistent norms and reduce uneven trust dynamics as GenAI becomes part of everyday practice.