Study explores why teams adopt AI or not…

Teams increasingly seek to leverage technology and artificial intelligence (AI) to improve the way they collaborate and perform. A new study highlights that teams' attitudes to AI, and the autonomy they have in using such tools, can lead to vastly different adoption outcomes.

Date: 04/19/2023

Reading time: 3 min

Modern workforces increasingly rely on new technologies, including AI, which have also become a central component of teamwork. Teams can leverage tools designed to support meeting management, workflow, or productivity, or engage with stand-alone AI tools (such as the virtual assistant Alexa or translation tools). They therefore have many opportunities to work with AI, whether by their own choice or as the result of an organizational or managerial decision.

A team of researchers from the University at Buffalo, Rutgers University, Simon Fraser University, and IÉSEG School of Management has recently studied teams’ attitudes to AI and the way such technologies are implemented in organizations. Evangeline YANG, a professor at IÉSEG (People, Organizations and Negotiation department) and one of the coauthors of the study*, explains that much previous research had focused on the technical side of AI, including the way it operates.

“As an HR researcher, I was particularly interested in the human side and the way individuals and teams interact with AI.”

The researchers decided to carry out a review of top-tier journals across different disciplines to study attitudes to AI and new technology and the way it is implemented.

Forcing teams can dampen enthusiasm for AI

They found that a group already positively inclined toward AI had its “enthusiasm to collaborate dampened by being forced to use it”. In contrast, a group with negative views about AI was more likely to collaborate with AI if forced to use it.

By comparison, the results were more extreme when AI adoption was voluntary. Teams that were already positive were even more inclined to use AI, while teams with negative perceptions were even less likely to use it.

“This highlights the importance of feelings and trust in AI, when implementation is left to the team,” adds Professor YANG.

Assessing attitudes before making organizational decisions

She believes that her research shows that organizations can therefore seek to gauge employees’ attitudes to AI before deciding how, or whether, its use should be encouraged or even mandated.

This could be in the form of a survey such as the attitudes toward technology questionnaire (Edison & Geissler, 2003). In addition, certain characteristics such as employees’ age and their prior experience working with AI can also inform their inclination to work with AI (Hancock et al., 2011).

The literature shows that, overall, attitudes to AI remain fairly mixed, and this can sometimes be due to fairly simple human misconceptions. Viewpoints may even vary widely within a team.

Professor YANG explains that she is now carrying out further research on this topic to untangle the intricacies of attitudes within teams and how these can impact the implementation of such technologies.

She is also looking at some of the potential biases of AI (for example in terms of gender) and how this can impact implementation.

*More information is available in the full paper: Bezrukova, K., Griffith, T. L., Spell, C., Rice, V., & Yang, H. E. (2023). Artificial Intelligence and Groups: Effects of Attitudes and Discretion on Collaboration. Group & Organization Management, 48(2), 629–670.


Categories

Big Data & AI, Management & Society


Contributors

Huiru (Evangeline) YANG

People, Organizations and Negotiation

IÉSEG Insights

Editorial