A new report from Axios details how widespread AI has become in the workplace, pointing to a number of recent studies showing that workers use the technology far more than they are willing to admit publicly. One from Ivanti, released earlier this month, revealed that 41% of office workers used generative AI tools at work, but that only around a third were willing to admit it to their employer. The top reason for hiding the technology, cited by 36% of those who conceal it, was that they liked having a “secret advantage.” Another 30% said that they feared losing their job, or that their boss might give them more work if they knew. A further 27% reported not wanting their abilities questioned, and 24% said they hid their use of the tools because of the “negative perception” surrounding the technology.
Other data suggests that genAI tools are becoming increasingly common in job interviews as well: a survey of 2,510 verified professionals by Blind – a self-described “anonymous community app for professionals” – found that 20% had used genAI surreptitiously in their interviews. The analysis also reveals shifting attitudes about relying on these tools. “The line we’re drawing between cheating and not is kinda arbitrary,” an employee from Envestnet told Blind. “You can’t use AI, but knowledge dumps from recent interviews are okay.”
With attitudes and workplace culture growing more accommodating as these tools become more commonplace, signs of drawbacks are also emerging. An Amazon employee who spoke to Blind said that colleagues who had cheated in their interviews turned out to be weaker hires than those who had not, saying “they were so low-skilled on the job and dragged their feet for six months before resigning.”
Another concern is the quality of work produced with genAI. A 2022 study from Stanford found that programmers who used AI assistants wrote code that was “significantly less secure” than that of programmers who did not – a potentially troubling finding, given that over half of programmers now use these tools. Another study from the same year, by GitClear, found that the rise in the use of these tools correlates with a rise in code that needs to be fixed within two weeks of being written. In computer programming, as in other domains, genAI tools continue to suffer from “hallucinations” (i.e., made-up information) and faulty math.
Despite these issues, people clearly still find the technology useful across a broad range of domains, and are unlikely to give it up. The question, then, is not whether genAI is used but how. “Our research tells us that leadership plays a big role in setting the tone for creating a culture that fosters AI experimentation,” Rajeev Rajan, CTO of the software firm Atlassian, told Axios. “Be honest about the gaps that still exist.”
Perhaps in the near future its use will no longer be the dark secret it is today.