
Hello Neighbor publisher outlines troubling future of employee monitoring in AI keynote

But insists talk was just "hypothetical".

Image credit: TinyBuild

Alex Nichiporchik, CEO of Hello Neighbor publisher TinyBuild, has raised eyebrows during a keynote in which he outlined potential uses of AI tools in the workplace, including the monitoring of employee Slack messages and meeting transcriptions to help identify "potential problematic players" - a discussion he has since insisted was "hypothetical".

Nichiporchik (as reported by WhyNowGaming) was speaking at this week's Develop: Brighton conference, in a talk titled 'AI in Gamedev: Is My Job Safe?' which promised an "in-depth [look] into how [TinyBuild] adopted AI in daily practices to exponentially increase efficiency".

One part of the presentation in particular, focusing on "AI for HR", has proved especially contentious since news of its contents began to spread around the internet. Here, Nichiporchik discussed how HR could use AI to spot burnout (later described as being synonymous with "toxicity") among employees: first identify "potential problematic team members", then collate their Slack messages and automatic transcriptions from the likes of Google Meet and Zoom, and run them through ChatGPT, in a process he terms "I, Me Analysis".


"There is a direct correlation between how many times someone uses 'I' or 'me' in a meeting," Nichiporchik posited, "compared to the amount of words they use overall, to the probability of the person going to a burnout."

According to Nichiporchik, by identifying employees who 'talk too much about themselves', who 'suck up too much time in meetings' so that "nothing gets done", and who receive negative feedback in 360-degree peer reviews, it's then possible to "identify someone who is on the verge of burning out, who might be the reason the colleagues who work with that person are burning out, and you might be able to identify it and fix it early on."

This is where the exact reach of Nichiporchik's somewhat dystopian vision starts to become unclear. WhyNowGaming reports the CEO as saying TinyBuild has been experimenting with the technology retroactively on workers who've already left the company, and is now starting to use it proactively, highlighting a "first case last week, where a studio lead was not in a good place, no one told us. Had we waited for a month, we would probably not have a studio."

In a statement later provided to the website, however, Nichiporchik contradicts WhyNowGaming's account, insisting "the HR part of my presentation was a hypothetical, hence the Black Mirror reference".

"I could've made it more clear for when viewing out of context," he continued. "We do not monitor employees or use AI to identify problematic ones. The presentation explored how AI tools can be used, and some get into creepy territory. I wanted to explore how they can be used for good."

The takeaway, though, seems pretty straightforward. Regardless of Nichiporchik's intentions and TinyBuild's internal practices, there will doubtless be other CEOs considering the use of AI in at least equally nefarious ways. AI is a hot topic right now and, as Eurogamer's Chris Tapsell recently found out, one that those in the games industry, across all disciplines, have very mixed feelings about - but it's clearly not an issue that's going away.
