I’ve recently found myself back on the job market after my department was made redundant. My new life looks like this: refresh Seek; go on LinkedIn; regret the life choices that made me go on LinkedIn; refresh Seek; try to sound normal in a cover letter; cry; refresh Seek. The usual job-hunting stuff.
Late last week I was minding my own business (refreshing Seek) and I saw a job that seemed perfect for me. A disability services organisation was looking for a writer to create accessible content, which is my particular area of interest. Excited, I penned an application. I struck just the right balance of extremely knowledgeable and easy to get along with. I uploaded my CV. I submitted the form, quietly confident that I might be in with a chance.
A reply email popped up almost instantly. The weekend had definitely already begun, but I was invited to take the next step in the application process and answer some “personality” questions in a sort of pre-interview. Not with a person – with a chatbot.
Now, having been a socially isolated teenager in the 90s, I’m no stranger to intimate conversations with chatbots. And I love any opportunity to talk about myself. So I clicked the link. Sure, I thought. I can answer personality questions.
They were standard interview fare. Tell us how you overcome unexpected obstacles. Tell us how you work in a team. Tell us why you want this job. I answered them honestly, chucking in a few jokes in case the chatbot was a ruse and a real person was reading. I love being part of a team. I try to be friendly and helpful in the workplace. My biggest weakness is needing to set phone reminders to remember basic tasks. Submit. By now it was after knock-off time.
Another email soon arrived. This one wasn’t from the hiring organisation but from a third-party AI platform. The ominous subject line read, “Your personality insights, Anna.”
Nothing in the chatbot process had mentioned personality “insights”. I hadn’t opted into anything extra. And to be honest with you, being underemployed has not been terrific for my self-esteem, so I wasn’t exactly craving a Friday-night character assessment by AI.
Six “insights” were listed inside. Some of them were fine. It told me I’m always up for a challenge. I’m a positive, confident and enthusiastic person. Thanks, robot overlords, I thought. But the more I read, the more targeted they felt. The AI platform told me not everyone likes positive, confident and enthusiastic people. Actually, had I considered I might be kind of abrasive? Why did I keep getting defensive when other people made suggestions? And have I ever – ever – tried just listening for a change?
At the bottom of the email, “Coaching Tips” suggested I adapt my working style to be less unnerving for people. By this stage of the process, no human, as far as I could tell, had seen or vetted anything.
I like to think I’m pretty resilient (don’t tell the bot). I’ve been to enough therapy to mostly be open to criticism, often reflecting and sometimes even acting on it. But I was not prepared to be wiped out by an AI villain deployed by a disability nonprofit on the weekend.
I put down my non-alcoholic spritzer. I opened the company’s website. Sure enough, it’s an automated platform that uses AI to interview, screen and assess applicants. Many, many companies in Australia use this service; the site lists big brands including supermarkets, airlines, department stores and major sporting governing bodies as some of its clients.
This AI platform is driven, it reckons, by the number one complaint from job hunters, which is never hearing back after applying. Its strategy is to make sure everyone gets a response, even if that response is to tear their delicate heart to shreds. In addition to text and video chats, it wants to make every candidate “feel seen” with “personalised insights”.
In the soulless machine’s defence, I did feel seen. I’m literally proving its point by writing a defensive op-ed about its suggestion that I might get defensive about suggestions. It had tapped directly into my deepest workplace insecurities and rummaged around. It had flayed me alive and exposed my greatest fears for my career and the future of my industry. The problem wasn’t that it didn’t see me. It was that it’s a robot.
Studies show long-term unemployed people are at least twice as vulnerable to mental illness, with high risk of depression, anxiety and suicide. In this no-longer-hypothetical situation, it seems only a matter of time before an AI platform sends an unsolicited “better than no response” personality assessment to one of these people, with no supports in place if they need them.
I’ve only been job hunting for a few weeks. I still have hope. But the triple nightmare of a slow market, high cost of living and Centrelink payments below the poverty line can get serious really fast. On what planet is it preferable to be told, via large language models, that the problem might actually be you?
I thought I was only morally opposed to AI because it’s destroying the planet and stealing indiscriminately from artists. As it turns out, its impersonation of “asshole boss from my first ever job” is right up there, too.