The last three months have seen a rapid shift in the employment market. Most of the large tech players have laid off 5% or more of their global workforce, which was unthinkable in the summer of 2022. While the medium- to long-term problem of severe skills shortages in Data Science and Engineering will continue, for now the recruiting pendulum has swung (albeit temporarily) in favour of the hiring company. As more Data Scientists enter the market looking for jobs, you will inevitably see an increase in applications. The heavy lifting in the recruitment process has now shifted from attraction to assessment.
Given the current macroeconomic volatility, and the expectation to do more with less, it’s even more important that you avoid making hiring mistakes.
What ChatGPT and other AI tools mean for hiring Data Scientists/Engineers now and in the future
Just like in every other sector, AI will change recruitment significantly in the next five years. But what does this mean for you if you are hiring and assessing candidates right now?
Setting aside the early hype, it’s far too early to say exactly what ChatGPT means for hiring. However, we can be fairly certain about the future direction of travel, and also about the immediate effect on candidate assessments. Take-home tests, which are standard in most hiring processes for Data Scientists and Engineers, could become problematic very quickly.
The education sector runs more tests than probably any other sector, so it might give us some clues about where we are heading. Kevin Bryan, an associate professor at the University of Toronto, posted on Twitter recently: “You can no longer give take-home exams. I think chat.openai.com may actually spell the end of writing assignments”.
Schools in the USA have already reacted by banning the use of ChatGPT. Educators are so worried that in many areas they have stopped setting take-home essays and tests that were previously completed on home computers, and are insisting that essays are completed in school with pen and paper.
The problem with assessing candidates using take-home coding tests is that ChatGPT can already write basic-to-intermediate-level code in several languages. It’s also reasonably competent at explaining how that code works. The technology is prone to error and its output needs to be checked by a human, but it’s probably still good enough to score 60-70% on basic coding tests at the more junior end of the spectrum.
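To make that concrete, consider a hypothetical junior-level take-home task such as “write a function that counts word frequencies in a piece of text”. The task, function name, and code below are illustrative assumptions rather than an example from any real assessment, but this is the kind of solution, complete with an explanation, that tools like ChatGPT can already produce on request:

    # Illustrative sketch of a typical junior take-home exercise:
    # count how often each word appears in a piece of text.
    from collections import Counter

    def word_frequencies(text: str) -> dict:
        """Return a dictionary mapping each lowercased word to its count."""
        words = text.lower().split()
        return dict(Counter(words))

    if __name__ == "__main__":
        sample = "the quick brown fox jumps over the lazy dog the fox"
        print(word_frequencies(sample))  # {'the': 3, 'fox': 2, ...}

If a generic challenge like this is the bulk of your assessment, a candidate pasting the brief into an AI assistant can return a passable answer in minutes.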
GPT-4 is just around the corner, which means that at some point in the very near future (if not already), many take-home tests are likely to be unreliable predictors of on-the-job coding performance. This is especially true if the tests are basic coding challenges or generic in nature.
So what can you do now to improve your assessment process?
For practical steps you can take now, read the full article from Damien here:
https://issuu.com/datasciencetalent/docs/the_data_scientist_mag_issue_3_digital/s/24371635