It seems like they're only evaluating phone-screen methods against their own pre-designed coding interview problem. But what if there are issues with that problem itself?
There seems to be a big assumption here that "our programming questions are going to be good and predictive, even if everyone else's are bad." What if being able to describe a past (real) project in depth correlates just as well (or better) with on-the-job performance as being able to design and code one of their artificial problems? Or what if those artificial problems just don't correlate that well with on-the-job performance in the first place?
It is definitely harder to BS-detect/grade, though.
They want to re-validate against actual job performance in the future, which is nice, but then it seems like they're throwing out ideas awfully early.