The webapp tracks a lot of stuff - including a play-by-play video of what you type. It maps out when you're in the pad, when you're out of the pad, when you paste something, when you're looking at an external source, etc. [They go into pretty great detail outlining their tracking and assessment features here](https://www.youtube.com/watch?v=fUbtr35YfAg).
It never crossed my mind that these interviewing tools were any different from a live collaborative editor until today. This was all spurred by a rejection (today) from a tech screen with pretty simple questions - I couldn't figure out why they had rejected me outright, so early, and without any feedback. Admittedly, it could have been for anything, and realistically I absolutely could have done something wrong in their eyes. But - determined to figure out what the problem could have been (having now garnered mild imposter syndrome) - I showed my S.O. the take-home code project, and they pointed out that the reviewer may have been unnecessarily cold and vague because I copied the code out to write it in Vim. From their perspective, it looks like I searched the internet for 40 minutes and then pasted in some code that worked on the first try.
Again - they absolutely may have cut me for a plethora of other reasons. But at a minimum, this experience has opened up a whole new world of potential pitfalls related to the web editor itself (especially if you do most of your work outside of it). In retrospect, I'm not sure why I didn't think about this (slightly embarrassed), but I figured there might be others who thought/do the same.
Anyways - know that interview code pads aren't all the same. Some track a ton of stuff - even live video. If you're like me and like to copy the code out into your own environment to use your own custom tools, just know that you might look pretty suspicious from the reviewer's point of view.