About Corey Quinn
Over the course of my career, I’ve worn many different hats in the tech world: systems administrator, systems engineer, director of technical operations, and director of DevOps, to name a few. Today, I’m a cloud economist at The Duckbill Group, the author of the weekly Last Week in AWS newsletter, and the host of two podcasts: Screaming in the Cloud and, you guessed it, AWS Morning Brief, which you’re about to listen to.
Links
Transcript
Corey: Welcome to AWS Morning Brief: Whiteboard Confessional. I’m Cloud Economist Corey Quinn. This weekly show exposes the semi-polite lie that is whiteboard architecture diagrams. You see, a child can draw a whiteboard architecture, but the real world is a mess. We discuss the hilariously bad decisions that make it into shipping products, the unfortunate hacks the real world forces us to build, and why the best name for your staging environment is “theory”. Because invariably whatever you’ve built works in theory, but not in production. Let’s get to it.
Sponsorships can be a lot of fun sometimes. ParkMyCloud asked, “Can we have one of our execs do a video webinar with you?” My response was, “Here’s a better idea. How about I talk to one of your customers instead, so you can pay me to make fun of you.” And it turns out, I’m super-convincing. So, that’s what’s happening. Join me and ParkMyCloud’s customer, Workfront, on July 23rd for a no-holds-barred discussion about how they’re optimizing AWS costs, and whatever other fights I manage to pick before ParkMyCloud realizes what’s going on and kills the feed. Visit parkmycloud.com/snark to register. That’s parkmycloud.com/snark.
Welcome. I am Cloud Economist Corey Quinn, and this is the AWS Morning Brief: Whiteboard Confessional; things that we see on whiteboards that we wish we could unsee. Today I want to talk about the worst whiteboard confessions of all time, and those invariably tend to circle around what we ask candidates to do on a whiteboard during job interviews. There are a whole bunch of objections, problems, and other varieties of crappy opinions around whiteboarding as part of engineering job interviews, but they're all a part of the larger problem, which is that interviewing for engineering jobs fundamentally sucks. There are enough Medium articles on how trendy startups have cracked the interview to fill an S3 bucket. So, I'm going to take the contrarian position that all of these startups and all of these people who claim to have solved the problem suck at it.
And these terrible questions fall into a few common failure modes, most of which I've seen when they were levied at me back in my engineering days, when I was exercising my core competency of getting rapidly ejected from other companies. So, I spent a lot of time doing job interviews, and I kept seeing some of the same things appear. And they're all, of course, different. But let's start with some of the patterns. The most obnoxious one by far is the open-ended question of how you would solve a given problem. And as you start answering the question, they're paying more attention than you would expect. Maybe someone's on their laptop, quote-unquote ‘taking notes’ an awful lot. And I can't ever prove it, but it feels an awful lot—based upon the question—like this is the kind of problem where you could suddenly walk out of the interview room, walk into the conference room next door, and find a bunch of engineers currently in a war room trying to solve the question you were just asked.
And what I hate about this pattern is that it's a way of weaseling free work out of interview candidates. If you want people to work on real-world problems, pay them. One of the best interviews I ever had was with a company that no longer exists called Three Rings Design. They wanted me to stand up some three-tiered web app, turn on monitoring, and do standard basic DevOps-style stuff; they gave me a set of AWS account credentials. But what made them stand out is that they said, “Well, this is a toy problem. We're not going to use this in production, surprise. It's a generic question. But we don't believe in asking people to do work for free, so when you submit your results, we'll pay you a few hundred bucks.” And this was a standard policy they had for everyone who made it to that part of the interview. It was phenomenal, and I loved that approach. It solved that problem completely. But it's the only time I've ever seen it in my entire career.
A variant of this horrible technique is to introduce the same type of problem, but with the proviso that this is a production problem we had a few months ago. It's gone now, but how would you solve it? Now, on its face, this sounds like a remarkably decent interview question. It's real-world. They've already solved it. So, whatever answer you give is not likely to be something revolutionary that's going to change how they approach things. So, what's wrong with it? Well, the problem is that in most cases, the right answer is going to look suspiciously like whatever they did to solve the problem when it manifested.
I answered a question like this once with, “Well, what would strace tell me?” And the response was, “What does strace do?” I explained that it attaches to a process and looks at the system calls that process is making, and their response was, “Oh, yeah, that would have caught the problem. Huh. Okay, pretend strace doesn't exist.” Now it's not just a question of how you would solve the problem, but how you would solve the problem while being limited to their particular, often myopic, view of how systems work and how infrastructure needs to play out. This manifests itself the same way, by the way, in the world of various programming languages and traditional developer engineering interviews. It's awful because it winds up forcing you to see things through a perspective that you may not share. Part of the value of having you go and work somewhere is to bring your unique perspective. And, surprise, there are all these books on how to pass the technical interview. There are far fewer books on how to freaking give one that doesn't suck. And I wish that some people would write that book and that everyone else would read it. You can tell an awful lot about a company by how they go about hiring their people.
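For anyone who hasn't reached for strace before, here's a rough sketch of the kind of thing it surfaces. This isn't from the episode; it's a minimal, illustrative Python wrapper that assumes a Linux box with strace and the coreutils timeout command installed, and that you have permission to trace the target process.

```python
# A minimal sketch, not production tooling: shell out to strace to summarize
# the system calls a running process is making. Assumes Linux, strace and the
# coreutils `timeout` command installed, and permission to ptrace the target
# (typically root, or the same user depending on your ptrace settings).
import subprocess
import sys


def summarize_syscalls(pid: int, seconds: int = 5) -> str:
    """Attach strace to `pid` for a few seconds and return its syscall summary."""
    try:
        # strace -p attaches to an existing process; -c prints a table of
        # syscall counts and timings on exit. `timeout` detaches after
        # `seconds` so this doesn't hang indefinitely.
        result = subprocess.run(
            ["timeout", str(seconds), "strace", "-c", "-p", str(pid)],
            capture_output=True,
            text=True,
        )
    except FileNotFoundError:
        return "strace (or timeout) isn't installed on this machine."
    # strace writes its output to stderr, not stdout.
    return result.stderr


if __name__ == "__main__":
    print(summarize_syscalls(int(sys.argv[1])))
```

Pointing something like this at a misbehaving process prints a count of which system calls it keeps making, which is often enough to spot the “oh, it's stuck retrying the same failing call over and over” class of problem the interviewer was describing.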
Corey: This episode is sponsored in part by ChaosSearch. Now their name isn’t in all caps, so they’re definitely worth talking to. What is ChaosSearch? A scalable log analysis service that lets you add new workloads in minutes, not days or weeks. Click. Boom. Done. ChaosSearch is for you if you’re trying to get a handle on processing multiple terabytes, or more, of log and event data per day, at a disruptive price. One more thing, for those of you that have been down this path of disappointment before, ChaosSearch is a fully managed solution that isn’t playing marketing games when they say “fully managed.” The data lives within your S3 buckets, and th...