160: Decision Making 101
Podcast | Does Not Compute
Publisher | Spec
Media Type | Audio
Categories Via RSS | Technology
Publication Date | Feb 05, 2019
Episode Duration | 00:53:06

Sponsored by Sentry.io

Relying on customers to report errors is not good. It's rude to customers and bad for business.

Ideally, this would be solved easily with tests. Why not just cover every scenario with a test? Then life would be perfect and fine and great. Because here in reality, humans are pretty bad at writing tests. Not just because we’re all kinda lazy and maybe a little dumb, but also because we can’t anticipate every single way users are going to interact with our product. They might do something really, really, really stupid (or something really, really, really smart) that we didn’t think about.

That’s why Sentry tells you about errors in your code before your customers have a chance to encounter them.

Not only do we tell you about them, we also give you all the details you’ll need to be able to fix them. You’ll see exactly how many users have been impacted by a bug, the stack trace, the commit that the error was released as part of, the engineer who wrote the line of code that is currently busted, and a lot more.

Your code is broken. Let’s fix it together. Sentry.io.

Listener Question

@ryanmcdonough - It'd be awesome if you had any product-wise decisions you had to make recently and how you came to your final choice, e.g. how to structure products, or Elasticsearch compared to other options. Always interested to hear people's decision processes.

Rockwell's Notes on Decision Making

Things That Work

  • Researching the options - reading widely, taking notes, and saving bookmarks
  • Pros/cons lists
  • Forcing yourself to attach quantities to things - e.g. hours
    • The number doesn't have to be 100% correct, but coming up with a real one forces you to think about factors like overall time, complexity, and time spent on communication, stalling, and waiting
  • Writing specs with no intention of following through
  • Analytics - for business decisions
  • Metrics - for performance decisions
  • Being decisive
    • You don't want to second-guess yourself halfway through; it leads to dead ends
  • Don't just look for positives
    • Things to look at instead: community size/support, quantity and quality of Stack Overflow questions
    • Look for things that cannot be done, are not well supported, etc.
    • e.g. when car shopping, ended up on https://www.carcomplaints.com/ reading about CR-V issues
  • Interacting with your users
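The "attach quantities to things" idea above can be sketched as a tiny script. The option names, cost categories, and hour estimates below are all hypothetical, invented purely to show the shape of the exercise; the point is that writing down real numbers forces you to account for build time, learning curve, and waiting on others.

```python
# Hypothetical options, each with rough hour estimates per cost category.
# Precision doesn't matter; committing to a number does.
options = {
    "build it ourselves": {"build": 40, "learning": 4, "waiting": 2},
    "adopt a library": {"build": 10, "learning": 12, "waiting": 6},
    "outsource it": {"build": 2, "learning": 1, "waiting": 30},
}

def total_hours(costs):
    """Sum every estimated cost category for one option."""
    return sum(costs.values())

# Rank the options from cheapest to most expensive in total hours.
for name, costs in sorted(options.items(), key=lambda kv: total_hours(kv[1])):
    print(f"{name}: ~{total_hours(costs)} hours")
```

Even if every estimate is off, the relative ranking tends to survive, and filling in the table surfaces costs (like waiting) that a gut-feel comparison skips.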

Things That DON'T Work

  • Jumping right in
    • Going with the first thing you find
  • Assigning/sorting by priorities
  • Editorialized reviews - a mixed bag; take them with a grain of salt
    • Told a friend about starting Elixir and heard the community was supposedly toxic
    • Kept an eye out but never saw it; found quite the opposite
  • Trusting marketing language
    • Lends itself to hyperbole - every product, service, and technology markets itself as the best thing since sliced bread
  • Taking user feedback literally

Things Mentioned

Leave us a review

Last but not least, if you haven't rated or reviewed the show yet and you'd like to do us a huge favor, you can do so by clicking here!

Show Notes Archive

If you're looking for a link we've mentioned in the past, head on over to the Does Not Compute site! We've even included a search tool for you to use to find episodes that touch on specific topics.

Join Us On Spectrum

If you have enjoyed the show so far, reach out to us on Twitter at @seanwashbot and @Schrockwell, or join us in the Spectrum community at https://spectrum.chat/specfm/does-not-compute!

In episode 160 of Does Not Compute, Sean and Rockwell discuss how they make product decisions, from technical details to overall project strategies.

