Testing a low-fidelity prototype hurts, but it’s good!

For the mobile app I am working on with my buddy Nirav, I recently tried out usertesting.com to get feedback on the app design. This company takes a URL/app, sends it to a panel of their users, and then provides a video of each tester’s use of the URL/app along with a written summary from that user.
Nirav and I have built a working jQuery Mobile prototype that is very rough. Even though it was embarrassing to test something that is so low-fidelity, we thought the feedback would be valuable and decided to see how the process worked with usertesting.com.
The first learning was an “ops” one: give the testers a super easy URL. Right now, we have a long and convoluted URL from Amazon’s EC2 service, full of characters that require keyboard toggling on a mobile phone. The first three testers had a heck of a time typing that terrible URL on an iPhone keyboard. I couldn’t believe that the usertesting folks didn’t get the URL onto the users’ mobile devices for them. When I saw that in the videos, I realized a URL shortener would be a simple fix. The next test I ran used a short URL from goo.gl, and users had no problem with it. Why wouldn’t usertesting.com just text or email their users the URL? Anyway…
The next learning, which I have heard from Lean Startup folks before, was that low-fidelity prototypes yield great feedback. I’ve run many usability tests in my 15+ years of working in Silicon Valley, but this prototype is rough with a capital “R”. There were big problems, like decimals shown to 4 places and only a back button for navigating the app (users got so lost even though there are only 4 screens!). But beyond those, I learned about a crucial problem we need to resolve before a user can grok the real benefit of the app. That was a huge learning, and it would have taken days longer to get if we had tried to polish the prototype first. It was seriously embarrassing to watch the videos. It was hard, and I had to stop them and walk away after seeing those glaring bugs.
I should call out that I had a number of emotional barriers around running a test like this. I thought the prototype wasn’t good enough, and if it wasn’t good enough, then users would hate us. We’d be blacklisted in some way. I thought somehow we’d be tarnished by testing something so unpolished. None of that happened. Now it seems so odd to have held up experiments like this in the past. The consequences were non-existent, and the benefits are moving us forward.