Get the template

Fill out the form below (even if you’re already an email subscriber!) and you’ll immediately receive the testing data template spreadsheet and instructions on how to use it. 


You’re here because you have a product, a prototype, a service or a concept that needs testing, and you don’t have a budget for an expert user researcher. No problem!


While there are many benefits to hiring an external researcher, sometimes it’s not possible for your project. The good news: by running your own testing, you’ll still gather plenty of data to drive your next phase of development and keep your project moving forward.


If you don’t yet have a product, service, or concept to test, and instead want to gather information to begin, that’s called User Research. This guide won’t help you there, but if you sign up to receive emails you’ll be notified when research tools are available (hint: soon!).





User Testing is NOT:

  • Training the user on new concepts
  • A tutorial
  • A guided tour
  • Instructional in any way
  • Done on yourself

User Testing is:

  • Validation of a hypothesis or new concept
  • Confirmation of decisions before investing in full development
  • Observation of the user's interaction with the product, process and workflow
  • A test to see if your UX (User Experience) is intuitive
  • Insight into the user's perspective

Step 1: Build your test.

User testing is ideally done on a mockup, prototype, or simulation, though often a product is fully built before testing (less than ideal, but still workable). Tests are brief, consistent, and repeatable. An average test should have between 3 and 5 tasks, and each task should take no more than a minute or so to complete. An example task for testing a wedding photographer’s website might be to find the pricing page. Your test will have room for questions between tasks, and the whole session should fit within a 30-minute window, so keep your tasks simple.


You also want to be able to tell exactly where your user struggles, so keeping tasks simple makes the bottlenecks in the flow obvious.


Your test could be a paper representation of a digital product; it could be a digital mockup (built with a prototyping tool like InVision, Adobe XD, Sketch, or something else); it could be a simulation of a retail environment set up in a classroom; or any other representation of the real thing. It does not need to be a high-polish finished product, but it does need to give you enough of the experience to observe the user’s interaction within it.
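If it helps, the test-sizing guidelines above (3-5 tasks, about a minute each, everything inside a 30-minute window) can be sanity-checked as plain data. This is an illustrative sketch with made-up tasks and time budgets, not part of the template:

```python
# Hypothetical test plan: each task has a short prompt and a rough time
# budget in minutes for the task itself.
tasks = [
    {"prompt": "Find the pricing page", "minutes": 1},
    {"prompt": "Find a photo from an outdoor wedding", "minutes": 1},
    {"prompt": "Start a booking inquiry", "minutes": 2},
]

QUESTION_BUFFER = 4    # minutes of follow-up questions after each task
INTRO_AND_WRAP_UP = 6  # welcome, consent, and closing questions

total = sum(t["minutes"] + QUESTION_BUFFER for t in tasks) + INTRO_AND_WRAP_UP

assert 3 <= len(tasks) <= 5, "aim for 3-5 tasks"
assert total <= 30, f"plan runs {total} minutes; trim tasks or questions"
print(f"{len(tasks)} tasks, about {total} minutes total")
```

The buffer numbers are assumptions; the point is simply that task time plus question time has to fit the window before you put a tester in front of it.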




You are not your own user, and neither is anyone who has worked on this project. If you have a clearly defined user persona, target 5-12 people fitting that description for the test. If you are not sure who your user is, figuring that out may be a better place to start than testing. You can crowdsource these users via social media, but know that you are getting an already biased sample: they are (1) on social media, (2) somehow connected to you, and (3) algorithmically predisposed to see your post via the social channel you chose.


It is a good idea to offer some sort of compensation for your testers’ time, though please make sure you are not in violation of any code of ethics. When I was running studies for the public sector, police officers (my users) were not able to accept any type of compensation and offering could potentially jeopardize their ability to license the product we were testing. 


Explain to your users beforehand what they can expect in a test.



Will you do this in person or remotely? Both are feasible and acceptable. For in-person testing, you’ll need a private space that lets you simulate the environment of a user’s real engagement with the product. For example, if it’s online shopping, they can be seated at a desk or on a sofa with an iPad, but if it’s an in-store experience, it may require a simulated retail environment. For remote testing, you’ll need a video chat program and either the ability to share your screen and hand control of your cursor and keyboard to your user, OR the ability to view their screen and send them a prototype they can interact with.


Schedule your tests close together, but leave at least 15 minutes of downtime after every two tests in case you get backlogged. Calendly, Doodle, HubSpot, or really any other scheduling tool will work.
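Any scheduling tool will space sessions for you, but the pattern is simple enough to sketch. This assumes 30-minute sessions, a 15-minute buffer after every second test, and a hypothetical 9:00 start:

```python
from datetime import datetime, timedelta

def schedule(start, n_tests, test_minutes=30, break_minutes=15):
    """Back-to-back sessions with a buffer after every second test,
    so one late start can't snowball through the whole day."""
    slots, t = [], start
    for i in range(1, n_tests + 1):
        slots.append(t)
        t += timedelta(minutes=test_minutes)
        if i % 2 == 0:  # downtime after every two tests
            t += timedelta(minutes=break_minutes)
    return slots

# Example: five tests starting at 9:00 on an arbitrary day
for s in schedule(datetime(2024, 5, 6, 9, 0), 5):
    print(s.strftime("%H:%M"))
```

The date and session counts are placeholders; adjust the buffer to taste, but keep some slack in the day.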




Plan on technology disappointing you 100% of the time. There’s nothing more embarrassing than a failed test, especially because it will be doubly hard to get another round of users to test.




A moderator:

  • Remains positive, upbeat, supportive, and on the side of the user
  • Reads instructions clearly, repeats if necessary
  • Lets the user know that the design is being tested, NOT the user, and that a failed test means a failed design, not a failed user
  • Provides very limited help and guidance, but may encourage a user to talk it out if they are feeling lost
  • Can compare workflows to ones a user may already know in order to inspire confidence or jog memory (e.g., the add-to-cart feature on Amazon)
  • Records insights from the user, positive or negative, without interpretation

A moderator does not:

  • Tell the user how to perform the task
  • Communicate exact steps to user
  • Provide excessive assistance
  • Justify design decisions to the user
  • Help users pass the test
  • Tell the user they are wrong
  • Provide negative feedback on users' suggestions

If you can avoid it, do not moderate a test of your own product. Get an outsider who was not directly involved to run the test; watching users struggle with your design will be torture for you. Do not sit in as an observer, and do not allow any other observers in the test. Do not test more than one person at a time. You may record the test if you decide to, but you must inform the tester that they are being recorded.


Time each task. This data will be important.


At the end of each task, ask the user a series of questions. The same questions every time. In the template, you’ll find I’ve provided columns for 6 questions. They are:


1. On a scale of 1-10, how confident did you feel in completing that task?

2. On a scale of 1-10, how easy was that task to perform?

3. On a scale of 1-10, how likely would you be to recommend that task for the final design?

4. What was frustrating about that task?

5. What did you like about that task?

6. What would you recommend we do to make that task better?


I usually follow the end of the full test with two additional questions:


7. Overall, what are your thoughts on the product/service/whatever it is?

8. Any other thoughts you’d like to share?


Some of these questions are intentionally open-ended to give me an opportunity to learn even more from the user, and sometimes their responses to the first three questions warrant a little digging. If they say a “1” but I felt like they completed the task easily, I’ll ask them something like “What would have made you rank it a 10?”
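Once the tests are done, the task times and the 1-10 answers are easy to summarize. A minimal sketch, with made-up results and column names that only loosely mirror the template (the real template’s columns may differ):

```python
from statistics import mean

# Hypothetical results: one row per user per task, covering task time
# plus the first two 1-10 questions (confidence and ease).
results = [
    {"task": "Find the pricing page", "seconds": 42, "confidence": 9, "ease": 8},
    {"task": "Find the pricing page", "seconds": 95, "confidence": 4, "ease": 5},
    {"task": "Find the pricing page", "seconds": 51, "confidence": 8, "ease": 9},
]

# Group rows by task so each task gets its own summary line.
by_task = {}
for r in results:
    by_task.setdefault(r["task"], []).append(r)

for task, rows in by_task.items():
    print(
        f"{task}: avg {mean(r['seconds'] for r in rows):.0f}s, "
        f"confidence {mean(r['confidence'] for r in rows):.1f}, "
        f"ease {mean(r['ease'] for r in rows):.1f}"
    )
```

Averages like these point you at the weakest task; the open-ended answers (questions 4-6) tell you why it was weak.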


