Have you ever heard from a customer that your software was buggy, slow or unreliable? Or that a particular feature didn’t function exactly as intended? You’re not alone. Every software company, product and development team has experienced customer feedback at some point. But the best time to receive this type of user-generated feedback is before the product launches to the market, not after.
Most organizations complete user-testing with employees (called dogfooding) or with real customers at various stages of the software development process to eliminate issues and improve the user experience. However, not all user-testing is the same. And herein lies the problem.
Several common mistakes during the testing process make it difficult for software developers to collect enough actionable data and engineer the right solution into the product. Sometimes these mistakes stem from a lack of standardization or consistency across the testing process. Often, they come down to skipping crucial testing steps, failing to follow up with testers properly or being unable to pull the most useful feedback from reams of user input.
Here are five common mistakes that surface during user testing and how anyone managing user tests can avoid them to save time, money and headaches. After all, running a great test and improving software before it goes live is the ultimate goal.
Mistake #1: Not Starting with a Plan
Not starting your software test with a plan usually means you will end your test with scattered results that don’t support your objectives. As tempting as it can be to jump into testing what you think you need to test without a full plan, planning is critical for user testing success. Meet with stakeholders and determine what you are hoping to learn from the user test. Balance this with how much time you have to complete the test and use your experience with the product to determine what will have the most impact on customer experience. With this information, you can start to plan what features you will test and how long each test will run.
Your plan should also include the tester criteria you will need to ensure you test all the required features. For example, if one of your test objectives is to see how your software works on mobile devices, you need to recruit a segment of testers who meet that requirement. You may also need to build both iOS and Android users into the tester qualifications.
The plan should also include the testing schedule: when recruiting begins, when testing officially starts, when surveys will be sent and when you will hold status meetings with stakeholders. Take the time to plan your test so everyone has a clear understanding of the goals, the expectations and the role each person plays.
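To make this concrete, here is a minimal sketch of how a test plan could be captured as structured data so objectives, tester segments and schedule milestones live in one place. The field names, segments and dates are hypothetical examples under the assumptions described above, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TesterSegment:
    name: str       # e.g. "iOS users"
    criteria: str   # qualification used during recruiting
    count: int      # how many testers to recruit

@dataclass
class TestPlan:
    objectives: list[str]           # what stakeholders hope to learn
    features_under_test: list[str]  # which features the test covers
    segments: list[TesterSegment]   # tester criteria needed to cover the objectives
    recruiting_starts: date
    testing_starts: date
    survey_dates: list[date]
    status_meetings: list[date]

# Hypothetical example: a test with a mobile objective, so iOS and Android
# users are planned into the tester qualifications.
plan = TestPlan(
    objectives=["Verify the checkout flow works on mobile devices"],
    features_under_test=["checkout", "saved payment methods"],
    segments=[
        TesterSegment("iOS users", "iOS 17 or later", 15),
        TesterSegment("Android users", "Android 14 or later", 15),
    ],
    recruiting_starts=date(2024, 5, 1),
    testing_starts=date(2024, 5, 13),
    survey_dates=[date(2024, 5, 17), date(2024, 5, 24)],
    status_meetings=[date(2024, 5, 20), date(2024, 5, 27)],
)
```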
Mistake #2: Using a One-Size-Fits-All Feedback Form
Valuable tester feedback doesn’t just help identify and fix glitches. It allows testers to share honest opinions about how a software application works. This feedback includes ideas to help improve the software, issues related to coding and user experience, and praise for the features that users like most. While some of these insights may not be as urgent as a critical error, these ideas and praise contextualize issues, strengthen the product roadmap, and signal what’s working and what could be improved.
But collecting this level of feedback requires developers and testing managers to customize feedback forms for each user-testing project. Setting up the forms correctly and customizing them to each product and each test helps teams efficiently analyze user insights while still prioritizing fixes. With generic or one-size-fits-all testing forms, test managers are at risk of missing critical feedback that may not fit into a standard testing feedback form.
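As an illustration of what per-test customization might look like in practice, here is a minimal sketch in which each test gets its own form definition instead of a reused generic template. The categories, fields and helper function are hypothetical examples, not a real platform's schema.

```python
# Shared fields every test needs, including room for ideas and praise,
# not just bug reports.
BASE_FIELDS = [
    {"name": "summary", "type": "text", "required": True},
    {"name": "category", "type": "choice",
     "options": ["bug", "usability issue", "idea", "praise"], "required": True},
    {"name": "severity", "type": "choice",
     "options": ["blocker", "major", "minor", "n/a"], "required": False},
]

def build_form(feature: str, extra_fields: list[dict]) -> dict:
    """Combine the shared fields with questions specific to this feature/test."""
    return {"feature": feature, "fields": BASE_FIELDS + extra_fields}

# Customizing the form for a hypothetical mobile checkout test.
checkout_form = build_form("mobile checkout", [
    {"name": "device_model", "type": "text", "required": True},
    {"name": "steps_to_reproduce", "type": "text", "required": False},
    {"name": "would_use_again", "type": "choice",
     "options": ["yes", "no", "unsure"], "required": True},
])
```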
Mistake #3: Siloed Data
There are two key reasons why centralized feedback data is critical to the success of any test. First, with so much information coming in, engineers and QA teams need to see it all in context. Siloing data across multiple spreadsheets, emails and software platforms makes it difficult to interpret feedback in aggregate, address important fixes properly or report on test progress.
Second, scattering data across numerous systems creates privacy concerns. When a tester decides they no longer want to be involved in a test, companies are legally bound to scrub the tester's data from every place it is stored. Spreading data across systems makes it more difficult (and legally riskier) to ensure the right data is deleted from all of them.
While many testing teams still rely on manual-heavy processes, it is worth investing in a modern testing platform that centralizes all testing data. This can greatly reduce the time spent copying, pasting and manually combining data in other systems such as Jira, shortening the delay between identifying issues and having developers and engineers address them. It also keeps tester data secure and reduces privacy concerns.
Additionally, a platform that centralizes data provides clean dashboards that help testing teams quickly analyze user feedback, monitor progress, and develop and share test reports. That makes it easier to incorporate tester feedback into the software and to answer the questions stakeholders are asking.
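For teams not yet on a centralized platform, even a little automation can cut down on copy-and-paste. Below is a minimal sketch that files one piece of tester feedback as an issue through Jira Cloud's standard "create issue" REST endpoint; the endpoint is real, but the base URL, credentials, project key and feedback record are hypothetical placeholders.

```python
# Minimal sketch: push one tester-feedback record into Jira instead of
# copying and pasting it by hand. All values below are placeholders.
import requests

JIRA_BASE_URL = "https://your-company.atlassian.net"     # hypothetical
AUTH = ("bot@your-company.com", "api-token-goes-here")   # hypothetical credentials

def create_issue_from_feedback(feedback: dict) -> str:
    """Create a Jira issue from a tester-feedback record and return its key."""
    payload = {
        "fields": {
            "project": {"key": "TEST"},   # hypothetical project key
            "summary": feedback["summary"],
            "description": feedback["details"],
            "issuetype": {"name": "Bug"},
        }
    }
    resp = requests.post(
        f"{JIRA_BASE_URL}/rest/api/2/issue",
        json=payload,
        auth=AUTH,
        headers={"Content-Type": "application/json"},
    )
    resp.raise_for_status()
    return resp.json()["key"]

# Example feedback record as it might arrive from a test survey.
issue_key = create_issue_from_feedback({
    "summary": "Checkout button unresponsive on iOS 17",
    "details": "Tester reports the Pay button does nothing after entering card details.",
})
print(f"Filed as {issue_key}")
```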
Mistake #4: Tester Burnout
Testers are busy. They're balancing time spent testing the product with daily activities like work, school, dinner and picking the kids up from sports. It's important not to ask too much of testers; overloading them can cause them to abandon the test or provide vague, less detailed feedback.
The number of features tested and the amount of time needed to test them vary by software and by tester, but on average, testers can complete three to four feature tests per week. And while it's tempting to get as much out of a tester as possible, managing how much time they spend on testing will yield the valuable, detailed feedback that does the most to improve the product. As a starting point, consider how much time users typically spend engaging with your product and add an hour of padding for testers to complete test-specific activities and provide feedback.
Mistake #5: Not Thanking Testers
Testers are volunteers, and the best volunteers do the work because they’re passionate about helping out. That said, their hard work shouldn’t be taken for granted. After all, you need their insights much more than they need the experience of testing. It’s important to reward testers for their dedication to making a product better.
A few great ideas for how to say “thank you” are coupon codes, first access to new features and brand swag. Even a sincere thank you note or recognition on social media can be enough to make testers feel special and appreciated. Testing teams want testers to walk away with a good impression of working with your brand so they’ll be willing to help out the next time.
In general, ensuring that you are engaging with and responding to testers throughout the process (also known as closing the feedback loop) will show that you are actively engaged in their experience. This ultimately leads to better tester participation, higher quality feedback and increased brand loyalty.