Software QA and testing are a huge industry in their own right. We have watched bug-tracking tools evolve from Bugzilla and Mantis in the early days to modern tools such as Linear. Early in my career as a technical writer, I saw how meticulous QA teams were about their job: identifying granular use cases to find issues in the products.
As I moved from content to design to products, I did not see much change in QA work in the broader sense. QA continues to flag usability issues in forms, error states, missing validations, system behavior gaps, empty-state concerns, truncated text, text overflowing or wrapping in containers, misplaced user story mapping, and missing context in user actions. So much has changed in technology and in our workflows: from SVN to Git, from Photoshop to Figma, in code editors, and in the new category of project management tools. Yet the quality of digital product work has seen little or no progress.
I wonder why we have normalized software bugs: ordinary bugs and issues in the code, in system behavior, in design, or in interactions. We even have categories and severity levels of bugs. A never-ending log? A perfectly acceptable way of working in an organization?
In every organization where I have worked, in any role or working model, the engineers and the product team work in a certain way because they have a QA team to test the product. With very few possible exceptions, I sense this gives them a license not to be fully attentive in their work, so they never build their capacity to do quality work.
How have we allowed QA, and bugs, to become such an industry in itself? Is reducing the number of open bugs from 72 to 37 in a week really an achievement?
In most cases, if not across the entire roadmap, programmers can run and use their own code to see the error states and to anticipate flaws and gaps in product usability. For example, if programmers simply filled in the online forms they design and ship, organizations should see a considerable decline in their support tickets. Countering this sentiment with the argument that *it costs more to let the programmers do the testing* rests on a misplaced assumption. We are talking about the quality of work, and about how much it costs to run the entire QA lifecycle: systems, workflows, tools, talent, and the communication around it.
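To make the sentiment concrete, here is a minimal, hypothetical sketch of what that self-review could look like for a signup form. The fields, limits, and names are illustrative assumptions, not from any specific product; the point is only that a programmer can exercise their own form and its error states before QA ever sees it.

```ts
// Hypothetical signup form with a couple of fields and limits (illustrative only).
type SignupForm = { email: string; displayName: string };

// Validate the form the way a user would experience it: empty fields,
// malformed email, and text long enough to truncate in the UI.
function validateSignup(form: SignupForm): string[] {
  const errors: string[] = [];
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(form.email.trim())) {
    errors.push("Enter a valid email address.");
  }
  if (form.displayName.trim().length === 0) {
    errors.push("Display name cannot be empty.");
  }
  if (form.displayName.length > 40) {
    errors.push("Display name is limited to 40 characters."); // avoids truncated text in containers
  }
  return errors;
}

// Quick self-checks the programmer can run before shipping (e.g. with ts-node).
console.assert(validateSignup({ email: "a@b.co", displayName: "Asha" }).length === 0);
console.assert(validateSignup({ email: "not-an-email", displayName: "" }).length === 2);
console.assert(validateSignup({ email: "a@b.co", displayName: "x".repeat(50) }).length === 1);
console.log("signup form self-checks passed");
```

A few minutes of this kind of self-testing covers exactly the class of issues (empty states, missing validations, overflowing text) that otherwise turn into bug tickets.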
Also, if writers are expected to be meticulous in their craft, to review and proofread their work before publishing it online, why can't programmers own the quality of their work, at least to meet minimum usability standards? (Of course there are exceptions where writers need editors' support, but bugs and QA are normalized on a much larger scale, in every digital product team.)
My related post—the origin of a support ticket is in the design stage.