Free template
Free, ready-to-use QA testing SOP template to standardize how your team plans, executes, and reports on quality assurance testing. Use this QA testing SOP template to establish consistent testing standards, catch defects before they reach production, and ship reliable software with confidence. Copy, customize, or create it in Folge with screenshots.
A QA Testing Standard Operating Procedure (SOP) is a documented process that QA engineers and development teams follow to plan test cycles, write and execute test cases, log defects, and report results so that every release meets your quality bar before it reaches users.
Without a standardized QA testing process, teams rely on ad-hoc checks, miss critical regressions, and ship bugs that erode user trust. This template gives your QA and development teams a repeatable framework to review requirements, write thorough test cases, configure environments, execute manual and automated tests, triage bugs by severity, and obtain sign-off before release — all tracked in your test management and bug tracking tools.
Follow a structured workflow from test planning through sign-off so every test cycle is thorough, traceable, and consistently documented
Understand exactly what QA expects, how bugs will be reported, and what criteria must be met before code moves to production
Gain visibility into test coverage, release readiness, and defect trends so you can make informed go/no-go decisions on every release
Meet compliance and audit requirements in healthcare, finance, and other regulated sectors where documented testing procedures are mandatory
Purpose: To provide a standardized process for planning, writing, executing, and reporting on quality assurance tests so that every release meets defined quality criteria before deployment
Scope: All QA engineers, testers, developers, and product managers involved in software testing and release management
Time Required: 2–8 hours per test cycle depending on scope, feature complexity, and the ratio of manual to automated tests
Tools Needed: Test management tools (TestRail, Zephyr), bug tracking (Jira, Linear), automation frameworks (Selenium, Cypress, Playwright), CI/CD pipelines
Action: Review the requirements and acceptance criteria, define the scope and objectives of the test cycle, select the test types to run (functional, regression, integration, exploratory), set the schedule, and assign testers to each area.
⚠️ Tip: Involve QA early in sprint planning or requirements review. The earlier you identify ambiguities and missing acceptance criteria, the fewer defects you will find later — and the faster the entire cycle moves.
Expected Outcome: A test plan document that defines scope, objectives, test types, schedule, and tester assignments for the current cycle
Action: Write test cases covering each acceptance criterion, including positive, negative, and boundary scenarios. Organize them by feature and priority in your test management tool, link each case to its requirement, and have a peer review the suite before execution.
Expected Outcome: A complete, peer-reviewed test suite organized by feature and priority, stored in the test management tool and linked to requirements
Action: Provision the test environment, deploy the build under test, seed it with test data, connect or sandbox third-party integrations, and run health checks to confirm everything is working before execution begins.
Expected Outcome: A fully configured, validated test environment with seeded data, accessible integrations, and passing health checks — ready for test execution
Action: Execute each test case, record pass/fail status in the test management tool, and log every failure as a bug report with a severity rating, reproduction steps, screenshots, and environment details.
⚠️ Tip: Write bug reports as if the developer has never seen the feature. Include every detail needed to reproduce the issue without asking follow-up questions. A well-written bug report saves more time than a fast one.
Expected Outcome: All test cases executed with pass/fail status recorded, and every failure documented as a detailed bug report with screenshots and reproduction steps
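To make the "no follow-up questions" standard concrete, here is a minimal sketch of a bug report as a data structure. The class name, fields, and `is_complete` check are illustrative assumptions, not a schema from any particular bug tracker — the point is that a report is only actionable when every reproduction detail is filled in.

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    """Hypothetical bug report structure: the fields a developer needs
    to reproduce an issue without asking follow-up questions."""
    title: str
    severity: str               # e.g. "critical", "major", "minor"
    environment: str            # build number, OS, browser/device
    steps_to_reproduce: list    # ordered, explicit steps
    expected_result: str
    actual_result: str
    attachments: list = field(default_factory=list)  # screenshots, recordings

    def is_complete(self) -> bool:
        # A report missing any reproduction detail will bounce back to QA.
        return all([
            self.title, self.severity, self.environment,
            self.steps_to_reproduce, self.expected_result, self.actual_result,
        ])
```

A completeness check like this can be enforced as a required-fields rule in most bug trackers, so incomplete reports never reach the development queue.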
Action: Compile a test execution report summarizing pass rates, open defects, and coverage. Share it with stakeholders, triage and assign every open bug by severity, and document the go/no-go decision for the release.
Expected Outcome: A test execution report shared with stakeholders, all bugs triaged and assigned, and a documented go/no-go decision for the release
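The go/no-go step above can be sketched as a simple rule over triaged severities and the overall pass rate. The severity labels, the 95% threshold, and the function itself are assumptions for illustration — real release criteria should come from your own exit criteria, not this sketch.

```python
def release_decision(open_bug_severities, pass_rate, min_pass_rate=0.95):
    """Return "go" or "no-go" from triage results.

    open_bug_severities: severity strings for bugs still open after triage.
    pass_rate: fraction of executed test cases that passed (0.0-1.0).
    Blocks release on any open critical/major bug or a low pass rate.
    """
    blocking = {"critical", "major"}
    if any(sev in blocking for sev in open_bug_severities):
        return "no-go"           # blocking defect still open
    if pass_rate < min_pass_rate:
        return "no-go"           # too many failures, even if all minor
    return "go"
```

Encoding the rule this way keeps the decision auditable: the report can state exactly which condition blocked the release rather than relying on judgment calls made in a meeting.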
Action: Retest each fixed bug, run the full regression suite against the final build, grant QA sign-off once all exit criteria are met, and archive the test artifacts (plan, cases, results, reports) for future reference and compliance.
Expected Outcome: All bug fixes verified, regression suite passing, QA sign-off granted, and test artifacts archived for future reference and compliance
Write and review your test cases before the code lands in the test environment. When test cases are ready on day one, you start executing immediately instead of spending the first day of the cycle writing tests under pressure.
Identify tests you run every cycle — login flows, CRUD operations, permission checks — and automate them. Automation frees your manual testers to focus on exploratory testing and complex scenarios that machines cannot evaluate.
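As a sketch of what "automate the tests you run every cycle" looks like, here is a data-driven CRUD regression check against a hypothetical in-memory store. The `UserStore` class stands in for your real system under test; in practice the same pattern is what a parametrized Selenium, Cypress, or Playwright suite expresses against a live application.

```python
class UserStore:
    """Hypothetical in-memory store standing in for the system under test."""
    def __init__(self):
        self._users = {}
    def create(self, uid, name):
        self._users[uid] = name
    def read(self, uid):
        return self._users.get(uid)
    def update(self, uid, name):
        if uid not in self._users:
            raise KeyError(uid)
        self._users[uid] = name
    def delete(self, uid):
        self._users.pop(uid, None)

def run_crud_regression(cases):
    """Run the same create/read/update/delete checks for every test case.

    cases: iterable of (uid, name, new_name) tuples.
    Returns a per-case pass/fail map, like a regression suite report.
    """
    results = {}
    for uid, name, new_name in cases:
        store = UserStore()
        try:
            store.create(uid, name)
            assert store.read(uid) == name        # create + read
            store.update(uid, new_name)
            assert store.read(uid) == new_name    # update persisted
            store.delete(uid)
            assert store.read(uid) is None        # delete removed it
            results[uid] = "pass"
        except AssertionError:
            results[uid] = "fail"
    return results
```

Because the steps are deterministic and identical every cycle, this is exactly the kind of test that pays back its scripting cost on every run.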
Attach a screenshot or screen recording to every bug report. A visual showing the exact failure state eliminates guesswork for developers and reduces the back-and-forth of "I can't reproduce it" conversations.
Emulators and simulators catch most issues, but they miss device-specific quirks like real touch behavior, memory constraints, and performance under genuine network conditions. Test critical flows on real phones, tablets, and different browsers before every release.
Never test against production databases or real customer data. Use anonymized or synthetic datasets, isolated test environments, and sandboxed third-party integrations to prevent accidental data corruption or privacy violations.
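One common way to build anonymized test data is deterministic tokenization: hash each PII field so the same input always maps to the same token, keeping the dataset internally consistent (the same "customer" appears consistently across tables) without exposing real values. This is a minimal sketch; the function name and field list are assumptions.

```python
import hashlib

def anonymize_record(record, pii_fields=("name", "email")):
    """Replace PII fields with deterministic, irreversible tokens.

    The same input always produces the same token, so relationships in
    the dataset survive anonymization, but real values never leave
    production.
    """
    out = dict(record)
    for f in pii_fields:
        if f in out and out[f] is not None:
            digest = hashlib.sha256(str(out[f]).encode()).hexdigest()[:12]
            out[f] = f"{f}_{digest}"   # e.g. "email_3a7f..."
    return out
```

Note that a plain hash of low-entropy data (like email addresses) can still be brute-forced; for regulated data, a salted or keyed hash, or fully synthetic generation, is the safer choice.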
Measure what percentage of requirements, features, and code paths your tests cover. Track coverage trends over time and use gaps as input for sprint planning — untested areas are where hidden bugs live.
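Requirement-level coverage is straightforward to compute from the requirement links your test management tool already stores. A minimal sketch, assuming a simple mapping of test cases to requirement IDs:

```python
def requirement_coverage(requirements, test_cases):
    """Compute what fraction of requirements have at least one linked test.

    requirements: iterable of requirement IDs.
    test_cases: mapping of test-case ID -> list of requirement IDs covered.
    Returns (coverage_ratio, uncovered_ids); the uncovered list is the
    direct input for sprint planning, since untested areas hide bugs.
    """
    covered = set()
    for linked in test_cases.values():
        covered.update(linked)
    reqs = set(requirements)
    uncovered = sorted(reqs - covered)
    ratio = len(reqs & covered) / len(reqs) if reqs else 1.0
    return ratio, uncovered
```

Running this every cycle and charting the ratio over time gives you the coverage trend the paragraph above recommends tracking.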
Stop copying and pasting templates. Create interactive, screenshot-based SOPs that your team will actually use.
QA testing is a proactive, process-oriented discipline focused on preventing defects by establishing standards, procedures, and review checkpoints throughout the development lifecycle. Quality control (QC) is a reactive, product-oriented activity focused on identifying defects in a finished product through inspection and testing. In practice, QA defines how you build software correctly, while QC verifies that the finished software works correctly. Most teams need both — QA sets up the processes and standards, and QC (testing) validates the output against those standards.
There is no universal number — the right count depends on the complexity of the feature, the risk level, and your quality objectives. A small feature change might need 10–20 test cases, while a complex new module could require 100 or more. Focus on coverage rather than count: every acceptance criterion should map to at least one test case, every critical user flow should have positive and negative scenarios, and boundary conditions and error handling should be explicitly tested. If you track your defect escape rate (bugs found in production that testing missed), you can use that metric to gauge whether your test plans are thorough enough.
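The defect escape rate mentioned above is a simple ratio: of all defects found for a release, what share surfaced in production rather than in testing. A minimal sketch:

```python
def defect_escape_rate(found_in_testing, found_in_production):
    """Share of total defects that escaped to production.

    A rising escape rate over several releases suggests test plans are
    not thorough enough in the areas where those bugs appeared.
    """
    total = found_in_testing + found_in_production
    return found_in_production / total if total else 0.0
```

For example, 45 bugs caught in testing and 5 found in production gives an escape rate of 10%; the useful signal is the trend across releases, not any single number.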
Automate tests that you run repeatedly across cycles — regression suites, smoke tests, API validations, and data-driven tests with many input combinations. These tests have a high return on investment because the upfront scripting cost is offset by savings on every subsequent run. Test manually when you need human judgment — exploratory testing, usability evaluation, visual UI review, and one-time tests for features that change frequently. A good rule of thumb: if a test will run more than three times and the steps are deterministic, it is a candidate for automation.
Use Folge to capture your screen as you walk through each step of the QA testing workflow — creating a test plan in TestRail, writing test cases, configuring your test environment, executing tests, logging bugs in Jira, and generating reports. Folge takes a screenshot at each step and lets you annotate it with callouts, highlights, and instructions. Export the finished SOP to PDF, Word, or HTML so every tester on your team can follow the exact same process with visual guidance.

Folge is a desktop application. Download and use it for free forever or upgrade for lifetime features and support.
The Gold Standard Of Guide Creation
