
QA Testing SOP Template

Free, ready-to-use QA testing SOP template to standardize how your team plans, executes, and reports on quality assurance testing. Use this QA testing SOP template to establish consistent testing standards, catch defects before they reach production, and ship reliable software with confidence. Copy, customize, or create it in Folge with screenshots.

What is a QA Testing SOP?

A QA Testing Standard Operating Procedure (SOP) is a documented process that QA engineers and development teams follow to plan test cycles, write and execute test cases, log defects, and report results so that every release meets your quality bar before it reaches users.

Without a standardized QA testing process, teams rely on ad-hoc checks, miss critical regressions, and ship bugs that erode user trust. This template gives your QA and development teams a repeatable framework to review requirements, write thorough test cases, configure environments, execute manual and automated tests, triage bugs by severity, and obtain sign-off before release — all tracked in your test management and bug tracking tools.

When to Use This SOP Template

QA Engineers & Testers

Follow a structured workflow from test planning through sign-off so every test cycle is thorough, traceable, and consistently documented

Development Teams

Understand exactly what QA expects, how bugs will be reported, and what criteria must be met before code moves to production

Product Managers

Gain visibility into test coverage, release readiness, and defect trends so you can make informed go/no-go decisions on every release

Regulated Industries

Meet compliance and audit requirements in healthcare, finance, and other regulated sectors where documented testing procedures are mandatory

QA Testing SOP Template

Get this template instantly — copy or download, then customize for your team.

✨ Create in Folge

📋 Template Overview

Purpose: To provide a standardized process for planning, writing, executing, and reporting on quality assurance tests so that every release meets defined quality criteria before deployment

Scope: All QA engineers, testers, developers, and product managers involved in software testing and release management

Time Required: 2–8 hours per test cycle depending on scope, feature complexity, and the ratio of manual to automated tests

Tools Needed: Test management tools (TestRail, Zephyr), bug tracking (Jira, Linear), automation frameworks (Selenium, Cypress, Playwright), CI/CD pipelines

Step-by-Step Procedure

Step 1: Review Requirements and Create Test Plan

Action:

  • Review user stories, product specifications, and acceptance criteria for the features under test:
    • Read through each user story and its acceptance criteria line by line
    • Clarify ambiguous requirements with the product manager or developer before writing tests
    • Identify edge cases, boundary conditions, and negative scenarios not explicitly stated in the spec
  • Define the test scope and objectives for the current cycle:
    • List the features, modules, and integrations that will be tested
    • Identify areas explicitly out of scope and document the reasons
    • Set measurable quality objectives (e.g., zero critical bugs, 95%+ test case pass rate)
  • Identify the test types required for this release:
    • Functional testing: Verify features work according to acceptance criteria
    • Regression testing: Confirm existing functionality is not broken by new changes
    • Integration testing: Validate that components and third-party services interact correctly
    • Performance testing: Ensure response times and throughput meet defined SLAs under load
  • Create the test schedule and assign testers:
    • Set start and end dates for each testing phase
    • Assign testers to specific feature areas based on domain expertise
    • Allocate time for test case review, environment setup, and bug retesting

⚠️ Tip: Involve QA early in sprint planning or requirements review. The earlier you identify ambiguities and missing acceptance criteria, the fewer defects you will find later — and the faster the entire cycle moves.

Expected Outcome: A test plan document that defines scope, objectives, test types, schedule, and tester assignments for the current cycle

Step 2: Write Test Cases

Action:

  • Write clear test cases with preconditions, steps, and expected results:
    • Title: A concise description of what the test validates (e.g., "Verify user can reset password via email link")
    • Preconditions: State the required setup — user accounts, data, feature flags, or configurations that must be in place
    • Steps: Number each action the tester must take. Be specific enough that any team member can execute the test without guessing
    • Expected Result: Describe the exact outcome for each step or for the test as a whole. Include UI state, data changes, and system responses
  • Organize test cases by feature area and priority:
    • Group test cases into logical suites by feature, module, or user workflow
    • Assign priority levels: P0 (critical path), P1 (high), P2 (medium), P3 (low/nice-to-have)
    • Tag test cases for easy filtering (e.g., "smoke", "regression", "api", "ui")
  • Peer-review test cases for coverage gaps:
    • Have another QA engineer or a developer review the test cases before execution
    • Check that negative scenarios, boundary values, and error handling paths are covered
    • Verify that acceptance criteria map to at least one test case each
  • Store test cases in your test management tool (TestRail, Zephyr, or equivalent) and link them to the corresponding user stories or tickets

Expected Outcome: A complete, peer-reviewed test suite organized by feature and priority, stored in the test management tool and linked to requirements
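The test-case structure above (title, preconditions, steps, expected result) can be mirrored directly in an automated test. The sketch below is a minimal, self-contained illustration; the `FakeApp` stub and its `request_password_reset` method are hypothetical stand-ins for your real application under test, not a specific framework's API.

```python
class FakeApp:
    """Hypothetical application stub used only to illustrate the structure."""
    def __init__(self):
        self.users = {"alice@example.com": {"password": "old-secret"}}
        self.sent_emails = []

    def request_password_reset(self, email):
        # Only registered users receive a reset email
        if email not in self.users:
            return False
        self.sent_emails.append({"to": email, "type": "password_reset"})
        return True


def test_user_can_request_password_reset_email():
    # Title: Verify user can request a password reset email
    # Preconditions: a registered account exists
    app = FakeApp()

    # Steps: 1. submit the reset form with a registered email address
    ok = app.request_password_reset("alice@example.com")

    # Expected Result: the request succeeds and exactly one reset email
    # is queued for that address
    assert ok is True
    assert len(app.sent_emails) == 1
    assert app.sent_emails[0]["to"] == "alice@example.com"
```

Writing the preconditions, steps, and expected result as comments keeps the automated test readable by anyone who reviews the manual test case it came from.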

Step 3: Set Up Test Environment

Action:

  • Configure the test environment to match production as closely as possible:
    • Deploy the correct build or branch to the staging/QA environment
    • Match production configurations: feature flags, environment variables, and service versions
    • Ensure the environment uses the same database engine, caching layer, and API gateway versions as production
  • Prepare test data and seed databases:
    • Create or restore test datasets that cover typical and edge-case scenarios
    • Generate test accounts with different roles and permissions (admin, standard user, read-only)
    • Ensure test data does not contain real customer PII — use anonymized or synthetic data
  • Verify integrations and dependencies are available:
    • Confirm third-party APIs, payment gateways, and external services are accessible from the test environment
    • Set up mock services or sandbox environments for any integrations that cannot be tested with live endpoints
    • Verify email, SMS, and notification services are routed to test inboxes (not production)
  • Run environment health checks:
    • Execute a smoke test suite to confirm the application launches and core flows work
    • Check database connectivity, cache availability, and queue processing
    • Verify CI/CD pipeline status — confirm the latest build deployed successfully with no errors

Expected Outcome: A fully configured, validated test environment with seeded data, accessible integrations, and passing health checks — ready for test execution
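The environment health checks above can be collected into one gating script that runs before any test execution starts. This is a generic sketch: the check names are examples, and the stubbed lambdas stand in for real connectivity probes (database ping, cache read, queue depth query).

```python
def run_health_checks(checks):
    """Run named zero-argument checks; return (results, failed_names).

    checks: list of (name, callable returning truthy on success).
    A check that raises is treated as failed rather than crashing the run.
    """
    results = {}
    for name, check in checks:
        try:
            results[name] = bool(check())
        except Exception:
            results[name] = False
    failed = [name for name, ok in results.items() if not ok]
    return results, failed


# Example usage with stubbed checks (replace with real probes):
checks = [
    ("database", lambda: True),
    ("cache", lambda: True),
    ("queue", lambda: False),  # simulate a failing queue check
]
results, failed = run_health_checks(checks)
# failed == ["queue"]: the environment is not ready for test execution
```

Gating the cycle on an empty `failed` list prevents testers from logging environment problems as product bugs.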

Step 4: Execute Tests and Log Results

Action:

  • Run manual and automated test suites:
    • Execute automated test suites first — run unit tests, integration tests, and end-to-end tests through your CI/CD pipeline
    • Review automation results and investigate any new failures before starting manual testing
    • Execute manual test cases in priority order (P0 first, then P1, P2, P3)
    • Test across required browsers, devices, and operating systems per your compatibility matrix
  • Record pass/fail status for each test case:
    • Mark each test case as Passed, Failed, or Blocked in your test management tool
    • For blocked tests, document the blocker and notify the relevant developer or team immediately
    • If a test case needs to be skipped, document the reason and flag it for the next cycle
  • Capture screenshots and logs for every failure:
    • Take a screenshot of the exact failure state — include the full page, not just the error message
    • Capture browser console logs, network request/response data, and any server-side error logs
    • Record screen recordings for complex multi-step failures that are hard to reproduce
  • Log bugs with steps to reproduce, severity, and screenshots:
    • Title: Clear, searchable summary (e.g., "Checkout fails with 500 error when cart contains 50+ items")
    • Steps to Reproduce: Numbered steps anyone can follow to trigger the bug
    • Expected vs. Actual Result: What should happen vs. what actually happens
    • Severity: Critical, Major, Minor, or Cosmetic
    • Attachments: Screenshots, screen recordings, console logs, and relevant test data
    • Environment: Browser, OS, device, build version, and test environment URL

⚠️ Tip: Write bug reports as if the developer has never seen the feature. Include every detail needed to reproduce the issue without asking follow-up questions. A well-written bug report saves more time than a fast one.

Expected Outcome: All test cases executed with pass/fail status recorded, and every failure documented as a detailed bug report with screenshots and reproduction steps
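The bug-report checklist above can be enforced with a small structured record, so no report is filed with missing fields. The field names follow this step's checklist; the `validate` helper is an illustrative sketch, not a Jira or Linear API.

```python
from dataclasses import dataclass, field

SEVERITIES = {"Critical", "Major", "Minor", "Cosmetic"}


@dataclass
class BugReport:
    title: str
    steps_to_reproduce: list
    expected_result: str
    actual_result: str
    severity: str
    environment: str
    attachments: list = field(default_factory=list)

    def validate(self):
        """Return a list of problems; an empty list means the report is filable."""
        problems = []
        if self.severity not in SEVERITIES:
            problems.append(f"unknown severity: {self.severity}")
        if not self.steps_to_reproduce:
            problems.append("steps to reproduce are required")
        if not self.attachments:
            problems.append("attach at least one screenshot or log")
        return problems


bug = BugReport(
    title="Checkout fails with 500 error when cart contains 50+ items",
    steps_to_reproduce=["Add 50 items to cart", "Click Checkout"],
    expected_result="Order confirmation page loads",
    actual_result="HTTP 500 error page",
    severity="Critical",
    environment="Chrome 126 / macOS 14 / build 2.4.1 / staging",
    attachments=["checkout-500.png"],
)
assert bug.validate() == []  # complete report, ready to file
```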

Step 5: Report Results and Triage Bugs

Action:

  • Compile a test execution report with pass/fail metrics:
    • Total test cases executed, passed, failed, and blocked
    • Pass rate percentage and comparison to previous cycles
    • Test coverage by feature area — highlight areas with low coverage or high failure rates
    • Automation coverage ratio (automated vs. manual test case counts)
  • Categorize bugs by severity:
    • Critical: Application crash, data loss, security vulnerability, or complete feature failure — blocks release
    • Major: Core functionality broken or degraded but a workaround exists — should be fixed before release
    • Minor: Non-critical issue that affects usability but does not prevent core workflows — can be deferred
    • Cosmetic: Visual or UI inconsistency with no functional impact — fix when time permits
  • Hold a triage meeting with the development team to prioritize fixes:
    • Review each open bug with the dev lead and product manager
    • Assign owners and target fix dates for Critical and Major bugs
    • Decide which Minor and Cosmetic bugs to defer vs. fix in the current cycle
    • Identify patterns — if multiple bugs stem from the same root cause, group and address them together
  • Determine go/no-go for release:
    • Apply release criteria: zero Critical bugs, zero Major bugs (or accepted waivers), pass rate above threshold
    • If criteria are not met, define the fix-and-retest plan with updated timelines
    • Document the go/no-go decision and the rationale behind it

Expected Outcome: A test execution report shared with stakeholders, all bugs triaged and assigned, and a documented go/no-go decision for the release
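The release criteria in this step (zero Critical or Major bugs, pass rate above a threshold) reduce to a simple computation. The sketch below assumes Blocked tests count as not executed and uses a 95% threshold as an example, not a fixed policy.

```python
def release_decision(statuses, bug_severities, pass_threshold=0.95):
    """Apply the go/no-go rules to test statuses and open bug severities.

    statuses: list of "Passed" / "Failed" / "Blocked" per test case.
    bug_severities: severity of each open bug.
    """
    # Blocked tests were not executed, so they are excluded from the rate
    executed = [s for s in statuses if s in ("Passed", "Failed")]
    pass_rate = statuses.count("Passed") / len(executed) if executed else 0.0
    blockers = [s for s in bug_severities if s in ("Critical", "Major")]
    go = pass_rate >= pass_threshold and not blockers
    return {"pass_rate": round(pass_rate, 3), "blockers": len(blockers), "go": go}


decision = release_decision(
    statuses=["Passed"] * 96 + ["Failed"] * 3 + ["Blocked"],
    bug_severities=["Minor", "Minor", "Cosmetic"],
)
# 96 passed of 99 executed, no Critical/Major bugs: decision["go"] is True
```

Keeping the criteria in code makes the go/no-go decision reproducible and easy to document alongside the test execution report.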

Step 6: Regression Test and Sign Off

Action:

  • Retest fixed bugs to verify resolution:
    • Pull the latest build containing the bug fixes into the test environment
    • Re-execute the exact steps from the original bug report to confirm the fix works
    • Test adjacent functionality to ensure the fix did not introduce side effects
    • Update the bug status to Verified/Closed or Reopened with notes
  • Run the regression suite to ensure no new issues were introduced:
    • Execute the full automated regression suite through the CI/CD pipeline
    • Run the manual regression checklist for areas not covered by automation
    • Pay special attention to areas adjacent to the code changes — fixes in one module can break another
  • Get QA sign-off on the release:
    • Confirm all Critical and Major bugs are verified as fixed
    • Verify the regression suite passes with no new failures
    • Sign off formally in your test management tool or release tracking system
    • Notify the release manager and stakeholders that QA approves the build for deployment
  • Archive test results and update test documentation:
    • Save the test execution report, bug list, and sign-off record for audit and reference
    • Update the regression test suite with new test cases for bugs found in this cycle
    • Document any lessons learned or process improvements for the next test cycle
    • Update test data and environment documentation to reflect current state

Expected Outcome: All bug fixes verified, regression suite passing, QA sign-off granted, and test artifacts archived for future reference and compliance

Best Practices for QA Testing

✓ Write Tests Before You Test

Write and review your test cases before the code lands in the test environment. When test cases are ready on day one, you start executing immediately instead of spending the first day of the cycle writing tests under pressure.

✓ Automate Repetitive Tests

Identify tests you run every cycle — login flows, CRUD operations, permission checks — and automate them. Automation frees your manual testers to focus on exploratory testing and complex scenarios that machines cannot evaluate.

✓ Screenshot Every Bug

Attach a screenshot or screen recording to every bug report. A visual showing the exact failure state eliminates guesswork for developers and reduces the back-and-forth of "I can't reproduce it" conversations.

✓ Test on Real Devices

Emulators and simulators catch most issues, but they miss device-specific quirks like touch behavior, memory constraints, and network throttling. Test critical flows on real phones, tablets, and different browsers before every release.

✓ Separate Test and Production Data

Never test against production databases or real customer data. Use anonymized or synthetic datasets, isolated test environments, and sandboxed third-party integrations to prevent accidental data corruption or privacy violations.

✓ Track Test Coverage Metrics

Measure what percentage of requirements, features, and code paths your tests cover. Track coverage trends over time and use gaps as input for sprint planning — untested areas are where hidden bugs live.
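One lightweight way to track the requirement-coverage idea above is to map each requirement ID to the test cases that exercise it and report the gaps. The IDs and the mapping source here are illustrative; in practice the links would come from your test management tool.

```python
def coverage_report(requirements, test_case_links):
    """Compute requirement coverage from test-case-to-requirement links.

    requirements: list of requirement IDs in scope.
    test_case_links: dict of test-case ID -> list of requirement IDs it covers.
    """
    covered = set()
    for reqs in test_case_links.values():
        covered.update(reqs)
    uncovered = [r for r in requirements if r not in covered]
    pct = 100.0 * (len(requirements) - len(uncovered)) / len(requirements)
    return {"coverage_pct": round(pct, 1), "uncovered": uncovered}


report = coverage_report(
    requirements=["REQ-1", "REQ-2", "REQ-3", "REQ-4"],
    test_case_links={"TC-10": ["REQ-1", "REQ-2"], "TC-11": ["REQ-3"]},
)
# REQ-4 has no test case: 75.0% coverage, uncovered == ["REQ-4"]
```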

Create This SOP in Minutes with Folge

Stop copying and pasting templates. Create interactive, screenshot-based SOPs that your team will actually use.

  • Capture your actual QA testing workflow with screenshots
  • Add annotations & highlights
  • Export to PDF, Word, or HTML
System Requirements: Windows 7 (partial support), 8, 8.1, 10, 11 (64-bit only). macOS 10.10+. Available in 🇬🇧, 🇫🇷, 🇩🇪, 🇪🇸, 🇮🇹, 🇳🇱, 🇵🇹/🇧🇷 and 🇯🇵 languages.

Frequently Asked Questions

What is the difference between QA testing and quality control?

QA testing is a proactive, process-oriented discipline focused on preventing defects by establishing standards, procedures, and review checkpoints throughout the development lifecycle. Quality control (QC) is a reactive, product-oriented activity focused on identifying defects in a finished product through inspection and testing. In practice, QA defines how you build software correctly, while QC verifies that the finished software works correctly. Most teams need both — QA sets up the processes and standards, and QC (testing) validates the output against those standards.

How many test cases should a QA test plan include?

There is no universal number — the right count depends on the complexity of the feature, the risk level, and your quality objectives. A small feature change might need 10–20 test cases, while a complex new module could require 100 or more. Focus on coverage rather than count: every acceptance criterion should map to at least one test case, every critical user flow should have positive and negative scenarios, and boundary conditions and error handling should be explicitly tested. If you track your defect escape rate (bugs found in production that testing missed), you can use that metric to gauge whether your test plans are thorough enough.

When should you automate tests vs. test manually?

Automate tests that you run repeatedly across cycles — regression suites, smoke tests, API validations, and data-driven tests with many input combinations. These tests have a high return on investment because the upfront scripting cost is offset by savings on every subsequent run. Test manually when you need human judgment — exploratory testing, usability evaluation, visual UI review, and one-time tests for features that change frequently. A good rule of thumb: if a test will run more than three times and the steps are deterministic, it is a candidate for automation.

How do I create a visual QA testing SOP with screenshots?

Use Folge to capture your screen as you walk through each step of the QA testing workflow — creating a test plan in TestRail, writing test cases, configuring your test environment, executing tests, logging bugs in Jira, and generating reports. Folge takes a screenshot at each step and lets you annotate it with callouts, highlights, and instructions. Export the finished SOP to PDF, Word, or HTML so every tester on your team can follow the exact same process with visual guidance.


Start creating your documentation right now!

Folge is a desktop application. Download and use it for free forever or upgrade for lifetime features and support.

"The Gold Standard of Guide Creation" – Jonathan, Product Director