Manual Testing Process

Manual Testing Overview

Manual testing is a critical part of our quality assurance process: developers and QA specialists verify changes by hand to confirm that every feature works as expected and meets its acceptance criteria.

Developer Verification

Post-Merge Verification

After a pull request is merged, the developer is responsible for:

  1. Verifying that their changes have been properly deployed to the appropriate environment:

    • For regular features: verify on the develop environment
    • For hotfixes: verify on the staging environment
  2. Conducting a comprehensive review of their changes on the test servers to ensure:

    • All functionality works as expected
    • No regressions have been introduced
    • UI/UX elements display correctly
    • Performance is acceptable
  3. Documenting any issues found during this verification process

Handoff to QA

Once the developer has completed their verification, they must:

  1. Move the task to the appropriate column in Trello for QA review
  2. Provide clear instructions on how to test the changes
  3. Be available to answer any questions from the QA team
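The handoff steps above are performed manually in Trello. As an illustration only, steps 1 and 2 could be scripted against the Trello REST API; the `TRELLO_KEY`/`TRELLO_TOKEN` environment variables, card ID, and list ID below are placeholders, not values from this document:

```python
# Hedged sketch: automating the developer -> QA handoff via the Trello REST API.
# Credentials and IDs are placeholders; adapt to your board.
import os
import urllib.parse
import urllib.request

API = "https://api.trello.com/1"

def handoff_requests(card_id, qa_list_id, test_instructions, auth):
    """Build the (method, url, params) calls for the handoff; no network I/O."""
    return [
        # Step 1: move the card to the QA review list.
        ("PUT", f"{API}/cards/{card_id}", {**auth, "idList": qa_list_id}),
        # Step 2: record the testing instructions as a comment on the card.
        ("POST", f"{API}/cards/{card_id}/actions/comments",
         {**auth, "text": test_instructions}),
    ]

def handoff_to_qa(card_id, qa_list_id, test_instructions):
    auth = {"key": os.environ["TRELLO_KEY"], "token": os.environ["TRELLO_TOKEN"]}
    for method, url, params in handoff_requests(card_id, qa_list_id,
                                                test_instructions, auth):
        req = urllib.request.Request(
            f"{url}?{urllib.parse.urlencode(params)}", method=method)
        urllib.request.urlopen(req)  # raises on HTTP errors
```

Step 3 (being available for questions) remains a human responsibility and cannot be automated.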

QA Testing Process

Test Execution

The QA specialist will:

  1. Review the task and its acceptance criteria
  2. Test the changes on the appropriate environment (develop or staging)
  3. Verify that all acceptance criteria have been met
  4. Perform exploratory testing to identify potential edge cases
  5. Check for any regressions in related functionality

Test Results

Based on the testing results, the QA specialist takes one of the following actions:

Approved Changes

If all acceptance criteria are met and no issues are found:

  1. The task is moved to the "Approved" column in Trello
  2. The QA specialist adds a comment confirming approval
  3. The task is considered complete

Acceptance Criteria

Every task must have clearly defined acceptance criteria that:

  • Are specific and measurable
  • Define what constitutes successful implementation
  • Cover all aspects of the feature or fix
  • Are understood by both developers and QA specialists

All acceptance criteria must be met for a task to be approved. Partial implementation is not acceptable.

Continuous QA Monitoring

In addition to task-specific testing, the QA team continuously monitors the system:

  1. Daily checks of critical system functionality
  2. Regular exploratory testing of different features
  3. Monitoring of system performance and stability

Issue Reporting

When issues are found during continuous monitoring:

  1. The QA specialist creates a new task in Trello
  2. The task is placed in the "Properties" column
  3. The issue is documented with detailed reproduction steps
  4. The product team assigns priority to the issue
  5. The issue is scheduled for fixing based on its priority
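Steps 1–3 above could likewise be scripted against the Trello REST API, as a hedged illustration (the credentials and the list ID for the "Properties" column are placeholders):

```python
# Hedged sketch: filing a monitoring issue as a new Trello card.
# Credentials and the list ID are placeholders; adapt to your board.
import os
import urllib.parse
import urllib.request

API = "https://api.trello.com/1"

def issue_card_params(list_id, title, repro_steps, auth):
    """Build the POST /1/cards payload for a monitoring issue; no network I/O."""
    return {
        **auth,
        "idList": list_id,  # e.g. the list ID of the "Properties" column
        "name": title,
        "desc": "Steps to reproduce:\n" + repro_steps,
    }

def report_issue(list_id, title, repro_steps):
    auth = {"key": os.environ["TRELLO_KEY"], "token": os.environ["TRELLO_TOKEN"]}
    params = issue_card_params(list_id, title, repro_steps, auth)
    req = urllib.request.Request(
        f"{API}/cards?{urllib.parse.urlencode(params)}", method="POST")
    urllib.request.urlopen(req)  # raises on HTTP errors
```

Prioritisation and scheduling (steps 4–5) remain with the product team.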

Testing Environments

Develop Environment

  • Used for testing regular feature development
  • Updated with each merge to the develop branch
  • Accessible to all team members for testing

Staging Environment

  • Used for testing hotfixes and pre-production verification
  • More stable than the develop environment
  • Closely resembles the production environment
  • Used for final verification before production deployment

Communication During Testing

  • Developers and QA specialists should maintain open communication
  • Questions about testing or implementation should be addressed promptly
  • Complex testing scenarios may require collaboration between developers and QA
  • All testing-related communication should be documented in the Trello card

Test Documentation

For complex features, additional test documentation may be required:

  • Test plans outlining the testing approach
  • Test cases covering specific scenarios
  • Test data requirements
  • Expected results for each test case

Integration with Automated Testing

Manual testing complements our automated testing:

  • Automated tests verify basic functionality and prevent regressions
  • Manual testing focuses on user experience and complex scenarios
  • Issues found during manual testing should be considered for addition to the automated test suite
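As an illustration of the last point, an edge case found during exploratory testing can be captured as an automated regression test; the `format_price` function here is hypothetical, standing in for whatever code the manual finding concerned:

```python
# Hedged sketch: a manually discovered edge case promoted to an automated test.
# format_price is a hypothetical function, not part of our codebase.
def format_price(amount):
    """Format a numeric amount as a dollar string with thousands separators."""
    return f"${amount:,.2f}"

def test_format_price_handles_zero():
    # Edge case first found during exploratory manual testing.
    assert format_price(0) == "$0.00"

def test_format_price_thousands_separator():
    assert format_price(1234.5) == "$1,234.50"
```

Promoting such findings keeps the manual and automated suites converging over time.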