Test Plan Polishing
The system integration tests for the polished aspects of the Atlantis Sinks game should validate design choices from both the developers' and the users' perspectives across all affected aspects of the existing game.
Tests should address:
- the current operation of the game
- the independent performance of the added modules
- the dependent performance of the added modules
- the added functionality or capability of the new feature
The testing should validate complete system operation with existing and newly added components working together. The test conclusions should give the team a high level of confidence that the program will work according to user requirements.
The system integration tests will consist of JUnit and user testing of all polished classes in Atlantis Sinks to validate functionality and interaction design decisions. Direct and indirect testing of other dependent game classes and functions will be included in the overall integration testing process.
Other independent game classes and functions will not be included in the integration tests.
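As an illustration, a polished-class unit test could take the following shape. This is a minimal sketch using JUnit 4 (the project may use a different JUnit version), and `BuildingDamageComponent` is a hypothetical stand-in defined inline here, not one of the actual polished classes.

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import org.junit.Before;
import org.junit.Test;

public class BuildingDamageComponentTest {

    /** Hypothetical stand-in for a polished Atlantis Sinks class. */
    static class BuildingDamageComponent {
        private int health;

        BuildingDamageComponent(int startingHealth) {
            this.health = startingHealth;
        }

        void applyDamage(int amount) {
            // Clamp health at zero rather than letting it go negative
            health = Math.max(0, health - amount);
        }

        int getHealth() { return health; }

        boolean isDestroyed() { return health == 0; }
    }

    private BuildingDamageComponent damage;

    @Before
    public void setUp() {
        damage = new BuildingDamageComponent(100);
    }

    @Test
    public void applyDamageReducesHealth() {
        damage.applyDamage(30);
        assertEquals(70, damage.getHealth());
    }

    @Test
    public void damageCannotReduceHealthBelowZero() {
        damage.applyDamage(500);
        assertEquals(0, damage.getHealth());
        assertTrue(damage.isDestroyed());
    }
}
```

Each test records an expected result (the assertion) against the actual result, which is the data the evaluation step described later relies on.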
Atlantis Sinks is an RTS/tower-defence-style single-player game built on the LibGDX framework and developed in Java. It uses entities, components, events, services, and resources as key aspects of its game engine architecture to simplify development and interaction.
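To ground that terminology, the sketch below shows a stripped-down version of the entity-component pattern; it is an illustration only, not the engine's actual API.

```java
import java.util.ArrayList;
import java.util.List;

/** Simplified illustration of an entity: a container that delegates behaviour to components. */
class Entity {
    private final List<Component> components = new ArrayList<>();

    Entity addComponent(Component component) {
        component.setEntity(this);
        components.add(component);
        return this;
    }

    /** Called once per frame; each attached component updates in turn. */
    void update(float deltaTime) {
        for (Component component : components) {
            component.update(deltaTime);
        }
    }
}

/** Simplified illustration of a component: a small unit of behaviour attached to an entity. */
abstract class Component {
    protected Entity entity;

    void setEntity(Entity entity) { this.entity = entity; }

    abstract void update(float deltaTime);
}
```

Because behaviour lives in small components like these, individual classes can be unit tested in isolation before integration testing begins.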
The following assumptions and constraints apply:
- Other game branches do not modify the current branch's classes
- Key aspects and interface architecture of the main game are not changed
- Each successful build of the team's branch will have passed all unit testing before integration testing with the main branch commences
- Running the full set of test procedures for every modification to the polished classes may be infeasible in the time available
Test coverage will be outlined as:
- A table of testable requirements and the test cases addressing them
- A table of test necessity and the accompanying test case for all created tests

If the outlined coverage levels are not met, the software may not be released, or further conversations and investigations will be conducted to ensure code and documentation are adequate.
The scope of testing covers Team 13's classes and methods of Atlantis Sinks, along with all overlapping main interfacing components necessary to completely test the integrated feature set.
All agreed-upon user requirements from the design test plan will be tested.
The coverage of all JUnit test programs will be assessed using SonarCloud and test code analysis. Tests should cover at least 80% of the functionality of the program.
The following testing methods will be used:
- JUnit Testing (Automated Software Testing)
- SonarCloud (Automated Software Testing Analysis)
- Interviews (Software User Testing)
- Peer-to-peer Conversations (Software Peer User Testing)
The following team members will be actively involved in managing and conducting the software integration testing:
Name | Title | Responsibilities |
---|---|---|
Jonathan Allen | Software Developer | Design and execute test cases for the polished classes of Atlantis Sinks following the test plan appropriately. |
Harland Jensen | Software Developer | Design and execute test cases for the polished classes of Atlantis Sinks following the test plan appropriately. |
Luke Graham | Software Developer | Design and execute test cases for the polished classes of Atlantis Sinks following the test plan appropriately. |
Tayla Ward | Software Developer | Design and execute test cases for the polished classes of Atlantis Sinks following the test plan appropriately. |
Sam Behm | Software Developer | Design and execute test cases for the polished classes of Atlantis Sinks following the test plan appropriately. |
Cliff Worsfold | Software Developer | Design and execute test cases for the polished classes of Atlantis Sinks following the test plan appropriately. |
Team reviews will also be conducted for all test cases created by the team. Team reviews include theoretical and practical reviews of the test plan, unit tests, test coverage, and test results. Team reviews will be held at the end of each sprint or after all major test integrations.
Task | Assignee | Deliverable |
---|---|---|
Polished classes automated tests | Jonathan Allen | JUnit tests for polished classes |
Polished classes automated tests | Harland Jensen | JUnit tests for polished classes |
Polished classes automated tests | Luke Graham | JUnit tests for polished classes |
Polished classes automated tests | Tayla Ward | JUnit tests for polished classes |
Polished classes automated tests | Sam Behm | JUnit tests for polished classes |
Polished classes automated tests | Cliff Worsfold | JUnit tests for polished classes |
The polished features to be tested are:
- Isometric Movement
- Player/Enemy UGS integration
- Destruction of buildings dependent on water level (see the test sketch after this list)
- Island Borders
- UGS Integration
- Building Damage
- Integration of buildings with shop
- Integration of buildings with building UI
- Visualisation of build-mode
Other game features of Atlantis Sinks that have no direct interaction with the polished classes will not be tested.
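As a concrete example, the water-level destruction behaviour listed above could be exercised with a test along the following lines. `Building` here is a hypothetical inline class; the real implementation and class names in Atlantis Sinks may differ.

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class WaterLevelDestructionTest {

    /** Hypothetical building that is destroyed once water reaches its base height. */
    static class Building {
        private final float baseHeight;
        private boolean destroyed;

        Building(float baseHeight) { this.baseHeight = baseHeight; }

        void onWaterLevelChanged(float waterLevel) {
            if (waterLevel >= baseHeight) {
                destroyed = true;
            }
        }

        boolean isDestroyed() { return destroyed; }
    }

    @Test
    public void buildingSurvivesBelowItsBaseHeight() {
        Building hut = new Building(5f);
        hut.onWaterLevelChanged(4.9f);
        assertFalse(hut.isDestroyed());
    }

    @Test
    public void buildingDestroyedOnceWaterReachesBaseHeight() {
        Building hut = new Building(5f);
        hut.onWaterLevelChanged(5f);
        assertTrue(hut.isDestroyed());
    }
}
```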
For all requirements and features, the tester will perform JUnit tests and user testing as defined previously. Each test, whether a unit test or a user test, will record the actual result alongside the expected result. The aggregated results are then evaluated and analysed: passing tests are recorded, and failing tests or their associated classes are revised.
The tests will be run in a predefined order matching the Gradle build order. For each version and build of the game, the following criteria must be met:
- Unit tests must have a coverage of greater than 60%
- All unit tests must pass without exception
- Feature must be tested by at least 10 different people
- User testing must pass with greater than 70% acceptance of feature
- Integration tests must not break the main branch
- Integration tests must result in 99.99% integration of polished features without disrupting other game features
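As an illustration of how the user-testing thresholds above could be checked, the sketch below encodes the minimum-tester and acceptance-rate criteria; the class and method names are hypothetical.

```java
/** Illustrative check of the user-testing criteria: at least 10 testers and at least 70% acceptance. */
public class UserTestingCriteria {

    static final int MIN_TESTERS = 10;
    static final double MIN_ACCEPTANCE = 0.70;

    static boolean featureAccepted(int acceptedCount, int totalTesters) {
        if (totalTesters < MIN_TESTERS) {
            return false; // Too few testers for the result to count
        }
        return (double) acceptedCount / totalTesters >= MIN_ACCEPTANCE;
    }

    public static void main(String[] args) {
        System.out.println(featureAccepted(8, 11));  // 8 of 11 accepted (~72.7%) -> true
        System.out.println(featureAccepted(6, 10));  // 6 of 10 accepted (60%) -> false
    }
}
```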
All defects found by the test procedure must be recorded and addressed. Defects will be labelled as low, medium, high, or critical.
Level | Description |
---|---|
Low | Defect hinders the optimal performance of the feature |
Medium | Defect hinders the performance of the feature |
High | Defect breaks the functionality of the feature branch |
Critical | Defect breaks the functionality of the main branch |
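If defect labels are tracked in tooling, the severity scale above could be encoded with a simple enum such as the sketch below (an illustration, not an existing project class).

```java
/** Illustrative encoding of the defect severity scale defined above. */
public enum DefectSeverity {
    LOW("Defect hinders the optimal performance of the feature"),
    MEDIUM("Defect hinders the performance of the feature"),
    HIGH("Defect breaks the functionality of the feature branch"),
    CRITICAL("Defect breaks the functionality of the main branch");

    private final String description;

    DefectSeverity(String description) {
        this.description = description;
    }

    /** Per the plan, every level above LOW must be addressed immediately. */
    public boolean requiresImmediateAction() {
        return this != LOW;
    }

    public String getDescription() {
        return description;
    }
}
```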
Critical, high, and medium defects must be addressed immediately by the team member leading the tests of the defective feature. Low defects can be reviewed in the team review and addressed at a later time.