The aim of our project is to create a full-stack Roman Numerals Encoder/Decoder application using the front-end framework React, with Node.js as the back-end runtime environment.
Navigate to the `/client` directory in a terminal and run `yarn install`. In a separate terminal, navigate to the `/server` directory and run `yarn install`.
In order to run the program, open the `/client` directory in one terminal and the `/server` directory in another. Enter `yarn start` in both terminals to run the React front-end web page and the Express back-end.
In order to run the automated test scripts for the front end, navigate to the `/client` directory and enter `yarn test` in the terminal. In order to run the automated test scripts for the server, navigate to the `/server` directory and enter `yarn test` in the terminal.
Prior to starting development, we will create tests using Jest in order to adhere to our test-driven development strategy; following development, we will create front-end snapshot tests (also using Jest). Upon completion of development, we will create manual QA test scripts based on our requirements and perform manual testing on the front-end. Accessibility audit testing will be conducted using Microsoft Accessibility Insights for Web to test the front-end of our application. Finally, once all of our tests have passed and development has concluded, we will aim to have an external third party conduct manual UAT.
Our main communication platform within the team will be Discord, because it is easily accessible to everyone in the team and provides a good way of sending and receiving files and documents.
Our main external communication platform will be the following Google Meet: https://meet.google.com/kie-tnqx-xvx
The table below displays our team members, along with their roles and account links.
Name | Role | GitHub |
---|---|---|
Simone | UX | |
Sukh | Scrum Master | |
Harman | DevOps | |
Kacper | QA | |
The table below shows all of the roles within our team and their corresponding responsibilities.
Role | Responsibility |
---|---|
All | All roles will take part in writing tests and doing development work due to the small team size. |
DevOps | Responsible for the CI Pipeline and GitHub workflows |
Scrum Master | Manages the Kanban board, sprints, and ticket creation, in addition to being in charge of project documentation (e.g. the test plan). |
UX | In charge of designing the user interfaces and ensuring a good user experience. Includes collaboration with DevOps to conduct pre-project UI testing via Figma. |
QA | Ensures that everything is of a high quality and adheres to our chosen SQA standard. Includes performing code reviews and following our code review strategy. |
After gathering requirements, we held a Sprint Planning session, where we distributed the requirements into their respective sprints. Following this, we held daily stand-ups and started working on the first sprint. After the first sprint (server development), we held a retrospective session and evaluated what was working with regard to our development strategy; the resulting document can be found in the documents directory. We will repeat this process two more times, as we will undertake a total of three sprints to ensure that back-end development, front-end development, and additional testing plus documentation (manual test scripts and UAT testing, if possible) are complete.
When creating a pull request on GitHub, we will adopt a set of rules to ensure that we follow industry standards.
After initializing a pull request we will be taken to the review page, where it is optional to add a summary of the proposed changes, review the changes made by commits, add labels and milestones, and add assignees where necessary. Once we have created a pull request, we will push further commits from our topic branch and add them to the existing pull request. This means that other contributors within the project, specifically the QA member, will review the proposed changes, add review comments, contribute to the pull request, and even add commits to it. After the QA member and the original contributor are happy with the proposed changes, we will merge the pull request into the main branch.
We will adopt React Coding Standards for the front-end of our application; these include the following (a brief illustrative sketch follows the list):
- React UI component names will be in PascalCase.
- All other helper files will be named in camelCase.
- All folder names will be in camelCase.
- CSS files will be named the same as their component, in PascalCase. Global CSS that applies to all components will be placed in `global.css` and named in camelCase.
- CSS class names will follow a consistent, standard naming convention.
- Test files will be named the same as the component or non-component file they test.
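As a minimal sketch of these naming conventions (the file, component, and helper names here are illustrative, not the project's actual files), a PascalCase component might import a camelCase helper like so:

```javascript
// NumeralInput.js — PascalCase component file (hypothetical name)
import React from 'react';
import { toRoman } from './utils/romanConverter'; // camelCase helper file (hypothetical)

// PascalCase component name matching the file name
function NumeralInput({ value }) {
  return <span className="numeral-input__value">{toRoman(value)}</span>;
}

export default NumeralInput;
```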
To ensure we have clean source code we will follow some basic rules:
- Use and adopt the DRY principle (Don't repeat yourself).
- Create multiple files instead of writing a big file - componentization of code.
- Place all of our CSS files in one common folder.
- Avoid inline CSS wherever possible.
- Review our code before creating a pull request.
- Split our code into multiple smaller functions, each with a single responsibility.
- Create many utility files that help us remove duplicate code from multiple files.
- Separate all of the service calls into a separate file.
- Format each line of code consistently, which `prettier` can help us with.
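As a small illustration of the DRY and single-responsibility points above (the file and function names are assumptions for illustration only), duplicated validation logic could be pulled into a shared utility file:

```javascript
// utils/validation.js — hypothetical shared helper so the same checks
// are not repeated across components and route handlers
export function isValidRomanNumeral(input) {
  // Single responsibility: checks the characters only, does not convert.
  return /^[IVXLCDM]+$/i.test(input);
}

export function isValidArabicNumber(input) {
  // Standard roman numerals cover the range 1–3999.
  const n = Number(input);
  return Number.isInteger(n) && n >= 1 && n <= 3999;
}
```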
Upon completing any user story, we will check whether the product has met the Definition of Done. The DoD (Definition of Done) includes the conditions and criteria that the software solution has to meet in order to be accepted by the customer. To us, "Done" means that the code is developed to our standards, reviewed, implemented with Test-Driven Development, tested with 100 percent test automation, integrated, and documented.
We need to check whether the user story complies with the initial assumptions of the backlog item in which it was described. At this stage we also check the quality of the written code and confirm that all necessary elements of our process were carried out; here is our checklist:
- Produced code for presumed functionalities
- Assumptions of User Story met
- Project builds without errors
- Unit tests written and passing
- Project deployed on the test environment identical to production platform
- Tests on devices/browsers listed in the project assumptions passed
- Feature ok-ed by UX designer
- QA performed & issues resolved
- Feature is tested against acceptance criteria
- Feature ok-ed by Scrum Master
- Refactoring completed
- Any configuration or build changes documented
- Documentation updated
- Peer Code Review performed
UI testing will be completed using Jest, and the automated snapshot tests can be found within the front-end `src` directory. In addition, the front-end will also be tested via manual QA test scripts (located in the documents folder).
Automated unit test scripts are present in both the front-end and the back-end of the application. In the back-end, the test scripts cover the application logic, such as converting Roman numerals to Arabic numerals and vice versa; in the front-end, the tests ensure that each component renders, and renders correctly. Smoke tests are also included within the documentation directory.
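As a sketch of the kind of back-end logic these unit tests cover (the function name and implementation below are assumptions, not the project's actual module), a Roman-to-Arabic conversion and a Jest test for it might look like this:

```javascript
// romanToArabic: subtract a numeral's value when it precedes a larger one
// (e.g. IV = 4), otherwise add it. Illustrative sketch only.
const VALUES = { I: 1, V: 5, X: 10, L: 50, C: 100, D: 500, M: 1000 };

function romanToArabic(roman) {
  let total = 0;
  const chars = roman.toUpperCase().split('');
  for (let i = 0; i < chars.length; i++) {
    const current = VALUES[chars[i]];
    const next = VALUES[chars[i + 1]];
    total += next > current ? -current : current;
  }
  return total;
}

// Jest unit test covering a few representative cases
test('converts roman numerals to arabic numbers', () => {
  expect(romanToArabic('IX')).toBe(9);
  expect(romanToArabic('MCMXCIV')).toBe(1994);
});
```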
Microsoft Accessibility Insights for Web will be used to test the front end of the application; the results can be found below.
The screenshot below shows that we have no common accessibility problems within our website.
The screenshot below depicts an issue with our colour contrast; however, because this instance is not related to text content (it concerns the colour of the input textbox border), we did not have to act on it.
Two GitHub Actions pipelines have been created; these run all of the automated tests within the project after a pull request has been created. This ensures that all of our tests pass prior to merging into the main branch and notifies us whether the entire application is still functional even after many changes.
We have adopted a standard called the IEEE Standard for Software Quality Assurance Processes, with the intention of enabling our software project to use SQA processes to produce and collect evidence that forms the basis for a justified statement of confidence that the software product conforms to its established requirements. The purpose of this standard is to provide uniform, minimum acceptable requirements for SQA processes in support of our software project.
We will conform to this standard by ensuring that its requirements are achieved; these requirements describe SQA processes, activities, and tasks. Sixteen activities are identified in this clause and are grouped into three major areas: process implementation, product assurance activities, and additional product assurance, as shown in the image below:
For additional explanation of the IEEE Standard, please click on the following link.
The Test Cases/Scripts written for this application can be found within the documents folder.
The Test Plan is included within the documents folder and describes approaches and methodologies that will apply to the unit, automated and system testing of the 'Romaversio Application'.