The Optimal Analytics SDLC testing process ensures delivery of the highest-quality product possible. Testing occurs upon completion of the development sprint and includes test sprint planning, environment setup, test protocol implementation, and recording results. During the testing sprint, each user story and feature is tested using documented and vetted test protocols that are linked to requirements. Test results and requirements are captured for future reference. Optimal ensures that the following types of tests are conducted:
Automated Unit Testing
Automated Unit Testing (AUT) is conducted by each developer, in every development sprint, on their own code to help ensure a bug-free release version of the product.
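As a minimal sketch of this practice (the `order_total` function and its behavior are hypothetical, and pytest-style assert-based tests are assumed), an automated unit test exercises one behavior per case:

```python
# Hypothetical function under test: computes an order total with tax.
def order_total(prices, tax_rate=0.0):
    """Sum item prices, apply a tax rate, and round to cents."""
    if any(p < 0 for p in prices):
        raise ValueError("prices must be non-negative")
    return round(sum(prices) * (1 + tax_rate), 2)

# Unit tests: one behavior per test, run automatically in every sprint.
def test_sums_prices_with_tax():
    assert order_total([10.00, 5.00], tax_rate=0.10) == 16.50

def test_empty_order_is_zero():
    assert order_total([]) == 0.0

def test_rejects_negative_prices():
    try:
        order_total([-1.00])
    except ValueError:
        pass  # expected: invalid input is rejected
    else:
        raise AssertionError("expected ValueError for negative price")
```

Tests of this shape can be discovered and run by a test runner such as pytest on every commit, so defects surface before the code leaves the development sprint.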
Smoke Testing
Smoke testing is conducted after every development sprint to verify that the core components of the system are working.
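A smoke test suite of this kind can be sketched as follows; the component names and check functions are hypothetical placeholders for real health checks (database connections, authentication calls, HTTP probes):

```python
# Hypothetical health checks: each returns True when its component is up.
def check_database():
    return True   # placeholder: e.g. open a connection and run SELECT 1

def check_auth_service():
    return True   # placeholder: e.g. request a token with test credentials

def check_web_frontend():
    return True   # placeholder: e.g. GET the login page, expect HTTP 200

# The core components that must work before deeper testing begins.
CORE_CHECKS = {
    "database": check_database,
    "auth service": check_auth_service,
    "web frontend": check_web_frontend,
}

def run_smoke_tests(checks):
    """Run every check; return the names of components that failed."""
    return [name for name, check in checks.items() if not check()]

failures = run_smoke_tests(CORE_CHECKS)
```

An empty `failures` list signals that the build is stable enough to proceed to the full test protocols; any failure stops the testing sprint early.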
Requirements Verification Testing
Verification of individual requirements, performed after AUT and smoke testing by independent testers using assigned test protocols on all of the specified browsers.
Performance Testing
Testing to ensure that performance-specific requirements are satisfied.
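One way such a performance check might be sketched (the operation under test and the 200 ms threshold are illustrative assumptions, not actual project requirements):

```python
import time

# Hypothetical operation under test.
def handle_request():
    time.sleep(0.001)  # stand-in for real request-handling work

def measure_latencies(operation, runs=50):
    """Time repeated calls and return latencies in milliseconds."""
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        latencies.append((time.perf_counter() - start) * 1000.0)
    return latencies

def p95(latencies):
    """95th-percentile latency (nearest-rank on the sorted sample)."""
    ordered = sorted(latencies)
    return ordered[int(0.95 * (len(ordered) - 1))]

latencies = measure_latencies(handle_request)
# Assumed requirement for illustration: 95th percentile under 200 ms.
assert p95(latencies) < 200.0
```

Percentile thresholds are preferred over averages here because a requirement stated as "95% of requests under 200 ms" catches tail latency that a mean would hide.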
Security Code Review
Review of the source code from a security perspective to ensure that threats such as SQL injection, cross-site scripting, and broken authentication and session management, among others, are addressed.
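For example, a reviewer checking for SQL injection looks for values spliced into query strings and verifies that parameterized queries are used instead. This sketch uses Python's built-in sqlite3 module with illustrative table and user names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name):
    # Safe: the driver binds the value to the ? placeholder; it is never
    # spliced into the SQL text, so input like "' OR '1'='1" is treated
    # as data, not as query syntax.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

# An injection attempt matches no rows instead of dumping the table.
assert find_user("' OR '1'='1") == []
assert find_user("alice") == [("alice", "admin")]
```

The unsafe pattern to flag in review is string formatting such as `f"... WHERE name = '{name}'"`, which lets attacker-supplied text rewrite the query.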
Application Code Best Practices
Code review by an independent third-party contractor is conducted quarterly to apply software engineering best practices and improve product quality.
Section 508 Compliance Testing
This testing is conducted to ensure that the product meets government accessibility guidelines. Tools such as JAWS, the WAVE browser plugin, and the ChromeVox screen reader are used to test Section 508 compliance.
System Regression Testing
Based upon the failures identified and the impact of product changes, regression testing is performed throughout the lifecycle to ensure that changes do not introduce new faults into other components of the system.
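A regression test pins a previously fixed fault so that later changes cannot silently reintroduce it. In this sketch the `slugify` function and the bug number are hypothetical:

```python
import re

# Hypothetical fixed fault: slugify() used to crash on empty input (bug #1234).
def slugify(title):
    """Lowercase a title and join its words with hyphens; '' stays ''."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# Regression test named after the fault it guards against; it stays in the
# suite permanently and re-runs whenever this component changes.
def test_bug_1234_empty_title_does_not_crash():
    assert slugify("") == ""

def test_normal_titles_still_work():
    assert slugify("Optimal Analytics: SDLC!") == "optimal-analytics-sdlc"
```

Naming the test after the original failure gives future maintainers the context for why the case exists.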
User Scenario Testing
Verification of project-specific user scenarios by project personnel with in-depth knowledge of the customer's requirements.
Beta Testing
Clients or end users test the product and provide feedback prior to release.
Contingency Planning
Steps to follow if technical problems or security issues occur are documented in a contingency plan that is shared with all project stakeholders.