
5. Design, implementation and quality assurance

5.2. Quality assurance

To ensure the quality of the automation tool, a combination of testing, static code analysis and code documentation was used. Component and system tests were created to test individual pieces of software and to determine whether the entire automation tool works under realistic inputs.

The test coverage is 93%. Care was taken that the tests can be executed in parallel, which greatly reduces their runtime.

A static code analysis was performed to gain insight into metrics such as LOC (lines of code) and average cyclomatic complexity. Figure 5.6 shows the LOC metric for each Python class and template file. The total number of code lines is 2138, of which 1308 lines are Python code and 830 lines are template code. The average cyclomatic complexity of a Python class is 1.5 (radon rank A), which indicates low risk and high maintainability. The source code contains 143 Python methods, which are associated with 25 classes. The metrics were collected with the command line tool radon [1]. During programming, the PEP 8 coding convention was followed and Flake8, a Python linter, was used to highlight errors and spelling mistakes within the code. Code comments were added to classes and methods, documenting the reason for their existence and their parameters.
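The documentation convention described above can be illustrated with a short sketch. DockerHelper is one of the tool's classes, but the method shown here and its body are an illustrative assumption, not taken from the thesis:

```python
class DockerHelper:
    """Wraps interactions with docker images.

    Exists so that the rest of the tool never has to parse raw
    image references or docker CLI output itself.
    """

    def split_image(self, image):
        """Split a docker image reference into name and tag.

        :param image: image reference such as "nginx:mainline-alpine"
        :return: (name, tag) tuple; tag is None when the reference has none
        """
        name, sep, tag = image.partition(":")
        return (name, tag if sep else None)


print(DockerHelper().split_image("nginx:mainline-alpine"))
# → ('nginx', 'mainline-alpine')
```

The docstrings follow the pattern the thesis describes: one line on why the class or method exists, followed by its parameters.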

[Figure 5.6 shows a bar chart of the LOC metric for the Python and template files: deployment_mixin.py, configuration.py, serializable.py, start.py, ecs_stack.py, ecr_stack.py, menu.py, training_stack.py, stack.py, parameters.py, metric_mixin.py, docker_helper.py, stack_helper.py, shell_executor.py, ecr_stack.yml, ecs_stack.yml and training_stack.yml.]

Figure 5.6.: LOC metric for each Python file.

The unit tests are executed on each commit by a pipeline created in GitLab. The unit tests check whether the action and stack parameters are constructed from variables of the right type. For example, it is tested whether numbers are entered when numeric inputs are expected and whether special characters are denied. The correct serialization and deserialization of stack objects in the class Serializable is also checked. Here it is tested whether files are created in the right places and whether all attributes of the object can be saved and restored correctly. The methods of the helper classes are also tested for correct functionality. In the class StackHelper it is tested whether a CloudFormation stack can be successfully converted into an instance of the Stack class, whether all attributes are set correctly and whether the output of a CloudFormation stack can be read successfully. In the class DockerHelper the methods are checked for the case that the docker image does not exist or attributes like its tag are missing.
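A minimal sketch of such an input check (the validator below is hypothetical; the thesis does not show the tool's actual validation code):

```python
import re
import unittest


def validate_port(value):
    """Hypothetical validator: accept only numeric input in the TCP port range."""
    return bool(re.fullmatch(r"[0-9]+", value)) and 1 <= int(value) <= 65535


class PortInputTest(unittest.TestCase):
    def test_numeric_input_is_accepted(self):
        self.assertTrue(validate_port("80"))

    def test_special_characters_are_denied(self):
        self.assertFalse(validate_port("80;echo"))
        self.assertFalse(validate_port(""))


# Run the two tests explicitly so the sketch is self-contained.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(PortInputTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun)  # → 2
```

Tests of this shape are cheap and side-effect free, which is what makes the parallel execution mentioned above possible.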

The tests of the ShellExecutor class check whether the method for executing a command produces valid results. Within the unit tests, method calls are mocked where necessary. This is sometimes required for API calls of the AWS SDK.
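Mocking an AWS SDK call in such a unit test could look roughly like this. The sketch uses the standard library's unittest.mock; the StackHelper method shown and the shape of its response handling are assumptions, not the tool's actual code:

```python
from unittest import mock


class StackHelper:
    """Simplified stand-in: reads the outputs of a CloudFormation stack."""

    def __init__(self, cf_client):
        self.cf = cf_client  # normally a boto3 CloudFormation client

    def read_outputs(self, stack_name):
        response = self.cf.describe_stacks(StackName=stack_name)
        return {o["OutputKey"]: o["OutputValue"]
                for o in response["Stacks"][0].get("Outputs", [])}


# The AWS SDK client is mocked, so no real API request is made.
fake_client = mock.Mock()
fake_client.describe_stacks.return_value = {
    "Stacks": [{"Outputs": [{"OutputKey": "Url",
                             "OutputValue": "http://app.example"}]}]
}

helper = StackHelper(fake_client)
print(helper.read_outputs("demo-stack"))  # → {'Url': 'http://app.example'}
fake_client.describe_stacks.assert_called_once_with(StackName="demo-stack")
```

Because the mock records its calls, the test can verify both the parsed result and that the SDK was invoked with the expected stack name.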

The integration tests are run manually, because continuous execution would cause significant

ID: ST1
Prerequisite: -
Description:
1. Choose "Create application infrastructure"
2. Select the region ("eu-central-1")
3. Enter a random alphanumeric project name
4. Select the docker image ("nginx:mainline-alpine")
5. Enter the docker port ("80")
6. Enter the health check path ("/")
7. Enter the minimum number of containers (1)
8. Enter the maximum number of containers (2)
9. Enter the autoscaling value (0.1)
10. Enter the email address leon.radeck@web.de
11. Wait until the application infrastructure is created
12. Execute a load test
Expected result:
- An email was sent with a notification of a starting ECS task
- The application is reachable under the corresponding URL

ID: ST2
Prerequisite: -
Description:
1. Choose "Create training infrastructure"
2. Select the region ("eu-central-1")
3. Enter a random alphanumeric project name
4. Enter the path of the training data file ("./data.csv")
5. Select the instance type ("ml.t2.medium")
Expected result:
- The training data was uploaded to the S3 bucket
- The training instance notebook is reachable over its URL

ID: ST3
Prerequisite: An application infrastructure exists
Description:
1. Choose "Execute canary deployment"
2. Select the region ("eu-central-1")
3. Select the previously created application infrastructure
4. Select the docker image ("nginx:mainline-alpine")
5. Enter the docker port (80)
Expected result:
- The old application version is replaced with the new one
- The new application version is reachable over its URL

ID: ST4
Prerequisite: An application infrastructure exists
Description:
1. Choose "Execute blue green deployment"
2. Select the region ("eu-central-1")
3. Select the previously created application infrastructure
4. Select the docker image ("nginx:mainline-alpine")
5. Enter the docker port (80)
Expected result:
- The old application version is replaced with the new one
- The new application version is reachable over its URL

ID: ST5
Prerequisite: An application infrastructure exists
Description:
1. Choose "Show monitoring metrics"
2. Select the region ("eu-central-1")
3. Select the previously created application infrastructure
Expected result:
- The dashboard is reachable over its URL

Figure 5.7.: System tests.

costs in AWS. An overview of the system tests can be seen in Figure 5.7. ST1 tests the creation of an application infrastructure in the default region with a random alphanumeric project name.

The docker image "nginx:mainline-alpine" is used as an example application. The minimum number of containers is set to "1" and the maximum number to "2" so that the scaling can be tested. The autoscaling value is set to "0.1" so that even a few requests will trigger the scaling of the application infrastructure. A load test is executed subsequently so that it can be checked whether a notification email was sent. The reachability of the application is tested additionally. ST2 tests the creation of the training infrastructure. The project name is again a random alphanumeric string. Example training data is provided and "ml.t2.medium" is chosen as the instance type. It is tested whether the training data was uploaded to the S3 bucket and whether the instance notebook is reachable over its URL. ST3 and ST4 test the execution of a canary and a blue green deployment. Both tests follow the same structure. An application infrastructure has to be created beforehand to execute the tests. The default region is used and the previously created application infrastructure is selected. The nginx docker image with port 80 is used as an example again. It is tested whether the old application version is replaced by the new one and whether the new application version is reachable over its URL. ST5 tests whether the monitoring metrics are reachable. First, the default region is used and the previously created application infrastructure is selected. Then it is tested whether the dashboard for the metrics is reachable over its URL.
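The thesis does not name the load-test tool used in ST1. A minimal sketch of the idea, using only the standard library and a local stand-in server in place of the real application URL, could look like this:

```python
import http.server
import threading
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Stand-in target: a local HTTP server. In the real system test, the
# requests would go to the application URL of the created infrastructure.
server = http.server.ThreadingHTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"


def hit(_):
    """Issue one GET request and return its HTTP status code."""
    with urlopen(url) as resp:
        return resp.status


# Fire 32 requests from 8 concurrent workers to generate load.
with ThreadPoolExecutor(max_workers=8) as pool:
    statuses = list(pool.map(hit, range(32)))

server.shutdown()
print(sum(s == 200 for s in statuses))  # → 32
```

Against the real infrastructure, a burst like this is what pushes the autoscaling metric past the 0.1 threshold and triggers the second container.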