
UNIVERSITY OF TARTU
Institute of Computer Science
Computer Science curriculum

Karel Liiv

Contact Automation For The ESTCube-2 Mission Control System

Bachelor's Thesis (9 ECTS)

Supervisors:

Umesh Anilchandra Bhat, B.Tech
Tõnis Eenmäe, MSc

Tartu 2017


Contact Automation For The ESTCube-2 Mission Control System

Abstract:

ESTCube-2 is the first student satellite project of the Estonian Student Satellite Foundation, and its objective is the technological demonstration of scientific payloads for upcoming missions. After a satellite's launch, communication is carried out through ground stations, but the contact time over a single ground station is very short, since the satellite is within range of a given location on Earth for only a limited period at a time.

The aim of this thesis is to design and implement a contact automation module which can schedule and execute automatic contacts with the satellite with minimal or no human intervention. The author aims to implement the first functional version of the required software module. This thesis explains the thought process, design and implementation of the software module, including the integration with the existing ESTCube Mission Control System—a combination of software and hardware that is used for monitoring and controlling the satellite after launch.

Keywords:

Space technology, mission control system, ESTCube-2, contact automation

CERCS:

P170 Computer science, numerical analysis, systems, control

Kontakti automatiseerimine ESTCube-2 missioonijuhtimissüsteemile

Lühikokkuvõte:

ESTCube-2 on Eesti Tudengisatelliidi Sihtasutuse satelliidi programm, mille eesmärgiks on tehnoloogiline testimine järgmiste missioonide tarbeks. Peale satelliidi orbiidile saatmist toimub suhtlemine satelliidiga maapealsete sidejaamade abil, aga kontakti aeg satelliidiga ühest maapealsest sidejaamast on väga limiteeritud. Käesoleva töö eesmärk on disainida ja implementeerida kontakti automatiseerimise tarkvaramoodul, mis võimaldab minimaalse inimese sekkumisega planeerida ja läbi viia kontakte satelliitidega. Esmane funktsioneeriv tarkvaramoodul on loodud autori poolt. Töös selgitatakse tarkvara disaini ja implementatsiooni ning ühildamist olemasoleva missioonijuhtimissüsteemiga, mis on kombinatsioon tark- ja riistvarast, mille abil jälgitakse ja juhitakse orbiidil olevat satelliiti.

Võtmesõnad:

Kosmosetehnoloogia, missioonijuhtimissüsteem, ESTCube-2, kontakti automatiseerimine

CERCS:

P170 Arvutiteadus, arvutusmeetodid, süsteemid, juhtimine (automaatjuhtimisteooria)


Table of contents

1. Introduction
2. Background
2.1 ESTCube-2
2.2 ESTCube-2 Mission Control System (MCS)
2.3 Contact automation
2.3.1 Two-Line Element set format (TLE)
2.3.2 TLE Fetcher
2.3.3 Satellite-Location Predictor
2.4 Problem statement
3. Requirements
3.1 Application program interface
4. Design
4.1 Notion of a single command
4.2 Life cycle of a single command
5. Implementation
5.1 The application
5.1.1 Application program interface endpoints
5.1.1.1 Inserting and updating commands
5.1.1.2 Querying stored commands
5.1.1.3 Deletion of stored commands
5.1.2 Scheduled background jobs
5.1.2.1 Updating overpass times
5.1.2.2 Queueing commands before an overpass
5.1.2.3 Cleaning up after an overpass
5.2 Database
5.3 Ground Station Client simulator
5.4 Deployment to the ESTCube MCS server
6. Future work
Conclusion
References
Appendices
I. Source code
II. Sample ESTCube-1 overpass times
III. Sample TLE Fetcher output
IV. Licence


1. Introduction

ESTCube-2 is the first student satellite project of the Estonian Student Satellite Foundation, and its objective is the technological demonstration of scientific payloads for upcoming missions [1]. ESTCube-3, which will be partially based on ESTCube-2, is planned to be sent into lunar orbit in the future [2]. A satellite mission comprises several subsystems, some onboard and the rest on the ground (Earth); one of them is the Mission Control System (MCS) together with ground station support.

The MCS is a combination of software and hardware that assists operators in tracking, commanding and communicating with spacecraft after their launch. The MCS can be used for uplink, sending commands to the spacecraft in orbit, and also for downlink, receiving data from the spacecraft.

The aim of this bachelor's thesis is to create a software module which allows scheduling and performing contacts with the ESTCube satellite with minimal or no human interference. The currently implemented solution requires too much manual intervention; therefore, the probability of missing contact windows due to human error is high and the contact times are not used as optimally as they could be.

There is also a probability for interruptions to occur during contact, which may result in loss of telemetry packets. Therefore, not all packets reach the target (the spacecraft in orbit), and new connections must be scheduled in order to recover the missing packets. For example, during the ESTCube-1 mission, only 85% of the packets transmitted were received, even though multiple ground stations were tracking the satellite overpasses [3].

The ESTCube MCS contact automation software module is a network application that interconnects the authorised ground stations, tracks satellites and schedules communication packets in an optimal fashion so as to minimise delay during satellite overpasses. Another factor to consider is the automatic scheduling and compensation of data packets lost during transmission from the ground stations. Such an implementation would greatly reduce the burden on ground station personnel, simplify the work of satellite operators and thereby increase the amount of information that can be exchanged with the satellite during the short time available during overpasses.


The deliverables of this bachelor's thesis comprise a Python application which, after completion, would be integrated into the ESTCube Mission Control System (MCS) hosted on Docker platform as a service (PaaS) technology. The application communicates with multiple ground stations using the RabbitMQ messaging queue and stores relevant information in a PostgreSQL database.

This thesis focuses only on the storing and scheduling of commands for upcoming overpasses.

The following functionalities are out of the scope of this thesis:

- On-ground testing of the scheduled commands
- Scheduling satellite downlink
- Specifying the execution order of commands
- Transferring and executing of commands

The source code for the application developed is attached in Appendix 1 as a ZIP file.

1 https://www.python.org/

2 https://www.docker.com/

3 https://www.rabbitmq.com/

4 https://www.postgresql.org/


2. Background

2.1 ESTCube-2

Estonia's third space program is going to be a 3-Unit CubeSat in-orbit technology demonstration platform [4]. A 3-Unit CubeSat is a satellite with standardised dimensions and weight. Its structure is made up of three identical cubes connected in series, where the volume of a single cube is approximately one liter with dimensions of 10×10×10 cm [5].

The primary payloads of this space mission are intended to test deorbiting using a plasma brake, an electric solar wind sail and a highly integrated subsystem solution [1].

The satellite will consist of various integrated electro-mechanical and electronic subsystems, most of them built in-house, with one of the important goals being to minimise mass and volume so as to enable a similar satellite platform for upcoming space missions and experiments [1].

2.2 ESTCube-2 Mission Control System (MCS)

The ESTCube Mission Control System (MCS) is a combination of software and hardware designed for sending data to (uplink) and receiving data from (downlink) the ESTCube satellites in orbit. While most mission control systems for space missions require the physical presence of the satellite operators in the “mission control” room, the ESTCube MCS is a lightweight web-based command and control solution for satellites in orbit, enabling the operators to be anywhere on the planet and have the same operational capabilities.

In addition to monitoring and controlling the satellites, the MCS also provides tools for automating and scheduling various operations, so that the operator does not always have to be present when a satellite is passing over a ground station. The automation minimises errors and increases the amount of data received from a satellite.

During the development phase, the MCS provides a hardware test platform for all the subsystem teams. The hardware test platform comprises the actual engineering model and simulates all the operations of the real satellite so that the components can be tested properly before launch. After launch, the hardware test platform serves as a testbed to check and determine the behaviour of commands before they are sent to the actual satellite in orbit.


2.3 Contact automation

Contact automation, or satellite range scheduling, is the process of scheduling the communication between a satellite and its ground station over blocks of time so that a satellite's support request is always satisfied on a ground station overpass [6]. A ground station overpass is a period of time during which the satellite is in the communication range of its ground station. For low Earth orbit (LEO) satellites like ESTCube-2, this can occur up to ten times a day, and the duration of a single overpass is approximately 10 to 15 minutes [3, 4]. Appendix 2 contains a small sample of a satellite's (ESTCube-1) overpass times.

In order to schedule contacts with the satellite, a series of jobs have to be scheduled. To prioritise jobs in the set, each job is characterised by the following parameters:

- The processing time of the job
- The set of ground stations available for processing the job
- The due date of the job
- The revenue of the job completion
- The size of the data packets [7]

As ground stations and satellites both have unit capacity, meaning that they can only process a single job at a time, the precedence constraints between jobs and the ground station setup times have to be taken into account when scheduling a job as well [7].

As the connection between a satellite and the ground stations is very noisy and the signal strength can be very low, it is very common for some data packets to go missing during the connection. In order to retrieve the missing data packets, new jobs have to be scheduled to request them.
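Purely as an illustration of the parameters listed above (this structure is not part of the implemented module, and the field names are assumptions), a scheduling job could be represented and ordered as follows:

from dataclasses import dataclass
from datetime import datetime
from typing import List


@dataclass
class ContactJob:
    processing_time_s: float    # time needed to process the job
    ground_stations: List[str]  # ground stations able to process the job
    due_date: datetime          # latest acceptable completion time
    revenue: float              # value gained from completing the job
    packet_size_bytes: int      # size of the data packets to transfer


def order_jobs(jobs: List[ContactJob]) -> List[ContactJob]:
    """Naive ordering: earliest due date first, higher revenue breaking ties."""
    return sorted(jobs, key=lambda job: (job.due_date, -job.revenue))

A real scheduler would additionally have to respect the unit capacity and setup-time constraints described above; the sketch only makes the job parameters concrete.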

2.3.1 Two-Line Element set format (TLE)

The Two-Line Element set format is a data format developed by the North American Aerospace Defence Command (NORAD) and is used to transmit a coded set of orbital elements that describe a satellite's orbit [8].

5 http://www.norad.mil/


Figure 1: Definition of the two-line element format [9]

The two-line element format represents information about the orbit of a satellite; an example for ESTCube-1 with element explanations is shown in figure 1. The TLE contains the orbital data of the satellite, information on when these were obtained, and meta information on the satellite, including the launch date and the angle between the orbital plane and the Earth's equatorial plane (the inclination). However, as the satellite's orbit changes over time, a TLE stays up to date for only a limited amount of time.

Satellite TLEs are commonly used along with NORAD's SGP4/SDP4 orbital models to calculate the current position and velocity of the satellite [10]. To keep the calculations accurate, NORAD regularly computes and publishes new TLEs for most of the public satellites in orbit.
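The MCS modules described later use the Java Orekit library for this propagation; purely as an illustration of the same idea, a TLE can be propagated with SGP4 in Python using the sgp4 package (not part of the MCS; the caller must supply a current TLE, for example one returned by the TLE Fetcher):

from sgp4.api import Satrec, jday


def propagate(line1, line2, year, month, day, hour, minute, second):
    """Return (position_km, velocity_km_s) in the TEME frame for the given UTC time."""
    satellite = Satrec.twoline2rv(line1, line2)
    jd, fr = jday(year, month, day, hour, minute, second)
    error, position, velocity = satellite.sgp4(jd, fr)
    if error != 0:
        raise RuntimeError('SGP4 propagation failed with error code {}'.format(error))
    return position, velocity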

2.3.2 TLE Fetcher

TLE Fetcher is a detached software module developed by the ESTCube-2 Mission Control System team. The aim of this software module is to fetch and serve the latest TLE for the ESTCube satellites. In order to retrieve the freshest TLE for a satellite, the software module makes a request to Space-Track. Space-Track is a system developed by the Science Applications International Corporation for tracking all artificial Earth satellites and space probes.

An example output of the TLE Fetcher application is provided in appendix 3.

6 https://www.space-track.org/

7 http://www.saic.com/


2.3.3 Satellite-Location Predictor

The Satellite-Location Predictor is another detached software module developed by the ESTCube-2 Mission Control System team. The software uses a satellite's orbital parameters and ground station location information for estimating satellite overpass times. For calculating the overpass times, the Satellite-Location Predictor uses the Orekit library. Orekit is a Java library which provides methods for building flight dynamics applications [11]. The library also contains methods for computing propagated coordinates with an SGP4 model [12].

For calculating upcoming overpass times, a satellite's TLE and information about the ground station's coordinates are required. In order to get the latest satellite TLE, regular requests are made to the TLE Fetcher software module. As the satellite's TLE is updated regularly, the overpass calculations can also be repeated regularly with new outcomes. Therefore, in order to save calculation time, the Satellite-Location Predictor's output is limited to a configured number of upcoming overpasses. Appendix 2 contains a small sample of contact times predicted by the Satellite-Location Predictor software module. As the calculated overpass times are only used for scheduling upcoming contacts, their accuracy is not a concern; actual and accurate contact start and end times are calculated individually for each ground station prior to the scheduled contact.

2.4 Problem statement

The main problem is that the current solution requires human interaction; as the contact window for a single satellite is very limited and often occurs at late hours, the contact windows with the satellite may not be used as optimally as they could be. Another problem is that interruptions during the connection between a ground station and a satellite are very common. When an interruption happens, the missing packets have to be scheduled for retransmission.

8 https://www.orekit.org/


3. Requirements

Requirements for the Contact Automation Module for ESTCube-2 Mission Control System would be the following:

1. Built using the best practices of microservices architecture
2. Fetch the TLE information for the satellite
3. Predict the satellite passes
4. Schedule the commands according to priority
5. Update the priority queue in case of a manual override
6. Check for ground station availability
7. Schedule the commands according to ground station availability

3.1 Application program interface

Requirements for the application program interface would be the following:

1. The content type for all requests must be application/json
2. Correct HTTP request methods must be used
3. The content type of all responses must be application/json
4. Correct HTTP response statuses must be used
5. The application must be secured using authentication


4. Design

The contact automation with a satellite can be separated into two different parts—scheduling commands and executing commands. The scheduling part includes storing and prioritising commands for upcoming overpasses and is a part of the mission control system. The execution part is responsible for transferring the commands to the satellite and executing them. This thesis concentrates only on the scheduling part and does not cover the execution part. The execution part will be implemented as a part of the ground station software and will depend on the scheduling part. The dependencies between the different software modules are shown in figure 2.

Figure 2: Dependency between different Mission Control System components

An arrow in figure 2 indicates that a component depends on another component (the dependent component being indicated by the direction of the arrow). For example, the Contact Automation module is dependent on the Satellite-Location Predictor module.


4.1 Notion of a single command

Each command in the system contains the following information:

- Unique identifier
- Target satellite
- Command body
- Command arguments
- Type
- Priority (low, normal, high)
- Status

The unique identifier is required in order to avoid a single command being executed multiple times. Additionally, it makes it possible to trace a command's status from the scheduling process up to its execution.

As the mission control system is not specific to a single satellite, an identifier of the target satellite is required.

The command body and command arguments are, for now, a textual representation of the command to be executed. In the future they may be replaced with actual packets that are transferred to the satellite.

The priority value will ensure that commands with a higher priority are transmitted before commands with a lower priority. That way it is possible to later add maintenance commands or other critical commands in a way that they still get executed first. The priority cannot be used for controlling the execution order of commands; such a feature would need to be handled by a different mechanism, which shall be added in the future.

The type field indicates whether the command is meant to be executed on the ground station or on the satellite. The type of a command can either be uplink, in which case the data is sent from the ground station to the satellite, or downlink, meaning the data is sent from the satellite to the ground station.

The status field helps the system keep track of a single command. It holds information on the current state of the command. The status of a command may either be untested, scheduled for the upcoming overpass, transmitted to the ground station, finished, or failed. Under normal circumstances a command has to pass through all of the aforementioned states, excluding failed, before it can be counted as successfully executed.
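To make these fields concrete, the states and priorities could be encoded as in the following sketch; the names follow the description above, but the exact definitions in the implemented module may differ.

import enum


class Status(enum.Enum):
    UNTESTED = 'untested'        # stored, not yet tested on the simulator
    SCHEDULED = 'scheduled'      # ready to be queued for the next overpass
    TRANSMITTED = 'transmitted'  # pushed to the ground station messaging queue
    FINISHED = 'finished'        # executed successfully
    FAILED = 'failed'            # execution failed, needs investigation


# Higher numbers are queued first; the exact values are assumptions.
PRIORITY = {'low': 1, 'normal': 2, 'high': 3}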

4.2 Life cycle of a single command

Figure 3: The flow of a single command from scheduling to execution

The life cycle of a single command is shown in figure 3. It starts with the command being stored for scheduling. The next step is testing: each command is tested on the ground using a satellite simulator before being transmitted to the satellite, thus avoiding the transmission of corrupted commands. As the testing platform is still work in progress, the testing of commands is skipped for now; it shall be added in the future.

After the testing of a command is finished, the command is marked as ready for execution. Prior to the next satellite pass, the command is assigned a queue priority order number based on the creation time of the command and the value of its priority field. Taking this priority order number into account, the command is then pushed into a first-in-first-out messaging queue. During a pass, the ground station consumes messages from the messaging queue and tries to execute them one by one. If the execution of a command succeeds, it is removed from the messaging queue; otherwise it remains in the queue to be re-executed later.

A command is marked as successfully executed if it no longer remains in the messaging queue after the pass. Otherwise, the command's status is set back to ready for execution and it will be retransmitted before the next pass.


5. Implementation

The MCS uses a modern microservices architecture, meaning it consists of many independent applications which all communicate with each other over the network [13]. In the MCS the microservices architecture is achieved using Docker technology. Each microservice is packaged into a Docker container and assigned a unique port number in the MCS context. This way each microservice is independently maintainable without breaking the whole system.

As the deliverable of this thesis, a contact automation module was designed and implemented. The module itself consists of multiple separate Docker containers - an application container, a database container and a messaging queue container. In addition, each container can initialize connections to other containers on the ESTCube Mission Control System server, for example to the Satellite-Location predictor or to the TLE Fetcher, in order to avoid multiple implementations of the same service.

5.1 The application

Contact Automation is an independent application with two main responsibilities: it provides endpoints for storing and managing commands for upcoming overpasses, and it queues the stored commands for an upcoming overpass (depending on whether a command is marked as uplink or downlink).

The application itself is written using Flask, a lightweight web framework for Python. The framework provides methods for handling different HTTP requests and responding to them. SQLAlchemy, a SQL toolkit for Python applications which also provides an object-relational mapper, is used to simplify interfacing with the database. The RabbitMQ first-in-first-out messaging queue is used for queueing commands.
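A condensed sketch of how such a Flask application could be wired together is shown below; the module names, configuration values and port are assumptions based on the rest of this chapter, not the exact thesis code.

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()


def create_app():
    app = Flask(__name__)
    # Connection string for the PostgreSQL database container (placeholder credentials).
    app.config['SQLALCHEMY_DATABASE_URI'] = (
        'postgresql://user:password@contactautomation_db/contactautomation'
    )
    db.init_app(app)
    # The blueprint providing the API endpoints of section 5.1.1 would be registered here.
    return app


if __name__ == '__main__':
    create_app().run(host='0.0.0.0', port=8006)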

5.1.1 Application program interface endpoints

The application provides several API endpoints for managing scheduled commands. The application endpoints can be divided into two groups—query and modification. For query endpoints the HTTP request method must be GET; for modification endpoints the request method can be either POST, PUT, or DELETE, depending on the specific endpoint. All modification endpoints expect the request's body to be in valid JSON format. The response type of all endpoints is JSON. The response body may contain either a confirmation of a successful operation or an error message indicating a failure. Appropriate HTTP response status codes are also used. All endpoints are also covered by unit tests.

9 http://flask.pocoo.org/

10 https://www.sqlalchemy.org/

In order to protect the application from unauthorised users, basic authentication is applied to all routes. The basic authentication is achieved using HTTP headers—the application expects each request to have an authorization header. If the header is not present, a login prompt is shown to the user. The credentials can be set using the configuration file in the project's root directory.
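As a client-side illustration of calling the protected JSON API (a sketch using the requests library; host, port, credentials and field values are placeholders, the field names follow the command representation shown later in figure 12):

import requests

payload = {
    'satellite': 'ESTCUBE 1',
    'command': 'test',
    'parameters': '',
    'type': 'downlink',
    'priority': 3,
    'created_by': 'operator',
}
response = requests.post(
    'http://localhost:8006/commands',
    json=payload,                   # sends the body as application/json
    auth=('username', 'password'),  # HTTP basic authentication header
)
print(response.status_code, response.json())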

5.1.1.1 Inserting and updating commands

The endpoints for inserting a new command and for updating command data are quite similar. They both expect the request body to contain all the information about the command to be scheduled in JSON format, and as a response they both return the JSON representation of the stored command. In figure 4 the new command creation endpoint handler function is shown. This function expects the HTTP request method to be POST. For the command update endpoint the HTTP request method would have to be PUT.

@routes.route('/commands', methods=['POST'])
def post_command():
    """API endpoint for storing a new command"""
    data = request.get_json()
    command = Command()

    try:
        command.update(data)
        command.status = Status.UNTESTED
        db.session.add(command)
        db.session.commit()
        data = command.serialize()
    except (sqlexc.IntegrityError, sqlexc.DataError, ValueError):
        db.session.rollback()
        abort(400, 'Invalid data provided.')

    return jsonify({'data': data})

Figure 4: Command creation endpoint


First, the function parses the data from the request's body and then maps it to a Command model entity. The Command model entity is a Python representation of a single row in the command table in the database. For the command update endpoint, the command to be updated would first be queried from the database and the received data would then be merged with it.

Before the command is stored in the database its status is set to untested. This is important because all commands have to be tested on a satellite simulator before they can be scheduled for execution.

Finally, the function tries to store the command's data in the database. If no error occurred during the database insertion or row update, a JSON representation of the Command entity is returned. Otherwise, an HTTP response with status 400 and a relevant error message is returned.

5.1.1.2 Querying stored commands

It is possible to query the information about stored commands via two endpoints. One endpoint returns a list of stored commands and the other returns information about a single specific command. In figure 5 the listing endpoint function is shown. As it is a query endpoint, the HTTP request method is expected to be GET.

As there are no limitations on the number of commands that can be stored in the system, the number of commands returned can be very large. Therefore it is possible to filter scheduled commands either by satellite, status, or subsystem. There are no restrictions on applying multiple filters simultaneously. The filters can be passed to the endpoint using the endpoint URL's query component.
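From the client side, such a filtered query could be issued as in the following sketch (the requests library is used for illustration; host, port and credentials are placeholders, the filter names are the ones described above):

import requests

response = requests.get(
    'http://localhost:8006/commands',
    params={'satellite': 'ESTCUBE 1', 'status': 'scheduled'},  # optional filters
    auth=('username', 'password'),
)
print(response.json()['count'])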


@routes.route('/commands', methods=['GET'])
def commands():
    """API endpoint for listing stored commands"""
    satellite = request.args.get('satellite')
    status = request.args.get('status')
    subsystem = request.args.get('subsystem')

    query = Command.query.filter(Command.delete_time == None)

    if satellite:
        query = query.filter(Command.satellite == satellite)
    if status:
        query = query.filter(Command.status == status)
    if subsystem:
        query = query.filter(Command.subsystem == subsystem)

    data = []
    for row in query:
        data.append(row.serialize())

    return jsonify({
        'count': len(data),
        'commands': data,
    })

Figure 5: Commands listing endpoint

5.1.1.3 Deletion of stored commands

For cases where an unwanted command gets created, there is a data modification endpoint that allows the deletion of a single command. As shown in figure 6 the HTTP request method of this endpoint has to be DELETE.

@routes.route('/command/<int:command_id>', methods=['DELETE'])
def delete_command(command_id):
    """API endpoint for deleting a specified stored command"""
    command = get_object_or_404(Command, command_id)
    command.delete()
    db.session.commit()

    return jsonify({'data': 'Command successfully deleted.'})

Figure 6: Command deleting endpoint

First, the command to be deleted is queried from the database. If no command with such an id is found, an HTTP response with status 404 is returned. After the command has been queried, it gets marked as deleted and the modified command is saved to the database. To indicate that the deletion of the command was successful, a human-readable success message is returned in the response body.

5.1.2 Scheduled background jobs

Another core functionality of the application is achieved through scheduled background jobs. For scheduled background jobs the APScheduler library is used. Two different types of scheduled jobs are used—timed and regular background jobs. The execution of a timed background job is started at a specified time, while regular background jobs are executed repeatedly over blocks of time.
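The two job types map directly onto APScheduler's interval and date triggers, as in the following sketch (the functions are stand-ins for the jobs described in the next subsections, and the interval and run date are placeholders; the real values come from the configuration file):

from datetime import datetime, timedelta
from apscheduler.schedulers.background import BackgroundScheduler


def update_overpass_times():
    pass  # regular job: refresh overpass predictions (see section 5.1.2.1)


def queue_commands():
    pass  # timed job: push commands to the queue just before an overpass


scheduler = BackgroundScheduler()
# Regular background job, executed repeatedly at a fixed interval.
scheduler.add_job(update_overpass_times, 'interval', minutes=30)
# Timed background job, executed once at a specific moment.
scheduler.add_job(queue_commands, 'date', run_date=datetime.utcnow() + timedelta(hours=1))
scheduler.start()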

5.1.2.1 Updating overpass times

Fetching and storing the satellite overpass times is done by a regular background job. The background job is scheduled on application startup and executed regularly (the execution interval can be configured using the configuration file in the project root directory).

The background job first fetches information about the available ground stations from the Ground-Stations API. As the actual Ground-Stations application is still a work in progress, an Apiary project was created and used instead. Apiary is a tool for designing application program interfaces with an option to simulate the operation of the designed API.

11 https://pypi.python.org/pypi/APScheduler

12 http://apiary.io/


def update_overpass_times():
    """Inserts the satellite overpass times to the database

    Updates the overpass times in the database for each ground station
    registered in the Ground-Stations API
    """
    util.print_info("Trying to update overpass times")

    ground_station_library.GroundStation.api_url = app.config['GROUND_STATION_API']
    ground_stations = ground_station_library.get_ground_stations()

    for ground_station in ground_stations:
        util.print_info("Trying to update overpass times for {}".format(ground_station.name))

        predictor = LocationPredictor(ground_station)
        predictor.fetch()
        predictor.filter()
        predictor.store_and_schedule()
        db.session.commit()

    util.print_info("Updating overpass times finished")
    return True

Figure 7: Updating the satellite overpass times

Then, as shown in figure 7, for each ground station an API call is made towards the Satellite-Location Predictor. As a response, the Satellite-Location Predictor returns a list of upcoming passes in valid JSON format. The JSON response is read and parsed by the Contact Automation application using a utility method, which is shown in figure 8.

# Imports added for completeness; print_info is a project utility function.
from urllib.request import urlopen
import json


def read_remote_json(url):
    response = urlopen(url)
    data = response.read().decode('utf-8')
    json_data = {}

    try:
        json_data = json.loads(data)
    except ValueError:
        print_info('An error occurred while reading remote JSON data: {}'.format(data))

    return json_data

Figure 8: Reading JSON response from a URL


As the Satellite-Location Predictor returns multiple upcoming overpasses, an iteration over the results is done. Each result is checked against the database to determine whether it already exists there. As the fetched overpass times and their accuracy can change over time, two overpasses whose start times differ by less than a configured minimum number of seconds are considered to be the same. All newly found overpasses are stored in the database for future reference. In addition, two background jobs are scheduled for each newly found overpass—the first one is scheduled a configured number of seconds prior to the overpass and the other one after it. These background jobs handle the transferring of scheduled commands to the ground station for execution.
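The de-duplication check and the per-overpass job scheduling could look like the following sketch (the constant names and values stand in for the configured values, and the scheduler is an APScheduler instance as in the earlier sketch):

from datetime import timedelta

MIN_DIFF_SECONDS = 60  # overpasses starting closer together than this count as the same
LEAD_SECONDS = 120     # how long before/after the overpass the jobs should run


def is_same_overpass(stored_start, predicted_start):
    """Two overpasses are considered identical if their start times are close enough."""
    return abs((stored_start - predicted_start).total_seconds()) < MIN_DIFF_SECONDS


def schedule_overpass_jobs(scheduler, overpass_start, overpass_end, queue_job, cleanup_job):
    """Schedule the queueing job before the overpass and the clean-up job after it."""
    scheduler.add_job(queue_job, 'date',
                      run_date=overpass_start - timedelta(seconds=LEAD_SECONDS))
    scheduler.add_job(cleanup_job, 'date',
                      run_date=overpass_end + timedelta(seconds=LEAD_SECONDS))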

5.1.2.2 Queueing commands before an overpass

Prioritising and queuing commands prior to a satellite overpass is done using a timed background job. When the background job is executed, it first fetches the commands scheduled for this overpass from the database in a prioritised order.

def fetch_commands(self):
    """Fetches commands from the database in prioritized order"""
    query = Command.query.filter(Command.delete_time == None).\
        filter(Command.satellite == self.satellite).\
        filter(Command.status == Status.SCHEDULED).\
        order_by(sqlalchemy.desc(Command.priority)).\
        order_by(sqlalchemy.asc(Command.create_time))

    self.commands = query.all()
    print_info('Fetched {} commands from the database'.format(len(self.commands)))

Figure 9: Querying commands in prioritised order from the database

In figure 9 the function for querying commands in prioritised order is shown. First the commands in the database are filtered by their satellite and status. After the suitable commands have been selected from the database, an ordering operation is applied. The ordering—also shown in figure 9—ensures that commands with a higher priority and an earlier creation time are preferred over other commands.


def queue(self):
    connection = messaging_queue.get_connection()
    channel = connection.channel()

    # Ensuring that a queue for this satellite exists..
    channel.queue_declare(queue=self.satellite)

    for command in self.commands:
        command.status = Status.TRANSMITTED
        body = json.dumps(command.serialize())
        channel.basic_publish(exchange='',
                              routing_key=command.satellite,
                              body=body)
        print_info('Transmitted command with ID {}'.format(command.id))

    db.session.commit()
    connection.close()

Figure 10: Queuing commands prior to an overpass

After prioritising the commands, a connection to a RabbitMQ messaging queue channel, which is unique for each satellite, is opened. This connection is opened using the Python pika library, which provides a pure Python implementation of the AMQP 0-9-1 protocol [15]. This is the connection protocol used for connecting to the RabbitMQ messaging queue. After a connection to the messaging queue channel has been opened, an iteration is done over the previously prioritised commands, as shown in figure 10. Each command's status is set to transmitted and its JSON representation is published to the messaging queue. After the iteration over the commands is finished, the changes to the command statuses are stored in the database and the messaging queue connection is closed.

13 https://pypi.python.org/pypi/pika

5.1.2.3 Cleaning up after an overpass

Cleaning the messaging queue after a satellite overpass is done in a timed background job. The cleaning process is important in order to avoid overloading the queue with low priority commands.

First, a connection to the relevant RabbitMQ messaging queue channel is opened. After the connection has been opened, the program starts to consume messages from the queue until there are none left. During the consumption of messages, each command's status is set from transmitted back to scheduled, meaning that these commands will be transmitted again prior to the next overpass.

Once the messaging queue is empty, the connection to the messaging queue channel is closed. Finally, all commands that had their status set to transmitted prior to this overpass, but were no longer in the messaging queue, are marked as finished.
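No figure is given for this job in the thesis text; the following sketch illustrates how the clean-up could be done with pika directly. The queue naming follows section 5.1.2.2 (one queue per satellite), the RabbitMQ host name follows the container setup in section 5.4, and the status-update callback is a simplified assumption.

import json
import pika


def clean_queue_after_overpass(satellite, set_status_scheduled):
    connection = pika.BlockingConnection(pika.ConnectionParameters(host='rabbitmq'))
    channel = connection.channel()
    channel.queue_declare(queue=satellite)

    while True:
        method, properties, body = channel.basic_get(queue=satellite, auto_ack=True)
        if method is None:
            break  # queue is empty, nothing left to clean up
        command = json.loads(body)
        # Command was not executed during the overpass: mark it for re-transmission.
        set_status_scheduled(command['id'])

    connection.close()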

5.2 Database

For storing commands and information about overpasses a PostgreSQL database is used, which is a powerful open-source object-relational database system [10]. The database design is fairly simple and is shown in figure 11. For the command and satellite_pass tables soft deletion is used—table rows are only marked as deleted on the application level instead of being actually deleted. Soft deletes help avoid any possible data loss at a small cost.

The table apscheduler_jobs is used by the APScheduler library for storing the scheduled background jobs.

Figure 11: Database structure

The command table stores information about all the commands that have ever been scheduled for satellite contacts. A scheduled command itself consists of a textual command name and the relevant parameters. Each command also has a priority field, which is an integer value where a higher value indicates a higher priority. In order to avoid multiple executions of a single command, each row has a status field. The status field has the type status, which itself is an enum type. The status can be either untested, scheduled, cancelled, failed, or finished. All commands with the status failed should be investigated and rescheduled by the command scheduler application, whenever possible. All other statuses are self-descriptive. In addition, each stored command has some other informative fields, such as the name of the operator who created the command and the type of the command. The type of a command can be either uplink or downlink.

The satellite_pass table stores the relevant information about upcoming and past satellite overpasses. As for now only the overpass times are needed, the rows in this table contain only the overpass start and end times. In the future, it is possible to store more parameters about a satellite pass.
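For illustration, the command table could be mapped to a Flask-SQLAlchemy model roughly as follows; the column names are taken from the fields visible in the figures, while the types and defaults are assumptions (the db object and Status enum refer to the earlier sketches).

class Command(db.Model):
    __tablename__ = 'command'

    id = db.Column(db.Integer, primary_key=True)
    satellite = db.Column(db.String, nullable=False)
    command = db.Column(db.String, nullable=False)  # textual command name
    parameters = db.Column(db.String)               # command arguments
    subsystem = db.Column(db.String)
    type = db.Column(db.Enum('uplink', 'downlink', name='command_type'))
    priority = db.Column(db.Integer, default=2)     # higher value = higher priority
    status = db.Column(db.Enum(Status), default=Status.UNTESTED)
    created_by = db.Column(db.String)
    create_time = db.Column(db.DateTime)
    delete_time = db.Column(db.DateTime)            # soft delete marker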

5.3 Ground Station Client simulator

As the Ground Station application is still a work in progress, a ground station client simulator application was implemented in order to validate and test the scheduling process of this application. The aim of the simulator is to imitate the operation of a real ground station during a satellite overpass. The application is written in Python and is controlled through a command-line interface.

As shown in figure 12, the simulator application expects three command line arguments—the name of the satellite, the overpass start time, and the end time of the overpass to be simulated. The application starts the simulation process at the specified start time and ends the simulation process when the specified end time has passed.


$ python3 client.py "ESTCUBE 1" 18:28:23 18:32:32
Starting Contact Automation client simulation..
Trying to consume a message
Received: b'{"priority": 3, "subsystem": null, "id": 1, "type": "downlink", "status": "transmitted", "satellite": "ESTCUBE 1", "parameters": "", "created_by": "Karel Liiv", "command": "test"}'
Simulating an overpass command execution started
Simulating an overpass command execution finished
Failed to consume a message.
Trying to consume a message
Received: b'{"priority": 3, "subsystem": null, "id": 1, "type": "downlink", "status": "transmitted", "satellite": "ESTCUBE 1", "parameters": "", "created_by": "Karel Liiv", "command": "test"}'
Simulating an overpass command execution started
Simulating an overpass command execution finished
Successfully consumed a message.
No messages found.. Waiting some seconds..
Trying to consume a message
No messages found.. Waiting some seconds..
Trying to consume a message
No messages found.. Waiting some seconds..
Trying to consume a message
No messages found.. Waiting some seconds..
Simulating an overpass has ended.

Figure 12: Work of the Ground-Station simulator client

When the overpass simulation starts the simulator client starts consuming messages from the messaging queue for the specified satellite. For each message consumed from the messaging queue the simulator pauses for 5 seconds. This pause should simulate the process of executing a single command.

class RandomExecutor(Executor):

    def execute(self):
        """Simulates the execution of the command

        Returns:
            bool
        """
        print('Simulating an overpass command execution started')
        sleep(5)
        print('Simulating an overpass command execution finished')
        return random.randint(0, 1) == 1

Figure 13: Simulating a single command execution


As shown in figure 13, a failure which may happen during the execution of a command is simulated using a random number generating function. If the simulation of a command execution succeeds, an ACK response is sent to the channel indicating that the message in the queue was consumed successfully. If the simulation fails, a NACK response is sent to the messaging channel to indicate that the message was not successfully consumed. Messages that receive a NACK response are added back to the messaging queue by the RabbitMQ queue service.
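The acknowledgement logic itself is not shown in a figure; the following sketch illustrates how the simulator's consume loop could acknowledge or reject messages with pika based on the simulated execution result (the executor follows figure 13, the other names are assumptions):

def consume_one(channel, satellite, executor):
    method, properties, body = channel.basic_get(queue=satellite)
    if method is None:
        return False  # no messages left in the queue
    if executor.execute():
        # Execution succeeded: remove the message from the queue.
        channel.basic_ack(delivery_tag=method.delivery_tag)
    else:
        # Execution failed: put the message back so it can be retried.
        channel.basic_nack(delivery_tag=method.delivery_tag, requeue=True)
    return True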

5.4 Deployment to the ESTCube MCS server

Along with the developed application, a Dockerfile containing all the required commands for assembling a Docker image is provided [11]. A Docker image consists of all the required files and software for running the application contained within it.

FROM python:3.4
MAINTAINER Karel Liiv "karel.liiv@estcube.eu"

COPY ./requirements.txt /usr/src/app/
RUN pip3 install --no-cache-dir -r /usr/src/app/requirements.txt

COPY . /app
WORKDIR /app

ENTRYPOINT ["python3"]
CMD ["main.py"]

Figure 14: Contact Automation Dockerfile

In figure 14 the Dockerfile used for building the Contact Automation Docker image is shown.

The first line of the Dockerfile declares the base image used for building the image. By using Python's official base image, the installation process of Python is skipped. After that the Dockerfile states that the Python libraries required by the application should be installed using the pip3 command. Then the content of the current folder is copied from the host machine into the image. The two final lines of the Dockerfile specify that whenever a container created from this image is started, the main.py file is executed using the python3 command.


A Docker image can be built from the Dockerfile using the docker build command.

$ docker build -t estcube/contactautomation:latest application/

Figure 15: Building the Contact Automation Docker image

As shown in figure 15, the docker build command is fairly simple. A tag name for the image to be built can be passed to the command using the -t flag. The last argument of the command has to be the location of the Dockerfile used by the build process; in our example, it is located in the application directory.

A Docker container can be created from the previously built image using the docker run command. The docker run command takes multiple arguments. An example command for running the Contact Automation container is shown in figure 16.

$ docker run --name contactautomation \
    -e DB_SERVICE='contactautomation_db' \
    -e TLE_FETCHER_API='http://tlefetcher:8002/' \
    -e RABBITMQ_HOST='rabbitmq' \
    -e SATELLITE_LOCATION_PREDICTOR_URL='http://satellitelocationpredictor:8003/' \
    -d -p 8006:8006 \
    --link contactautomation_db:contactautomation_db \
    --link tlefetcher:tlefetcher \
    --link satellitelocationpredictor:satellitelocationpredictor \
    --link rabbitmq:rabbitmq \
    estcube/contactautomation

Figure 16: Starting the Contact Automation Docker container

As shown in figure 16, multiple environment variables can be passed to the container using the -e argument. The argument -d states that the container is started in the background.

Mapping a port from the host machine to the container is done using the -p argument.

Containers that are required by the Contact Automation container are linked using the --link argument. Multiple links can be created using this parameter, as also seen in figure 16. The last argument of the docker run command is the name of the image to use.


6. Future work

The current solution meets the requirements that were set in the beginning, but the application is not yet usable by a satellite operator. Therefore, future improvements have already been planned for this software component.

Firstly, the current solution is missing the on-ground testing part, as described in Chapter 4.2. The integration with the hardware testing platform will be added to the implemented software module as soon as the development of the hardware testing platform has finished.

Secondly, the format of the data pushed to the messaging queue might change in the future. This change might be required because the Ground Station software is still a work in progress. The finalised method of communication will be implemented as soon as the Ground Station software is finished.

Thirdly, the current solution has an option to sort commands by priority, but an option for specifying the execution order of commands should be added. A potential solution would be to allow a prerequisite command to be assigned to each command. A command could then be executed only after its prerequisite command has been successfully executed.

Finally, to make the current solution user-friendly and usable by satellite operators, a graphical user interface is required, including integration with the MCS. The graphical user interface must first be designed and then a mock-up of it has to be created.
