Platform#

Integration#

The Strongbox store provides an HTTP interface following REST conventions. It stores payloads on the local filesystem and metadata in a shared relational database. A message broker is used for asynchronous communication between cluster nodes.

Each Strongbox store instance in the cluster is called a ring node. Each ring node must be configured with a cluster-wide unique feature.nodeUid and a unique http.baseUri. The base URI is propagated as-is to other ring nodes and clients, which use it to build URLs when accessing resources on the node. It also determines on which interface and port the ring node listens for incoming requests.

Note: the http.baseUri must be resolvable by clients. It must point to a publicly accessible interface and contain a routable IP address or a hostname that resolves to one. Using http://0.0.0.0:8094/ is not permitted.
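A minimal configuration sketch for one ring node, assuming a Java-style properties format (the key names are taken from above; all values are examples):

    # Hypothetical per-node configuration file
    feature.nodeUid=node-01
    http.baseUri=http://192.0.2.10:8094/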

Authentication & authorization#

Strongbox store does not require authentication and does not enforce any authorization.

Transport-level security is not employed.

Software stack#

The Strongbox store requires a Java 11 runtime. It builds on the components described in the following subsections.

Relational database#

The Strongbox store expects a highly available, high-performance relational database supporting the MySQL flavour. All ring nodes must have access to the same database (cluster).
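Connection details are not documented here; the following sketch only illustrates that every ring node points at the same shared endpoint (URL and credentials are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class DatabaseCheck {
        public static void main(String[] args) throws SQLException {
            // The same (clustered) endpoint is configured on every ring node.
            String url = "jdbc:mysql://db.example.com:3306/strongbox";
            try (Connection connection = DriverManager.getConnection(url, "strongbox", "secret")) {
                System.out.println("Connected to " + connection.getMetaData().getDatabaseProductName());
            }
        }
    }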

The object model is explained in the entity relationship diagram.

Message broker#

Strongbox ring nodes receive asynchronous tasks from a message broker via JMS. These tasks instruct the ring node to replicate a chunk, delete a container, or perform other background operations. Tasks can be sent by other ring nodes, by the node itself, or by an operator.

Each Strongbox store ring node registers itself with a shared JMS message broker. For each ring node, a dedicated command queue is created. While the service is running, a consumer handles inbound messages and passes them to executors. All ring nodes of the same cluster must share the message broker prefix, configured as messageBroker.destinationPrefix.
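The actual queue naming scheme is not documented above; the following is a minimal sketch of such a per-node consumer, assuming an ActiveMQ broker and a hypothetical destination name derived from the shared prefix and the node UID:

    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class CommandQueueConsumer {
        public static void main(String[] args) throws JMSException {
            String destinationPrefix = "strongbox"; // messageBroker.destinationPrefix (example)
            String nodeUid = "node-01";             // feature.nodeUid (example)

            ConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://broker.example.com:61616");
            Connection connection = factory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // Assumed naming scheme: one dedicated command queue per ring node.
            Queue commandQueue = session.createQueue(destinationPrefix + ".command." + nodeUid);

            MessageConsumer consumer = session.createConsumer(commandQueue);
            consumer.setMessageListener(message -> {
                // The real service would dispatch the task to its executors here.
                System.out.println("Received task: " + message);
            });
            connection.start();
        }
    }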


The message broker performs the important function of queuing tasks for offline ring nodes. It is therefore important to provision sufficient queuing capacity, based on the traffic load and the expected duration of downtime. While the JMS messages themselves are short text messages, their sheer number can lead to large queues.

During normal operation, only a few messages are expected to be exchanged, and the queues should be empty.

Simple file storage#

Each ring node requires a local filesystem to store chunk payloads. A base directory is configured via sfs.baseDir. The store manages the directory and file structure beneath it; direct operator interaction is discouraged.

To avoid overloading the directory index, chunk and container directories are distributed:

    /srv/sfs/strongbox/data                       <-- sfsConfig.getSfsBase()
        /{nodeUid}                                <-- featureConfig.getNodeUid()
            /{tenantUid}                          <-- tenant
                /{suffix}/{containerUid}          <-- container home
                    /chunks                       <-- chunks home
                        /{partition}/{chunkUid}   <-- chunk location
  • suffix: last three characters of the containerUid
  • partition: first character of the chunkUid
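To make the layout concrete, here is a sketch of how a chunk location could be derived from these rules (class, method, and variable names are illustrative, not the actual Strongbox code):

    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class ChunkPaths {
        // Derives the on-disk location of a chunk following the layout above.
        static Path chunkPath(Path sfsBase, String nodeUid, String tenantUid,
                              String containerUid, String chunkUid) {
            String suffix = containerUid.substring(containerUid.length() - 3); // last three characters of the containerUid
            String partition = chunkUid.substring(0, 1);                       // first character of the chunkUid
            return sfsBase.resolve(nodeUid)
                          .resolve(tenantUid)
                          .resolve(suffix)
                          .resolve(containerUid)
                          .resolve("chunks")
                          .resolve(partition)
                          .resolve(chunkUid);
        }

        public static void main(String[] args) {
            Path location = chunkPath(Paths.get("/srv/sfs/strongbox/data"),
                    "node-01", "tenant-a", "c0ffee123abc", "9f3b2c");
            System.out.println(location);
            // /srv/sfs/strongbox/data/node-01/tenant-a/abc/c0ffee123abc/chunks/9/9f3b2c
        }
    }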

Support for heterogeneous nodes#

The hash ring divides its total space into segments of equal size and distributes them evenly over the registered ring nodes. As a result, chunk addresses map to ring nodes with a reasonably uniform distribution. All ring nodes are therefore expected to provide equal disk space and processing capacity.
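The following sketch illustrates this uniform mapping (not the actual Strongbox implementation; the 32-bit address space is an assumption):

    import java.util.List;

    public class RingSketch {
        // Maps a chunk address to the ring node owning its segment.
        static String nodeFor(long chunkAddress, List<String> nodeUids) {
            long ringSize = 1L << 32;                      // assumed address space
            long segmentSize = ringSize / nodeUids.size(); // segments of equal size
            int segment = (int) ((chunkAddress & 0xFFFFFFFFL) / segmentSize);
            return nodeUids.get(Math.min(segment, nodeUids.size() - 1));
        }

        public static void main(String[] args) {
            List<String> nodes = List.of("node-a", "node-b", "node-c");
            System.out.println(nodeFor(123_456_789L, nodes)); // node-a
        }
    }

Because every node owns an equally sized share of the address space, each node receives roughly the same amount of data.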

This does not meet the requirements of a system supporting heterogeneous nodes.

A possible workaround is to run multiple Strongbox store instances per physical machine,

  1. either through virtualisation,
  2. or by starting multiple instances.

The requirement of a unique feature.nodeUid and a unique http.baseUri for each instance remains. If multiple instances are started on the same machine, individual configuration files can be provided using the --config flag.
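For example, two instances sharing one machine could be configured as follows, each started with its own file via --config (paths and values are illustrative, assuming the same properties format as above):

    # /etc/strongbox/node-a.properties
    feature.nodeUid=node-a
    http.baseUri=http://192.0.2.10:8094/

    # /etc/strongbox/node-b.properties
    feature.nodeUid=node-b
    http.baseUri=http://192.0.2.10:8095/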