Planning a SwiftStack Deployment
This guide provides detailed guidance on how to plan a SwiftStack deployment. For step-by-step installation instructions, see the SwiftStack Install Guide.
OpenStack Swift is designed to store and retrieve whole files via HTTP across a cluster of industry-standard servers and drives, using replication to ensure data reliability and fault tolerance. While this model provides great flexibility (and low cost) from a hardware perspective, it requires some upfront planning, testing, and validation to ensure that the deployment is suitable for the expected workload.
When selecting a specific hardware configuration for your SwiftStack cluster, it is important to determine which configuration provides the best balance of I/O performance, capacity, and cost for a given workload. For instance, a customer-facing web application with a large number of concurrent users will have a different profile than a cluster used primarily for archiving.
SwiftStack clusters comprise two types of services: proxy services and storage services. SwiftStack provides a package-based installer that contains both the proxy server and object storage server components, which can either run on the same physical nodes for smaller clusters or be split into separate tiers for larger deployments. For larger-scale and high-performance clusters, account and container metadata storage can also be split out into its own tier, leveraging high-performance media, such as SSDs, for metadata storage and retrieval.
Proxy Tier Recommendations
Proxy nodes handle all incoming API requests. Once a proxy server receives a request, it determines which storage nodes to connect to based on the URL of the object. Proxy services also coordinate responses, handle failures, and coordinate timestamps. Because the proxy tier uses a shared-nothing architecture, it can be scaled as needed based on projected workloads. If separated into its own tier, a minimum of two proxy nodes should be deployed for redundancy; should one proxy node fail, the others take over. Having the proxy services in their own tier enables read/write access to be scaled out independently of storage capacity. For example, if the cluster is on the public Internet, requires SSL termination, and has high demand for data access, many proxy nodes can be provisioned. However, if the cluster is on a private network and used primarily for archival purposes, fewer proxy nodes are needed.
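Conceptually, the proxy maps an object's URL to a partition in a hash ring, and the ring maps that partition to a set of storage devices. The Python sketch below illustrates the idea only; the device names are hypothetical, and the simplistic round-robin partition assignment stands in for Swift's weighted, zone-aware ring builder. It is not SwiftStack's actual ring code.

    # Minimal sketch of a Swift-style hash ring lookup. Real rings use a
    # per-cluster hash prefix/suffix, many more partitions, and weighted,
    # zone-aware device assignment; this only illustrates the mapping.
    import hashlib

    PART_POWER = 8   # 2**8 = 256 partitions (production rings use far more)
    REPLICAS = 3

    devices = [f"storage{n:02d}" for n in range(12)]   # hypothetical devices

    # Simplistic partition-to-devices table: three distinct devices per partition.
    part2dev = [
        [devices[(part + r * 4) % len(devices)] for r in range(REPLICAS)]
        for part in range(2 ** PART_POWER)
    ]

    def nodes_for(path):
        """Hash the object path and look up the devices for its partition."""
        digest = hashlib.md5(path.encode()).digest()
        part = int.from_bytes(digest[:4], "big") >> (32 - PART_POWER)
        return part2dev[part]

    print(nodes_for("/v1/AUTH_demo/photos/cat.jpg"))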
The proxy tier also includes SSL terminators and authentication services. For most publicly facing deployments, as well as private deployments available across a wide-reaching corporate network, SSL will be used to encrypt traffic to the client. Because SSL adds processing load to establish sessions with clients, more capacity will need to be provisioned in the access layer. SSL may not be required for private deployments on trusted networks.
The proxy nodes use a moderate amount of RAM and are network-I/O intensive. Typically, proxy servers are 1U systems with a minimum of 12 GB of RAM. For small SwiftStack clusters, the storage services and proxy services can run on the same physical nodes.
Storage Tier Recommendations
The next component is the storage nodes themselves. When a client uploads data to SwiftStack, the proxy tier writes the data to three different availability zones in the storage tier. A quorum is required: at least two of the three writes must succeed before the client is notified that the upload was successful.
For incoming read requests, one of the three replicas of the data is served. Consequently, each storage node and drive added to a Swift cluster not only provides additional storage capacity, but also increases the aggregate I/O capacity of the cluster, as more drives are available to serve incoming read requests.
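The write-quorum rule above can be expressed in a few lines. In this minimal Python sketch, store_replica is a hypothetical stand-in for a backend PUT, not a real SwiftStack API; the point is the success condition, where two of three replica writes suffice:

    # Minimal sketch of the write quorum: with 3 replicas, at least 2 backend
    # writes must succeed before the client is told the upload succeeded.
    import random

    REPLICAS = 3
    QUORUM = REPLICAS // 2 + 1   # 2 of 3

    def store_replica(node, obj):
        """Hypothetical stand-in for a PUT to one storage node (~10% failure)."""
        return random.random() > 0.1

    def put_object(nodes, obj):
        successes = sum(store_replica(node, obj) for node in nodes)
        return successes >= QUORUM   # client sees success only on quorum

    print(put_object(["storage01", "storage02", "storage03"], b"payload"))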
Storage nodes are typically higher-density 2U, 3U, or 4U systems with 12-36 SATA disks each. Storage nodes use a reasonable amount of RAM and CPU, as metadata needs to be readily available to return objects quickly. A good rule of thumb is approximately 1 GB of RAM for each TB of disk; for example, a node with 24 drives should have 24-36 GB of RAM. The storage nodes run services not only to field incoming requests from the proxy nodes, but also replication, auditing, and other processes that ensure the durability of the stored data.
SwiftStack recommends using either 2TB or 3TB 7200 RPM SATA drives, which deliver good price/performance value. When selecting drives, keep the desired I/O performance for single-threaded requests in mind. Since the system does not use RAID, each request for an object is handled by a single disk, so faster drives will increase single-threaded response rates. Desktop-grade drives can be used where responsive remote hands are available in the data center; enterprise-grade drives can be used where that is not the case. SwiftStack does not recommend using “green” drives: SwiftStack continuously verifies data integrity, and the power-down functions of green drives may result in excess wear.
Usable vs. Raw Storage Capacity
To calculate the usable versus raw storage capacity in a SwiftStack cluster, the following needs to be considered:
- Replication factor for objects (e.g. 3x, 2x etc.)
- Assume that only 80% of the raw disk capacity is usable: 5-10% of drive space is “lost” when formatting (varying by drive size), and it is good practice to leave headroom so drives are never filled to 100% capacity
As an example, if your objective is to set up a SwiftStack cluster with 100TB of usable storage, you would need the following raw storage capacity:
100TB usable storage capacity x replication factor of 3 = 300TB raw storage capacity
After deducting what is lost in formatting and headroom, you would need the following number of drives, assuming 3TB drives are used:
300TB raw storage capacity / 80% (formatting & headroom) = 375TB of raw disk capacity
375TB of raw disk capacity / 3TB drives = 125 drives
Assuming that a 24-drive chassis is used, this environment would require a minimum of 6 storage nodes.
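The arithmetic above is easy to script when evaluating other capacity targets. Here is a small Python sketch of the same calculation, with the 80% usable fraction and the drive and chassis sizes from the example treated as adjustable assumptions:

    import math

    def plan_capacity(usable_tb, replicas=3, usable_fraction=0.80,
                      drive_tb=3.0, drives_per_node=24):
        raw_tb = usable_tb * replicas                 # 100 TB x 3 = 300 TB
        disk_tb = raw_tb / usable_fraction            # 300 TB / 0.80 = 375 TB
        drives = math.ceil(disk_tb / drive_tb)        # 375 TB / 3 TB = 125 drives
        nodes = math.ceil(drives / drives_per_node)   # 125 / 24 -> 6 nodes
        return raw_tb, disk_tb, drives, nodes

    print(plan_capacity(100))   # (300.0, 375.0, 125, 6)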
Network Recommendations
A typical SwiftStack deployment will have a front-facing ‘access’ network (to run the proxy and authentication services) and a back-end ‘storage’ network. When designing network capacity, keep in mind that writes fan out in triplicate on the storage network: as there are three copies of each object, an incoming write is sent to three storage nodes. Network capacity for writes therefore needs to be provisioned in proportion to the overall workload.
The proxy nodes should ideally be provisioned with two high-throughput (10GbE) interfaces as these nodes field each incoming API request. One interface is used for ‘front-end’ incoming requests and the other for ‘back-end’ access to the object storage nodes to put and fetch data.
Storage nodes can be provisioned with 1GbE or 10GbE network interfaces depending on expected workload and desired performance. The object storage system is designed with concurrent uploads/downloads in mind, so higher aggregate throughput is achieved through concurrency. The network I/O capacity (1GbE, a bonded 1GbE pair, or 10GbE) should match your desired concurrent throughput for reads and writes.
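As a back-of-the-envelope check on the write fan-out described above, the following Python sketch estimates the storage-network load generated by client writes. The 3x multiplier comes from the replica count; the example figures are assumptions for illustration:

    def storage_network_write_load(client_write_mbps, replicas=3):
        """Approximate storage-network bandwidth consumed by client writes."""
        return client_write_mbps * replicas

    # e.g. 400 MB/s of incoming client writes becomes ~1200 MB/s on the
    # storage network -- close to saturating a single 10GbE link (~1250 MB/s)
    # before reads, replication, and auditing traffic are even counted.
    print(storage_network_write_load(400))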
Sample Configuration: Small Cluster with 20TB of Usable Storage
For a small deployment, e.g. with 2 nodes and a total of 20TB of usable storage, the SwiftStack packages will run all services on all nodes, including load balancing, proxy server, and object storage. This is configured automatically through the SwiftStack installation and configuration process and does not require any additional setup by the operator. Zones will be configured automatically by the SwiftStack Controller.
Since SwiftStack includes a load balancer (LVS), there is no need to configure an external load balancer. If an external load balancer is to be used instead, simply uncheck the “SwiftStack Load Balancer” option in the SwiftStack Controller console and add the IP address of your load balancer.
Sample Configuration: Medium-Sized Cluster with 100TB of Usable Storage
For a medium-sized deployment, e.g. with 6-12 storage nodes, the cluster should be configured as follows:
At this scale, the choice can be made to either run a dedicated proxy/load-balancing tier or run all services on every node.
If running a separate proxy tier, deploy at least two dedicated proxy/load-balancing nodes for high availability. The proxy nodes should also have two interfaces: one for front-facing API requests and one for the “back-end” network connecting to the storage nodes.
If running all services on each node, be sure to verify that the overall network capacity is suitable for your workload.
Typically, a single network tier is used for small and medium-sized deployments. Either 1GbE or 10GbE switches can be used for this purpose, depending on the throughput the cluster is expected to sustain.
Sample Configuration: Large Cluster with 1PB of Usable Storage
For larger deployments such as a cluster with 1PB of usable storage as depicted in the example below, the following needs to be considered:
- A separate load balancing tier will likely need to be used, e.g. F5s or NetScalers.
- The proxy nodes will be set up in a separate proxy tier. In this case, there are 8 proxy nodes.
- Each rack of storage nodes will be configured as a separate zone. In this example, there are 40 storage nodes. Zones can be configured in the SwiftStack Controller on the Node configuration page.
- A pair of aggregation switches with two links back to the front-end/border network, connecting to the proxy tier and to the top-of-rack (ToR) switches for each of the five storage zones. All connections to the proxy tier and the zones are 10GbE.
- In each rack of storage nodes there is a pair of top-of-rack (ToR) switches, each of which connects to the aggregation network. Depending on the overall concurrency desired, a deployment can use either a 1GbE or a 10GbE network to the object stores. It is possible to use a single, non-redundant switch per rack, as the system is designed to sustain a zone failure.
Tiering Account and Container Metadata
Each account and container is an individual SQLite database, distributed across the cluster. An account database contains the list of containers in that account; a container database contains the list of objects in that container. Together these databases keep track of where object data lives: each account references all of its containers, and each container references each of its objects.
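As a simplified illustration of this hierarchy, consider the Python sketch below. It uses throwaway in-memory databases and an invented two-column schema, not Swift's actual on-disk format:

    # Simplified illustration only -- not Swift's actual schema.
    import sqlite3

    account_db = sqlite3.connect(":memory:")     # one database per account
    account_db.execute("CREATE TABLE container (name TEXT PRIMARY KEY)")
    account_db.execute("INSERT INTO container VALUES ('photos')")

    container_db = sqlite3.connect(":memory:")   # one database per container
    container_db.execute(
        "CREATE TABLE object (name TEXT PRIMARY KEY, size INTEGER)")
    container_db.execute("INSERT INTO object VALUES ('cat.jpg', 48213)")

    # Listings walk these tables; with many millions of objects per container,
    # this metadata is exactly what benefits from fast media such as SSDs.
    print(container_db.execute("SELECT name, size FROM object").fetchall())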
When your application needs to ingest many millions of files into a single container, it may be necessary to use higher-performing media, e.g. SSDs, for the account and container metadata. The account and container data set is relatively small and does not require much storage, which makes it well suited to higher-performing media such as SSDs.
There are two options for tiering the account and container metadata. The first is to add a pair of SSDs to each storage node and configure SwiftStack to store the account and container metadata only on those SSDs. The other is to set up an altogether separate tier of dedicated high-performance nodes (with SSDs) for the account and container metadata.
Benchmarking Your SwiftStack Cluster
Benchmarking and tuning go hand-in-hand with designing the right cluster. Some of the areas to consider when conducting your benchmarking and tuning include:
- The spread of file sizes
- Number of concurrent workers
- The expected proportion of create / read / update / delete operations
To conduct your benchmarking, set up a cluster pre-populated with a data set that mimics its eventual state. This is ideally done in a proof-of-concept environment, which will enable you to gain valuable experience before setting up the production cluster.
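A minimal sketch of a concurrency benchmark against the Swift HTTP API follows. STORAGE_URL, TOKEN, and the 'bench' container are placeholders for values from your own environment; purpose-built benchmarking tools offer far more complete workload modeling than this:

    # Minimal PUT-latency sketch against the Swift API. Assumes the 'bench'
    # container already exists and that STORAGE_URL/TOKEN came from your auth
    # system; vary WORKERS and OBJECT_SIZE to match your expected workload.
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    STORAGE_URL = "https://swift.example.com/v1/AUTH_demo"   # placeholder
    TOKEN = "AUTH_tk..."                                     # placeholder
    WORKERS = 16            # number of concurrent workers
    OBJECT_SIZE = 1 << 20   # 1 MiB payloads

    def put_object(i):
        start = time.time()
        r = requests.put(f"{STORAGE_URL}/bench/obj{i}",
                         data=b"\0" * OBJECT_SIZE,
                         headers={"X-Auth-Token": TOKEN})
        r.raise_for_status()
        return time.time() - start

    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        latencies = list(pool.map(put_object, range(200)))

    print(f"mean PUT latency: {sum(latencies) / len(latencies):.3f}s")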
SwiftStack is available to assist in determining which hardware configuration is optimal for your workload.
SwiftStack Integration & Planning
Integrating SwiftStack Into Your Environment
When getting SwiftStack up and running in your data center, there are several potential integrations with third-party systems and services to consider, including:
- Authentication systems, such as LDAP and Active Directory
- Operations support systems
- Billing systems
- External monitoring systems, e.g. to consume SNMP polling of system information, SNMP traps, etc.
- Content Distribution Networks (CDNs)
- Replication to off-site Swift clusters for additional redundancy
Each of these areas can be integrated with your Swift environment, but the details will differ based on your specific requirements and use case. SwiftStack provides a wide selection of add-ons that enable integration into an operations environment. SwiftStack currently offers the following add-ons:
- Active Directory
- On-Disk Encryption with LUKS
- Coming soon: integrations for monitoring platforms (Nagios / Ganglia) and content delivery networks (CDNs)
Planning the Datacenter Deployment
Datacenter space and power requirements are critical areas to plan for a successful SwiftStack deployment. Deploying a full rack of object stores requires planning with your datacenter facilities team and/or datacenter vendor. Be sure to meet early with this team to develop your plan for:
- Power provisioning
- Physical space requirements
- Networking capacity, planning, configuration
- Networking layouts
- Rack layouts
- Hardware validation and burn-in
SwiftStack can assist with best practices and datacenter facilities planning advice for a successful implementation.