Hardware Infrastructure Configuration

True Hardware Neutrality

The components of the SwiftStack Storage System have been designed to have minimal hardware dependencies, allowing them to run on virtually any enterprise-grade x86 server and standard drives. This allows you to choose and mix hardware that best fits your application and drastically reduces the overall cost of the solution.

With the design of the cluster, you can manage capacity and performance independently. It’s easy to adjust each resource over time, so what you deploy initially does not impede how you scale in the future. What’s more, it is easy to source hardware from the server vendors or integrators with which you already have procurement relationships.

The hardware information below is meant to give you ideas to get you going. We do not maintain a rigid certification list to choose from, because the SwiftStack services are compatible with a broad range of hardware configurations; often that’s a server family you’re already buying and supporting. The same goes for operating system requirements: SwiftStack supports a wide range of Linux distributions and versions (RHEL, CentOS, Ubuntu), so you can use the one you’re comfortable with and may have already optimized for your environment.

Hardware Architecture Options

  • Cisco
  • HPE
  • Dell

Each SwiftStack Storage Node:

  • Cisco UCS S3260
  • Single UCS C3x60 M4 Server Node
  • 2x Intel E5-2620 CPU (UCS-CPU-E52620E)
  • 128 GB RAM
  • 2x boot SSD
  • Dual 10GbE Intel NIC
  • 60x 8TB HDDs
  • UCSC-C3K-NV8 800GB NVMe SSD

                        Per Node        Small            Medium           Large
Nodes                   1x UCS S3260    5x UCS S3260     10x UCS S3260    25x UCS S3260
Usable Capacity         144 TB          730 TB           1.4 PB           3.6 PB
(with 3x replicas)
Application Bandwidth   5 Gbps          25 Gbps          50 Gbps          125 Gbps
When appropriate, erasure coding can offer 2x or more usable capacity relative to replicas.
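
These figures can be sanity-checked with back-of-the-envelope math; the same arithmetic applies to the HPE and Dell tables below. Here is a minimal Python sketch, assuming usable capacity is raw decimal capacity converted to binary units and divided by the protection overhead; the 8+4 erasure-code layout is an illustrative assumption, not a required configuration:

```python
TB = 10**12   # drives are sold in decimal terabytes
TiB = 2**40   # usable space is commonly reported in binary units

def usable(drives_per_node, drive_tb, nodes, overhead=3.0):
    """Usable capacity (in binary units) for a given protection overhead.

    overhead = 3.0 for 3x replicas; a hypothetical 8+4 erasure code
    stores 12 fragments per 8 data fragments, so overhead = 12/8 = 1.5.
    """
    raw_bytes = drives_per_node * drive_tb * TB * nodes
    return raw_bytes / overhead / TiB

# Cisco UCS S3260 example: 60x 8TB drives per node
print(round(usable(60, 8, 1)))       # ~146  (table: 144 TB per node)
print(round(usable(60, 8, 5)))       # ~728  (table: 730 TB for Small)
print(round(usable(60, 8, 5, 1.5)))  # ~1455 (8+4 erasure coding: ~2x more)
```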

Each SwiftStack Storage Node:

  • HPE Apollo 4510
  • Single Server Node
  • 2 x Intel E5-2620 CPU
  • 128GB RAM
  • 2x boot SSD
  • 400GB PCIe NVMe flash drive
  • Dual 10GbE Intel NIC
  • 68x 8TB HDDs

                        Per Node          Small             Medium            Large
Nodes                   1x Apollo 4510    5x Apollo 4510    10x Apollo 4510   25x Apollo 4510
Usable Capacity         163 TB            815 TB            1.6 PB            4.1 PB
(with 3x replicas)
Application Bandwidth   5 Gbps            25 Gbps           50 Gbps           125 Gbps
When appropriate, erasure coding can offer 2x or more usable capacity relative to replicas.

Each SwiftStack Storage Node:

  • Dell DSS7000
  • Single Server Node
  • 2 x Intel E5-2620 CPU
  • 192GB RAM
  • 2x boot SSD
  • 400GB PCIe NVMe SSD
  • Dual 10GbE Intel NIC
  • 90x 8TB HDDs

                        Per Node       Small          Medium         Large
Nodes                   1x DSS7000     5x DSS7000     10x DSS7000    25x DSS7000
Usable Capacity         216 TB         1.1 PB         2.2 PB         5.5 PB
(with 3x replicas)
Application Bandwidth   5 Gbps         25 Gbps        50 Gbps        125 Gbps
When appropriate, erasure coding can offer 2x or more usable capacity relative to replicas.

NODE CONFIGURATION OPTIONS

The storage cluster is made up of nodes; to scale the cluster, simply add more nodes. Nodes are typically configured to run services in one of three ways (a small code sketch follows the list below):

  • Proxy - Applications interact with the proxy service to read and write objects. The proxy layer can be run on dedicated nodes to manage and scale performance independent of capacity.
  • Storage - The storage service stores and protects all objects. This layer can be scaled independently for capacity and redundancy.
  • Combined - This mode runs both proxy and storage services on the node. This is ideal for testing, small environments, or archive solutions.
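
As a concrete illustration of these three modes, the following minimal Python sketch lays out a combined-mode starter cluster and a larger cluster with dedicated proxy nodes. The Node structure and node names are hypothetical, for illustration only, and are not SwiftStack’s configuration format.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    roles: set[str]  # any of {"proxy", "storage"}

# Combined mode: every node runs both services
# (ideal for testing, small environments, or archives).
combined = [Node(f"node{i}", {"proxy", "storage"}) for i in range(1, 4)]

# Dedicated mode: proxy count (performance) scales independently
# of storage-node count (capacity).
dedicated = (
    [Node(f"proxy{i}", {"proxy"}) for i in range(1, 3)]
    + [Node(f"store{i}", {"storage"}) for i in range(1, 10)]
)
```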

ADDITIONAL SERVICES

The SwiftStack Storage System was designed to be very easy to deploy and scale, similar to the way Public Cloud storage is consumed. A key element to that is the controller that SwiftStack hosts as-a-service. This controller gives you automatic visibility and control across all of your clusters no matter where they are. Optionally, these services can also be deployed on your infrastructure:

  • On-Premises controller - If you would like to run the entire storage system behind your firewall, you can run the SwiftStack controller in your environment. Once it is deployed, it functions the same as the standard controller as-a-service.

NODE HARDWARE OPTIONS

Choosing adequate components for each node is important to overall storage system performance; a quick sizing check follows the list below.

  • Chassis - Industry-standard 1-4U chassis that have dense drive capacity are most often used. Since the objects are redundantly stored across at least three nodes, high-availability hardware features (redundant power supplies, hot-serviceable components, etc.) are not required.
  • CPU - Most often used are 64-bit x86 CPU (Intel/AMD), quad-core or greater, running at least 2-2.5 GHz.
  • RAM - At least 1GB of RAM for each TB of disk.
  • OS drives - For optimal performance, each node should contain a pair of SSDs to run and protect the OS and storage system metadata.
  • Storage drives - In most scenarios, high-capacity disk drives can be used to store objects. All disk drive vendors now sell “cloud drives” that are enterprise-grade, but cost dramatically less than traditional drives in this category, adding further cost savings to the overall storage solution.
  • Network ports - At least two 10Gb Ethernet ports for storage traffic and one Gigabit Ethernet port for management traffic.
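
The quick sizing check mentioned above: a minimal Python sketch that tests a candidate node against these rules of thumb. The function and thresholds are illustrative only, not an official sizing tool; real sizing depends on workload and the services the node runs.

```python
def check_node(cpu_cores, cpu_ghz, ram_gb, disk_tb, boot_ssd_pair, ten_gbe_ports):
    """Compare a candidate node against the guideline minimums above."""
    problems = []
    if cpu_cores < 4 or cpu_ghz < 2.0:
        problems.append("CPU: want 64-bit x86, quad-core or better at 2-2.5 GHz")
    if ram_gb < disk_tb:  # rule of thumb: at least 1 GB of RAM per TB of disk
        problems.append(f"RAM: {ram_gb} GB for {disk_tb} TB of disk")
    if not boot_ssd_pair:
        problems.append("OS drives: a pair of SSDs is recommended")
    if ten_gbe_ports < 2:
        problems.append("Network: want at least two 10GbE storage ports")
    return problems

# The dense Cisco example node (60x 8TB, 128 GB RAM) trips the RAM rule;
# dense archive nodes are often provisioned below 1 GB/TB in practice.
print(check_node(12, 2.1, 128, 480, True, 2))
```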

NETWORK TOPOLOGY

Just as important to overall performance is the network that connects the nodes; a rough traffic estimate follows the list below.

  • Storage network - For optimal storage performance and scalability, it’s critical that the cluster has its own dedicated 10Gb Ethernet network. All outward-facing traffic to and from applications, as well as inward-facing cluster traffic, flows over this network. As demands on the storage system increase, outward-facing and cluster-facing traffic can be isolated on their own networks.
  • Management network - To ensure management traffic does not affect storage performance, it is isolated from the storage network. For most environments, it’s common to run this traffic over the Gigabit LAN.
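
The rough traffic estimate mentioned above: under 3x replication, the proxy streams each incoming object to all three storage nodes, so cluster-facing traffic is roughly three times the front-end write rate. A minimal Python sketch with illustrative numbers:

```python
def cluster_write_gbps(client_write_gbps, replicas=3):
    """Back-end traffic generated by client writes: the proxy streams
    each object to every replica (before replication/rebalance traffic)."""
    return client_write_gbps * replicas

# ~3 Gbps of client writes already implies ~9 Gbps on the storage
# network, motivating dedicated 10GbE and, at scale, separate
# outward-facing and cluster-facing networks.
print(cluster_write_gbps(3))  # 9
```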

Horizontally stacked 10Gb top-of-rack switches create a high-performance storage network with the management traffic traveling over the Gigabit LAN.


EXAMPLE #1

TESTING OR SMALL SUB-100TB
ENVIRONMENTS

In this starter example, 3 nodes are each in their own zone, or failure domain, because SwiftStack stores three copies of each object for redundancy. To scale, nodes can be added in groups of 3 to grow the zones evenly. Each node performs all services (proxy, account, container, and object), so as the cluster scales, some services may need to be moved to dedicated nodes.

EXAMPLE #2

SCALE-OUT BACKUP/ARCHIVE
ENVIRONMENT

In this backup and archiving example, three racks of nodes are deployed, with each rack being a unique zone. Proxy services have dedicated nodes to handle the load of many disk drives. To scale the cluster, add three additional racks to keep the zones balanced.

EXAMPLE #3

HIGH-PERFORMANCE
APPLICATIONS

This high-performance example, like the backup example, deploys three racks of nodes, with each rack being a unique zone. The difference is that more proxy nodes are added for increased front-end performance, and account/container services are separated from object nodes for increased back-end performance. To scale the cluster, add three additional racks to keep the zones balanced.


Resources

  • Active Archive with Cisco UCS (Solution Brief)
  • SwiftStack Reference Architectures (Tech Doc)