Hardware Infrastructure Configuration

True Hardware Neutrality

The components of the SwiftStack Storage System have been designed to have minimal hardware dependencies, allowing them to run on virtually any enterprise-grade x86 server and standard drives. This allows you to choose and mix hardware that best fits your application and drastically reduces the overall cost of the solution.

With the design of the cluster, you can manage capacity and performance independently. It’s easy to adjust each resource over time, so what you deploy initially does not impede how you scale in the future. What’s more, it is easy to source hardware from the server vendors or integrators with which you already have procurement relationships.


The hardware information below is meant to give you ideas to get you going. We do not maintain a rigid certification list to choose from, because the SwiftStack services are compatible with a broad range of hardware configurations; often that’s a server family you’re already buying and supporting. The same goes for operating system requirements: SwiftStack supports a wide range of Linux distributions and versions (RHEL, CentOS, Ubuntu), so you can use the one you’re comfortable with and may have already optimized for your environment.

Hardware Architecture Options

  • Cisco
  • Dell
  • HPE
  • Super Micro

Each SwiftStack Storage Node:

  • Cisco UCS S3260
  • Dual M5 Server Node
  • Each Node:
    • 2x 4114 processor, 2.2 GHz, 10-core
    • 128 GB RAM
    • 2x Boot SSD
    • SIOC - Dual Port 40GbE
    • 1TB NVMe
  • 56x 14TB HDDs (28 per node)

Small: 2 x UCS S3260
  Capacity: 784 TB raw, 512 TB usable (3x replica)
  Bandwidth: 160 Gbps total, 80 Gbps write throughput

Medium: 10 x UCS S3260
  Capacity: 5 PB usable (8+4 erasure code)
  Bandwidth: 640 Gbps write throughput

Large: 81 x UCS S3260
  Capacity: 50 PB usable (15+4 erasure code)
  Bandwidth: 5,717 Gbps write throughput
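The usable figures above follow from the data-protection scheme: replication keeps raw capacity divided by the replica count, while erasure coding keeps raw capacity times data/(data+parity). A rough sketch in Python, ignoring filesystem and Swift overhead (so published figures come out somewhat lower):

```python
def usable_tb(raw_tb, replicas=None, data=None, parity=None):
    """Rough usable capacity, ignoring filesystem and Swift overhead."""
    if replicas is not None:
        return raw_tb / replicas              # 3x replica keeps 1/3 of raw
    return raw_tb * data / (data + parity)    # EC keeps data/(data+parity)

# Two S3260 chassis with 56 x 14 TB drives each -> 1,568 TB raw
raw = 2 * 56 * 14
print(round(usable_tb(raw, replicas=3)))        # ~523 TB with 3x replica
print(round(usable_tb(raw, data=8, parity=4)))  # ~1045 TB with 8+4 EC
```

Wider schemes such as 15+4 keep a larger share of the raw capacity (15/19 versus 8/12), at the cost of spreading each object across more nodes.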

Each SwiftStack Storage Node:

  • Dell DSS7000
  • Dual Server Node
  • Each Node:
    • 2x Intel E5-2620 CPU
    • 128GB RAM
    • 2x boot SSD
    • 400GB PCIe NVMe SSD
    • Dual 10/25GbE Intel NIC
  • 88x 14TB HDDs (44 per node)

Small: 2 x DSS7000
  Capacity: 1,232 TB raw, 822 TB usable (3x replica)
  Bandwidth: 50 Gbps total, 25 Gbps write throughput

Medium: 6 x DSS7000
  Capacity: 5 PB usable (8+4 erasure code)
  Bandwidth: 120 Gbps write throughput

Large: 52 x DSS7000
  Capacity: 50 PB usable (15+4 erasure code)
  Bandwidth: 1,147 Gbps write throughput

Each SwiftStack Storage Node:

  • HPE Apollo 4510 Gen10
  • Single Server Node
    • 2x 4114 processor, 2.2 GHz, 10-core
    • 256 GB RAM
    • 2x Boot SSD
    • Dual Port 10/25GbE
    • 60x 14TB HDDs
    • 400GB NVMe M.2

Small: 4 x Apollo 4510 Gen10
  Capacity: 840 TB raw, 1.1 PB usable (3x replica)
  Bandwidth: 50 Gbps total, 50 Gbps write throughput

Medium: 9 x Apollo 4510 Gen10
  Capacity: 5 PB usable (8+4 erasure code)
  Bandwidth: 180 Gbps write throughput

Large: 76 x Apollo 4510 Gen10
  Capacity: 50 PB usable (15+4 erasure code)
  Bandwidth: 1,676 Gbps write throughput

Each SwiftStack Storage Node:

  • SMC 6049P-E1CR60L
  • Single Server Node
    • 2x 4114 processor, 2.2 GHz, 10-core
    • 384 GB RAM
    • 2x Boot SSD
    • Dual Port 10/40GbE
    • 60x 14TB HDDs
    • 1TB NVMe M.2

Small: 4 x SMC 6049P-E1CR60L
  Capacity: 840 TB raw, 1.1 PB usable (3x replica)
  Bandwidth: 80 Gbps total, 80 Gbps write throughput

Medium: 9 x SMC 6049P-E1CR60L
  Capacity: 5 PB usable (8+4 erasure code)
  Bandwidth: 288 Gbps write throughput

Large: 76 x SMC 6049P-E1CR60L
  Capacity: 50 PB usable (15+4 erasure code)
  Bandwidth: 2,682 Gbps write throughput


NODE CONFIGURATION OPTIONS

The storage cluster is made up of nodes; to scale the cluster, simply add more nodes. Nodes are typically configured to run services in one of three ways:

  • Proxy - Applications interact with the proxy service to read and write objects. The proxy layer can be run on dedicated nodes to manage and scale performance independent of capacity.
  • Storage - The storage service stores and protects all objects. This layer can be scaled independently for capacity and redundancy.
  • Combined - This mode runs both proxy and storage services on the node. This is ideal for testing, small environments, or archive solutions.
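The three modes can be sketched as sets of services. This is an illustration only: the service names follow OpenStack Swift (proxy, account, container, object), and the actual deployment layout is driven through the SwiftStack controller.

```python
# Illustrative mapping of node roles to the services they run.
ROLES = {
    "proxy":    {"proxy"},                                    # front-end only
    "storage":  {"account", "container", "object"},           # capacity layer
    "combined": {"proxy", "account", "container", "object"},  # everything
}

def cluster_services(node_roles):
    """Union of services running across all nodes of a cluster."""
    return set().union(*(ROLES[r] for r in node_roles))

# Three combined nodes and a dedicated-role layout both cover every service:
print(sorted(cluster_services(["combined"] * 3)))
print(sorted(cluster_services(["proxy", "storage", "storage", "storage"])))
```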

ADDITIONAL SERVICES

The SwiftStack Storage System was designed to be very easy to deploy and scale, similar to the way Public Cloud storage is consumed. A key element to that is the controller that SwiftStack hosts as-a-service. This controller gives you automatic visibility and control across all of your clusters no matter where they are. Optionally, these services can also be deployed on your infrastructure:

  • On-Premises controller - If you would like to run the entire storage system behind your firewall, you can run the SwiftStack controller in your environment. Once it is deployed, it functions the same as the standard controller as-a-service.

NODE HARDWARE OPTIONS

Choosing adequate components for each node is important to overall storage system performance.

  • Chassis - Industry-standard 1-4U chassis that have dense drive capacity are most often used. Since the objects are redundantly stored across at least three nodes, high-availability hardware features (redundant power supplies, hot-serviceable components, etc.) are not required.
  • CPU - Most often used are 64-bit x86 CPU (Intel/AMD), quad-core or greater, running at least 2-2.5 GHz.
  • RAM - At least 1GB of RAM for each TB of disk.
  • OS drives - For optimal performance, each node should contain a pair of SSDs to run and protect the OS and storage system metadata.
  • Storage drives - In most scenarios, high-capacity disk drives can be used to store objects. All disk drive vendors now sell “cloud drives” that are enterprise-grade, but cost dramatically less than traditional drives in this category, adding further cost savings to the overall storage solution.
  • Network ports - At least two 10Gb Ethernet ports for storage traffic and one Gigabit Ethernet port for management traffic.
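A minimal sanity check of a candidate node against the guidelines above might look like the following. The helper function and its thresholds are our illustration, not a SwiftStack tool:

```python
# Hypothetical node-sizing check applying the guidelines above.
def check_node(hdd_count, hdd_tb, ram_gb, cores, ghz):
    disk_tb = hdd_count * hdd_tb
    problems = []
    if ram_gb < disk_tb:
        problems.append(f"want >= {disk_tb} GB RAM (1 GB per TB of disk)")
    if cores < 4:
        problems.append("want a quad-core or greater CPU")
    if ghz < 2.0:
        problems.append("want a CPU running at least 2-2.5 GHz")
    return problems

# A modest node: 24 x 4 TB HDDs (96 TB), 128 GB RAM, 8 cores at 2.4 GHz
print(check_node(24, 4, ram_gb=128, cores=8, ghz=2.4))  # [] -> passes
```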

NETWORK TOPOLOGY

A well-designed network topology is equally important to overall storage system performance.

  • Storage network - For optimal storage performance and scalability, it’s critical that the cluster has its own dedicated 10Gb Ethernet network. All outward-facing traffic to and from applications, as well as inward-facing cluster traffic, will flow over this network. As demands on the storage system increase, outward-facing and cluster-facing traffic can be isolated on their own networks.
  • Management network - To ensure management traffic does not affect storage performance, it is isolated from the storage network. For most environments, it’s common to run this traffic over the Gigabit LAN.


Horizontally stacked 10Gb top-of-rack switches create a high-performance storage network with the management traffic traveling over the Gigabit LAN.

 
 

EXAMPLE #1

TESTING OR SMALL SUB-100TB
ENVIRONMENTS

In this starter example, 3 nodes are each in their own zone, or failure domain. This is because SwiftStack stores three copies of each object for redundancy. To scale, nodes can be added in groups of 3 to evenly grow the zones. Each node performs all services (proxy, account, container, and object), so as the cluster scales, some services may need to be dedicated to their own node.
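The zone layout above can be shown with a toy placement function. This is only the placement goal; Swift’s real ring uses consistent hashing over weighted partitions:

```python
# Toy illustration of replica placement in the 3-node starter cluster:
# each of the three object replicas lands in a distinct zone.
nodes = [("node1", 1), ("node2", 2), ("node3", 3)]  # (name, zone)

def place_replicas(nodes, replicas=3):
    placement, zones_used = [], set()
    for name, zone in nodes:
        if zone not in zones_used and len(placement) < replicas:
            placement.append(name)
            zones_used.add(zone)
    return placement

print(place_replicas(nodes))  # ['node1', 'node2', 'node3']
```

Because each node is its own zone, losing any one node leaves two intact copies of every object.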

EXAMPLE #2

SCALE-OUT BACKUP/ARCHIVE
ENVIRONMENT

In this backup and archiving example, three racks of nodes are deployed, with each rack being a unique zone. Proxy services have dedicated nodes to be able to handle the load of many disk drives. To scale the cluster, add three additional racks to keep the zones balanced.

EXAMPLE #3

HIGH-PERFORMANCE
APPLICATIONS

In this high-performance example, like the backup example, three racks of nodes are deployed with each rack being a unique zone. The difference here is that more proxy nodes are added for increased front-end performance and account/container nodes are separated from object nodes for increased back-end performance. To scale the cluster, add three additional racks to keep the zones balanced.

 

Resources

Active Archive with Cisco UCS

Solution Brief

Download