Filing cabinets filled with custom laser-cut foam and ~600 loose hard drives, DiskCatalogMaker, and BlacX docks — this ‘shared storage’ and archive strategy was my introduction to petabyte-scale problems in the media industry. But let me back up.
In late 2012, I joined an early-stage but now prominent post-production and distribution facility near Hollywood. I was surprised by what I found, though I now know it’s relatively normal for young media companies.
There was no internal IT team, so technology decisions often neglected reliability and sustainability. Most technology was handled by contractors who weren’t necessarily operating with the company’s long-term interests in mind.
As luck would have it, the company’s founder had an open-door policy, and—after I expressed these concerns—he took a leap of faith and offered to let me start the internal IT team. In hindsight, this probably had more to do with cost than my abilities (or anything else), but I’m still extremely thankful for that opportunity and the road down which it sent me.
The following six years saw that team grow to eight people on two continents and saw our storage infrastructure transform from that ~600-disk sneakernet to 25PB of some of the most advanced tech in the media industry—with SwiftStack at the core.
To understand why I joined SwiftStack, it helps to understand why we selected SwiftStack for our storage transformation project rather than another vendor.
In January of 2015 I led a project to evaluate and select a next-generation storage platform that would serve as the central storage (sometimes referred to as an active archive or tier 2) for all workflows. We identified the following features as being key to the success of the platform:
- Utilization of erasure coding for data/failure protection (no RAID!)
- Open hardware and the ability to mix and match hardware (a.k.a. support heterogeneous environments)
- Open source core (preferred, but not required)
- Self-healing in response to failures (no manual processes required, like replacing a drive)
- Expandable online to exabyte-scale (no downtime for expansions or upgrades)
- High availability / fault tolerance (no single point of failure)
- Enterprise-grade support (24/7/365)
- Visibility (dashboards to depict load, errors, etc.)
- RESTful API access (S3/Swift)
- SMB/NFS access to the same data (preferred, but not required)
In hindsight, I wish we had included two additional requirements:
- Transparently tier and migrate data to and from public cloud storage
- Span multiple geographic regions while maintaining a single global namespace
We spent the next ~1.5 years evaluating the following systems:
- Ceph (Inktank/Red Hat/Canonical)
- Dell/EMC ECS
- Cleversafe / IBM COS
- HGST/WD ActiveScale
- NetApp StorageGRID
- Quantum Lattus
- QFS (Quantcast File System)
- AWS S3
- Sohonet FileStore
SwiftStack was the only solution that literally checked every box on our list of desired features, but that’s not the only reason we selected it over the competition.
The top three reasons behind our selection of SwiftStack were as follows:
- Speed – SwiftStack was—by far—the highest-performing object storage platform, capable of line speed and 2–4x faster than competitors. The ability to move assets between our “tier 1 NAS” and “tier 2 object” with extremely high throughput was paramount to the success of the architecture.
- Flexibility – A lot of this ties back to the features we identified as being key to success—especially the ability to support mixed hardware vendors and vintages, which allowed us to reuse ~14PB of raw disk. This ability comes from SwiftStack’s software architecture, which allows all cluster services to be co-located on the same hardware OR broken into separate tiers depending on hardware availability and design goals.
- Support – From pre-sales to professional services to the official support team, we were constantly impressed by SwiftStack’s knowledge, professionalism, and response times. We often described dealing with SwiftStack’s team as “the opposite of the often-frustrating traditional ‘big vendor’ experience.”
Note: While SwiftStack 1space was not a part of the SwiftStack platform at the time of our evaluation and purchase, it would have been an additional deciding factor in favor of SwiftStack if it had been. The ability to transparently interact with multiple cloud providers and move data between them (without your apps knowing) is game-changing for M&E and many other data-rich industries.
Integrating SwiftStack allowed us to achieve the following significant business efficiency gains:
- Removed dependency on the ability of third-party facilities to ingest in a timely manner
- Insulated tier 1 (NAS) from unexpected client load or internal processing load
- Allowed tier 1 to be right-sized for daily operations rather than capacity
- Reduced need for costly tier 1 investments
- Improved performance of existing tier 1
- Allowed ops to continuously function at their own pace instead of waiting on technology
- Enabled accepting PB-scale projects with no changes
- Significantly reduced cost per usable TB
- Reduced admin requirements from 2-3 FTEs dedicated to storage to 1/5 FTE (8 hours per week or less)
- Removed use-case-specific storage silos for things like backup and SIEM
- Repurposed ~14PB of raw disk
- Enabled utilization of cloud compute services (“cloud-bursting”) without copying data (Hybrik)
So why did I join SwiftStack?
To put it simply, I loved the platform as a customer and believe that it is the best petabyte-scale storage product available. I enjoyed working with the team and felt that I could add value to their media efforts while learning a lot about the rest of the business. I wanted to help other media shops repeat the success that we achieved thanks to SwiftStack and complementary technologies while also avoiding the headaches and late nights that seem to plague the industry.
The last factor was that the SwiftStack opportunity presented itself at a time when I was ready to make a change. Having been at the previous company for six years—through two acquisitions and at least three distinct technology stacks—it felt like the time was right. There were fewer and fewer metaphorical fires to fight as we progressed (despite a very real fire along the way!), and we had transitioned into a period of “maintenance mode” as the new storage and IT architecture successfully met the needs of the business.
The fact that we succeeded in getting to that relatively boring “maintenance mode” state fills me with pride, and I am eternally thankful to everyone who helped along the way.
Thanks for reading. If you’re facing similar challenges or are interested in whether SwiftStack might be helpful in your modern workflows, feel free to reach out. I’m here to share what I’ve learned!