Monetize your data at petabyte scale with in-place Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) enabled by SwiftStack
AI/ML/DL storage workflow phases and their I/O challenges

Why SwiftStack for AI/ML/DL data pipelines?
Cloud-Native Architecture
SwiftStack’s cloud-native architecture is a transformative storage platform for AI, ML, and DL workflows. Traditional storage architectures were not designed for these distributed workloads and fall short on performance, scale, and value.
Ingest
- Enables ingest at hundreds of GB/s, with massive concurrency and throughput to match the parallelism of the GPU compute layer
- Provides broad application and protocol support: enterprise applications can ingest through a POSIX-compliant file system (over NFS or SMB), and cloud-native applications through the AWS S3 or OpenStack Swift APIs
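The concurrency point above can be sketched in code. The snippet below is a minimal illustration of fan-out ingest against an S3-style object API; `upload_object` is a local stand-in for an HTTP PUT (e.g. boto3's `put_object`), and all names are illustrative rather than SwiftStack APIs.

```python
# Sketch: parallel object ingest against an S3-compatible endpoint.
# Object storage scales by fanning out many independent PUTs rather
# than serializing writes through a single stream.
from concurrent.futures import ThreadPoolExecutor
import threading

store = {}                      # stands in for the object store
lock = threading.Lock()

def upload_object(key: str, data: bytes) -> str:
    # Placeholder for a real PUT to /<bucket>/<key>
    with lock:
        store[key] = data
    return key

def ingest(batch, workers=8):
    # Issue uploads concurrently; each object is an independent request.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda kv: upload_object(*kv), batch))

batch = [(f"frames/img-{i:05d}.jpg", b"\x00" * 16) for i in range(100)]
keys = ingest(batch)
```

In a real pipeline the worker count would be tuned to the endpoint's concurrency limits and the objects streamed from disk rather than held in memory.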
Enrich
- Supports rich metadata tagging to add context for supervised learning and other workflows
- Integrates with metadata search to isolate data sets
- Leverages SwiftStack 1space for metadata-driven lifecycle management
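Isolating a data set by metadata tags works roughly as follows. S3-style user metadata is a flat string map attached to each object; the catalog, tag names, and values below are illustrative, not a SwiftStack schema.

```python
# Sketch: selecting a training set by metadata tags.
catalog = {
    "scans/a1.dcm": {"modality": "ct", "labeled": "true"},
    "scans/a2.dcm": {"modality": "mri", "labeled": "false"},
    "scans/a3.dcm": {"modality": "ct", "labeled": "true"},
}

def search(catalog, **tags):
    """Return object keys whose metadata matches all given tag values."""
    return sorted(
        key for key, meta in catalog.items()
        if all(meta.get(t) == v for t, v in tags.items())
    )

train_set = search(catalog, modality="ct", labeled="true")
# train_set -> ["scans/a1.dcm", "scans/a3.dcm"]
```

A metadata search service applies the same filter server-side, so the training job receives only the matching object list rather than scanning the whole namespace.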
Train
- In-place neural network training via TensorFlow's native S3 support
- Massive read bandwidth
- Architectural separation of compute and storage enables massive scalability of training datasets
- SwiftStack 1space enables cloud bursting, so training can run cost-efficiently on-premises or across cloud GPU compute farms
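The in-place training pattern above means the framework reads directly from object storage: TensorFlow can consume `s3://` URIs (e.g. via `tf.data.TFRecordDataset`) without staging data to local disk. The plain-Python sketch below stands in for that pattern, interleaving reads across shards so many readers pull from storage at once; shard names and counts are illustrative.

```python
# Sketch: interleaved reads across training shards stored as objects.
# Interleaving lets many concurrent readers maximize read bandwidth
# instead of draining one shard at a time.
shards = [f"s3://training/shard-{i:03d}.tfrecord" for i in range(4)]

def interleave(shards, records_per_shard=2):
    # Yield (shard, record-index) pairs in round-robin order.
    for r in range(records_per_shard):
        for s in shards:
            yield (s, r)

batch = list(interleave(shards))
# First reads touch every shard once before any shard is read twice.
```

With separated compute and storage, adding GPU nodes just adds more concurrent readers against the same data set, which is the scalability claim above.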
Infer
- Enables inferencing at the edge or in the core data center
Retain
- Policy-based lifecycle management with metadata tagging, cloud tiering, and governance at massive scale with strong economics
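Policy-based lifecycle management amounts to evaluating placement rules per object. The sketch below shows the idea with an ordered rule list; the tier names, age thresholds, and `hold` tag are assumptions for illustration, not a SwiftStack policy format.

```python
# Sketch: first-match-wins lifecycle rules driving tier placement.
POLICIES = [
    # (predicate, target tier), evaluated in order.
    (lambda o: o["meta"].get("hold") == "legal", "retain-local"),
    (lambda o: o["age_days"] > 365, "cloud-archive"),
    (lambda o: o["age_days"] > 30, "cloud-standard"),
]

def place(obj):
    """Return the storage tier for an object under the rules above."""
    for predicate, tier in POLICIES:
        if predicate(obj):
            return tier
    return "hot"    # default tier for fresh, untagged data

tier = place({"age_days": 400, "meta": {}})
# tier -> "cloud-archive"
```

Metadata tags take precedence over age here, which is how a governance hold can pin data locally regardless of its lifecycle stage.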
Use Cases
AI/ML/DL solutions are currently used across several vertical use cases: