Right now, at NAB 2017, we’re demonstrating an example media workflow in the Cisco booth using SwiftStack software and Cisco UCS servers. For those who are unable to make it to the show and are interested in seeing the demo, check out this video we produced that highlights some of the ways SwiftStack private and hybrid cloud storage has benefited a number of our clients in the media and entertainment industries:
The video is brief and doesn’t show a lot of details for the sake of time, so this post provides more. Before we get to the demo workflow, if you’re new to SwiftStack or to SwiftStack in the context of media workflows, you may want to first visit swiftstack.com and swiftstack.com/media to introduce yourself to some of the features mentioned here.
There are lots of logos in the picture above, but this is only a sampling of the many hardware and software vendors with products in this industry. At the center of the picture is SwiftStack private cloud storage, which can be a target for a modern Media Asset Manager (MAM) or can be used independent of a MAM. Working clockwise from the left, we see a five-step workflow that is particularly common in television and film production scenarios:
- Ingest: New media files are either created (e.g., by recording on a camera) or received (e.g., by high-speed file transfer tools) and then stored in the appropriate storage location for use in subsequent workflow steps.
- Transcode to Proxy: Many editing applications perform better with lower-than-original resolution; at a later step, the edits are applied to the high-resolution media. A “proxy” is a lower-resolution file (or perhaps one of several) used for the interim workflow steps.
- Editing: This step can include many things, but the essence is that the source media is modified as desired.
- Transcode/Render to Final Format(s): When editing is complete, the final media is often produced in multiple formats (e.g., for small-screen mobile devices, for online publishing, for OTT broadcast, etc.).
- Playout, Publish, Transfer, Archive: At this point, the final media is used as intended and/or sent to someone else who uses it in another workflow.
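For readers who think in code, the five stages above can be sketched as a simple ordered pipeline. This is purely illustrative; none of these names come from SwiftStack or any MAM API:

```python
# Illustrative only: a minimal model of the five-step media workflow.
from typing import Optional

WORKFLOW = [
    "ingest",               # store original media in cloud storage
    "transcode_to_proxy",   # make a low-resolution working copy
    "edit",                 # modify the media as desired
    "transcode_to_final",   # render the finished formats
    "distribute",           # playout, publish, transfer, archive
]

def next_stage(current: str) -> Optional[str]:
    """Return the stage that follows `current`, or None at the end."""
    i = WORKFLOW.index(current)
    return WORKFLOW[i + 1] if i + 1 < len(WORKFLOW) else None
```

The point is simply that the same storage sits under every stage, so media doesn't have to hop between silos as it moves down the list.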
Our demo video shows a few key pieces of each of these five steps, so we’ll look at them in more detail below. Our example project is to add some SwiftStack references to a Cisco UCS S3260 video (the S3260 is one of the common hardware platforms on which our clients deploy SwiftStack software today).
Step 1: Ingest
To get started, we need to move an original copy of our source media into our SwiftStack media storage. SwiftStack provides a number of tools for ingesting media—including programmable object APIs (i.e., S3 and Swift), traditional file protocol support (i.e., NFS and SMB), watch folders, and a handy drag-and-drop GUI client. In our video, we used the SwiftStack Client to simply drag the source MP4 file into a SwiftStack container.
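Ingest doesn’t require the GUI client; any S3 or Swift API call works. As a hedged sketch (the account, container, and object names below match the demo, but the helper function is our own illustration, not a SwiftStack API):

```python
# Sketch of a Swift-API ingest. The helper only composes the object URL;
# the actual PUT (commented out) would need valid credentials and network access.

def swift_object_url(storage_url: str, container: str, obj: str) -> str:
    """Compose the object URL for a Swift PUT/GET."""
    return f"{storage_url.rstrip('/')}/{container}/{obj}"

url = swift_object_url(
    "https://cloud.swiftstack.com/v1/AUTH_video-demo", "nab",
    "Cisco_UCS_S3260_video.mp4")

# With python-swiftclient and real credentials, the upload itself is one call:
# import swiftclient
# conn = swiftclient.client.Connection(
#     authurl="https://cloud.swiftstack.com/auth/v1.0",
#     user="video-demo", key="<password>")
# with open("Cisco_UCS_S3260_video.mp4", "rb") as f:
#     conn.put_object("nab", "Cisco_UCS_S3260_video.mp4", contents=f)
```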
Step 2: Transcode to Proxy
Because our example source file is relatively small and only 720p resolution, we’ll actually do our editing on the original resolution later in the workflow. Still, to demonstrate how someone might produce a lower-resolution version of a source file at this point, we used the popular open-source ffmpeg tool.
We kept things simple and just scaled the original video to 25% of its pixel count by cutting the input height and width in half (“-vf scale=iw*.5:ih*.5”). Because ffmpeg can access source files over HTTP (“-i https://cloud.swiftstack.com/v1/AUTH_video-demo/nab/Cisco_UCS_S3260_video.mp4”), we let it pull the original directly from SwiftStack storage (after setting the read permissions on the SwiftStack container to permit this access). After transcoding, ffmpeg stored our lower-resolution media file locally to use in subsequent steps if desired.
./ffmpeg -i https://cloud.swiftstack.com/v1/AUTH_video-demo/nab/Cisco_UCS_S3260_video.mp4 -vf scale=iw*.5:ih*.5 Cisco_UCS_S3260-half-size.mp4
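The same invocation is easy to script. This small helper (our own illustration, not part of ffmpeg) just assembles the argument list used above, which keeps the scale math in one place if you want proxies at other sizes:

```python
# Build the ffmpeg argument list for the proxy transcode shown above.
# ffmpeg's "-vf scale=iw*.5:ih*.5" halves both input width (iw) and height (ih),
# so the proxy has 25% of the original pixel count.

def proxy_cmd(source_url: str, out_file: str, factor: float = 0.5) -> list:
    """Return an ffmpeg command that scales width and height by `factor`."""
    return [
        "ffmpeg",
        "-i", source_url,                         # read directly over HTTP
        "-vf", f"scale=iw*{factor}:ih*{factor}",  # scale filter
        out_file,
    ]

cmd = proxy_cmd(
    "https://cloud.swiftstack.com/v1/AUTH_video-demo/nab/Cisco_UCS_S3260_video.mp4",
    "Cisco_UCS_S3260-half-size.mp4")
# To run it for real: subprocess.run(cmd, check=True)
```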
Revisiting Steps 1 & 2 with the Viz One MAM
We mentioned earlier that SwiftStack can serve as target storage for modern media asset managers; for many of our clients, that was the easiest way to begin leveraging SwiftStack with minimal modification to their existing workflows.
One of the first MAMs to integrate tightly with SwiftStack was Vizrt’s Viz One. When Viz One’s watch folder detects new media, it automatically ingests that media into local storage, indexes it in the MAM’s databases, and transcodes lower-resolution proxy files, which are stored directly on SwiftStack. Viz One’s basic Studio editing environment also allows for playback, scrubbing, and straight-cut edits using the proxy files while still resident in SwiftStack (so you don’t have to move them back and forth between local storage and SwiftStack). For the latter steps in the workflow, Viz One provides integration with popular non-linear editors and transcoders, and when you are finished, there is a one-click option to archive a high-resolution version of your media to SwiftStack for long-term storage.
Step 3: Edit
At this point, we’re ready to modify the source media. We used Apple’s Final Cut Pro X editing software, but we first needed to check out a copy of our source media to our local workstation for editing. As mentioned before, SwiftStack provides a number of tools to make storing and retrieving data simple, and we used the SwiftStack client again to navigate to the container with our original media and download a copy locally.
Our editing job was pretty simple: We added a “Feature” title to build in and out a couple of key phrases (“Powered by SwiftStack” and “Private and Hybrid Cloud Storage”) and then a SwiftStack logo at the end of the short video, then we had Final Cut Pro export a master file locally on our workstation in MP4 format. Finally, we used the SwiftStack Client to upload our new media file to SwiftStack storage.
Step 4: Final Transcoding
Now that we had our final media asset in high-resolution format, we needed to produce the various formats which would be used for publishing and other subsequent workflow steps. (Note: For those in the animation industry, this step is similar—in concept—to the final rendering and/or export steps in your workflows.) There are many transcoding tools available (e.g., Colorfront’s Transkoder works nicely with SwiftStack storage), and preferences vary, but we used ffmpeg again at this step for simplicity.
However, rather than transcoding in one linear step, we leveraged SwiftStack’s Cloud Sync feature to copy our final media file to Google’s public cloud storage, then we used temporary Google compute resources to transcode small segments of the video in parallel before storing the final formats back in SwiftStack. This part required a few configuration steps—described here:
- First, a Cloud Sync policy mapping was configured in SwiftStack to ensure that any data written into the container named “nab-gold-copy” in account “AUTH_nab” would be automatically synchronized to a Google bucket named “nab-demo-bucket.”
- Second, the “golden copy” of the final media was stored in SwiftStack, and Cloud Sync synchronized it to Google as expected.
- With the asset now in Google’s public cloud storage, it can be easily used with Google’s container engine to run parallel transcoding jobs in Kubernetes clusters. SwiftStack has developed an example blueprint for this type of transcoding job that can be shared upon request. One piece of the blueprint is a simple script that can be used to generate a yaml configuration file to control the Kubernetes clusters. As an example, one job in the yaml file transcodes the first four seconds of the video to 1/16th of the original size:
apiVersion: batch/v1
kind: Job
metadata:
  name: demo0
spec:
  template:
    metadata:
      name: demo0
    spec:
      volumes:
        - name: data
          emptyDir:
            medium: Memory
      containers:
        - name: demo0-0
          image: registry.swiftstack.com/alpine_bash_ffmpeg_swift
          imagePullPolicy: IfNotPresent
          command:
            - "sh"
            - "-c"
            - "echo '===ffmpeg read from Google bucket and convert no:1 4 Seconds mp4===' && time ffmpeg -ss 00:00:00 -i https://storage.googleapis.com/nab-demo-bucket/fae4/AUTH_nab/nab-gold-copy/Cisco_UCS_S3260_video_final.mp4 -t 4 -vcodec libx264 -vf scale=iw*.25:ih*.25 -b 1750k -acodec libmp3lame -ac 2 -ab 160k out-1.mp4 && echo '===upload out-1.mp4 back to temp_mp4===' && swift -A https://cloud.swiftstack.com/auth/v1.0 -U video-demo -K <password> upload temp_mp4 out-1.mp4 && echo '===upload to shared cache===' && mv out-1.mp4 /data/ && echo '===check cache===' && ls /data"
          volumeMounts:
            - mountPath: /data
              name: data
      nodeSelector:
        nodenum: node1
      restartPolicy: Never
Other jobs in the yaml file transcode the other parts of the file to this same size or different sizes, and other jobs concatenate the parts and store the final complete versions in SwiftStack.
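SwiftStack’s actual blueprint script is available on request; as a rough stand-in, a generator for jobs like the one above might slice the timeline into fixed-length segments like this (the segment length, job names, and template fields below are our assumptions, not the blueprint’s):

```python
# Hypothetical sketch of a generator for parallel transcode Jobs like the one
# above: one batch/v1 Job per fixed-length slice of the source video. The real
# SwiftStack blueprint script may differ; names and fields here are illustrative.

JOB_TEMPLATE = """\
apiVersion: batch/v1
kind: Job
metadata:
  name: demo{n}
spec:
  template:
    spec:
      containers:
        - name: demo{n}-0
          image: registry.swiftstack.com/alpine_bash_ffmpeg_swift
          command: ["sh", "-c", "ffmpeg -ss {start} -i {source} -t {length} -vf scale=iw*.25:ih*.25 out-{n}.mp4"]
      restartPolicy: Never
"""

def make_jobs(source: str, duration: int, seg_len: int = 4) -> str:
    """Emit one Job per seg_len-second slice of a duration-second video."""
    docs = []
    for n, start in enumerate(range(0, duration, seg_len)):
        hms = "00:%02d:%02d" % divmod(start, 60)  # ffmpeg -ss timestamp
        docs.append(JOB_TEMPLATE.format(
            n=n, start=hms, source=source, length=seg_len))
    return "---\n".join(docs)  # multi-document YAML for kubectl apply -f
```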
In our demo, we configured the Kubernetes jobs to generate a small, medium, and large size of the final video. When all jobs are complete, we can access the final videos in SwiftStack using the SwiftStack Client or—if enabled—SwiftStack’s file access protocols (a beta feature).
Step 5: Publish, Playout, Transfer, and/or Archive
We now have our final media in its various formats ready for use. For many of our clients, the workflows may diverge a bit at this point: A “post house” may send the finished product back to a customer via a physical shipment, a high-speed file transfer tool, or a simple download link from their SwiftStack storage; an episodic film production team may publish the new episode online or transfer it to prepare for playout; many people will archive multiple copies of the assets for safe-keeping (some of those archive copies can live in SwiftStack or leverage Cloud Sync to live in public cloud archive services!).
In our demo, we configured Amazon’s CloudFront CDN service to use SwiftStack as the origin for content on a simple web page. (You can read how to do this in a separate post.) The HTML is pretty simple:
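The original post showed the page source as a screenshot. It isn’t reproduced here, but a page of that sort needs little more than a video element whose source URL points at the CloudFront distribution; in this sketch, the hostname is a placeholder, not the demo’s real distribution:

```html
<!-- Minimal sketch; d1234example.cloudfront.net is a placeholder hostname. -->
<!DOCTYPE html>
<html>
  <body>
    <h1>SwiftStack at NAB 2017</h1>
    <video controls width="640"
           src="https://d1234example.cloudfront.net/nab/Cisco_UCS_S3260_video_final.mp4">
    </video>
  </body>
</html>
```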
And the page is as expected:
In this demo, we showed how SwiftStack can be a central media storage archive that enables many steps in a common create→edit→distribute workflow. There are many more features of SwiftStack not included here: For example, SwiftStack can store relevant metadata about each piece of media in a search tool like Elasticsearch to make it easy to find just the right asset in a massive archive, and SwiftStack’s optional multi-region deployment architecture means remote production teams can collaborate without the need for expensive one-way file-transfer tools. If you’re curious about any of the steps in the demo workflow above or anything else about SwiftStack, please contact us.