[Photo: Swift developers at the Alamo]

We’ve come home from another great Swift midcycle hackathon. We’ve released 2.10.0. We’re getting ready for the OpenStack Summit in Barcelona. It’s easy to get caught up in the day-to-day of the review queue and what our respective employers are building on, around, and with Swift. What’s the bigger picture, though? What can we expect over the next six months and into 2017?

As always, there’s a lot going on.

[Photo: dotmocracy voting at the Austin hackathon]

There are two things that are top priority in my mind right now:

  • container sharding
  • object server and replication performance improvements

Not only does implementing these features solve problems today for existing users, it also helps us avoid problems later with other work that's in progress.

Improving container performance

For container sharding, the big problem is that we promise users that with API storage you don’t really have to think about the hard problems of storage: you put your data in the system and you can read it whenever you want. However, our current implementation has some edge cases where the user doesn’t have a very good experience. A user makes a container and starts to put stuff in it. And keeps putting stuff in it. And then puts some more stuff in it. When you’ve got a few hundred million things to store, you end up with a very large container. (And for some users, “a few hundred million” is actually not a very big number at all.) Large containers can really slow down some of the internal processes in the cluster, and that can have cascading effects across the system. End-user performance suffers, and operators have to struggle with container replication and full container drives.

So the idea behind container sharding is pretty simple: instead of storing a reference to every object in one place, split the container up and distribute it around the cluster. This spreads the load across a lot of hardware and allows containers to grow much, much larger without running into any of the existing issues. Container sharding also really helps when a user is moving to Swift from Amazon S3 with the expectation (or an existing data layout) of millions or billions of objects (keys) in one container (bucket).
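To make that concrete, here’s a minimal sketch of the idea. This is not Swift’s actual sharding implementation; the names, shard boundaries, and helper functions are purely illustrative. The container’s sorted namespace is cut into ranges, each range is backed by its own small database elsewhere in the cluster, and a listing is just a walk of the shards in order:

```python
# Illustrative sketch of container sharding -- not Swift's real implementation.
# The single sorted object listing is split at chosen upper-bound names, and
# each resulting range ("shard") lives in its own, much smaller database.

SHARD_UPPER_BOUNDS = ["cat", "giraffe", "pelican", ""]  # "" means "no upper bound"

def shard_for(object_name):
    """Return the index of the shard range responsible for object_name."""
    for index, upper in enumerate(SHARD_UPPER_BOUNDS):
        if upper == "" or object_name <= upper:
            return index
    return len(SHARD_UPPER_BOUNDS) - 1

def merged_listing(shards):
    """Answer a listing request by walking the shards in namespace order."""
    for shard in shards:
        for name in shard:      # each shard yields its names already sorted
            yield name

if __name__ == "__main__":
    print(shard_for("aardvark"))                  # -> 0
    print(shard_for("zebra"))                     # -> 3 (unbounded last shard)
    shards = [["apple", "bat"], ["dog"], ["koala"], ["walrus", "zebra"]]
    print(list(merged_listing(shards)))           # still looks like one container
```

Each shard stays small enough for SQLite and replication to handle comfortably, while the user still sees a single container.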

Improving storage and consistency performance

Improving Swift’s object server and replication performance is a particularly challenging issue. The biggest problem is that when hundreds of concurrent requests come into a storage server with many cores reading and writing data to dozens of hard drives, Swift’s current object server doesn’t do a great job of fully utilizing the available hardware and efficiently getting data from the NIC to the drive and back again. Unfortunately, this isn’t something that can be solved by optimizing a loop or rewriting a hot path as an external C library. The problems come from the state of disk vs. network IO in the Linux kernel and from the CPython interpreter implementation.
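As a toy illustration of why this is hard to fix in Python alone (this is not Swift code; the file and pool sizes are made up), consider a worker built on cooperative green threads. The event hub can multiplex non-blocking network sockets just fine, but a read() on a regular file blocks the whole process, so “concurrent” requests end up serialized behind the disk:

```python
import tempfile
import eventlet

# Toy illustration only -- not Swift code. Sockets can be made non-blocking
# and multiplexed by the eventlet hub, but regular-file reads cannot: each
# read() below blocks the entire worker process, stalling every other green
# thread until the disk responds.

# Stand-in for an object file on disk.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"x" * 65536)
tmp.close()

def handle_request(path):
    with open(path, "rb") as f:
        return f.read(65536)    # blocks the whole process on a slow disk

pool = eventlet.GreenPool(100)
for _ in range(100):
    # 100 "concurrent" requests, but they take turns waiting on disk IO.
    pool.spawn(handle_request, tmp.name)
pool.waitall()
```

A Go runtime, by contrast, can park a goroutine that’s blocked in a disk syscall and keep scheduling others onto OS threads, which is a big part of why the hummingbird object server makes better use of the hardware.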

For the past year and a half, Swift devs have been working on reimplementing the object server in Golang. You’ve been able to see this work on the `feature/hummingbird` branch in Swift’s code repo. In April at the OpenStack Summit, after seeing the progress of the hummingbird implementation and the significant improvements it brings, we worked out a plan for integrating the Golang code into our `master` branch. We’re working through that process now, and I expect to be able to deliver significant improvements to users as soon as it’s done.[1]

Beyond the immediate benefits of the Golang object server in Swift, I’m excited about what we can continue to build on top of the hummingbird object server. The storage world is rapidly changing, and Swift will continue to innovate to stay at the forefront of what’s happening. I’m excited about tackling small-file optimization, huge drives (100+ TB), non-volatile memory, all-flash storage, and more.

So much more

Container sharding and Golang object servers aren’t the only things going on in the Swift community right now. We’re actively working on a host of other things, including:

  • Increasing the part power of a ring
  • Automatic storage policy migration
  • Improving the encryption functionality
  • Supporting global erasure-code storage policies
  • Composite rings where operators have more control over placement
  • Notifications of activity
  • Symlinks
  • Container sync improvements
  • Audit watchers that support arbitrary triggers
  • Improving erasure-code storage policies
  • Better automated testing for common operator situations
  • Python 3.5 compatibility
  • Storage policy tiering
  • Improving documentation
  • Different config file parsing and logging formats

We’ve got *years* of exciting work ahead of us. If you’d like to help out, join us in #openstack-swift and say hi.

I’ll see you in Barcelona!

[Photo: Swift t-shirts for the Barcelona Summit]

[1] For those who follow OpenStack internal discussions, there’s been a significant debate about using Golang for an OpenStack service. Although the OpenStack Technical Committee agrees that the community-driven hummingbird work in Swift is a better implementation of Swift’s object server, their current recommendation is to remove this code from official OpenStack repos. The debate continues.

About Author

John Dickinson

John joined SwiftStack as Director of Technology coming from Rackspace where he worked on Rackspace Cloud Files and OpenStack Swift, the large-scale distributed object storage system which SwiftStack is built on. As the Project Technical Lead (PTL) for OpenStack Swift, John is also leading much of its community development efforts.