This guide provides step-by-step instructions on how to install and configure the SwiftStack Nodes software, including OpenStack Swift, on one or more nodes and configure a functioning SwiftStack cluster using the SwiftStack Controller service. The SwiftStack Controller is also available for on-premises deployment in your datacenter.
Create a SwiftStack Account
An invite code is required to set up your SwiftStack environment. Request an invite code by going to http://swiftstack.com/trial/
Register with your name, email address, and company details, and answer the questions. Once you receive the invite code, go to: https://platform.swiftstack.com/accounts/signup/
Complete and submit the signup form with the following information:
- The SwiftStack invite code which was provided to you.
- Your username, email, name, company and password
- The timezone for the geographical area where the cluster is running
After you have successfully created an account, log in to the SwiftStack platform with the username and password you selected here: https://platform.swiftstack.com/accounts/login/
After you have logged in you will be directed to the profile page where you can adjust your login and account credentials as needed.
Installing SwiftStack Node Software
SwiftStack supports Ubuntu 12.04 LTS Precise Server (64-bit) and CentOS/Red Hat 6.3 Server (64-bit); the operating system is the only software that needs to be installed before installing SwiftStack. The nodes you install SwiftStack on will need Internet access to reach the SwiftStack Controller service, which is where you will configure, manage, and monitor your cluster. All commands on the nodes must also be performed with root privileges.
SwiftStack signs the repository Packages file used to install SwiftStack Nodes.
To install the SwiftStack Nodes software on Ubuntu, run the following command:
To install the SwiftStack Nodes software on CentOS or RedHat, run the following command:
Once you execute this command, review the output to familiarize yourself with the installation commands that will be run to install the SwiftStack Nodes software. Note that if you have previously installed OpenStack Swift on the node, it will be replaced with the version included in the SwiftStack Nodes software. When you are ready, go ahead and run the command:
!! | bash
Claim a new node
After the SwiftStack Nodes software has been installed, the node contacts the SwiftStack Controller service via HTTPS (port 443) to register itself. When this process is successful, the node will obtain its identification from the SwiftStack Controller and create a ‘claim’ URL which will be used to configure a cluster. The claim URL will be displayed in the terminal as soon as the installation completes.
+----------------------------------------------------------------------------+
| Please claim this node to continue the installation process.               |
|                                                                            |
| Your claim URL is:                                                         |
| https://platform.swiftstack.com/claim/0362be5e-949e-11e2-8836-000c29c7812d |
+----------------------------------------------------------------------------+
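If you are scripting installs, the node's identifier can be captured from the claim URL. A minimal sketch, assuming (as the example above suggests, though the guide does not state it) that the trailing path component is the node ID:

```shell
# Capture the node identifier from a claim URL (assumes the trailing path
# component is the node ID, as in the example above).
claim_url='https://platform.swiftstack.com/claim/0362be5e-949e-11e2-8836-000c29c7812d'
node_id=${claim_url##*/}   # strip everything up to the last '/'
echo "$node_id"
```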
To configure your SwiftStack cluster, make sure you are logged into your SwiftStack account and copy/paste that URL into a browser. The SwiftStack Node will open a secure VPN connection between itself and the SwiftStack Controller service over UDP (port 1194). (Note that only outgoing connections are allowed over the secure VPN connection; all incoming connections are blocked.) Once the secure VPN connection is established, the node will periodically provide system configuration and monitoring data to the controller, which is displayed in the controller interface. Changes to the configuration settings in the controller are also provided to the nodes through this connection. At no point does the controller have any access to data that is stored on a SwiftStack Node, and all changes to configuration settings must be initiated by the user before being deployed on the node.
To ensure that the node can connect to the cloud-based controller, ports 443 and 1194 will need to be open in your firewall. Also note that since the SwiftStack Node requires a persistent connection with the SwiftStack Controller, it cannot connect through an HTTP proxy.
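As an illustrative sketch only, the outbound openings described above might look like the following iptables rules on the node. These lines are an assumption about your firewall setup, not a requirement to use iptables; adapt them to whatever firewall tooling you actually run:

```shell
# Illustrative iptables rules allowing the node's two outbound connections.
# Run as root; adapt to your own firewall tooling and existing rule set.
iptables -A OUTPUT -p tcp --dport 443  -j ACCEPT   # registration over HTTPS
iptables -A OUTPUT -p udp --dport 1194 -j ACCEPT   # VPN to the controller
```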
Using your browser, navigate to the ‘claim URL’. Establishing a session for the first time may take about 20-30 seconds. While the VPN connection is being established, you may be presented with a page similar to:
Click on Continue Claim.
To add the node to an existing Cluster:
- Select the Cluster that the node is to be added to (if you have more than one cluster defined)
- Select a Zone for this node.
- Select the interface to use for the Outward-facing interface and the Cluster-facing interface.
- Click add node to cluster
Note: New zones should only be created for fault-tolerant domains (e.g. different network segments, PDUs, etc.). With Swift’s unique-as-possible data placement feature, Swift will automatically place data across different nodes and drives even if all nodes are in the same zone.
To create a new cluster:
- Give the new cluster a name
- Give the cluster an API IP address
- The Cluster API IP Address is the front-facing IP address of the cluster and must be on the same network as the outward-facing IP addresses of the nodes in the cluster. The Cluster API IP Address must, however, be different from the outward-facing IPs of the nodes.
- Choose to use the SwiftStack-provided load balancer
- If you are using your own load balancer, the client-facing IP address of the load balancer should be entered as the Cluster API IP Address.
- If you are using the built-in load balancer in the SwiftStack platform, enter the IP address you want use as your client-facing IP address.
- Optionally, provide an API Hostname.
- Caution: only provide a valid DNS hostname; otherwise leave this blank.
- Click on create new cluster to create a cluster and add the node.
Note: The outward-facing interface is used for proxy server and load-balancing services. The cluster-facing interface is used for intra-cluster communication among Swift daemons. The same interface may be used for both (as in the example shown).
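The Cluster API IP constraint described above (same network as the nodes' outward-facing IPs, but not equal to any of them) can be sanity-checked with a quick shell test. A minimal sketch, assuming a /24 network and using the example addresses that appear elsewhere in this guide:

```shell
# Sanity-check (illustrative): the Cluster API IP must share the nodes'
# outward-facing /24 network but differ from every node IP.
api_ip=10.0.0.48    # example Cluster API IP from this guide
node_ip=10.0.0.41   # example outward-facing node IP
if [ "${api_ip%.*}" = "${node_ip%.*}" ] && [ "$api_ip" != "$node_ip" ]; then
    result="OK: API IP is on the node network and distinct"
else
    result="MISCONFIGURED"
fi
echo "$result"
```

The `${api_ip%.*}` expansion strips the final octet, which is only a valid network comparison for /24 subnets; for other prefix lengths you would need a proper netmask calculation.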
The node is now ready to be provisioned to be included as an active component within the cluster.
Provisioning a SwiftStack Node
Provisioning a SwiftStack Node involves identifying which cluster the node is to be a part of, selecting which network interfaces will be utilized, and formatting drives.
In the example below the new node has been added to an existing cluster, but still needs provisioning:
Provision the Node
At the configuration menu choose Provision for the node that is to be provisioned. The nodes that haven’t been provisioned appear highlighted in yellow. If you have active nodes which have been previously provisioned they appear in green. At the node provision menu you can start adding the drives available to your node:
You will see a status page depicting the current state of all the drives available to the node. Information available here is:
- The mount point used by the node to make the drive available
- The Linux device name for the corresponding drive
- The cluster-internal drive label used to identify the drive associated with a particular node within the cluster
- The size of the drive in gigabytes
- Status: unformatted, available, etc
- The operation, i.e. the action that can be performed on the drive
On this page you can also manage the state of each of the drives. Initially they are all Unformatted and not yet available to the cluster. After you have formatted the drives, this page allows you to selectively assign drives to be available for:
- Storage of object data
- Storage of account and container metadata (the data that describes the stored objects and accounts)
Any drive can be assigned to storing metadata, but SwiftStack generally recommends using faster media, such as an SSD, for storing account and container metadata.
Select one of the Format buttons and make sure the status of all the drives is changed to Format and mount:
Perform the prepared actions by clicking Change. After a few minutes the system will bring you back to the Node configuration page; the status of the drives has changed to Ready and they can now be allocated appropriate roles:
At this stage the drives are formatted but not yet available to the cluster. The final step is to add the drives either immediately or gradually to the cluster. If this is a cluster that is running in production, it is recommended that a node's drives be added gradually. However, if this is a new cluster with little or no data, it is okay to Add immediately:
This action can be submitted to the cluster by clicking Change.
The system returns you to the main node configuration page. Click Edit drives to view the status change (this happens immediately). The status of the drives has been changed to “In Use” (check for which roles the system has allocated the drive):
In the example above, all drives are dedicated to storing both metadata and object data.
Click Enable Node to add the node to the cluster (you might need to scroll down the page):
You will be returned to the cluster configuration menu; the node is now ready to participate in the cluster:
The new node has been configured on the SwiftStack Controller.
Configure the Cluster
To edit the configuration of a cluster, first go to the Cluster page, then select the cluster you wish to edit.
At the next page press the Configure Cluster button to configure the cluster.
On the Configure Cluster page, you have the option to:
- Enable or disable the SwiftStack load balancer
- Adjust the cluster API IP address
- Create user accounts
- SwiftStack cluster authentication is based on user accounts, each of which has its own allocated storage account.
- At least one user account must be created before the cluster can be used.
- A user has read and write rights to the containers and objects in his/her account.
- Enable SSL HTTP access to the cluster API. There are two options to support SSL access:
- Use your own certificate, which you can obtain from an accredited certification authority (you must do this yourself). This has the advantage that client software accessing the cluster will accept the certificate as genuine and as verified.
- Have the cluster generate a self-signed certificate. This is the default and most straightforward method to enable SSL, but it has the disadvantage that client software accessing the cluster might generate a warning message.
- Manage middleware
Add new users
Before you can use the system to store data you will need to create one or more user accounts.
Return to the cluster configuration screen if you haven’t already done so and click the Manage Storage Accounts button:
Here you can create a new user account and a corresponding password:
Fill in the fields and click Add a new account. The new user will be added to the system and is available after the configuration has been pushed to the cluster (see next section):
After any change to the cluster configuration, for example adding a new user account, a new button is available on the cluster configuration page.
Click on Push Config to Cluster
Pushing Config to Cluster performs the following actions:
- Adds appropriate devices & nodes into the Swift ring database
- Creates a distilled Swift ring configuration file
- Adds any additional user accounts that had been created or modified
- Creates new Swift configurations based on network settings and the settings available in ‘Tune Cluster’
- Notifies each node in the cluster that new configuration settings are available and should be downloaded.
- Each node in the cluster pulls the new configuration files and restarts processes (when necessary)
While the configuration is being pushed, the following status message is displayed:
When the configuration push is done, your SwiftStack cluster will be available at the IP specified in the Cluster API IP address or at the cluster API hostname (if you have specified this option). Note that it takes about 5-10 minutes for the SwiftStack Controller to rebuild the ring and push the new configuration out to the cluster. The system will respond with a status message reporting that the push was successful:
Making an API Request
A great way to access the Swift cluster is with the Swift command-line client. The Swift client can be installed on OS X, Windows, or Linux.
For example, on OS X:
sudo pip install python-swiftclient
For example on Ubuntu Linux:
sudo apt-get install python-swiftclient
To access the Swift cluster a request is made to the ‘Auth URL’. To view the Auth URL of your cluster, first go to Cluster page, then select the cluster you would like to connect to.
Note: SwiftStack does not recommend running the Swift command-line client from an enabled node. Please use a separate system to access the authorization URL.
This auth URL provides access to the storage service. To make requests, include the following arguments (using the newly created user account as an example):
swift -A http://production01.acme.com/auth/v1.0 -U user01 -K password stat
The system will respond with basic information about the user’s account:
   Account: AUTH_user01
Containers: 0
   Objects: 0
     Bytes: 0
Accept-Ranges: bytes
X-Timestamp: 1364147836.58268
X-Trans-Id: tx8d61d538ded94c7caebbb2d947a29435
Content-Type: text/plain; charset=utf-8
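Output in this format is easy to post-process with standard tools. A minimal sketch that pulls the object count out of `swift stat`-style text (the sample is embedded here so the snippet runs without a live cluster):

```shell
# Pull the object count out of swift stat style output. The sample text
# mirrors the response shown above so no cluster is needed to try it.
sample='Account: AUTH_user01
Containers: 0
Objects: 0
Bytes: 0'
objects=$(printf '%s\n' "$sample" | awk -F': *' '/^ *Objects/ {print $2}')
echo "objects=$objects"    # prints objects=0
```

On a live system you would replace the embedded sample with the real command, e.g. `swift stat | awk …`.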
If you did not set a cluster hostname, you can also use the virtual IP address to do the same. The above example using the API IP address (in the example we use 10.0.0.48) would be:
swift -A http://10.0.0.48/auth/v1.0 -U user01 -K password stat
To make using the tool more convenient, these options may be set as environment variables. For example, in a bash profile:
export ST_AUTH=http://production01.acme.com/auth/v1.0
export ST_USER=user01
export ST_KEY=password
After you have specified the environment variables you can use a greatly simplified syntax.
To upload an object:
swift upload container_name file_to_upload
For example to simultaneously create a container called Pictures and upload a file called Holidays2013_01.JPG type:
swift upload Pictures Holidays2013_01.JPG
You can view the contents of the Pictures container by listing it:
swift list Pictures
Holidays2013_01.JPG
Or checking out its statistics:
swift stat Pictures
Which will respond with something similar to:
  Account: AUTH_user01
Container: Pictures
  Objects: 1
    Bytes: 7251299
 Read ACL:
Write ACL:
  Sync To:
 Sync Key:
Accept-Ranges: bytes
X-Timestamp: 1364148979.10367
X-Trans-Id: tx51a61829992d42169561a5062830d2ca
Content-Type: text/plain; charset=utf-8
To download the object:
swift download container_name object_name
To list files in the container:
swift list container_name
To list all containers for the account:
swift list
Enable the Swift web client
Using the SwiftStack Controller, return to the cluster you want to enable the Swift web client for. Click on Configure Cluster and navigate to Manage Middleware.
To enable the Swift Web Client (the Swift Web Console) you must enable the following middleware:
- Static Web
- Swift Web Console
Click on the name of the first middleware component that needs enabling: Static Web. This will bring you to a menu where you can enable the Static Web component. Tick the Enabled box:
Click Submit. Do the same for the remaining middleware as shown in the following four screenshots.
As you can see, the system will enable a URL which is visible in the Swift Web Console menu. Make a note of the address and click Submit.
Back at the Available Swift Middleware menu you will see the modules you have just enabled marked as Active:
Return to the cluster configuration menu, scroll down and click Push Config to Cluster. The controller will push the configuration to the cluster. Wait a few minutes; when the system reports that the configuration push is done, return to the cluster. At the top right in the main menu you will see the Web Console (Swift Web Client) URL:
Navigate using your browser to the Web Console URL shown in the cluster overview page. You will see the login page:
Log in using an account you created earlier (in the examples above we used the user01 account name). The system will show you an overview of the containers available for this user account:
Your SwiftStack object store is now available through the Swift Web Client console.
Network Configuration
Both Ubuntu and CentOS/Red Hat will use dynamic IP address assignment if a local DHCP server was available at installation time. Static IP addresses can be set afterwards by manually editing a few configuration files. Network configuration on Ubuntu is different from CentOS/Red Hat. This section covers simple network configuration changes for a node with only one interface after a default installation.
Most Linux systems have a few commands available to find out what the running network configuration is, for example:
ifconfig
Which on a system with one interface outputs something similar to this:
eth0      Link encap:Ethernet  HWaddr 00:0c:29:f5:9d:79
          inet addr:10.0.0.41  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: 2001:981:4681:1:b800:849:e34c:e602/64 Scope:Global
          inet6 addr: 2001:981:4681:1:20c:29ff:fef5:9d79/64 Scope:Global
          inet6 addr: fe80::20c:29ff:fef5:9d79/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:28163 errors:0 dropped:7394 overruns:0 frame:0
          TX packets:17672 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:8894689 (8.8 MB)  TX bytes:3737212 (3.7 MB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:72 errors:0 dropped:0 overruns:0 frame:0
          TX packets:72 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:5472 (5.4 KB)  TX bytes:5472 (5.4 KB)
You can also use variations of the ip command, for example:
ip addr show
The output of which looks similar to the following:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:f5:9d:79 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.41/24 brd 10.0.0.255 scope global eth0
    inet6 2001:981:4681:1:b800:849:e34c:e602/64 scope global temporary dynamic
       valid_lft 6983sec preferred_lft 3383sec
    inet6 2001:981:4681:1:20c:29ff:fef5:9d79/64 scope global dynamic
       valid_lft 6983sec preferred_lft 3383sec
    inet6 fe80::20c:29ff:fef5:9d79/64 scope link
       valid_lft forever preferred_lft forever
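When scripting, it is often handy to extract just the IPv4 address from `ip addr`-style output. A minimal sketch with awk (the sample line is embedded here so it runs anywhere; on a live node you would pipe `ip -4 addr show eth0` instead):

```shell
# Extract the IPv4 address from an `ip addr` style line. The sample matches
# the output above; on a live node pipe `ip -4 addr show eth0` instead.
sample='    inet 10.0.0.41/24 brd 10.0.0.255 scope global eth0'
addr=$(printf '%s\n' "$sample" | awk '/inet /{split($2, a, "/"); print a[1]}')
echo "$addr"    # prints 10.0.0.41
```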
Ubuntu
Basic network configuration on an Ubuntu system is stored in /etc/network/interfaces, and the hostname is stored in /etc/hostname. These files must be edited manually, and the interface must be restarted after the configuration file has been changed. For example, on a system with only one Ethernet interface, eth0, with IP address 10.0.0.41/24, the /etc/network/interfaces file will look similar to this:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
    address 10.0.0.41
    netmask 255.255.255.0
    network 10.0.0.0
    broadcast 10.0.0.255
    gateway 10.0.0.1
    dns-nameservers 10.0.0.1 126.96.36.199
    dns-domain acme.com
    dns-search acme.com
It is important to add nameserver information, in the above example 10.0.0.1 and 126.96.36.199. This allows the Ubuntu network manager to automatically generate the correct nameserver configuration (in /etc/resolv.conf). When you are happy with your configuration, restart the interface (if you are connected using SSH you will lose your connection; just reconnect using the new IP address):
ifdown eth0; ifup eth0
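If you configure several nodes, the static stanza shown above can be generated from a couple of variables. A minimal sketch that writes to /tmp so nothing on the system is touched (the addresses are the example values from this guide; on a real node you would review the result and merge it into /etc/network/interfaces yourself):

```shell
# Generate a static-IP stanza like the one above from variables.
# Written to /tmp/interfaces.example so no system file is modified.
ADDR=10.0.0.41
GW=10.0.0.1
cat > /tmp/interfaces.example <<EOF
auto eth0
iface eth0 inet static
    address $ADDR
    netmask 255.255.255.0
    gateway $GW
    dns-nameservers $GW
EOF
grep "address $ADDR" /tmp/interfaces.example   # confirm the stanza was written
```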
To change the hostname to, for example, prodnode01, edit the /etc/hostname file to contain:
prodnode01
You can rename the system by using:
service hostname restart
Or just reboot the system.
CentOS / Red Hat
Basic network configuration on a CentOS/Red Hat system is stored in several files, which must all be edited to create a working configuration:
- Specifies routing and host information applicable to all interfaces: /etc/sysconfig/network
- Configures the nameserver settings: /etc/resolv.conf
- Configuration scripts for each network interface: /etc/sysconfig/network-scripts/ifcfg-ethX
For a node called prodnode02.acme.com, the content of the /etc/sysconfig/network file would look like this:
NETWORKING=yes
HOSTNAME=prodnode02.acme.com
A name server configuration in /etc/resolv.conf would look like this:
search acme.com
nameserver 10.0.0.1
nameserver 184.108.40.206
And finally, the interface configuration of a system with one interface only, in /etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE="eth0"
BOOTPROTO="none"
ONBOOT="yes"
TYPE="Ethernet"
IPADDR=10.0.0.42
NETMASK=255.255.255.0
BROADCAST=10.0.0.255
GATEWAY=10.0.0.1
Reboot the system and your new configuration will be available.