Configuring Riak CS Multi-Datacenter

Configuring Multi-Datacenter Replication in Riak CS requires the addition of a new group of settings to the app.config or advanced.config configuration file for all Riak CS and Riak KV nodes that are part of the Riak CS cluster.

Riak KV Configuration

As of Riak release 1.4.0, there are two different MDC replication modes that Riak CS can use to request data from remote clusters. Please see the comparison doc for more information.

Replication Version 3 Configuration

For each Riak node in the cluster, either update the mdc.proxy_get setting in riak.conf or append the {proxy_get, enabled} setting to the riak_repl section of the old-style advanced.config or app.config file, as shown in the following example:

mdc.proxy_get = on
{riak_repl, [
             %% Other configs
             {fullsync_on_connect, true},
             {fullsync_interval, 360},
             {data_root, "/var/lib/riak/data/riak_repl"},
             {proxy_get, enabled}
             %% Other configs
            ]}

Version 3 replication requires additional configuration in the source cluster via the command line.

riak-repl proxy_get enable <sink_cluster_name>

The sink_cluster_name should be replaced with the name of your configured sink cluster.
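Putting the source-cluster steps together, the command-line setup might look like the following sketch. The cluster names, hostname, and port are placeholders for your own deployment (9080 is the default cluster manager port), and these steps assume Version 3 replication is otherwise configured between the clusters:

```bash
# Run on a node in the source cluster
riak-repl clustername source_cluster      # name this (source) cluster
riak-repl connect sink.example.com:9080   # connect to the sink's cluster manager
riak-repl proxy_get enable sink_cluster   # allow proxy_get requests from the sink
```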


Riak CS Configuration

For each Riak CS node in the cluster, update the riak_cs section of the advanced.config or old-style app.config file by appending the proxy_get setting, as shown in the following example:

{riak_cs, [
           %% Other configs
           {proxy_get, enabled},
           %% Other configs
          ]}
Note on restarting Riak nodes
Be sure to restart cluster nodes in a rolling fashion after making configuration changes. In particular, after restarting a node, wait for Riak's key/value store to become available before restarting the next node. To check the status of `riak_kv` on a node after restarting, execute the following command:

```bash
riak-admin wait-for-service riak_kv <node>
```

Replace the `<node>` variable above with the nodename specified in the `riak.conf` or older `vm.args` configuration file.
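The rolling-restart procedure can be sketched as a small shell loop. The hostnames and the `riak@<host>` nodename format here are assumptions; adjust both to match your deployment:

```bash
# Restart each Riak node in turn, waiting for riak_kv on the current
# node to become available before touching the next one.
for host in riak1.example.com riak2.example.com riak3.example.com; do
  ssh "$host" 'riak stop && riak start'
  ssh "$host" "riak-admin wait-for-service riak_kv riak@$host"
done
```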

Stanchion Configuration

Though there is no specific configuration for Stanchion, note that Stanchion should be a single, globally unique process to which every Riak CS node sends requests, even if there are multiple replicated sites. Unlike Riak KV and Riak CS, Stanchion should run on only one node in a given cluster, perhaps on its own, dedicated hardware if you wish. Stanchion runs on only one node because it manages strongly consistent updates to globally unique entities like users and buckets.
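Because every Riak CS node must send requests to the same Stanchion instance, each node's riak_cs configuration points at that single host. A minimal sketch of the relevant settings follows; the key names shown are from the old-style app.config era and may differ in your Riak CS version, and the IP address and port are placeholders:

```
{riak_cs, [
           %% Point every Riak CS node at the one Stanchion instance
           {stanchion_ip, "10.0.0.5"},
           {stanchion_port, 8085},
           {stanchion_ssl, false}
          ]}
```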