Christopher B. Browne's Home Page

1.3. Slony-I Concepts

In order to set up a set of Slony-I replicas, it is necessary to understand the following major abstractions that it uses:

    Cluster
    Node
    Replication Set
    Origin, Providers and Subscribers

It is also worth knowing the meanings of certain Russian words:

    slon is Russian for "elephant"
    slony is the plural of slon, and therefore refers to a group of elephants
    slonik is Russian for "little elephant"

The use of these terms in Slony-I is a "tip of the hat" to Vadim Mikheev, who was responsible for the rserv prototype which inspired some of the algorithms used in Slony-I.

1.3.1. Cluster

In Slony-I terms, a "cluster" is a named set of PostgreSQL database instances; replication takes place between those databases.

The cluster name is specified in each and every Slonik script via the directive:

    cluster name = something;

If the cluster name is something, then Slony-I will create, in each database instance in the cluster, the namespace/schema _something.
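For instance, with a hypothetical cluster name testcluster (the name is illustrative), the preamble directive would be:

    cluster name = testcluster;

Slony-I would then create the schema _testcluster in each database in the cluster; its presence can be checked from psql with the \dn command.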

1.3.2. Node

A Slony-I Node is a named PostgreSQL database that will be participating in replication.

It is defined, near the beginning of each Slonik script, using the directive:

     NODE 1 ADMIN CONNINFO = 'dbname=testdb host=server1 user=slony';

The SLONIK ADMIN CONNINFO(7) clause specifies the database connection information that will ultimately be passed to the libpq function PQconnectdb().
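A script administering a cluster will typically declare one such line per node. A sketch of a hypothetical three-node preamble (host, database, and user names are illustrative):

    cluster name = testcluster;
    node 1 admin conninfo = 'dbname=testdb host=server1 user=slony';
    node 2 admin conninfo = 'dbname=testdb host=server2 user=slony';
    node 3 admin conninfo = 'dbname=testdb host=server3 user=slony';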

Thus, a Slony-I cluster consists of:

    A cluster name
    A set of Slony-I nodes, each of which has a namespace based on that cluster name

1.3.3. Replication Set

A replication set is defined as a set of tables and sequences that are to be replicated between nodes in a Slony-I cluster.

You may have several sets, and the "flow" of replication does not need to be identical between those sets.
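Replication sets are defined in slonik. A minimal sketch, with illustrative set, table, and sequence identifiers, might look like:

    create set (id = 1, origin = 1, comment = 'Application tables');
    set add table (set id = 1, origin = 1, id = 1,
                   fully qualified name = 'public.accounts');
    set add sequence (set id = 1, origin = 1, id = 1,
                      fully qualified name = 'public.accounts_id_seq');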

1.3.4. Origin, Providers and Subscribers

Each replication set has some origin node, which is the only place where user applications are permitted to modify data in the tables that are being replicated. This might also be termed the "master provider"; it is the main place from which data is provided.

Other nodes in the cluster subscribe to the replication set, indicating that they want to receive the data.

The origin node will never be considered a "subscriber" (ignoring the case where the cluster is reshaped and the origin is expressly shifted to another node). However, Slony-I supports the notion of cascaded subscriptions: a node that is subscribed to some set may also behave as a "provider" of that replication set to other nodes in the cluster.
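Subscriptions, including cascaded ones, are set up with the slonik SUBSCRIBE SET command. In this illustrative sketch, node 2 subscribes directly to the origin (node 1), while node 3 receives the same set via node 2 as its provider:

    subscribe set (id = 1, provider = 1, receiver = 2, forward = yes);
    subscribe set (id = 1, provider = 2, receiver = 3, forward = no);

The forward = yes option on node 2's subscription is what allows it to act as a provider to node 3.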

1.3.5. slon Daemon

For each node in the cluster, there will be a slon(1) process to manage replication activity for that node.

slon(1) is a program implemented in C that processes replication events. There are two main sorts of events:

    Configuration events, which normally occur when a slonik script is run, and which update the cluster's configuration
    SYNC events, which group together the updates to replicated tables that are to be propagated to subscribers
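Each slon daemon is started with the cluster name and a conninfo string for the database of the node it manages; a hypothetical invocation (connection values are illustrative):

    slon testcluster 'dbname=testdb host=server1 user=slony'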

1.3.6. slonik Configuration Processor

The slonik(1) command processor reads scripts written in a "little language"; these scripts are used to submit events that update the configuration of a Slony-I cluster. This includes such things as adding and removing nodes, modifying communications paths, and adding or removing subscriptions.
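Putting the pieces together, a slonik script is a preamble followed by commands. A hypothetical sketch that initializes a two-node cluster (all names and IDs are illustrative):

    cluster name = testcluster;
    node 1 admin conninfo = 'dbname=testdb host=server1 user=slony';
    node 2 admin conninfo = 'dbname=testdb host=server2 user=slony';

    init cluster (id = 1, comment = 'Origin node');
    store node (id = 2, comment = 'Subscriber node', event node = 1);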

1.3.7. Slony-I Path Communications

Slony-I uses PostgreSQL DSNs in three contexts to establish access to databases:

    The admin conninfo lines used by slonik to connect to each node and submit configuration changes
    The DSN passed to each slon(1) daemon, telling it how to connect to its own node's database
    The paths stored in the cluster configuration, telling each slon(1) daemon how to connect to the databases of remote nodes

The distinctions and possible complexities of paths are not normally an issue for people with simple networks where all the hosts can see one another via a comparatively "global" set of network addresses. In contrast, they matter rather a lot for those with complex firewall configurations or nodes at multiple locations, where the nodes may not all be able to talk to one another via a uniform set of network addresses.
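Communication paths between nodes are declared with the slonik STORE PATH command; each path tells the slon daemon for the client node how to reach the server node's database, so different nodes may be given different addresses for the same server. An illustrative pair of paths between nodes 1 and 2 (connection values are hypothetical):

    store path (server = 1, client = 2,
                conninfo = 'dbname=testdb host=server1 user=slony');
    store path (server = 2, client = 1,
                conninfo = 'dbname=testdb host=server2 user=slony');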

Consider the attached diagram, which describes a set of six nodes.

1.3.8. SSH tunnelling

If a direct connection to PostgreSQL cannot be established because of a firewall, you can establish an SSH tunnel that Slony-I can operate over.

SSH tunnels can be configured by passing the -L option to ssh. This forwards a local port across the connection to the remote PostgreSQL port, with the traffic encrypted (and optionally compressed) by SSH.

See the ssh documentation for details on how to configure and use SSH tunnels.
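As a sketch, suppose the PostgreSQL server on a host dbserver only accepts local connections (host and port values here are illustrative). A tunnel forwarding local port 5433 to the remote server's port 5432 could be set up with:

    ssh -C -N -L 5433:localhost:5432 slony@dbserver

The conninfo for that node would then point at the local end of the tunnel, e.g. 'dbname=testdb host=localhost port=5433'.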

