Plastic SCM 4.1 Distributed System Guide
This guide explains Plastic SCM’s capabilities for working with distributed systems. It contains a general description of the supported distributed scenarios, followed by detailed explanations. Replication scenarios are also covered in depth.
Plastic SCM’s distributed capabilities allow you to set up different servers for multi-site development support, which are able to both replicate and reconcile changes made on replicated branches.
Plastic SCM has the ability to create different multi-site scenarios ranging from single-server to fully distributed deployments.
As Figure 1 shows, Plastic SCM can be configured to work in a single server mode, which is the default mode on installation and the conventional mode available on all SCM products.
The next step has been called classic multisite in which several servers exist – one for each development location – and contents are replicated among them. The basic rule at this distribution stage is that branch mastership is kept by only one site at a time. If a branch is modified at one site, the other sites won’t modify it until the branch is replicated again. In many systems, this behavior is encouraged by the software itself, preventing simultaneous changes in a master/slave relationship. Plastic SCM does not restrict you to working in this mode, though you can use both permissions and a clear replication policy to simulate it, should you find this configuration useful.
Full multisite support is almost identical to the previous distribution stage with only one difference: All the SCM servers can modify their branches at any time. Changes can be reconciled back later on if the same branch is modified more than once at different locations.
Full distribution is exactly the same as full multisite, but on this deployment scenario each developer has his own SCM server. There’s only one restriction imposed on systems working in this mode: Servers must be light enough to run on non-dedicated workstations and even laptops. Plastic SCM servers can easily be configured to work in this mode, introducing full disconnected support. A developer can take his laptop home and continue working as if he were at the office and reconcile his work when he’s back at the office.
The main operation in distributed systems is replication. By means of this operation, repositories can be distributed on several machines. The replication unit in Plastic SCM is the branch. Users specify which branch they want to work with and replicate it from a source repository to a destination repository. All revisions, labels, links, attributes, and changesets will be replicated to the destination.
Figure 2 shows two repository servers at two different locations. The server at Location 2 has replicated the branch main from Location 1. Then a developer at Location 2 has created two more branches, which have later been replicated back into Location 1.
As you can see from the figure, distributed repositories don’t have to be exact clones. They share replicated branches and their contents but the entire repositories don’t have to be identical. Instead, they can evolve separately, sharing only some branches.
There are several possible distributed scenarios with Plastic SCM. They will be explained in detail in this chapter.
In this scenario, two or more servers are used in replication. Servers will normally run at different locations to enable geographically distributed teams to work together on the same project. A server at each location will solve the problem of slow or unreliable internet connections between sites.
Figure 3 shows both a deployment diagram and a detailed view of the branching strategy. This setup resembles classic multi-site replication as implemented by many master/slave based products. In Plastic SCM, this scenario is just one possibility, and it will be used to explain replication.
The two sites, Location 1 and Location 2, will have their own servers. Both sites will be working on the same code-base, so developers will need to be able to check in changes at any time. The chosen strategy would be this:
· The main branch is replicated from Location 1 to Location 2 whenever a new stable baseline is available.
· Each site creates its own task branches, starting from the latest replicated baseline.
· The task branches created at Location 2 are replicated back to Location 1, where integration takes place.
· Once integration is finished, the main branch is replicated to Location 2 again.
Figure 3 shows how branches are replicated from Location 2 to Location 1 in its lower area.
Figure 4 and Figure 5 describe the previous scenario step by step. They show how the main branch is first replicated from Location 1 into Location 2, and how the newly created release 58 then becomes available to the two development groups.
Then the groups start working independently from each other, creating task branches that all start at a well-known point: release 58.
Once the iteration is finished, branches task1012, task1013 and task1030 created at Location 2 are replicated to Location 1 to be integrated.
Once the integration is finished, the branch /main will be replicated again to Location 2, so that the development group there can continue working with the latest approved baseline.
Note that the two repositories are not identical after the development iteration finishes, but the content of the main branches, since main is modified at only one site, is exactly the same.
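Assuming hypothetical server names (location1:8084 and location2:8084), a repository called project, and the branch spec syntax of the cm replicate command, the round trip described above could be sketched as follows. This is an illustration, not a verbatim transcript; check cm replicate --help on your installation for the exact spec format.

```shell
# Replicate the stable main branch from Location 1 to Location 2
cm replicate br:/main@project@location1:8084 project@location2:8084

# ... both groups develop on their own task branches ...

# Replicate the finished task branches from Location 2 back to Location 1
cm replicate br:/task1012@project@location2:8084 project@location1:8084
cm replicate br:/task1013@project@location2:8084 project@location1:8084
cm replicate br:/task1030@project@location2:8084 project@location1:8084

# After integration at Location 1, replicate main to Location 2 again
cm replicate br:/main@project@location1:8084 project@location2:8084
```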
The deployment required for this scenario is exactly the same as in the mastership case. The difference will be the way in which the replicated branch evolves. Now developers will make simultaneous changes to the replicated branch and Plastic SCM will have to help reconcile these changes together.
Figure 6 depicts the situation: Developers at two different sites are working against the same branch which has been replicated from Location 1 to Location 2. Then both groups perform changes directly on their replica of the main branch.
In the previous diagram, all changesets are on the main branch. We show it this way for visual simplification. In the real world, however, development teams should use the branch-per-task pattern and integrate their changes to the main branch on all sites.
When Location 1 requests the changes made at Location 2 (or vice versa) using replication, the newly created changesets on branch main at Location 2 will be linked to the right changesets at Location 1, preserving correct changeset linking. If a subbranch is created (more than one head, or last changeset, on a given branch, which can happen after pulling changes from a remote repository), it will have to be merged in order to reconcile the changes made remotely.
In a purely distributed scenario, there isn’t a central server. Each developer instead runs his own server containing his own repositories.
This strategy can be fully implemented with Plastic SCM by configuring a server on each developer’s workstation.
This fully distributed scenario can be adopted by any company, even one that would normally prefer to rely on a central copy. With distributed development there will always be a master server, not necessarily due to software restrictions, but because of some sort of meritocracy, as happens with open source projects. It is usually best to explicitly decide which computer will be the one containing the well-known stable releases. Obviously more than one computer will satisfy this requisite, but for simplicity’s sake it is better to determine exactly which one is the master at any time.
In corporate scenarios this purely distributed ability can be tuned to support a mixed scenario, in which a central server holds the master repositories while some developers run their own local servers and replicate with it.
Alternatively, all developer’s workstations could run Plastic SCM servers. This is totally supported by the system. Deciding to use this capability or not will depend on the organization itself, developers’ skills, and the amount of administrative burden required.
Figure 7 depicts the concepts described above.
So far, the behavior of general distributed systems has been introduced. This topic will explain in detail how Plastic SCM replicates changesets between branches on different repositories and how to reconcile conflicting changesets created in parallel in the same replicated branch on two different repositories.
The diagrams and samples introduced in the previous chapters focused on overall branch behavior. Figure 8 details a replication sample that studies what happens at the changeset level.
The sample focuses on a file named /src/main.cpp on the branch /main/fix. The branch is replicated from repository A at Location A to repository B at Location B. Note that the figure specifies the Plastic SCM command needed to run a replication.
At step 1, there is only one changeset on the two replicated repositories, containing the first change on /src/main.cpp.
Step 2 shows how the file is modified at rep A: two new revisions are created.
At step 3, the developer at location B runs the same replication command once more. The two new revisions created at rep A are now copied into rep B.
During replication, Plastic SCM first looks up the changesets already present on the branch specified by the user (starting at the last previously replicated changeset, if any). Then it pulls the new changesets from the source repository. To do so, Plastic SCM finds the parent changeset of each new changeset being pulled and links them accordingly.
At step 4, the developer at rep B makes a new change starting from the latest replicated changeset and modifying again main.cpp.
At step 5, the developer at rep A replicates /main/fix from rep B. The newly created changeset 3 gets replicated and correctly placed in his repository.
Note that the example from Figure 8 shows only one change at a time on the branch, so no conflicts can happen. As long as this strategy is followed, the two replicated branches will remain exact clones after each replication.
Figure 9 shows a more complex scenario. Both locations start with the same configuration: three changesets at branch /main.
At step 2, the two repositories evolve in parallel when the developers introduce new changes on main.cpp.
At step 3, the user at rep A tries to replicate changes from rep B. Now Plastic SCM can’t directly link revisions 3 and 4, created at rep B, to revision 2, because a new revision 3 has also been created on the branch at rep A.
Note that internally Plastic SCM identifies each object by a GUID (Globally Unique Identifier) so don’t get confused by the changeset numbers shown in the sample.
If changeset 4 at repA didn’t exist, then Plastic SCM would have placed revisions 5 and 6 from rep B just to the right of the existing changeset 3. In this situation, though, it can’t do that. So what Plastic SCM actually does is create a subbranch to place the replicated changesets.
There are two replication modes available:
· Direct replication, in which the source and destination servers connect to each other over the network.
· Package based replication, in which the changes are exported to a package file that is later imported at the destination server.
Figure 10 depicts the two available replication modes.
Package based replication introduces the ability to keep in sync servers that are not allowed to connect directly due to security restrictions.
All the replication scenarios and possibilities described can be set up with a single Plastic SCM command: replicate.
The command syntax is cm replicate srcbranch destinationrepos, where srcbranch is a branch spec identifying the branch to be replicated and its repository, and destinationrepos is the repository where the branch is going to be replicated.
Suppose you want to replicate the branch main at repository code at server london:8084 to repository code_clone at bangalore:7070. The command would be:
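A sketch of that command, assuming the branch/repository spec format of the 4.x command line (verify against cm replicate --help on your version):

```shell
# Replicate branch "main" of repository "code" at london:8084
# into repository "code_clone" at bangalore:7070
cm replicate br:/main@code@london:8084 code_clone@bangalore:7070
```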
To replicate branches using packages, the first step will be creating a replication package, and then importing the package into another server.
Suppose you have to create a replication package for the main branch at repository code at server box:8084.
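A sketch of the package creation step, assuming the --package option of cm replicate:

```shell
# Export the main branch of repository "code" at box:8084 into a package file
cm replicate br:/main@code@box:8084 --package=box.pk
```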
The previous command will generate a package named box.pk with all the content of the main branch.
Later on, the package will be imported at the repository server berlin:7070.
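The import step could look like this. This is one plausible form; check cm replicate --help for the exact import syntax on your version:

```shell
# Import the previously created package into repository "code" at berlin:7070
cm replicate box.pk code@berlin:7070
```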
During replication, different servers have to communicate with each other. This means that servers running different authentication modes will have to exchange data.
To do so, the replication system is able to set up different authentication options.
Figure 11 shows a typical scenario with a client and two servers. All the involved Plastic SCM components are configured to work in LDAP and they share the same LDAP credentials, so no translation is required.
Note that authentication happens at two levels:
· First, the client authenticates against the server it connects to.
· Second, during replication, one server authenticates against the other in order to exchange data.
If both servers were not using the same authentication mechanism or not authenticating against the same LDAP authority, step 2 would fail.
Figure 12 shows a scenario in which the server london is configured to use user/password authentication. In this case, a command like the one specified at the top of the figure will fail because authentication between servers won’t work at step 2.
To solve this problem, the replication system has the ability to specify authentication credentials to be used between servers. In the example, the client can specify to the server berlin a user and password to communicate with server london.
Figure 13 shows two different ways to specify authentication credentials when using user/password at the source server.
The first option is to specify the mode plus the user and password (for UP mode) directly on the command line.
The second one uses an authentication file, which is useful when authentication credentials are going to be used repeatedly. As the figure shows, an authentication file is a simple text file containing two lines: the user name on the first line and the password on the second.
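For example, an authentication file for a hypothetical user dave could look like this (whether the password is stored in plain text or in encrypted form depends on how the file was generated):

```
dave
dave_password
```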
Suppose now that replication must happen in the opposite direction, from berlin to london as Figure 14 shows. The parameters to connect to an LDAP server (in this case an Active Directory accessed through LDAP) are specified. Normally in LDAP an authentication file will be used to ease the process.
When replication is performed between servers with different security modes, authentication is not the only issue. User and group identifications have to be translated between the different security modes.
The sample at Figure 15 replicates from a user/password authentication mode to an LDAP based one. The user list at the UP node stores plain names, but the user list at the LDAP server stores SIDs. When the owner of a revision being replicated needs to be copied from repA to repB, a user or group will be taken from the user list at repA and introduced into the list at repB. If a name coming from repA were inserted directly into the list at repB, there would be a problem later on when the server at berlin tries to resolve the LDAP identifier: the user identifiers in user/password mode won’t match those of the LDAP directory, and the user names will be wrong in the replicated repository.
So in order to solve the problem, translation will be needed.
The Plastic replication system supports three different translation modes:
· Copy mode: the identifier is copied as-is from the source to the destination.
· Name mode: identifiers are translated by matching user and group names between the two authentication systems.
· Table mode: a translation table explicitly maps each source identifier to a destination identifier.
Replication can be done from both the command line interface (CLI) and the Plastic Graphical User Interface (GUI) tool. All the possible actions are located in a submenu under the branch options, because replication is primarily related to branches. This topic will describe how to perform the most common replication actions from the GUI.
In the GUI, replication and distributed collaboration have been organized in the following actions:
1. Branch actions:
a. Push the selected branch
b. Pull the selected branch
c. Pull a remote branch
2. Package actions:
a. Create a replication package from the current branch
b. Create a replication package from a branch
c. Import a replication package
Figure 17 depicts the different available operations. From the command line, all the operations are issued from a single command, but the GUI makes a distinction between push (move changes from your server to a destination) and pull (bring changes from a remote repository to yours) actions.
As was mentioned before, all replication actions can be accessed from the branch menu (check Figure 18).
The options push this branch, pull this branch, and create replication package from this branch are related to the branch currently selected in the branch view. The other options (pull remote branch, create replication package, and import replication package) are generic replication actions which are not constrained to the current branch, but are located under the branch menu to keep all the replication options together.
Whenever you want to push your changes to a remote repository, select push this branch on the branch menu. Pushing your changes means sending the changes made on the selected branch to a remote repository.
If the branch already exists in the destination repository, the changes will be synchronized. A warning message will show up if there are conflicting changesets in the destination. Then, the developer will have to reconcile changes by first pulling the branch to the local server and then pushing it, once the merge conflicts have been resolved.
If the branch doesn’t exist in the destination repository, a new branch will be created (identified by the same GUID used on the source repository).
See Figure 19 for a detailed explanation.
Once you’ve pushed your branch to a different repository, the branch can be modified remotely. At some point in time, you’ll be interested in retrieving the changes made remotely to your branch. In order to do that, you have to use the pull this branch action from the replication branch menu.
The dialog box depicted in Figure 20 is very similar to the one used to push changes, but this time your server is located on the right, as the destination of the operation.
When you pull changes from a remote branch, a subbranch can be potentially created if there are conflicting changesets on the two locations.
Another common scenario during replication is importing a branch from a remote repository into yours in order to start making changes or create child branches from it.
In order to perform the import, use the pull remote branch option. The dialog box shown in Figure 21 will be displayed. Notice that this time you can choose the source server, repository, branch, and destination repository on your server.
As described in chapter 7, different Plastic SCM servers can use different authentication modes. By default, when you try to connect to a remote server, you’ll be using your current profile (the configuration used to connect to your own server). Sometimes, though, the default profile won’t be valid on the remote server.
In order to configure Plastic SCM to connect to a remote server with a different authentication mode, use the advanced options button on the replication dialog. It will pull up a dialog like the one in Figure 22.
The dialog box shows the profile currently selected (the default one on the screenshot) and also the translation mode (refer to chapter 7 for more information) and the optional translation table.
You can have different authentication profiles created from previous replication operations, and you can list them or create new ones by pressing the browse button located to the right of the remote server configuration profile edit box.
It will display a dialog box like the one in Figure 23 which will allow you to select, edit, create, or remove a profile.
So far, all the steps have been focused on setting up the replication process. Once the operation is correctly configured, press the replicate button and the replication progress dialog box will appear, as explained in Figure 24.
The replication operation is divided into three main states: fetch metadata, push metadata, and transfer revision data. The first one happens on the source server, the second one on the destination server, and the third one involves the two servers as data is transferred from the source to the destination.
At any point in time, the operation can be canceled by pressing the cancel button.
When the replication operation finishes, a summary is displayed, containing detailed information about the number of objects created.
A replication package can be created from a branch on your repository or from any branch on any server you can connect to. In order to create a package from the selected branch in the branch view, click on create replication package from this branch. If you want to create a package from any remote branch, click on create replication package on the replication menu.
Figure 25 shows the package creation dialog. It will generate a replication package from the selected branch which will contain all data and metadata from the branch. It can be used to replicate between servers when no direct connection is available.
From the replication menu select import replication package and select a package file to be imported. The dialog box is shown in Figure 26.
The Branch Explorer is one of the core features of the Plastic SCM GUI, and it has been greatly improved in recent releases to deal with distributed scenarios. That’s why it is now called the distributed Branch Explorer, or DBrEx for short.
Consider two replicated servers. The second one first replicated the main branch from the first; later, a second branch was created and developers worked on it. At a certain point in time it will look like Figure 27.
At this point, each of the servers contains part of the development, but prior to 4.0, there wasn’t a good way to understand the whole branching schema other than connecting and browsing each repository Branch Explorer separately.
The DBrEx is able to render a distributed diagram by collecting data from different sources and then combining the changesets and branches into a single interactive diagram, as Figure 28 shows.
There are several options for combining more than one replicated repository into the same DBrEx diagram. The first one creates a combined render including all the changesets and branches coming from the selected replication sources. Figure 29 shows you how to start configuring the diagram.
Once you click on one or more replication sources the distributed diagram will be rendered as depicted in Figure 30.
This way, the Distributed Branch Explorer introduces a new way to understand how the project and branches evolve across different replicas.
It is also possible to run the replication operations from the DBrEx, so pulling a remote branch is now as easy as selecting the remote branch rendered in the DBrEx and clicking on “pull this branch”. Remote branches and changesets are available for “diffing” too, which greatly enhances your work with distributed changes.
It is possible to right-click a remote branch or changeset on the DBrEx to explore and understand what was modified remotely. This way developers or integrators can better understand what changes are going to be pulled from the remote sources prior to completing the operation. The following figure shows the options enabled on a remote changeset.
Figure 31. Menu options (diffing) on remote changesets
Sometimes it is not necessary to render the whole distributed diagram because the SCM manager or developer needs to focus on a specific branch only.
Figure 32 shows the Branch Explorer / Show remote changesets menu option which allows you to select a remote source to decorate a branch with remote data to understand what needs to be pulled, see explorer differences, and trigger replication commands.
Plastic SCM is all about helping teams embrace distributed development. To do so, we enhanced the DBrEx; but in order to deal with hundreds of distributed changesets, a new view has been created: the Sync View.
The Sync View enables you to synchronize any pair of repositories easily, browsing and diffing the pending changes to push or pull.
In order to take advantage of the new Sync View it is necessary to configure one. The entry point to the Sync View has been placed on the left menu bar as depicted by Figure 33.
The entry point will let you browse all the defined sync views once you’ve created them.
A synchronization configuration consists of selecting a source repository and a destination repository; running the synchronization then finds the outgoing and incoming changesets.
Figure 34 guides you through the process of setting up a sync view.
Whenever you define a new sync view configuration, you’ll have to define a source repository and a destination. For example:
· You can set up a sync view with your code repo at your laptop as source and your project repo at the central server as destination to use it to push and pull changesets from your central server to your laptop and vice versa.
· You can set up a sync view between your main server and a mirror, to use it to keep the mirror in sync.
· You can set up a sync view to keep two servers, at two distant locations, in sync.
The sync view details show the branches to be synchronized, grouped into “outgoing” and “incoming”, with the related changesets under each. Figure 35 shows a sync view containing these details.
The menus available for the destination repository, branches, and changesets are depicted in Figure 36.
As Figure 37 shows, it is possible to define more than one source-destination pair on a single view. This is useful when you need to replicate more than one repository between the same two servers.