Plastic SCM under heavy load

Plastic SCM scalability

The Plastic SCM server can handle high loads, serving hundreds of developers from a single server and responding to their requests in a timely manner.

When other version control systems fail to scale and end up blocking developers, Plastic SCM keeps handling the load. Find out how.

Version control performance is key for many teams

For many teams the version control system becomes a bottleneck. They are slowed down by common operations that should be blazingly fast but end up taking minutes or blocking entirely.

Normally these teams share some or all of the following characteristics:

  • Big teams: hundreds of developers loading the central server.
  • Big projects: huge codebases with hundreds of thousands of files (200k to 400k files).
  • Big binaries: individual files of 1GB or even more.
  • Need efficient branching: they are using (or would like to use) modern branching patterns.
  • Require fast merging: to deal with the increased number of branches.

Distributed Version Control is not always an option

Distributed Version Control Systems (DVCS) can solve the problem by removing the need for a central server and hence the potential bottleneck.

But a DVCS is not always the solution, especially when big files are involved, as is typical in gaming, aeronautics and other asset-heavy projects. The reasons are:


  • They can’t afford to have a repository clone on each developer machine. This can happen due to size or security restrictions.
  • Certain roles in the team are not version control savvy, so it is easier to have them interact directly with the central server.

When that happens, centralized version control is still the solution, but it should combine the best features of DVCS systems: super-fast, lightweight, flexible branching and efficient, precise merging.

Plastic SCM is a DVCS but can work centralized

While Plastic SCM is a full-featured DVCS, it is the only system able to support both working modes concurrently: central server and distributed.

Plastic SCM is also the only DVCS able to do partial replication: there is no need to clone the full repository, and even a single changeset can be replicated.

But when a central server is mandatory, Plastic SCM can scale up to handle hundreds of concurrent users.

Scenario description

The goal is to show a heavy load scenario in action and check how one developer can perform version control actions while the server is fully loaded.

To describe how Plastic SCM behaves under heavy load conditions we’ve created the following test scenario in Amazon:

  • Server spec: 20GB RAM, 2 x Intel Xeon X5570 (4 cores per processor, 2 threads per core, 2.7GHz) for 16 logical cores, Amazon server running Windows Server 2012.
  • SQL Server 2012: configured to use up to 12GB RAM.
  • Plastic SCM server: configured with the server memory profile and to make extensive use of the changeset-tree cache; local SQL Server as backend.
  • Working copy size: 350k files, 25k directories, 20GB in total; biggest file: 1GB.
  • Repository size: 5k branches, 500k changesets, 8 million revisions.
  • Server network interface: 10Gbps.
  • Client network interface: 1Gbps.

Client operation: every client goes through the following steps (a rough sketch of this loop is shown after the list):

  • Create a branch starting from the last changeset on the “main” branch.
  • Checkin 100 times.
  • Modify between 1 and 15 files on each checkin.
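
To make the bot loop concrete, here is a minimal sketch in Python that drives the `cm` command line client. The workspace path, branch name and exact `cm` subcommand and flag spellings (`mkbranch`, `switch`, `checkout`, `checkin -c`) are assumptions for illustration only; the actual test-bots are not published, and CLI syntax differs slightly between Plastic SCM versions.

```python
import random
import subprocess
from pathlib import Path

# Illustrative values only: not taken from the original test setup.
WORKSPACE = Path("C:/wkspaces/bigproject")   # hypothetical workspace location
BRANCH = "br:/main/loadtest-bot-001"         # hypothetical branch name


def cm(*args):
    """Run a 'cm' command inside the workspace and fail if it errors out."""
    subprocess.run(["cm", *args], cwd=WORKSPACE, check=True)


def run_bot(checkins=100, max_files=15):
    # Create a branch starting from the last changeset on "main", then switch to it.
    cm("mkbranch", BRANCH)
    cm("switch", BRANCH)

    candidates = [p for p in WORKSPACE.rglob("*") if p.is_file()]

    for i in range(checkins):
        # Modify between 1 and 15 files on each checkin.
        picked = random.sample(candidates, random.randint(1, max_files))
        cm("checkout", *[str(p) for p in picked])
        for path in picked:
            with open(path, "a") as f:
                f.write(f"load-test change {i}\n")
        # Flag spelling for the comment ('-c') may need adjusting per version.
        cm("checkin", "-c", f"load-test checkin {i}", *[str(p) for p in picked])


if __name__ == "__main__":
    run_bot()
```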

In the meantime, a separate client machine is operated manually to check how normal operations are affected under heavy load.

Server scalability test

The following table and graphic describe the time to complete the entire test as the number of concurrent developers grows. Each test runs against the same database, so the repository keeps growing from one test execution to the next.
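
As a point of reference, a driver along these lines (again just a sketch, not the actual test harness) could launch one data point of the test. The `loadtest_bot` module, the `run_bot` loop and the developer counts are hypothetical placeholders.

```python
import time
from concurrent.futures import ThreadPoolExecutor

from loadtest_bot import run_bot   # the hypothetical per-client loop sketched above


def run_load_test(concurrent_developers):
    """Time one data point: N concurrent bots, each doing 100 checkins."""
    # In the real test every bot has its own workspace (and client machine);
    # here each thread simply drives one bot.
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=concurrent_developers) as pool:
        futures = [pool.submit(run_bot) for _ in range(concurrent_developers)]
        for future in futures:
            future.result()   # re-raise any bot failure
    return time.monotonic() - start


# The same database is reused, so each data point starts from a bigger repository.
for developers in (25, 50, 75, 100):   # placeholder developer counts
    elapsed = run_load_test(developers)
    print(f"{developers} concurrent developers: {elapsed:.1f} s")
```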

Behavior and responsiveness under heavy load

To check how the server responds while handling a really heavy load, we kept the test-bots running while performing some regular operations from a GUI client on a different machine.

We launched the load test with 100 concurrent test-bots and then ran a regular checkout-checkin and merge cycle from the GUI client to check server responsiveness.

The 100 concurrent test-bots create 10,000 checkins (10 thousand!) in about 5 minutes, which is a much higher load than 100 “real” developers would generate working concurrently.
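
That is 100 bots x 100 checkins each; at roughly 10,000 checkins in 300 seconds, the server sustains on the order of 33 checkins per second for the whole run.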

Server under heavy load

The following screenshots show how the server behaves while dealing with the workload sent by the 100 concurrent clients.

You can see the evolution of memory and CPU usage, plus some screenshots showing the status of the 16 logical cores during the test.

Additional client performing operations while the 100 test-bots perform checkins

The following screencast shows an additional human-controlled client (number 101) performing some basic version control operations while the server is under heavy load.

The purpose of the screencast is to show how the system responds while dealing with a high load.
