For many teams the version control system has become a bottleneck. They are slowed down by common operations that should be blazingly fast but end up taking minutes or getting blocked by locks.
Normally these teams share some or all of the following features:
Distributed Version Control Systems (DVCS) can solve the problem by removing the need for a central server and hence the potential bottleneck.
But a DVCS is not always the solution for every team, especially when big files are involved, as is typical in gaming, aeronautics and other big-asset projects. The reasons are:
When that happens, central version control is still the solution, but it must combine the best features of DVCS systems: super-fast, lightweight, flexible branching and efficient, fast and precise merging.
While Plastic SCM is a full-featured DVCS, it is the only system able to support both working modes concurrently: central server and distributed.
Plastic SCM is also the only DVCS able to do partial replication: there is no need to clone the full repository; even a single changeset can be replicated.
But when a central server is mandatory, Plastic SCM can scale up to handle hundreds of concurrent users.
The goal is to show a heavy-load scenario in action and check how one developer can perform version control actions while the server is fully loaded.
To describe how Plastic SCM behaves under heavy load, we've created the following test scenario on Amazon:
Item | Configuration |
---|---|
Server spec | 20 GB RAM, 2 x Intel Xeon X5570 (4 cores per processor, 2 threads per core, 2.7 GHz), 16 logical cores, Amazon server running Windows Server 2012 |
SQL Server 2012 | Configured to use up to 12 GB RAM |
Plastic SCM server | Configured to use the server memory profile and to make extensive use of the changeset-tree cache; local SQL Server as backend |
Working copy size | 350k files, 25k directories, 20 GB total; biggest file: 1 GB |
Repository size | 5k branches, 500k changesets, 8 million revisions |
Server network interfaces | 10 Gbps |
Client network interfaces | 1 Gbps |
Client operation: every client will go through the following steps:
In the meantime, a separate client machine is manually operated to check how normal operations get affected under heavy load.
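The bot implementation itself isn't listed in this report; the following minimal sketch only illustrates the shape of such a harness. The number of bots, the per-bot workspace layout, the edit-and-checkin cycle and the `cm checkin` invocation are all assumptions standing in for whatever the real test-bots execute.

```python
import subprocess
import time
from concurrent.futures import ThreadPoolExecutor

BOTS = 100              # simulated concurrent developers
CHECKINS_PER_BOT = 100  # 100 bots x 100 checkins = 10,000 checkins

def run_bot(bot_id: int) -> float:
    """Loop over an edit-and-checkin cycle in this bot's own workspace; return elapsed seconds."""
    workspace = f"wk_bot_{bot_id}"            # assumption: one pre-created workspace per bot
    target = f"{workspace}/bot_{bot_id}.txt"  # assumption: a file already under version control
    start = time.time()
    for i in range(CHECKINS_PER_BOT):
        with open(target, "a") as f:          # make a local change
            f.write(f"change {i}\n")
        # Check the change in; this cm invocation is a placeholder for the real bot steps.
        subprocess.run(
            ["cm", "checkin", f"bot_{bot_id}.txt", "-c", f"bot {bot_id} change {i}"],
            cwd=workspace, check=True, capture_output=True,
        )
    return time.time() - start

if __name__ == "__main__":
    wall_start = time.time()
    with ThreadPoolExecutor(max_workers=BOTS) as pool:
        per_bot = list(pool.map(run_bot, range(BOTS)))
    print(f"total wall time: {time.time() - wall_start:.1f}s, "
          f"average per bot: {sum(per_bot) / len(per_bot):.1f}s")
```

Rerunning a harness like this with an increasing bot count is one natural way to build the completion-time figures discussed below.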
The following table and graphic describe the time to complete the entire test as the number of concurrent developers grows. Each test is run against the same database, so the database keeps growing with each test execution.
In order to check how the server responds while handling a really heavy load, we kept the test-bots running while we performed some regular operations from a GUI client on a different machine.
We launched the load test with 100 concurrent test-bots and then ran a regular checkout, checkin and merge cycle from the GUI client to check server responsiveness (a timing sketch of such a cycle follows below).
The 100 concurrent test-bots create 10,000 checkins (ten thousand!) in about 5 minutes, roughly 33 checkins per second, which is a much higher load than 100 "real" developers working concurrently would generate.
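The report doesn't include the exact commands run from that client, but the idea is easy to reproduce: wrap each operation of the cycle in a timer and watch how long the loaded server takes to answer. The sketch below is an assumption rather than the actual test script; the workspace path, file names and `cm` arguments are placeholders for whatever operations you want to measure.

```python
import subprocess
import time

def timed(label: str, args: list[str], cwd: str) -> None:
    """Run one version control command and print how long the server took to answer it."""
    start = time.time()
    subprocess.run(args, cwd=cwd, check=True, capture_output=True)
    print(f"{label}: {time.time() - start:.2f}s")

# Hypothetical workspace and file names; replace with real ones.
workspace = "c:/wkspaces/client_101"

timed("checkout", ["cm", "checkout", "src/foo.c"], workspace)
timed("checkin",  ["cm", "checkin", "src/foo.c", "-c", "edit made while the bots run"], workspace)
timed("merge",    ["cm", "merge", "br:/main/task001", "--merge"], workspace)
```

Printing each operation's latency separately makes it easy to see which step, if any, degrades first as the bot load ramps up.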
The following screenshots show how the server behaves while dealing with the workload sent by the 100 concurrent clients.
You can see the evolution of memory and CPU usage, as well as the status of the 16 logical cores during the test.
The following screencast shows an additional, human-controlled client (number 101) performing some basic version control operations while the server is under heavy load.
The purpose of the screencast is to show how the system responds while dealing with a high load.