One of the key features for game dev teams is the ability to use highly scalable centralized servers.
We run frequent scalability tests using automated bots that simulate actual users to tune Plastic under heavy load.
We have an Amazon EC2-based setup that lets teams connect and check scalability by themselves. The typical scenario is described here, including a screencast showing how a developer using the Plastic GUI is affected by the load created by hundreds of concurrent users.
This section explains how Plastic scales using different operating systems and database backends.
Each "test bot" performs 100 checkins of 10 files each.
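The bot workload described above can be sketched as follows. This is a minimal simulation: the checkin step is stubbed out with a callback, whereas the real bots drive the Plastic server (for example through the `cm` command-line client) — the harness details here are assumptions, not the actual test code.

```python
import tempfile
from pathlib import Path

CHECKINS = 100        # checkins performed by each bot
FILES_PER_CHECKIN = 10

def run_bot(workspace: Path, checkin=None):
    """Simulate one test bot: 100 checkins of 10 modified files each.

    `checkin` stands in for the real operation (e.g. invoking the
    Plastic server via the cm CLI); when omitted, files are only
    written locally and the checkin is skipped.
    """
    performed = 0
    for ci in range(CHECKINS):
        batch = []
        for f in range(FILES_PER_CHECKIN):
            path = workspace / f"file_{f}.txt"
            path.write_text(f"change {ci}\n")  # modify the file for this checkin
            batch.append(path)
        if checkin:
            checkin(batch)  # commit the 10 modified files
        performed += 1
    return performed

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as ws:
        print(run_bot(Path(ws)))
```

Running many such bots concurrently against a single server is what produces the heavy-load scenario measured below.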
Code base size: 197,356 files, 15,179 directories – 2.59 GB
OpenSUSE 12.3 x64, 14.0 GB RAM, 3.00 GHz, 8 processors
MySQL (7GB innodb) MySQL-server-5.6.15-1.linux_glibc2.5.x86_64.rpm
Windows 7 Pro SP1 64-bit, 8 GB RAM, 3.60 GHz Intel Xeon E5-1620 0
Windows Server 2012 R2 x64, 14.0 GB RAM, 3.00 GHz, 8 processors
SQL Server 2012 (limited to 7 GB max memory)
MySQL (7GB innodb) mysql-installer-community-184.108.40.206
Linux – std sgen means the test uses the standard sgen garbage collector with no extra parameters.
Linux – boehm means the Plastic server runs on the standard setup, using the Boehm garbage collector.
Linux – tweak sgen means the Plastic server uses sgen with the following parameters (highly recommended for high-load Linux servers): Mono 3.0.3, provided with Plastic SCM 5, with sgen. The server is launched with the following Mono environment variable: MONO_GC_PARAMS=nursery-size=64m,major=marksweep-fixed,major-heap-size=2g
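For reference, the tuned sgen configuration could be applied like this before starting the server. The GC parameters are the ones quoted in the text; the server path and launcher name are illustrative assumptions — adjust them to your installation.

```shell
# Tune Mono's sgen GC for a high-load Plastic server (values from the text above)
export MONO_GC_PARAMS=nursery-size=64m,major=marksweep-fixed,major-heap-size=2g

# Start the Plastic SCM server (path/command shown here is an assumption)
/opt/plasticscm5/server/plasticd start
```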
MySQL – int ids shows how the overall performance is greatly improved when the database stores object identifiers using ints instead of longs. This improvement is already present in the 5.4 release.
The results show how Plastic scales under a really heavy load scenario. The Windows server + SQL Server combination proves to be the fastest, closely followed by the Linux setup correctly configured to work under heavy load.
It is important to highlight that the number of checkins performed during the test by each "test bot" is extremely high compared to what a human developer would do. This test with 100 bots can easily be compared to a real scenario with several hundred concurrent developers.