Handle huge projects (300k files is a regular count for a workspace; 1.5 million is big).
Collaborate effectively on code & art among different teams (and studios) in distant locations – potentially transferring gigabytes.
Handle huge repository sizes including really big binaries (typically game art, both in raw and processed formats).
Artists work on files that can’t be merged – so they need exclusive checkouts (locking).
Need to work centralized (without local repositories) in order to save space and make many team members feel at home with the workflow. Since teams can be pretty big, server scalability and performance under heavy load is a must.
Flexible workspace configuration: being able to work on sparse trees. It is important to avoid losing time downloading huge binaries that won’t be used all the time.
Direct access to the version control engineering team: tools teams inside game studios face huge challenges to keep the pipeline up and running. Engineer to engineer communication saves them precious time.
A clear migration path from the current version control into the new one, so that the move is as painless as possible.
Good ways to structure the codebase hierarchies to simplify component oriented development: solid alternatives to monolithic repositories that can be hard to maintain and evolve.
In short: game dev teams need the distributed and branching/merging capabilities of modern version control systems, but can't afford to lose any of the strengths of their current tools: performance, scalability, exclusive locks and centralized workflow support.
Under some specific circumstances teams will prefer proxies to a replicated server.
The benefits of distributed/replicated setups over proxy-based solutions are clear: if the connection to the main server goes down, teams depending on proxies won't be able to continue working, while teams with a replicated server can.
That’s the reason why we always encourage the use of distributed servers instead of proxies for distributed scenarios.
Plastic is all about flexibility, though, so a proxy server is available.
There are scenarios where we recommend the use of proxies:
Under these circumstances, introducing a proxy server on each network segment can dramatically improve overall performance by reducing data requests (updates) to the central server.
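The idea behind a proxy can be sketched as a read-through cache: the first request for a revision goes over the WAN to the central server, and every later request from the same segment is served from the local cache. This is a minimal illustration of the concept, not Plastic's actual proxy code; all names are made up.

```python
# Minimal read-through cache sketch of what a version control proxy does
# for a network segment. Illustrative only -- these classes are not
# Plastic's actual API.

class CentralServer:
    def __init__(self):
        self.requests = 0  # how many times the WAN link was used

    def fetch(self, revision_id):
        self.requests += 1
        return f"data-for-{revision_id}"

class ProxyServer:
    def __init__(self, central):
        self.central = central
        self.cache = {}

    def fetch(self, revision_id):
        if revision_id not in self.cache:            # cache miss: go upstream
            self.cache[revision_id] = self.central.fetch(revision_id)
        return self.cache[revision_id]               # cache hit: stay in the LAN

central = CentralServer()
proxy = ProxyServer(central)

# Ten developers on the same segment update to the same revision:
for _ in range(10):
    proxy.fetch("rev-42")

print(central.requests)  # -> 1: the WAN link was hit only once
```

The same cache-hit ratio is why a proxy only pays off when many clients on the segment request overlapping data.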
Game projects typically share some key structural elements:
Different game projects will share the ‘engine’ code that will evolve with the contributions of a core engine team and the different game teams.
While in other version control systems it is usual to define a big monolithic repository, in Plastic the best practice is to use a different repository for each component and then link them together using Xlinks.
Smaller and cohesive repositories end up being a more logical way to design the repository structure and provide additional advantages:
The following sections describe how to model common game scenarios using Plastic and Xlinks.
Each game sharing a common engine will use a similar structure:
Alternatively the “game proj” repo can be avoided and the team can use a simpler structure like this one:
Each different game in the studio will follow a similar repository pattern, reusing as many shared components as required.
An engine developer will typically need a “sample game” in order to test different improvements with real code during development.
In order to model this requirement, the repositories will be linked in the following way:
In order to test the “engine” with more games, the structure will be as follows:
Artists will use an ‘art’ repository (normally ‘raw art’) and will work on a single branch.
They will use “locking” to prevent concurrent access to the files they’re working on, which are typically unmergeable.
The structure will be as follows:
While working with different studios we found out that transferring large amounts of data between distant sites is a key operation for many of them.
That’s why we included an improved ‘WAN network channel’ in Plastic SCM 5.4.
Data transfer between servers hosted on different continents faces different problems than LAN scenarios.
Typically when replicating data between distant servers through the Internet, applications have to deal with high latency, so even when the bandwidth is high the final transfer rate will be greatly reduced.
TCP is not ideal under these circumstances, which is why the new WAN channel takes advantage of a UDP-based protocol to improve the transfer rate. A 2.3x improvement is easy to achieve with 100 ms latency, and as the latency increases, the WAN channel outperforms the TCP one by up to 5-10 times.
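The latency effect is easy to quantify: with a fixed TCP window, maximum throughput is bounded by window size divided by round-trip time, no matter how much raw bandwidth the link has. A back-of-the-envelope sketch, assuming the classic 64 KB window without window scaling:

```python
# Back-of-the-envelope: max TCP throughput = window_size / round_trip_time.
# With a classic 64 KB window (no window scaling), latency alone caps the
# transfer rate regardless of the link's bandwidth.

WINDOW_BYTES = 64 * 1024  # 64 KB TCP window

def max_throughput_mbit(rtt_ms):
    """Upper bound on throughput in Mbit/s for a given round-trip time."""
    bytes_per_sec = WINDOW_BYTES / (rtt_ms / 1000.0)
    return bytes_per_sec * 8 / 1_000_000

for rtt in (1, 20, 100, 300):  # LAN .. intercontinental
    print(f"RTT {rtt:3d} ms -> at most {max_throughput_mbit(rtt):7.1f} Mbit/s")
```

At 100 ms the bound is roughly 5 Mbit/s, which is why a protocol that is not window-limited can be several times faster on the same link.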
One of the key features for game dev teams is to be able to use highly scalable centralized servers.
We run frequent scalability tests using automated bots that simulate actual users to tune Plastic under heavy load.
We have an Amazon EC2 based setup to enable teams to connect and check scalability by themselves. The typical scenario is described here including a screencast showing how a developer using the Plastic GUI is affected by the load created by hundreds of concurrent users.
This section explains how Plastic scales using different operating systems and database backends.
Each “test bot” performs the following operations:
Loop 100 times:
So each “test bot” performs 100 checkins of 10 files each.
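In sketch form, each bot's workload looks like the following; the `checkin` helper is a hypothetical stand-in for the real bot code, which drives an actual Plastic client.

```python
# Sketch of the load each "test bot" generates: 100 iterations, each one
# checking in 10 files. checkin() is a hypothetical stand-in for the real
# client calls made by the bot.

def run_test_bot(checkin):
    """Run one bot: 100 checkins of 10 files each."""
    total_files = 0
    for iteration in range(100):
        files = [f"file_{iteration}_{i}.txt" for i in range(10)]
        checkin(files)                 # one checkin of 10 modified files
        total_files += len(files)
    return total_files

# With a dummy checkin, one bot touches 1,000 files across 100 checkins:
print(run_test_bot(lambda files: None))  # -> 1000
```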
Code base size: 197,356 files, 15,179 directories – 2.59 GB
OpenSUSE 12.3 x64, 14.0 GB RAM, 3.00 GHz, 8 processors
MySQL (7GB innodb) MySQL-server-5.6.15-1.linux_glibc2.5.x86_64.rpm
Windows 7 Pro SP1 64-bit, 8 GB RAM, 3.60 GHz Intel Xeon E5-1620 0
Windows Server 2012 R2 x64, 14.0 GB RAM, 3.00 GHz, 8 processors
SQLServer 2012 (Limit to 7GB max mem)
MySQL (7GB innodb) mysql-installer-community-220.127.116.11
Linux – std sgen means the test uses the standard sgen garbage collector with no extra params
Linux – boehm means the Plastic server runs on the standard setup, using the Boehm garbage collector
Linux – tweak sgen means the Plastic server uses sgen with the following params (highly recommended for high-load Linux servers): Mono 3.0.3 provided with Plastic SCM 5, with sgen. Server launched with the following Mono environment variable: MONO_GC_PARAMS=nursery-size=64m,major=marksweep-fixed,major-heap-size=2g
MySQL – int ids shows how the overall performance is greatly improved when the database stores object identifiers using ints instead of longs. This improvement is already present in the 5.4 release.
The results show how Plastic scales under a really heavy load scenario. The Windows server + SQL Server combination proves to be the fastest, closely followed by the Linux setup correctly configured to work under heavy load.
It is important to highlight that the number of checkins performed during the test by each 'test bot' is extremely high compared to what a human developer would do. This test with 100 bots can easily be compared to a real scenario with several hundred concurrent developers.
We frequently benchmark Plastic against other key version control systems known for their high performance, and continuously improve key operations such as add, checkin and update.
We’ve compared Plastic, Git and a very well-known commercial version control system extensively used in the game industry. We’ve tested with 3 types of codebases: small, medium and large.
The tests benchmark a full add+checkin (adding the entire code base to the version control) and also a ‘clean update’ (downloading the entire codebase to a clean directory).
While certainly adding a big codebase of 140GB is not something a team will do on a daily basis, it shows how Plastic can handle the scenario. When the data to be checked in is really large, disk and network IO tend to be the bottleneck, so probably the ‘medium’ scenario is the most relevant since even in large repositories the team members will more often do ‘medium’ checkins (or even ‘small’) than large ones.
All the tests have been performed with Plastic SCM 5.4 using the ‘filesystem blob storage’ which saves big blobs (bigger than a certain configurable size) on the filesystem instead of the database. Plastic uses a SQL Server database backend.
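The 'filesystem blob storage' idea can be sketched as a simple size-based dispatch: revisions above a configurable threshold are written to the filesystem, the rest stay in the database. This is illustrative only; the names and the 4 MB threshold below are assumptions, not Plastic's actual configuration.

```python
# Size-based dispatch sketch of a "filesystem blob storage": blobs larger
# than a configurable threshold go to the filesystem, smaller ones stay in
# the database. Names and the 4 MB cutoff are illustrative assumptions.

THRESHOLD_BYTES = 4 * 1024 * 1024  # hypothetical 4 MB cutoff

def store_blob(blob_id, data, db, fs):
    """Route a blob to the database or the filesystem by size."""
    if len(data) > THRESHOLD_BYTES:
        fs[blob_id] = data      # big binary (game art): filesystem
        return "fs"
    db[blob_id] = data          # small source file: database
    return "db"

db, fs = {}, {}
print(store_blob("source.cs", b"x" * 1024, db, fs))                  # -> db
print(store_blob("texture.tga", b"x" * (8 * 1024 * 1024), db, fs))   # -> fs
```

Keeping huge binaries out of the database keeps backend tables small and fast, which matters when repositories mix source code with gigabytes of art.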
When comparing to Git it is important to highlight that Plastic is configured in client/server mode, so all data transfer has to be sent through the network.
Plastic is 7.3 times faster than ‘commercial version control’ doing add + checkin
Plastic is 4 times faster than Git – consider that Git is checking in locally while Plastic is sending data through the network from client to server
This scenario is especially relevant because even if game teams use LARGE repositories, most of the time they'll be doing 'medium' checkins (checking in a few GB is far more common than checking in 142 GB). These cases are where Plastic shines, since it excels at managing the metadata and IO speed is not yet the limiting factor.
Plastic is 6 times faster than ‘commercial version control’ doing add + checkin
Plastic is 3.3 times faster than Git – consider that Git is checking in locally while Plastic is sending data through the network from client to server.
Plastic is 3 times faster than the ‘other commercial version control’.
Plastic needs 35 minutes less than Git to complete an add/checkin -> 1.7 times faster.
Note: the Git test was repeated several times because we found occasional issues (out of memory).
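As a sanity check, the two figures above are mutually consistent: "35 minutes less" together with "1.7 times faster" implies a roughly 85-minute Git run versus a 50-minute Plastic run.

```python
# Sanity check on the reported figures: if Plastic needs 35 minutes less
# than Git and is 1.7x faster, the implied times follow from
#   git / (git - 35) = 1.7  ->  0.7 * git = 1.7 * 35  ->  git = 85
saved_minutes = 35
speedup = 1.7

git_minutes = speedup * saved_minutes / (speedup - 1)
plastic_minutes = git_minutes - saved_minutes

print(round(git_minutes, 1), round(plastic_minutes, 1))   # -> 85.0 50.0
print(round(git_minutes / plastic_minutes, 1))            # -> 1.7
```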
CPU: 4CPU, 14 ECU (x64) / Intel Xeon CPU E5-2670 v2 @ 2.50GHz
RAM: 30.5 GB
OS: Windows Server 2012
Server machine: 64.5 MB/s read speed (HDTune)
Client machine: 145 MB/s read speed (HDTune)
Write speed: 195 Mb/s (cm iostats)
Meet Digital Legends Entertainment as they explain the challenges they faced delivering new multi-platform titles and how they transitioned to Plastic SCM to reduce their time to market.
Digital Legends moved from a centralized workflow to a fully parallel one, embracing branching and merging and connecting distributed teams through Plastic SCM.
“The Telltale team adopted Plastic SCM and had contributed to design and test the new Plastic Gluon to improve our version control workflow”
Zac Litton, CTO at Telltale Games.