All platforms – Pending changes are now converted into checkouts before calculating merges.
This is a very important change to improve usability, but it might be surprising for long-term Plastic users.
== Motivation ==
Suppose you have foo.c modified in your workspace and then you merge from main/task001 where foo.c was also modified.
Previously, the merge didn't show foo.c as a conflict, because the merge calculation only considered checkouts.
If you checked out foo.c first, then the merge preview showed the conflict.
The merge was still correct, because changes were "promoted to checkouts" during the merge.
But let's look at a different example now: suppose you moved art/ to game-art/ locally, and there were changes inside art/ in main/task001. The merge didn't consider your local changes, which could end up causing trouble.
We changed all this by simply "applying changes" before calculating the merge. This means any locally "changed" file is converted into a "checkout", and local moves and local deletes are also put under control (checkouts, for short).
It is an important change to ensure consistency and avoid corner cases.
== What are checkouts exactly ==
Plastic supports two ways of working:
* You can directly modify foo.c, then go to Pending Changes and checkin.
* Or you can checkout foo.c first, then modify, then checkin.
Checkouts are just a way to tell Plastic "hey, I modified this file" or "hey, I moved this file here", so it doesn't have to guess what happened. Plastic is very good at guessing what you did by looking into the workspace and detecting changes, but it is even faster when it doesn't have to guess, because it already knows a change or a move happened.
Checkouts are not locks. You can checkout foo.c and it won't be locked at all. In fact, checkouts are stored locally and don't impose any performance hit at all.
Checkouts only lock files (a.k.a. exclusive checkouts) when the files are configured to be locked. This way, you can say all .png files must be locked on checkout, and when you checkout game.png it will be locked. Locks are stored on the server, although the performance hit is minimal.
Note for former Perforce users: Plastic checkouts are similar to p4 edit but without the performance hit.
Note for former Git users: Plastic checkouts are like adding something to the Git index.
== How it works ==
When you launch a merge, before actually calculating the merge in the server, your local changes will be converted to "controlled changes" a.k.a. checkouts.
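Conceptually, the new behavior can be sketched like this (a minimal model for illustration only; names such as `promote_to_checkouts` and `launch_merge` are invented and not Plastic's real API):

```python
# Minimal model of "apply changes before merging": every locally
# detected change (changed/moved/deleted) becomes a controlled
# checkout before the merge is calculated.
# All names here are illustrative, not Plastic's real code.

def promote_to_checkouts(pending_changes):
    """Convert every detected local change into a checkout."""
    return [{"path": c["path"], "kind": c["kind"], "status": "checkout"}
            for c in pending_changes]

def launch_merge(pending_changes, calculate_merge):
    # New behavior: promote first, then calculate the merge
    # against controlled changes only.
    checkouts = promote_to_checkouts(pending_changes)
    return calculate_merge(checkouts)

# Example: a local edit and a local move are both considered now.
pending = [{"path": "foo.c", "kind": "changed"},
           {"path": "art/", "kind": "moved"}]
print(launch_merge(pending, lambda cos: [c["path"] for c in cos]))
# ['foo.c', 'art/']
```

The point of the sketch: the merge calculation never sees a raw "pending change" anymore, only checkouts, so local moves and deletes can no longer be silently ignored.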
Windows, Linux - DevOps: A new built-in mergebot is born! Its name: multiliner-bot.
The purpose of this mergebot is to be able to automatically merge a branch to several destination branches dynamically.
The merge is confirmed following an "all-or-nothing" policy. That is: if there are any merge conflicts, or the Continuous Integration system reports a build failure, the branch is rejected and no merges are written to any destination branch, even if some of the merges had no conflicts or their CI builds succeeded.
To define which destination branches a branch should be merged to, the multiliner-bot lets you specify a Plastic attribute name for this purpose. Then, set a value for that attribute on the source branch (a comma-separated list of destination branch names).
Find below a diagram explaining the basics of this multiliner-bot:
* The multiliner-bot requires Plastic Server 126.96.36.19973 or higher to work.
* No labeling support: to avoid label name collisions with several destination branches, this mergebot declines any labeling responsibility.
* The mergebot configuration still requires a 'status' attribute to define when a branch is 'ready', when it 'fails', or when it is 'merged'.
* If you specify several destination branches, and any of the merges fail (due to manual conflicts, or CI plan build), the bot rejects the source branch and marks it as 'failed'.
* If you specify several destination branches, the CI plan for each destination branch triggers sequentially. We will consider adding support for parallel plan triggering in the future.
* This mergebot allows triggering a CI plan after a branch successfully merges to several destination branches. But, if this post-checkin plan fails, it does NOT undo already confirmed merges. You just receive a notification about the post-checkin plan failing (if any notification plug is configured for this mergebot).
* You can configure several notifier plugs with this mergebot. All of them are optional. So far, there are up to two notification plugs in the mergebot configuration template.
* If any of the specified destination branches do not exist, the branch being processed by this mergebot is marked as 'failed'.
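The all-or-nothing policy described above can be sketched like this (a simplified model; the function names are invented for illustration and not the mergebot's actual code):

```python
# All-or-nothing merge policy sketch: calculate every merge and run
# every CI build first; only if ALL succeed are the merges confirmed.
# Names are illustrative, not the multiliner-bot's real code.

def process_branch(source, destinations, try_merge, run_ci):
    """Return 'merged' only if every destination merges cleanly and
    every CI build passes; otherwise reject without writing anything."""
    staged = []
    for dst in destinations:
        merge = try_merge(source, dst)
        if merge is None:              # manual conflicts -> reject all
            return "failed"
        if not run_ci(dst, merge):     # CI build broke -> reject all
            return "failed"
        staged.append(merge)           # nothing checked in yet
    # Only now are all staged merges confirmed at once.
    return "merged"

# One conflicting destination rejects the whole branch:
ok = process_branch("task001", ["main", "release"],
                    lambda s, d: f"{s}->{d}", lambda d, m: True)
bad = process_branch("task001", ["main", "release"],
                     lambda s, d: None if d == "release" else f"{s}->{d}",
                     lambda d, m: True)
print(ok, bad)  # merged failed
```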
Windows, Linux - DevOps: Server and Jenkins Plug: Added support to specify a Jenkins job inside a Jenkins folder as the plan to execute by a mergebot (e.g. trunk-bot). Only top-level plan names were available before this release.
Now, you can specify a Jenkins job inside a folder in your favorite mergebot as the plan to build. Example:
* Your Jenkins server has a folder named "projects"
* Inside that folder, there is a job named "pipeline-debug".
* You can type "projects/pipeline-debug" in the "Continuous Integration" section, provided a Jenkins plug is available for it.
* Internally, the Jenkins plug will try to access the config.xml file that defines the job at a URI path like this: job/projects/job/pipeline-debug/config.xml.
Make sure this URI is available on your Jenkins server. Otherwise, contact support [at] codicesoftware [dot] com and we will provide a custom Jenkins plug that fits your needs.
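The mapping from a job path to its config.xml URI can be sketched like this (assuming each folder level maps to a `job/<name>` segment, as in the example above):

```python
# Build the Jenkins config.xml URI for a job that may live inside
# folders: each path segment becomes a "job/<segment>" URI segment.

def jenkins_config_uri(job_path):
    segments = [s for s in job_path.split("/") if s]
    return "/".join(f"job/{s}" for s in segments) + "/config.xml"

print(jenkins_config_uri("projects/pipeline-debug"))
# job/projects/job/pipeline-debug/config.xml
print(jenkins_config_uri("pipeline-debug"))
# job/pipeline-debug/config.xml
```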
All platforms - Plastic: Merge now has file download progress! And it is ready for Linux, Windows, and macOS.
An image is worth a thousand words:
Why is this important? Well, during merges involving tons of big files, the UX was not good because the GUI didn't say much. So, users usually thought the app was not responding while it was downloading gigabytes of data. This is now finally solved!
This scenario is even more common for teams working on a single branch (game studios, for instance). Now the GUI shows download information while updating to the latest changes before a checkin.
This is part of the ongoing Incoming Changes effort, a much better way to work on a single branch.
All platforms - Triggers: When you run an update, there are two client-side triggers: before-update and after-update. Now you have extra environment variables in both of them; we hope you find them useful for your scripts!
* PLASTIC_INITIAL_CHANGESET: the ID of the changeset your workspace is (or was) pointing at when the update begins.
* PLASTIC_FINAL_CHANGESET: the ID of the changeset your workspace is (or will be) pointing at when the update finishes.
For example, let's say that your workspace is pointing to changeset 254, and you switch to branch /main. The head of /main is changeset 260.
The PLASTIC_INITIAL_CHANGESET will be 254, and the PLASTIC_FINAL_CHANGESET will be 260, for both the before-update and after-update triggers.
Bear in mind that we do not have "source" and "destination" changesets in partial and fast updates. When updating from Gluon, "cm partial update" command, or running a fast update from the GUI, the new environment variables will be -1.
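A before-update or after-update trigger script could read these variables like this (a sketch; `read_update_changesets` is an invented helper name, and remember both variables are -1 for partial and fast updates):

```python
import os

def read_update_changesets(env=os.environ):
    """Read the changeset IDs Plastic exposes to update triggers.
    Returns (initial, final), or None for a partial/fast update,
    where Plastic sets both variables to -1."""
    initial = int(env.get("PLASTIC_INITIAL_CHANGESET", "-1"))
    final = int(env.get("PLASTIC_FINAL_CHANGESET", "-1"))
    if initial == -1 and final == -1:
        return None  # Gluon, "cm partial update", or fast update
    return initial, final

# Example from the text: switching from cset 254 to the head of /main (260)
sample = {"PLASTIC_INITIAL_CHANGESET": "254",
          "PLASTIC_FINAL_CHANGESET": "260"}
print(read_update_changesets(sample))  # (254, 260)
```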
You will notice that the update triggers run often for the plastic-global-config repository. Use filters to fine-tune in which repositories the trigger should run.
All platforms - Plastic, Gluon: A few releases earlier we introduced the "Location" column in the workspaces list. That column indicates the current object the workspace is pointing at. It included the full repository spec with the repository server, which is not useful, as it has a column of its own. Now, the location column shows the object spec without the repository. Check how it looks!
Windows - Plastic: Rejoice! We fixed the tab order of the "Other options" preferences panel.
All platforms - Server: A wrong SSL configuration no longer prevents the server from starting up correctly.
We changed how the server starts up, so a wrong SSL configuration won't block it. The server will listen on the other ports and ignore the failing SSL one.
This is helpful when you wrongly configure an SSL port from the WebAdmin, because now you can go back to the WebAdmin and reconfigure it instead of having to drop to the command line.
Before, the server simply refused to start if it had a wrong SSL certificate password configured.
All platforms - Plastic: we improved the usability of the Create Xlink dialog. Let me explain: an Xlink can be either "read-only" (Xlink) or "writable" (wXlink). Xlinks always point to the same changeset in the target repository, but wXlinks change according to the expansion rules. This means that expansion rules are useless for read-only Xlinks - they only work for wXlinks. Yet when creating and editing read-only Xlinks from the GUI, the expansion rules list and buttons were enabled, which was confusing. No more! Now, when creating or editing a read-only Xlink, you cannot create or edit expansion rules.
Windows - Plastic: In the new Code Review system, when a requested change was applied, if you double-clicked it, it navigated to the changeset where it was applied. But you couldn't navigate to the comment itself anymore.
We changed that behavior so that, if you double-click the comment, you navigate to it. And, if the requested change is applied, you can navigate to the changeset where it was applied by clicking on its status.
Remember, right now, you need to launch the application with "plastic --codereview" to enjoy the new feature.
All platforms - Proxy Server: We made some improvements as part of the effort to modernize the proxy.
1) Improved request log:
Now a Proxy server with the log configured to INFO (logger Proxy) can give quite meaningful info as follows:
INFO Proxy - Request: 1. Type: cache. Files requested: 332. Cache misses: 0. Total time: 109 ms. Downloaded: 0.00 Mb from quake@localhost:6060. Total returned: 1.17 Mb
INFO Proxy - Request: 3. Type: mixed. Files requested: 224. Cache misses: 16. Total time: 62 ms. Downloaded: 0.13 Mb from quake@localhost:6060. Total returned: 1.80 Mb
INFO Proxy - Request: 113. Type: downl. Files requested: 15. Cache misses: 15. Total time: 31 ms. Downloaded: 3.85 Mb from quake@localhost:6060. Total returned: 3.85 Mb
Where the type is:
* cache => full cache hit, everything read from cache.
* downl => everything read remotely.
* mixed => some from network, some from cache.
You can add this to your plasticcached.log.conf to see this log:
<logger name="Proxy"><level value="INFO" /></logger>
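If you want to post-process this log, the request lines are easy to parse. A sketch (the regex matches the format shown above; the helper name is made up):

```python
import re

# Parse a Proxy INFO request line into its main fields.
LINE_RE = re.compile(
    r"Request: (?P<request>\d+)\. Type: (?P<type>\w+)\. "
    r"Files requested: (?P<files>\d+)\. Cache misses: (?P<misses>\d+)\.")

def parse_request_line(line):
    m = LINE_RE.search(line)
    if m is None:
        return None
    d = m.groupdict()
    return {"request": int(d["request"]), "type": d["type"],
            "files": int(d["files"]), "misses": int(d["misses"])}

line = ("INFO Proxy - Request: 3. Type: mixed. Files requested: 224. "
        "Cache misses: 16. Total time: 62 ms.")
print(parse_request_line(line))
# {'request': 3, 'type': 'mixed', 'files': 224, 'misses': 16}
```

For instance, summing `misses` over all `mixed` and `downl` requests gives you a quick cache hit-rate estimate.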
2) The Proxy server now dumps status every 30 seconds, just like a regular Plastic server does. This will help us diagnose problems.
The log is called ServerStats and looks like this (just a fragment):
2019-10-22 14:23:54,541 7 INFO ServerStats - PLASTIC SCM SERVER VERSION: 188.8.131.5243
2019-10-22 14:23:54,544 7 INFO ServerStats - PROCESS INFO
2019-10-22 14:23:54,545 7 INFO ServerStats - Entry                    Value
2019-10-22 14:23:54,545 7 INFO ServerStats - ======================== =======================
2019-10-22 14:23:54,546 7 INFO ServerStats - Proc Id                  14608
2019-10-22 14:23:54,554 7 INFO ServerStats - Handle count             368
2019-10-22 14:23:54,555 7 INFO ServerStats - Thread count             15
2019-10-22 14:23:54,556 7 INFO ServerStats - Non paged system mem     00.02 Mb
2019-10-22 14:23:54,558 7 INFO ServerStats - Paged mem size           21.31 Mb
2019-10-22 14:23:54,582 7 INFO ServerStats - USER STATS
2019-10-22 14:23:54,583 7 INFO ServerStats - Sent 518.43 Mb. Received 233.19 Kb
3) Now you can optionally configure the number of threads in plasticcached.conf. The following configures JUST the threads (although more settings are available):
>cat plasticcached.conf
<PlasticCacheConf>
  <MaxThreads>4</MaxThreads>
</PlasticCacheConf>
All platforms - Proxy Server: We disabled thread abort code that was potentially causing problems and could make the proxy unstable and stop responding to requests.
The Proxy has a safety code to kill a request if it detects that the client aborted the connection. This typically happens when you CTRL-C a command line. When that happens, the Proxy aborts the thread handling the request, and under some circumstances this could make the server very unstable.
We have now removed that abort code.
All platforms - Server: We detected that the server initialized and shut down its internal services several times. This scenario was under control and it didn't affect functionality or performance. Its downside was that it polluted log files. That's fixed now.
macOS - Plastic: In the Branch Explorer the "dynamic date filter" was reset to "A given date" when changing a display option, even if you didn't change the start date filter there! That's fixed now.
All platforms - Plastic, Gluon: We fixed the search text entries in the Plastic toolbar and Gluon search dialog to properly protect regex-like characters. Before this fix, filenames such as 'file (new).txt' or 'file+15.txt' were really difficult to find because search patterns like '(new' or '+15.txt' wouldn't match anything.
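The fix boils down to escaping regex metacharacters in the user's search text before matching. In Python terms, the idea looks like this (a sketch of the concept, not the actual GUI code):

```python
import re

def matches_filename(search_text, filename):
    """Treat the user's search text literally, not as a regex."""
    pattern = re.escape(search_text)  # protects ( ) + . and friends
    return re.search(pattern, filename) is not None

# Unescaped, '(new' is a broken regex and '+15.txt' matches nothing;
# escaped, both match literally:
print(matches_filename("(new", "file (new).txt"))   # True
print(matches_filename("+15.txt", "file+15.txt"))   # True
```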
Windows - Plastic: a null reference exception was thrown in the new Code Review system when you tried to navigate to an applied change and the changeset where the change was applied didn't contain the file where the change was requested. Now it's fixed.
Windows, Linux - Server (DevOps): There was a small issue while configuring a new mergebot. The WebAdmin page did not reload the configuration template when changing mergebot type. Unless you tweaked the URL, you could not configure a mergebot different than the first one on the list. That's now fixed!