Patrick Schriner (with whom I had the pleasure of studying at the University of Bonn) asked us at what project size continuous integration is worthwhile in PHP projects. I'm happy to take up this question, but I'd like to look at it in a slightly broader context.
Continuous integration means that every developer shares their state of development with the team at least once a day, i.e. commits their changes to version control. The introduction of continuous integration is often misunderstood as "we install a Continuous Integration Server". Such a server can automatically fetch the current state of the software from version control and use automated tests to verify that this state works correctly. However, a server is not a prerequisite for developers to regularly synchronize their code. On the contrary, a Continuous Integration Server only pays off if the developers already integrate regularly - and, of course, if there are automated tests that can be run.
If developers integrate their state of development several times a day, they reduce the risk of conflicts when merging their changes. The less frequently you merge, the more likely and the more problematic these conflicts become. The common expression "merge hell" describes the situation a team gets into when integration only takes place every few days, weeks, or months, and then nothing fits together anymore because the developments have diverged too much. And even if such an integration succeeds without a merge conflict, that does not mean the software will actually work the way the developers envisioned it.
For example, a team I worked with a few years ago integrated only once a month - and then spent a week resolving the resulting merge conflicts and getting the software back into a usable state. But this team's real problem was one of communication: the developers did not talk to each other enough, and the merge conflicts in the code merely reflected that.
Modern, distributed version control systems like Git have made short-lived feature branches popular: at the beginning of work on a new feature, a separate branch is created for it, and when the feature is fully implemented, that branch is merged into the main development branch. However, the concept of feature branches only works if these branches really are short-lived. Long-lived branches lead straight back to merge hell unless the changes from the main development branch are integrated into the feature branch at least once a day.
If all developers work on the same main development branch (trunk-based development) and commit to it several times a day, large-scale restructurings of the code (refactorings) can become difficult. As a rule, it is not possible to change all the relevant parts of the code in one go, i.e. in a single commit. And even if a developer succeeds in doing so, there is still a risk that the software now works only on their machine, or only for their colleagues.
The best practice "Branch by Abstraction" helps with large restructurings in trunk-based development. Here, you introduce an abstraction layer in front of the component of the software that you want to change. After refactoring the rest of the system to use the abstraction layer (rather than the component directly), you implement the new version of the component in new classes. Then you adjust the abstraction layer to use the new classes instead of the old ones. Finally, the old implementation - and, if it is no longer needed, the abstraction layer itself - can be removed.
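These steps can be sketched in a few lines of code. The example below uses Python rather than PHP for brevity; all names (PriceCalculator, LegacyPriceCalculator, ConfigurablePriceCalculator) are invented for illustration:

```python
from abc import ABC, abstractmethod

# Step 1: introduce an abstraction layer in front of the component
# that is going to be replaced.
class PriceCalculator(ABC):
    @abstractmethod
    def total(self, net_amount: float) -> float: ...

# The existing component, now hidden behind the abstraction.
class LegacyPriceCalculator(PriceCalculator):
    def total(self, net_amount: float) -> float:
        return net_amount * 1.19  # hard-coded tax rate

# Step 2: implement the new version of the component in new classes.
class ConfigurablePriceCalculator(PriceCalculator):
    def __init__(self, tax_rate: float) -> None:
        self.tax_rate = tax_rate

    def total(self, net_amount: float) -> float:
        return net_amount * (1 + self.tax_rate)

# Step 3: switch the abstraction layer over to the new implementation.
# Callers depend only on PriceCalculator, so each step above can be a
# small, independently releasable commit on the main branch.
calculator: PriceCalculator = ConfigurablePriceCalculator(tax_rate=0.19)
print(round(calculator.total(100.0), 2))  # 119.0
```

Because every intermediate state compiles and passes the tests, the restructuring never requires a long-lived branch.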
An alternative to "Branch by Abstraction" for making trunk-based development easier is the use of feature flags. Here, a new feature is implemented in such a way that it can be activated and deactivated - for example, via a configuration setting. Where feature flags are possible, new features can even be developed in an experiment-driven manner. We have already covered this exciting topic in our article on integration testing.
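A minimal sketch of such a flag, again in Python for brevity (the flag name FEATURE_NEW_CHECKOUT and the checkout functions are invented for illustration):

```python
import os

def new_checkout_enabled() -> bool:
    """Read the flag from the environment.

    In practice the flag might come from a configuration file or a
    feature-flag service; this name is purely illustrative.
    """
    return os.environ.get("FEATURE_NEW_CHECKOUT", "0") == "1"

def checkout(cart_total: float) -> str:
    # The incomplete new flow can live on the main branch and be
    # merged daily, as long as the flag stays off in production.
    if new_checkout_enabled():
        return f"new checkout flow, total {cart_total:.2f}"
    return f"old checkout flow, total {cart_total:.2f}"

print(checkout(49.99))  # with the flag unset: "old checkout flow, total 49.99"
```

Turning the flag on only in a test or staging environment lets the team integrate continuously without exposing unfinished work to users.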
If the build process run by the Continuous Integration Server not only executes tests but also performs static code analysis, we speak of continuous inspection. For every state of the software, metrics are recorded that capture various aspects of its internal quality - the aspects of software quality that are relevant to the developers. For example, it is important that the code is easy to read, understand, adapt, and extend. If it is not, implementing the customer's continuous and usually unpredictable change requests becomes increasingly difficult, and thus more expensive, over time. At some point, even minimal changes lead to unexpected side effects.
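As an illustration of what a continuous inspection step might record, here is a deliberately simple metric in Python (real static-analysis tools compute far richer metrics, and for PHP code you would of course use a PHP tool): the length of the longest function in a source file.

```python
import ast

def longest_function(source: str) -> tuple[str, int]:
    """Return the name and line count of the longest function.

    A very crude readability metric: overly long functions are usually
    hard to read and good candidates for refactoring.
    """
    longest = ("<none>", 0)
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > longest[1]:
                longest = (node.name, length)
    return longest

sample = "def f():\n    a = 1\n    b = 2\n    return a + b\n\ndef g():\n    return 0\n"
print(longest_function(sample))  # ('f', 4)
```

Recorded for every commit, such numbers yield exactly the kind of trend diagrams discussed below.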
The reports generated from the continuous inspection data provide developers with important arguments when they have to explain the concept of technical debt and the need for refactoring to their management or to the customer:
In the last sprint we implemented fewer new features than usual (or none at all). Instead, we cleaned up the code: we removed duplicated code, reduced complexity, wrote missing tests, and so on, as you can see from the trends in these diagrams. Thanks to this cleanup, we have lowered our technical debt and will be able to implement new features faster and more reliably in the coming sprints.
The original question was at what project size continuous integration is worthwhile. We believe that in projects of any size it is worthwhile for all developers to work in small steps and to share their current development state with the team in the form of small commits. Whether a Continuous Integration Server pays off depends less on the size of the project than on the lifetime of the software being developed. For a website that will be in production for only four weeks as part of an advertising campaign, a Continuous Integration Server is less worthwhile than for a mission-critical application that will be in production for years. We see a lot of value in having continuous integration supported by a Continuous Integration Server: it "enforces" that the automated tests are run and that software metrics are collected over time.