I realise that there are tools that are called / perform 'continuous integration', e.g. Travis and Jenkins.
I have never really understood what continuous integration actually means.
I would also like to know: do these kinds of tools automatically create and run tests for your code? Do programmers using these tools still need to write their own tests?
03-22-2022 11:48 PM
Semantics. The TL;DR of the article: set up your CI as a pre-submission model instead of post-submission.
In reality, I find you actually need a mix, although you should favour pre-submission. Anything that's quick & cheap to do (e.g. building source code incrementally, running a set of unit tests) goes in pre-submission. Anything more expensive goes into post-submission CI. Post-submission CI should be the exception, not the rule.
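For example (a minimal sketch, not our actual setup; the "slow" marker and test names are made up), with pytest you can tag tests by cost and let each stage filter on the tag:

    # test_example.py -- hypothetical tests; the "slow" marker is an illustration
    # (in a real project you would register it in pytest.ini)
    import pytest

    def test_config_parsing():
        # cheap unit test: runs in the pre-submission stage
        assert int("42") == 42

    @pytest.mark.slow
    def test_generate_full_report():
        # expensive test: deferred to the post-submission run
        pass

The pre-submission job then runs pytest -m "not slow" and the post-submission job runs pytest -m slow.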
In our system, we have several types of CI (a rough sketch of how the tiers could be wired up follows the list):
Feature-testing: every branch a developer sends out for review is tested with roughly the same set of tests as when the code is submitted.
Pre-submission: every feature gets a merge request. The merged result goes through testing. Once everything passes, the review is closed out, the bug tracker is updated & the original feature branch is deleted. The new master is published.
Long-running CI: we have reports that can take 4 hours to generate. On the assumption that most of the time nothing has changed, this runs as a post-submission step (to minimize interruption).
Clean builds: clean builds of the deliverable executable we ship to another team. These take longer & are unlikely to break if pre-submission passed.
Large-scale integration testing: we have a set of integration tests that can run for a very long time.
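To make those tiers concrete, here's a rough dispatcher sketch (purely illustrative; the stage names and commands are not our real ones) that picks a suite per trigger:

    # run_ci.py -- hypothetical dispatcher; stage names and commands are illustrative
    import subprocess
    import sys

    SUITES = {
        "feature": ["pytest", "-m", "not slow"],         # feature-branch testing
        "pre-submission": ["pytest", "-m", "not slow"],  # gate on the merged result
        "post-submission": ["pytest", "-m", "slow"],     # long-running reports & builds
    }

    def main():
        stage = sys.argv[1] if len(sys.argv) > 1 else "pre-submission"
        # propagate the suite's exit code so the pipeline fails when tests fail
        sys.exit(subprocess.call(SUITES[stage]))

    if __name__ == "__main__":
        main()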
A good rule of thumb: if you find a post-submission test breaking frequently, you are well served by finding a cheaper proxy test that correlates strongly with the failure you're seeing & adding it as a pre-submission test.
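As a purely hypothetical example: if the 4-hour report job mostly breaks because input records stop matching the expected schema, a seconds-long schema check over a small checked-in sample can be that proxy:

    # test_schema_proxy.py -- hypothetical cheap proxy for an expensive failure
    import json

    REQUIRED_KEYS = {"id", "timestamp", "value"}  # assumed schema, for illustration

    def test_sample_records_have_required_fields():
        # fast pre-submission check that catches the breakage the long
        # post-submission report job would otherwise only surface after merge
        with open("sample_records.json") as fh:  # small sample committed to the repo
            records = json.load(fh)
        for record in records:
            assert REQUIRED_KEYS.issubset(record)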
03-23-2022 03:42 AM