If a build has one new error but also one or more resolved errors, the new error is ignored when the severity thresholds are evaluated. You can reproduce this with a simple build that generates a dummy cppcheck.xml containing one error that is unique to every build, so that each build shows +1 new and -1 resolved errors. For the initial build, leave the new failure threshold unset so that it produces a stable baseline. Once that baseline build exists, set the new failure threshold to 0 (any more than zero new errors should mark the build a failure).
Add a shell command as a build step and run the following:
file=/usr/bin/splain; dt=$(date)   # file to point the error at; the date makes each error unique
msg="An error with a date of $dt to make it unique"
lineno=$(( RANDOM % $(wc -l < "$file") + 1 ))
# NOTE: $errno is never set, so it expands to an empty string below.
cat > cppcheck.xml <<HERE
<?xml version="1.0" encoding="UTF-8"?>
<results version="2"><errors>
<error id="PerlErrors" severity="warning" msg="$msg" verbose="WARNING $errno:$msg">
<location file="$file" line="$lineno"/>
</error>
</errors></results>
HERE
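Before wiring this into the job, you can sanity-check that a report of this shape parses as XML at all (a standalone check using only Python's standard library; the sample report below is an assumed stand-in mimicking what the build step writes):

```shell
# Stand-in report with the same shape the build step writes (assumption).
cat > cppcheck.xml <<'HERE'
<?xml version="1.0" encoding="UTF-8"?>
<results version="2"><errors>
<error id="PerlErrors" severity="warning" msg="demo" verbose="WARNING demo">
<location file="/usr/bin/splain" line="1"/>
</error>
</errors></results>
HERE
# Parse it with Python's stdlib to confirm it is well-formed XML.
python3 -c 'import xml.etree.ElementTree as ET; ET.parse("cppcheck.xml"); print("well-formed")'
```

A malformed report would make the plugin's parse step fail instead of exercising the threshold logic, so this rules out one source of confusion.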
This creates a fresh cppcheck.xml for every build, which the plugin reports as one solved error and one new error. Observe that the build remains stable and is never marked unstable, and note the message in the log:
[Cppcheck] Not changing build status, since no threshold has been exceeded.
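That log line is consistent with the threshold evaluation working on the net change in error count rather than on the number of new errors. A minimal sketch of the suspected comparison (an illustration of the observed behaviour, not the plugin's actual code):

```shell
# Suspected (hypothetical) evaluation: +1 new and -1 solved cancel out,
# so a new-error threshold of 0 is never exceeded.
new=1; solved=1; threshold=0
net=$(( new - solved ))
if [ "$net" -gt "$threshold" ]; then
  echo "Build marked UNSTABLE"
else
  echo "Not changing build status, since no threshold has been exceeded."
fi
# The expected check would use the new-error count on its own:
[ "$new" -gt "$threshold" ] && echo "Expected: UNSTABLE (1 new error > 0)"
```

With one new and one solved error the net change is 0, which never exceeds a threshold of 0, matching the log message above.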
If you inspect the report, you can observe the delta:
new /usr/bin/splain 583 warning PerlErrors false An error with a date of Mon Oct 20 11:30:03 BST 2014 to make it unique
solved /usr/bin/splain 498 warning PerlErrors false An error with a date of Mon Oct 20 11:29:23 BST 2014 to make it unique
This is a major issue, because it allows new errors to go unnoticed in CI builds.