In our project we ran into a weird issue after integrating lint-staged together with prettier and tslint. The idea was to apply prettier and then tslint to all files in the commit using a husky pre-commit git hook.
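For context, the hook itself was wired up through husky; a minimal sketch of the relevant package.json entry (husky v1-style syntax is assumed here, older husky versions used a precommit npm script instead):

{
  "husky": {
    "hooks": {
      "pre-commit": "lint-staged"
    }
  }
}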
In order for the whole project to conform to the new code style enforced by prettier, we decided to first run prettier over the whole project and then commit all the changed files with the git hook described above. After running prettier we ended up with 400+ files to commit. So, when we ran git commit, lint-staged passed all those 400+ files as arguments to the prettier and tslint scripts.
Originally we had tslint as a script inside package.json, which looked like the following:
"lint": "tslint -c tslint.json --project src/tsconfig.json"
And the lint-staged config looked like the following:
{
  "linters": {
    "*.ts": ["prettier --write", "npm run lint", "git add"]
  },
  "ignore": ["**/*.spec.ts"]
}
When we ran git commit, npm ended up with an error at the linting stage, and no error description was given in the output. We then copied all the file paths lint-staged had reported into the terminal and ran npm run lint with all those file paths manually. This time there was an error message: Argument list too long.
sh: /path-to-app/node_modules/.bin/tslint: Argument list too long
npm ERR! code ELIFECYCLE
npm ERR! errno 126
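Roughly the same invocation can be rebuilt in one line without staging anything, which makes the failure easy to reproduce (the file pattern here is illustrative and assumes the .ts sources are tracked by git):

npm run lint -- $(git ls-files '*.ts')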
By further trial we figured out that the maximum number of file paths that could be accepted without errors was 357. So, when running the linting script via npm run lint, we could pass at most 357 file paths as arguments.
The interesting thing, however, was that when we changed the lint-staged config to call tslint directly (without going through npm run lint):
{
  "linters": {
    "*.ts": ["prettier --write", "tslint -c tslint.json --project src/tsconfig.json", "git add"]
  },
  "ignore": ["**/*.spec.ts"]
}
the Argument list too long error was gone and linting worked without errors, no matter how many files were passed as arguments.
Thus, the problem itself was solved. But the question remains: what is the reason for this behaviour? Why does npm run limit the number of arguments we can pass, while calling the binary directly causes no problems no matter how many arguments there are?
There are limits to the number and total size of the parameters and environment variables that you can pass to a new process. These limits come from the kernel itself: when they are exceeded, execve() fails with E2BIG, which the shell reports as Argument list too long. However, if you are on modern Linux x86_64 (likely), the overall limit should not be a problem, certainly not with only 357 parameters.
Now, on Linux a single parameter cannot be more than 128 KiB in length (the kernel's MAX_ARG_STRLEN limit). If you were passing all the parameters in a single string (which counts as a single argument), you could hit that limit if each of the paths were really big (around 350 characters each).
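As a sanity check, the numbers line up: 128 KiB is 131,072 bytes, and 131,072 / 357 ≈ 367 bytes per path (counting the separating space), so paths averaging around 350 characters would fill a single 128 KiB string at almost exactly 357 files.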
If that was not the case, then maybe your shell (or some other tool in the chain) has some artificial, more stringent limits.
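The single-string scenario is very likely what is happening here: on POSIX systems npm run executes the script line through sh -c, so the expanded command (tslint, its flags, and every file path appended to the script) reaches sh as one single argument, capped at 128 KiB. Invoking tslint directly makes each path its own small argument, subject only to the much larger overall ARG_MAX. On Linux you can inspect both limits and demonstrate the failure like this (the 128 KiB figure assumes the usual 4 KiB pages):

# Total space allowed for argv[] plus the environment (ARG_MAX):
getconf ARG_MAX

# Per-argument cap on Linux (MAX_ARG_STRLEN = 32 pages):
echo $(( $(getconf PAGE_SIZE) * 32 ))     # 131072, i.e. 128 KiB

# GNU xargs prints the effective limits as well:
xargs --show-limits < /dev/null

# A single argument just over 128 KiB is rejected with E2BIG:
/bin/echo "$(head -c 140000 /dev/zero | tr '\0' x)" > /dev/null
# sh: /bin/echo: Argument list too long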