I understand why we have a training set, a validation set, and a test set.
I understand that in many situations you want an unbiased estimate of model performance, for example when publishing a paper or reporting results to a client. In my situation, however, I do not care about getting an unbiased estimate of performance on new data; I simply want to find the best model and use it. I also do not have much data, so I would rather put more of it into the training and validation sets. Is there any reason to keep a test set besides getting an unbiased estimate of model performance? Does it make sense for me to use only a train and validation split?
Your model's performance will be difficult to measure if you are tuning parameters to perform well on the validation set and never evaluating the model without any further tuning.
The validation set lets you get feedback on the model's performance and adjust the hyperparameters, features, and so on, but to measure how well the model can truly be expected to perform on new data, it must be evaluated on data it has never seen and that never influenced your choices.
In short, it is easy to overfit the validation set by tuning hyperparameters and engineering features, which inflates the validation score relative to the model's true performance on new data. That is where a test set provides value.
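A minimal sketch of this effect, using an assumed toy setup rather than any real model: we "tune" by trying many candidate classifiers whose predictions are purely random and keeping whichever scores best on the validation set. The winner looks well above chance on validation, but a held-out test set reveals it is no better than guessing.

```python
# Toy demonstration of overfitting a validation set through selection.
# All data and "models" here are random; the point is the selection bias.
import random

random.seed(0)

n_val, n_test = 50, 50
y_val = [random.randint(0, 1) for _ in range(n_val)]
y_test = [random.randint(0, 1) for _ in range(n_test)]

def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

best_val_acc, best_model = -1.0, None
for trial in range(1000):  # 1000 "hyperparameter settings"
    # Each candidate model just guesses labels at random.
    model = [random.randint(0, 1) for _ in range(n_val + n_test)]
    val_acc = accuracy(model[:n_val], y_val)
    if val_acc > best_val_acc:
        best_val_acc, best_model = val_acc, model

test_acc = accuracy(best_model[n_val:], y_test)
print(f"best validation accuracy: {best_val_acc:.2f}")  # typically well above 0.5
print(f"test accuracy of that model: {test_acc:.2f}")   # typically near 0.5 (chance)
```

The gap between the two numbers is exactly the optimism that repeated tuning against the validation set introduces, and it is what a final, untouched test set is there to catch.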