I'm currently processing a large number of MILP calculations that vary considerably in complexity. Normally you would tune the settings for the specific problem, but in my case the range goes from simple calculations that are considerably faster with almost all pre-processing turned off, all the way up to problems where aggressive pre-processing is clearly advantageous.
In a situation like this, how does one normally go about choosing settings so that they can be adjusted dynamically to the complexity of the problem? I could manually take a subset and tune for specific ranges of complexity, but that doesn't seem like a very robust option.
This is actually a very hard problem and, to my knowledge, not even close to being solved. If we knew the best, or even just "good", settings for arbitrary MIPs, we could save a lot of computing time. Have a look at the MIPLIB 2010 paper, which shows that merely permuting the rows and columns of a problem can dramatically change a solver's performance (for better or worse). To date it is not even possible to predict how such a change will influence performance; introducing specific parameter settings lifts the problem to a whole new level of complexity.
Short answer: Your best choice currently is to classify your instances and try to find reasonable parameters for every class.
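To make the idea concrete, here is a minimal sketch of the "classify, then apply a per-class preset" approach, assuming Gurobi's Python API (gurobipy). The size thresholds, the parameter values in the presets, and the file name are purely illustrative assumptions, not recommendations; you would replace the classifier with whatever features actually separate your instance classes and tune each preset on a sample from that class.

```python
import gurobipy as gp

# Hypothetical presets: thresholds and parameter values below are
# illustrative only and would need tuning on your own instance classes.
PRESETS = {
    # small/easy instances: skip most pre-processing to avoid overhead
    "small":  {"Presolve": 0,  "Cuts": 0,  "Heuristics": 0.0},
    # mid-sized instances: leave the solver on its defaults
    "medium": {"Presolve": -1, "Cuts": -1, "Heuristics": 0.05},
    # large/hard instances: aggressive presolve and cut generation
    "large":  {"Presolve": 2,  "Cuts": 2,  "Heuristics": 0.1},
}

def classify(model: gp.Model) -> str:
    """Crude size-based classification; replace with whatever features
    distinguish your instance classes (density, integer variables, ...)."""
    n = model.NumVars + model.NumConstrs
    if n < 5_000:
        return "small"
    if n < 100_000:
        return "medium"
    return "large"

def solve_with_preset(path: str) -> gp.Model:
    model = gp.read(path)                      # read the MILP (MPS/LP file)
    preset = PRESETS[classify(model)]          # pick the class-specific preset
    for param, value in preset.items():
        model.setParam(param, value)
    model.optimize()
    return model

if __name__ == "__main__":
    solve_with_preset("instance.mps")          # hypothetical file name
```

The design point is simply that the classification happens per instance before solving, so the same pipeline handles the whole range of problems; how fine-grained the classes are, and which parameters each preset touches, is exactly the part you would have to work out empirically.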