Let's say we have a data object X and some "processor" objects/methods A, B, C and D. A(X) produces a new X with some additional data (the result of A's processing). B(X) produces a new X with some other additional data. C(X) also produces a new X with some additional data, but it requires that A has already been run against X. So:

A(X).B(X).C(X).D(X) should run properly.
B(X).D(X).A(X).C(X) should also run properly.
B(X).C(X).A(X).D(X) should fail (because C requires the data A produces).
Is it possible to implement this in C# so that the order constraints are enforced at compile time? If not, is there a design pattern or some common strategy for implementing this? There can be many processors and many constraints; what I'd like to avoid is having to declare a factorial number of types just to keep track of which processors have been run.
You can use inheritance, combined with generic constraints:
class Data {
}

// ExtendedData serves as the static proof that A has already run.
class ExtendedData : Data {
}

static class Pipeline {
    // A accepts any Data and upgrades it to ExtendedData.
    public static ExtendedData A<T>(this T value) where T : Data {
        if (value is ExtendedData extended) {
            return extended;
        }
        else {
            return new ExtendedData();
        }
    }

    // B has no prerequisites, so it preserves whatever static type it was given.
    public static T B<T>(this T value) where T : Data {
        return value;
    }

    // C only accepts ExtendedData, i.e. it can only be called after A.
    public static ExtendedData C(this ExtendedData value) {
        return value;
    }
}
These variants will work:
new Data().A().B().C();
new Data().B().A().C();
new Data().A().C().B();
This variant will be rejected by the compiler:
new Data().B().C().A();
C() will expect an ExtendedData, while B() will only deliver Data.
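As a rough sketch of how the same pattern might be extended (the D method and the Payload/ResultOfA fields are illustrative assumptions, not part of the answer above), you can let ExtendedData carry the data A produced and add the unconstrained D processor from the question:

class Data {
    public string Payload = "";
}

// ExtendedData doubles as the "A has run" marker and the carrier of A's result.
class ExtendedData : Data {
    public string ResultOfA = "";
}

static class Pipeline {
    // A copies the existing payload forward and adds its own result.
    public static ExtendedData A<T>(this T value) where T : Data {
        if (value is ExtendedData extended) {
            return extended;
        }
        return new ExtendedData { Payload = value.Payload, ResultOfA = "produced by A" };
    }

    // B and D have no prerequisites, so they keep the caller's static type.
    public static T B<T>(this T value) where T : Data { return value; }
    public static T D<T>(this T value) where T : Data { return value; }

    // C can only be called on ExtendedData, i.e. after A.
    public static ExtendedData C(this ExtendedData value) { return value; }
}

With this in place, new Data().B().D().A().C() compiles, while new Data().B().C().A().D() is still rejected, matching the orderings listed in the question.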