I am learning more advanced OO techniques in JavaScript, coming from a C# background, and I am wondering how (or whether it's even a good idea) to implement prototype-based validation. For instance, when an object or function requires one of its parameters to satisfy a certain interface, you could check its interface like so:
var Interface = function Interface(i) {
    var satisfied = function (t, i) {
        for (var key in i) {
            if (typeof t !== 'object') {
                return false;
            }
            if (!(key in t && typeof t[key] == i[key])) {
                return false;
            }
        }
        return true;
    };
    this.satisfiedBy = function (t) { return satisfied(t, i); };
};
// the interface
var interfacePoint2D = new Interface({
    x: 'number',
    y: 'number'
});
// see if it satisfies
var satisfied = interfacePoint2D.satisfiedBy(someObject);
I came up with this strategy to validate an object by its interface only, ignoring the internal implementation of the object.
Alternatively, say you are using prototype-based inheritance: should you or should you not validate parameters based on their prototype functions? I understand that you'd use a prototype to implement default functionality, whereas an interface doesn't specify any default functionality. Sometimes the object you are passing into a function might need certain default functionality in order for that function to work. Is it better to only validate against an interface, or should you ever validate against a prototype, and if so, what's the best way to do it?
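To make the question concrete, prototype-based validation as I'm imagining it would look something like this (Point2DBase and requiresPoint2D are just hypothetical names for the sketch):
// hypothetical sketch: a shared base constructor that provides default behaviour
var Point2DBase = function (x, y) {
    this.x = x;
    this.y = y;
};
Point2DBase.prototype.translate = function (dx, dy) {
    this.x += dx;
    this.y += dy;
};

function requiresPoint2D(p) {
    // reject anything that doesn't inherit the default behaviour
    if (!(p instanceof Point2DBase)) {
        throw new Error('expected an object derived from Point2DBase');
    }
    // safe to rely on p.translate(...) here
}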
EDIT -- I am providing some more context as to why I am asking this,
Say, for instance, online game design (games written mostly in JavaScript). There are two main reasons I am interested in validation within this context:
1) Providing a strong public API for modding the game if desired
2) Preventing (or at least greatly discouraging) potential cheaters
This requires a balance between customizability and preventing abuse. Specifically, one situation would be designing a physics engine where objects in the game react to gravity. In a realistic system, users shouldn't be able to add objects to the system that do not react to gravity. The system has a function that expresses the global effect of gravity at any given point:
function getGravityAt(x, y) {
    // return acceleration due to gravity at this point
}
And objects which react have a method that uses this to update their acceleration:
function update() {
    this.acceleration = getGravityAt(this.position.x, this.position.y);
}
The minimum thing to do might be to ensure that any object added to the system has an 'update' method, but you still aren't ensuring that the update() method really is intended to react to gravity. If only objects that inherit from a prototypical update() method are allowed, then you know at least to some degree everything in the system reacts realistically.
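For example, the system's add method might gate membership on the prototype, something like this (GravityObject and PhysicsSystem are hypothetical names for the sketch):
// hypothetical sketch: only objects inheriting the stock update() are accepted
var GravityObject = function (position) {
    this.position = position;
    this.acceleration = 0;
};
GravityObject.prototype.update = function () {
    this.acceleration = getGravityAt(this.position.x, this.position.y);
};

var PhysicsSystem = {
    objects: [],
    add: function (obj) {
        // reject anything that doesn't inherit (or that shadows) the stock update()
        if (!(obj instanceof GravityObject) ||
            obj.update !== GravityObject.prototype.update) {
            throw new Error('objects must use the inherited gravity update()');
        }
        this.objects.push(obj);
    }
};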
This is a pretty subjective question. I'll pass on the question of whether it's a good idea to do interface-based validation in JavaScript at all (there may well be good use cases for it, but it's not a standard approach in the language). But I will say that it's probably not a good idea to validate objects based on their prototypes.
If you're validating by interface at all, you're probably working with objects created by other programmers. There are lots of ways to create objects - some rely on prototypes, some do not, and while they each have their proponents, they're all valid and likely approaches. For example:
var Point = function(x,y) {
    return {
        x: function() { return x },
        y: function() { return y }
    };
};
var p = new Point(1,1);
The object p conforms to an interface similar to yours above, except that x and y are functions. But there's no way to validate that p satisfies this interface by inspecting its constructor (which is Object()) or Point.prototype. All you can do is test that p has attributes called x and y and that they are of type "function" - which is what you're doing above.
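For what it's worth, your own Interface helper handles that case if the spec lists the members as functions - a quick sketch (interfaceFnPoint2D is just an illustrative name):
var interfaceFnPoint2D = new Interface({
    x: 'function',
    y: 'function'
});

interfaceFnPoint2D.satisfiedBy(p); // true for the Point above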
You could potentially insist that p has a specific ancestor in its prototype chain, e.g. AbstractPoint, which would include the x and y functions - you can use instanceof to check this. But you can't be sure that x and y haven't been redefined in p:
var AbstractPoint = function() {};
AbstractPoint.prototype.x = function() {};
AbstractPoint.prototype.y = function() {};

var Point = function(x,y) {
    var p = new AbstractPoint(x,y);
    p.x = "foo";
    return p;
}

var p = new Point(1,1);
p instanceof AbstractPoint; // true
p.x; // "foo"
And perhaps more importantly, this makes it harder to drop in custom objects that also satisfy the interface but don't inherit from your classes.
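For example, a hand-rolled object like this satisfies the interface but fails any prototype check (q is just an illustrative name):
var q = {
    x: function () { return 2; },
    y: function () { return 3; }
};

q instanceof AbstractPoint;        // false - the prototype check rejects it
interfaceFnPoint2D.satisfiedBy(q); // true - the interface check accepts it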
So I think what you're currently doing is probably the best you can hope for. In my experience, JavaScript programmers are much more likely to use on-the-fly duck typing than to try to mimic the capabilities of statically typed languages:
function doSomethingWithUntrustedPoint(point) {
    if (!(point.x && point.y)) return false; // evasive action!
    // etc.
}