As a silly example, let's say I have a function int f(vector<int> v), and for some reason I need to perform a couple of operations on v several times within f. Instead of putting a helper function elsewhere (which could add clutter and hurt readability), what are the advantages and disadvantages of doing something like the following, in terms of efficiency, readability, maintainability, and so on?
int f(vector<int> v)
{
    auto make_unique = [](vector<int> &v)
    {
        sort(begin(v), end(v));
        auto unique_end = unique(begin(v), end(v));
        v.erase(unique_end, end(v));
    };

    auto print_vector = [](vector<int> const &v)
    {
        copy(begin(v), end(v), ostream_iterator<int>(cout, " "));
        cout << endl;
    };

    make_unique(v);
    print_vector(v);

    // And then the function uses these helpers a few more times to justify making
    // functions...
}
Or is there some preferred alternative?
The advantage of such locally scoped functions is that they don’t pollute the surrounding code with “helper” definitions—all of the behaviour can be restricted to a single scope. And since they have access to the lexical scope of the surrounding function, they can be used to factor behaviour without passing many parameters.
You can also use them to create small DSLs for abstracting the mechanical details of a function, allowing you to change them later. You define constants for repeated values; why not do the same for code?
For a tiny example, a state machine:
vector<int> results;
int current;
enum { NORMAL, SPECIAL } state = NORMAL;

// 'stream', 'is_special', and 'is_normal' are assumed to be in scope.
// The conversion to bool is needed: a deduced return type would try to
// copy the (non-copyable) stream.
auto input   = [&]{ return bool(stream >> current); };
auto output  = [&](int i) { results.push_back(i); };
auto normal  = [&]{ state = NORMAL; };
auto special = [&]{ state = SPECIAL; };

while (input()) {
    switch (state) {
    case NORMAL:
        if (is_special(current))
            special();
        else
            output(current);
        break;
    case SPECIAL:
        if (is_normal(current))
            normal();
        break;
    }
}
return results;
A disadvantage is that you may be unnecessarily hiding and specialising a generic function that could be useful to other definitions. A uniquify or print_vector function deserves to be floated out and reused.