I'm working on some experiments comparing different labeling heuristics in SICStus Prolog.
But I keep running into 'Resource error: insufficient memory'.
I'm pretty sure I'm doing something wrong in my test code.
The following code will replicate my problem:
:- use_module(library(clpfd)).
:- use_module(library(lists)).
atest(R, C) :-
        X is R * C,
        length(M, X),
        domain(M, 0, X),
        all_distinct(M),
        statistics(walltime, [_,_SinceLast]),
        labeling([], M),
        statistics(walltime, [_,SinceLast]),
        write('Labeling time: '), write(SinceLast), write('ms'), nl.
% Test code for looping through a lot of variations and running one test for each variant
t1 :-
        Cm = 1000,
        Cr = 1000,
        (   for(C, 0, Cm),
            param([Cm,Cr])
        do
            (   for(R, 0, Cr),
                param([C])
            do
                atest(C, R)
            )
        ).
A short time after I call the t1 predicate, I get a 'Resource error: insufficient memory' exception.
I guess I should be doing something after each call to atest to release resources?
Also: is this the correct way to measure labeling time? Is there a better way to do it?
You are not doing anything strictly wrong, but you are trying to run
length(Xs,N), domain(Xs,0,N), all_distinct(Xs), labeling([],Xs).
for N up to 1,000,000. The system constructs a search tree of depth N and has to store intermediate states of the variables and the constraint system at each level. This takes a lot of memory for large N, and it is quite possible to get a memory overflow even for a single run with a large N.
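To see that effect in isolation, here is the core of each test as a stand-alone goal, a minimal sketch assuming library(clpfd) is loaded as in your program (single_run/1 is a hypothetical name; its argument corresponds to R*C in your atest/2):

% Hypothetical helper: one run of the core goal for a given problem size N.
single_run(N) :-
        length(Xs, N),
        domain(Xs, 0, N),
        all_distinct(Xs),
        labeling([], Xs).

% Try moderate sizes first, e.g. single_run(1000); the worst case in your
% loop corresponds to single_run(1000000), which alone may exhaust memory.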
The second issue is that you are running your benchmarks in a recursive loop, i.e. you are effectively creating a conjunction
atest(0,0), ..., atest(1000,1000)
and since each call to atest/2 succeeds with its first solution and keeps its search state around, you are effectively trying to build a search tree with 250,500,250,000 levels (one level per labeled variable, summed over all the calls)...
The simplest improvement is to cut each search after the first solution by changing your code to once(atest(C,R)).
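For example, the only change is at the innermost call site (atest_once/2 is just a hypothetical wrapper name; writing once(atest(C,R)) directly in the loop body works equally well):

% Hypothetical wrapper: commit to the first solution so the labeling
% search state is not kept around after the call returns.
atest_once(C, R) :-
        once( atest(C, R) ).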
A further improvement is to run the benchmarks in a failure-driven loop:
(   between(0, Cm, C),
    between(0, Cr, R),
    once(atest(C, R)),
    fail
;   true
)
This frees memory sooner and faster, and reduces distortion of your measurements due to garbage collection.
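Putting both suggestions together, a minimal self-contained sketch could look like this (t1_fd is a hypothetical name; it assumes atest/2 as defined in your question, and between/3 from library(between)):

:- use_module(library(between)).   % between/3

t1_fd :-
        Cm = 1000,
        Cr = 1000,
        (   between(0, Cm, C),
            between(0, Cr, R),
            once(atest(C, R)),   % commit to the first solution
            fail                 % backtrack, discarding the search state
        ;   true                 % succeed once all combinations have run
        ).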