
Why close dictionaries in PostScript?


PostScript books always advise careful handling of dictionaries: keep user dictionaries separate and small, close a dictionary when it is no longer needed, avoid overloading the global dictionary, and so on.

However, I think these instructions date back to the old days, when hardware imposed severe memory limits and closing a dictionary actually freed needed memory. Back then, the memory a PS script required was probably comparable to the machine's available memory. Today, the memory needed for a heavy PostScript task (e.g. a long document or a complicated drawing) is far smaller than the machine's memory, and closing a dictionary with hundreds or even thousands of entries should have no significant effect on performance.

Correct me if I'm wrong! Suppose we put everything in one dictionary, or in the global dictionary; does that have a negative impact on PostScript performance?

Is it still beneficial (from a performance point of view, not ease of coding) to separate dictionaries and, more importantly, to close them when not needed? Or does it just free a tiny fraction of memory?


Solution

  • The memory and performance issues here are almost entirely separate.

    Level 1 PostScript describes only one way to "free" memory: by restore-ing a previously save-d memory state. Level 2 (and beyond) PostScript incorporates garbage collection, so memory is available to be freed when there are no accessible references to it. Garbage collection can be disabled to reduce performance overhead (this is necessary for profiling code for speed), but of course your memory consumption is likely to increase unless you're using save and restore appropriately.
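    As a sketch of that save/restore pattern (the names /snap and /scratch are illustrative):

    ```postscript
    /snap save def              % snapshot the current VM state
    /scratch 65535 string def   % allocate ~64 KB of VM
    % ... use scratch for some temporary work ...
    snap restore                % all VM allocated since the save is reclaimed
    ```

    Note that restore also undoes the /snap and /scratch definitions themselves, since they were made after the snapshot.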

    The inclusion of garbage collection makes it appropriate to add automatically-expanding dictionaries, and they did. But there's a performance cost: allocating a larger dictionary and rehashing all the keys. So if it's easy to predict the maximum size of a dictionary, you can save some of this time by creating a big-enough dictionary in the first place. You may be able to get a further speed increase by making your dictionaries twice the maximum size, as this should reduce hash collisions.
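    A small sketch of the pre-sizing advice (the keys and counts are arbitrary):

    ```postscript
    % Auto-expanding (Level 2+): starts at capacity 1, so later defs
    % force the interpreter to reallocate and rehash as the dict grows
    1 dict begin
        /a 1 def  /b 2 def  /c 3 def
    end

    % Pre-sized at twice the expected 3 entries: no reallocation while
    % filling, and the extra slack should mean fewer hash collisions
    6 dict begin
        /a 1 def  /b 2 def  /c 3 def
    end
    ```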

    And performance is adversely affected by having extra dictionaries on the dictstack (if you don't need them). Since systemdict (where all the operators are) is always the bottom entry on the stack, all lookups for operator-names will search (unsuccessfully) each dictionary that's in the way before reaching systemdict.
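    To illustrate (the dict sizes and arithmetic are arbitrary):

    ```postscript
    % systemdict sits at the bottom of the dict stack, so every operator
    % name is searched for, unsuccessfully, in each dict pushed above it
    10 dict begin
    10 dict begin
    10 dict begin
        1 2 add     % 'add' is checked against three user dicts
                    % (and userdict) before systemdict resolves it
    end end end
    ```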

    The increase in the memory size and processing power of desktop computers makes these concerns somewhat less pressing (since you can ignore them and still have a program that "works"), but they are still useful habits (especially as your programs become larger and more complex).

    A very good resource for this kind of info is Adobe's "green book", which is devoted to strategies for organizing your programs for size or speed (sometimes both).


    I just had a crazy idea. There may be a way to get both! Suppose you pack your dictionary exactly to capacity (to use minimal memory), then, in a critical section, add one more element (forcing the dict to expand), and bracket that section with save and restore:

    4 dict begin            % filled exactly to capacity
    /x 5 def
    /y 7 def
    /z 9 def
    /t 12 def
    currentdict end         % leave the dict on the operand stack
    
    % critical section
    begin /snap save def    % this def is a 5th key, forcing expansion
        % Do something critical
    snap end restore        % fetch the snapshot, pop the dict, shrink it back
    

    Of course, this discards any updates to the dict; if you need those updated entries, you'll have to make a copy to expand (after the save, so restore will destroy it) and copy the desired entries back into the original. And of course this is quite a bit of extra overhead, so the code that needs this trick will have to be damned critical. :)
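    One way to sketch that copy-back, assuming the values you want to keep are simple objects (integers, booleans, names): restore raises an invalidrestore error if a composite object created after the save is left on a stack, but a plain integer passes through unharmed. The dict from the snippet above is assumed to be on the operand stack.

    ```postscript
    begin
        /snap save def      % snapshot; this 5th key forces the expansion
        /x x 1 add def      % critical work: an update we want to keep
        x snap restore      % push the plain integer, then roll VM back
        /x exch def         % dict is back to 4 slots; /x already exists,
                            % so re-defining it needs no expansion
    end
    ```

    The dict itself survives on the dict stack across the restore because it was created before the save; only the defs made inside the bracket (including /snap and the expansion) are rolled back.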