Tags: algorithm, data-structures, complexity-theory, big-o, suffix-tree

How is it possible to build a suffix tree in linear time?


To build a suffix tree, in the worst case, if all the letters of the string are different, the complexity would be something like

n + (n-1) + (n-2) + ... + 1 = n*(n+1)/2

which is O(n^2).

However, according to http://en.wikipedia.org/wiki/Suffix_tree, building a suffix tree takes O(n) time. What am I missing here?
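
For concreteness, here is a minimal sketch of the naive construction that this count corresponds to: inserting each suffix character by character into a trie. The function and names are illustrative only, not something from the question itself.

```python
# Naive suffix-trie construction: one node per character of every suffix.
# For a string whose characters are all distinct this creates
# n + (n-1) + ... + 1 = n*(n+1)/2 nodes, i.e. Theta(n^2) work.

def build_naive_suffix_trie(s):
    root = {}                        # each node is a dict: character -> child node
    node_count = 0
    for i in range(len(s)):          # insert suffix s[i:]
        node = root
        for ch in s[i:]:             # walk/extend one character at a time
            if ch not in node:
                node[ch] = {}
                node_count += 1
            node = node[ch]
    return root, node_count

if __name__ == "__main__":
    _, count = build_naive_suffix_trie("abcde")  # all characters distinct
    print(count)                                 # 5 + 4 + 3 + 2 + 1 = 15 nodes
```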


Solution

  • Your intuition about why the algorithm should be Θ(n²) is a good one, but most suffix trees are designed in a way that avoids this time complexity. Intuitively, it might seem that you need Θ(n²) nodes to hold all of the different suffixes, because with one node per character you'd need n + (n - 1) + ... + 1 of them. However, suffix trees are typically designed so that there isn't a single node per character of each suffix. Instead, each edge is labeled with a sequence of characters that forms a substring of the original string. It might still seem that you'd need Θ(n²) time to construct this tree because you'd have to copy those substrings onto the edges, but this is typically avoided by a cute trick: since every edge label is a substring of the input, each edge can instead be labeled with a start and end position into the input, so an edge spanning Θ(n) characters can be constructed in O(1) time and O(1) space (a sketch of this representation follows the answer).

    That said, constructing suffix trees in linear time is still genuinely hard. The Θ(n) algorithms referenced on Wikipedia aren't easy. One of the first algorithms found to work in linear time is Ukkonen's algorithm, which is commonly described in textbooks on string algorithms (such as Algorithms on Strings, Trees, and Sequences). The original paper is linked from Wikipedia. More modern approaches work by first building a suffix array and then using it to construct the suffix tree.

    Hope this helps!
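
Below is a minimal sketch of the edge-labeling trick mentioned above: edges store (start, end) indices into the shared input string rather than copies of substrings, so creating an edge that covers many characters takes constant time and space. This is illustrative only (the class name and layout are assumptions), not a full linear-time suffix tree implementation.

```python
# Sketch of the index-pair edge representation (not a full Ukkonen
# implementation): an edge stores (start, end) offsets into the text
# instead of a copy of the substring, so creating it is O(1) time/space.

class Edge:
    def __init__(self, text, start, end):
        self.text = text      # shared reference to the original string
        self.start = start    # inclusive start index of the edge label
        self.end = end        # exclusive end index of the edge label

    def label(self):
        # Materialize the label only when needed (e.g. for debugging);
        # the tree itself never copies substrings during construction.
        return self.text[self.start:self.end]

text = "banana$"
# An edge covering the whole suffix "anana$" is built from two integers,
# regardless of how long the label is.
e = Edge(text, 1, len(text))
print(e.label())   # "anana$"
```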