In most skip list implementations I've seen, a randomized algorithm decides whether an element gets copied into the upper level.
But I think promoting the odd-indexed elements of each level into the level above would also give logarithmic search complexity. Why isn't this used?
E.g.: Data: 1 2 3 4 5 6 7 8 9
Skip List:
1--------------------
1--------------------9
1--------5----------9
1----3---5----7----9
1-2-3-4-5-6-7-8-9
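
A minimal sketch of the scheme described in the question (the function name build_levels is just illustrative): starting from the sorted bottom level, each upper level keeps every other element, which reproduces the diagram above.

    def build_levels(sorted_data):
        """Build deterministic skip-list levels by promoting every other element."""
        levels = [list(sorted_data)]
        while len(levels[-1]) > 1:
            # Promote the elements at positions 0, 2, 4, ... (the odd-indexed
            # elements when counting from 1) into the next level up.
            levels.append(levels[-1][::2])
        return levels

    if __name__ == "__main__":
        for level in reversed(build_levels(range(1, 10))):
            print(level)
        # Printed top level first:
        # [1]
        # [1, 9]
        # [1, 5, 9]
        # [1, 3, 5, 7, 9]
        # [1, 2, 3, 4, 5, 6, 7, 8, 9]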
It is not used because, while it is easy to maintain when building the list from scratch, it breaks down when you add or remove an element in an existing list: elements that used to be odd-indexed are now even-indexed, and vice versa.
In your example, assume you now insert 3.5. To maintain the structure as you described, it will require O(k + k/2 + k/4 + ...) changes to the data structure, where k is the number of elements after the element you just inserted. Since k is about n/2 on average, this gives O(n/2 + n/4 + ...) = O(n) add/remove complexity on average, instead of the O(log n) expected cost of the randomized version.
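
For contrast, here is a minimal sketch of the standard randomized approach (the class and method names are my own, not from any particular library): each inserted node picks its own height by coin flips, so an insertion only rewires the pointers along its search path, O(log n) of them in expectation, and never re-promotes the elements that come after it.

    import random

    class Node:
        def __init__(self, value, level):
            self.value = value
            self.forward = [None] * (level + 1)  # one forward pointer per level

    class SkipList:
        MAX_LEVEL = 16

        def __init__(self):
            self.head = Node(None, self.MAX_LEVEL)
            self.level = 0

        def random_level(self):
            # Flip a fair coin until it comes up tails; the expected height
            # distribution gives O(log n) levels overall.
            lvl = 0
            while random.random() < 0.5 and lvl < self.MAX_LEVEL:
                lvl += 1
            return lvl

        def insert(self, value):
            update = [None] * (self.MAX_LEVEL + 1)
            node = self.head
            # Walk down the levels, remembering the last node before the
            # insertion point on each level.
            for lvl in range(self.level, -1, -1):
                while node.forward[lvl] and node.forward[lvl].value < value:
                    node = node.forward[lvl]
                update[lvl] = node
            new_level = self.random_level()
            if new_level > self.level:
                for lvl in range(self.level + 1, new_level + 1):
                    update[lvl] = self.head
                self.level = new_level
            new_node = Node(value, new_level)
            # Splice the new node in: only the pointers on its search path
            # change; existing nodes keep their heights.
            for lvl in range(new_level + 1):
                new_node.forward[lvl] = update[lvl].forward[lvl]
                update[lvl].forward[lvl] = new_node

    if __name__ == "__main__":
        sl = SkipList()
        for v in [1, 2, 3, 4, 5, 6, 7, 8, 9]:
            sl.insert(v)
        sl.insert(3.5)  # no reindexing of the elements after 3.5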