In Linux, every shared object filename follows the libfoo.so.X.Y.Z scheme, with a symbolic link libfoo.so.X -> libfoo.so.X.Y.Z, and a symbolic link libfoo.so -> libfoo.so.X that usually comes from the devel package. The SONAME attribute of the corresponding ELF file is libfoo.so.X. Usually the libtool documentation is referred to for an explanation of the scheme: https://www.gnu.org/software/libtool/manual/html_node/Updating-version-info.html#Updating-version-info
It introduces three counters: current (c), age (a), and revision (r), with the following rules for incrementing these counters:
If the library source code has changed at all since the last update, then increment revision (‘c:r:a’ becomes ‘c:r+1:a’).
If any interfaces have been added, removed, or changed since the last update, increment current, and set revision to 0.
If any interfaces have been added since the last public release, then increment age.
If any interfaces have been removed or changed since the last public release, then set age to 0.
Then libtool creates a shared object libfoo.so.X.Y.Z where X = c - a, Y = a, and Z = r, as one can see from the libtool sources.
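In code, the update rules and the Linux filename mapping above can be sketched like this (a sketch with names of my own choosing, not libtool's actual implementation):

```python
# Sketch of the libtool version-info update rules and the mapping libtool
# applies for ELF/Linux targets. Function and parameter names are mine.

def update_version_info(current, revision, age, *,
                        source_changed=False,
                        interfaces_added=False,
                        interfaces_removed_or_changed=False):
    """Apply the libtool update rules to a (current, revision, age) triple."""
    if source_changed:
        revision += 1                       # c:r:a becomes c:r+1:a
    if interfaces_added or interfaces_removed_or_changed:
        current += 1                        # interface change: bump current,
        revision = 0                        # reset revision
    if interfaces_added:
        age += 1                            # still backwards compatible
    if interfaces_removed_or_changed:
        age = 0                             # compatibility broken
    return current, revision, age

def linux_library_name(current, revision, age, stem="libfoo"):
    """Filename libtool generates on Linux: X = c - a, Y = a, Z = r."""
    x, y, z = current - age, age, revision
    return f"{stem}.so.{x}.{y}.{z}", f"{stem}.so.{x}"   # (filename, soname)

# Adding a function to a 1:0:0 library:
print(update_version_info(1, 0, 0, interfaces_added=True))  # (2, 0, 1)
print(linux_library_name(2, 0, 1))  # ('libfoo.so.1.1.0', 'libfoo.so.1')
```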
However:
ld-linux.so.2 (the dynamic shared object loader) relies on the SONAME only (as far as I can see from its sources), so neither Y nor Z is taken into account at runtime linking. So, the whole scheme seems redundant and overcomplicated on Linux. Do I guess right that the libtool scheme came from the times when alternative (to Linux) UNIX-like operating systems were not as uncommon as they are nowadays, and some of these systems implemented more sophisticated shared object lookup than Linux does?
Do I understand right that the better versioning scheme for Linux is just to increment X at every ABI update?
For instance, imagine that I have libfoo.so.1.0.0 and I add a new function to the interface; then, following the libtool rules, I would get libfoo.so.1.1.0. This would allow application binaries previously linked to libfoo.so.1.0.0 to use the new libfoo.so.1.1.0. But I can compile a new binary (which uses the new function) against libfoo.so.1.1.0, and it would have libfoo.so.1 in NEEDED anyway, and would fail to run with libfoo.so.1.0.0, so what is the reason?
> In Linux, every shared object filename has the libfoo.so.X.Y.Z scheme,
Lots of system libraries on a typical GNU/Linux system use it, but it is only a recommendation (supported by ldconfig), and only for shared libraries. So, not every shared library uses it.
Many programs are shared objects too, but they are not ordinarily named according to this scheme.
> Usually the libtool documentation is referred to for an explanation of the scheme
I'm not sure what "usually" means here, but any such sources are wrong, including if I ever said that myself, which is possible. GNU Libtool does produce names of that form when generating shared libraries for Linux, but the resulting X.Y.Z are not necessarily the Libtool version triple (current, revision, age), in any order.
For example, when Libtool builds an ELF shared library libfoo for Linux with -version-info 2:0:1, the resulting shared library name is libfoo.so.1.1.0, and the soname is libfoo.so.1. Note that there is no "2" in that.
This is Libtool implementing the naming / versioning semantics of the target system, not Libtool's versioning system being adopted or used by Linux.
> However: libtool is the tool from a specific build system (Autotools).
Libtool is part of the Autotools suite, but it is the member of that suite that is best suited for use as a general-purpose tool. You don't have to otherwise use the Autotools to use Libtool.
But that's moot, because the recommended Linux scheme is not the Libtool scheme.
> ld-linux.so.2 (dynamic shared object loader) relies on SONAME only (as far as I can see from its sources), so both Y and Z are not accounted for at runtime linking.
Yes, and this is a feature. It facilitates binary compatibility between versions of the same shared library. Should I really need to rebuild every dependency of libfoo when I roll a bugfix release?
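A toy Python model of that soname-only lookup (all names here are hypothetical, not glibc's) shows why a bugfix release drops in without relinking anything: the binary records only the soname string, and the filesystem symlink decides which real file satisfies it.

```python
# Toy model of soname-based resolution: the loader looks up the NEEDED
# string; the symlink (maintained by ldconfig) picks the real file.

# What ldconfig maintains on disk: soname symlink -> real file.
symlinks = {"libfoo.so.1": "libfoo.so.1.0.0"}

def resolve(needed):
    """Resolve a NEEDED entry the way ld-linux does: by soname string only."""
    return symlinks[needed]

app_needed = "libfoo.so.1"        # recorded in the binary at link time

print(resolve(app_needed))        # libfoo.so.1.0.0

# Install a bugfix release; ldconfig repoints the symlink.
symlinks["libfoo.so.1"] = "libfoo.so.1.0.1"
print(resolve(app_needed))        # libfoo.so.1.0.1 -- no relink needed
```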
> So, the whole scheme seems to be redundant and overcomplicated on Linux.
I do not agree that the facts given support that conclusion.
> Do I guess right that the libtool scheme came from the times when alternative (to Linux) UNIX-like operating systems were not as uncommon as nowadays?
Sort of. The GNU software suite was conceived long before Linus rolled out the first version of the Linux kernel. Its availability in source form was a key enabling factor in the rise of GNU/Linux (which is commonly called simply "Linux", though that's a bit of a misnomer). The Libtool manual's description of the motivation for Libtool places that specific tool's inception in 1995, at which time Linux adoption and usage were still pretty low.
But again, this is moot.
> And some of these systems implemented more sophisticated shared object lookup than Linux does.
The key issue here is not sophistication, but simply difference. Libtool was invented because of the unfortunately large variance among systems with respect to the details of building and using libraries, especially shared ones. The Libtool manual actually says that some of the other shared-library management systems are more complex than it is, though.
And again, moot.
> Do I understand right that the better versioning scheme for Linux is just to increment X at every ABI update?
It is in no way as easy as that. What do you do to accommodate an ABI update that maintains binary compatibility with the previous version? There are conflicting priorities:
- On one hand, you want to avoid rebuilding everything linked against the original version of the library when you upgrade to a new, backwards-compatible library ABI.
- On the other hand, you want newly-built binaries to express their true dependencies.
You cannot achieve both with ELF shared libraries, because they identify themselves via a single soname that, as far as ELF is concerned, is an opaque string. In practice, it is the former that most, perhaps all, Linux distributions aim to accomplish. This would not be served by incrementing library sonames when backwards-compatible ABI changes are released.
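To make the conflict concrete, here is a small hypothetical Python model (the symbol names and export sets are invented for illustration): a NEEDED entry carries only the opaque soname, so a binary built against the newer ABI cannot declare that it needs the added symbol; the mismatch only surfaces when a symbol fails to resolve.

```python
# Toy model: a binary's NEEDED entry has no minor version, so an ABI
# mismatch shows up only as a missing symbol at load/run time.

library_exports = {
    "libfoo.so.1.0.0": {"foo_init", "foo_run"},
    "libfoo.so.1.1.0": {"foo_init", "foo_run", "foo_new_fn"},
}

def load(binary_needed_symbols, installed_file):
    """Crude model: loading succeeds iff every needed symbol is exported."""
    missing = binary_needed_symbols - library_exports[installed_file]
    return "ok" if not missing else f"undefined symbol: {sorted(missing)[0]}"

# Binary built against 1.1.0; its NEEDED still says only "libfoo.so.1".
new_binary = {"foo_init", "foo_new_fn"}

print(load(new_binary, "libfoo.so.1.1.0"))  # ok
print(load(new_binary, "libfoo.so.1.0.0"))  # undefined symbol: foo_new_fn
```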
> For instance, imagine that I have libfoo.so.1.0.0 and I added a new function to the interface, then, following the libtool rules, I would get libfoo.so.1.1.0.
Again, that's the operation of Libtool for ELF-based systems, which is separate from Libtool's own internal versioning rules.
> This would allow application binaries previously linked to libfoo.so.1.0.0 to use the new libfoo.so.1.1.0.
Yes.
> I can compile a new binary (which uses the new function) against libfoo.so.1.1.0 and it would have libfoo.so.1 in NEEDED anyway, and would fail to run with libfoo.so.1.0.0, so what is the reason?
As already discussed, the recommended Linux approach prioritizes enabling (backwards-compatible) library upgrades over enabling ELF objects to express their dependencies as precisely as one might like. You can't have both without providing multiple versions of the library. Occasionally, multiple versions indeed are provided, but that has to be done very carefully, because there is a significant risk of serious misbehavior in the event that one binary ends up with (indirect) dependencies on multiple versions of the same shared library.
From a whole-system perspective, and based on the characteristics of ELF, the typical Linux approach seems the one most appropriate for a typical distribution's goals. Your suggestion to change library sonames with every ABI change would be next best, I guess. But if I were managing a Linux distribution, I would never consider that unless I was freezing all library ABIs anyway for each version of the distribution.