python-3.x, pytorch, h5py, pytables

How to solve a "no such node" error in PyTables and h5py


I built an HDF5 dataset using PyTables. It contains thousands of nodes, each node being an image of shape 512x512x3 stored without compression. When I run a deep learning training loop on it (with a PyTorch dataloader), it randomly crashes, saying that the node does not exist. However, it is never the same node that is missing, and when I open the file myself to check whether the node is there, it ALWAYS is.

I am running everything sequentially, as I thought the fault might lie with multithreaded/multiprocess access to the file, but that did not fix the problem. I have tried a LOT of things and nothing works.

Does anyone have an idea of what to do? Should I add something like a timer between calls to give the machine time to reallocate the file?

Initially I was working with PyTables only, but in an attempt to solve the problem I tried loading the file with h5py instead. Unfortunately, it did not work any better.

Here is the error I get with h5py: "RuntimeError: Unable to get link info (bad symbol table node signature)"

The exact error may change, but it always mentions "bad symbol table node signature".

PS: I cannot share the full code because it is huge and part of a larger codebase that is my company's property. I can still share the part below, which shows how I load the images:

import h5py
import numpy as np
from PIL import Image

with h5py.File(dset_filepath, "r", libver='latest', swmr=True) as h5file:
    node = h5file["/train_group_0/sample_5"]  # <- this line breaks
    target = node.attrs.get('TITLE').decode('utf-8')  # label stored in the TITLE attribute
    img = Image.fromarray(np.uint8(node))  # dataset -> numpy array -> PIL image
    return img, int(target.strip())
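
For context, this loading code runs inside a map-style dataset, roughly like the simplified sketch below. The class and variable names here are made up for illustration, and the sketch returns a tensor instead of a PIL image so it can be batched without the transforms that the real code applies:

import h5py
import numpy as np
import torch
from torch.utils.data import DataLoader, Dataset


class H5ImageDataset(Dataset):
    # Illustrative stand-in for the real dataset class.
    def __init__(self, dset_filepath, num_samples):
        self.dset_filepath = dset_filepath
        self.num_samples = num_samples

    def __len__(self):
        return self.num_samples

    def __getitem__(self, idx):
        # The file is reopened on every access, exactly as in the snippet above.
        with h5py.File(self.dset_filepath, "r", libver='latest', swmr=True) as h5file:
            node = h5file[f"/train_group_0/sample_{idx}"]
            target = node.attrs.get('TITLE').decode('utf-8')
            img = torch.from_numpy(np.asarray(node, dtype=np.uint8))  # 512x512x3 uint8
        return img, int(target.strip())


# "Sequential" run: num_workers=0, so a single process reads the file.
loader = DataLoader(H5ImageDataset(dset_filepath, num_samples=10000),
                    batch_size=8, num_workers=0)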

Solution

  • Before accessing the dataset (node), add a test to confirm it exists. While you're adding checks, do the same for the attribute 'TITLE'. If you are going to use hard-coded path names (like 'train_group_0'), you should check that every node in the path exists (for example, does 'train_group_0' exist?). Alternatively, use one of the recursive visitor functions (.visit() or .visititems()) to be sure you only access existing nodes; see the short sketch at the end of this answer.

    Modified h5py code with rudimentary checks looks like this:

    import h5py
    import numpy as np
    from PIL import Image

    sample = 'sample_5'
    with h5py.File(dset_filepath, 'r', libver='latest', swmr=True) as h5file:
        if sample not in h5file['/train_group_0'].keys():
            print(f'Dataset Read Error: {sample} not found')
            return None, None
        else:
            node = h5file[f'/train_group_0/{sample}']  # <- this line breaks
            img = Image.fromarray(np.uint8(node))
            if 'TITLE' not in node.attrs.keys():
                print('Attribute Read Error: TITLE not found')
                return img, None
            else:
                target = node.attrs.get('TITLE').decode('utf-8')
                return img, int(target.strip())
    

    You said you were working with PyTables. Here is code to do the same with the PyTables package:

    import numpy as np
    import tables as tb
    from PIL import Image

    sample = 'sample_5'
    with tb.open_file(dset_filepath, mode='r') as h5file:  # PyTables has no libver/swmr arguments
        if sample not in h5file.get_node('/train_group_0'):
            print(f'Dataset Read Error: {sample} not found')
            return None, None
        else:
            node = h5file.get_node(f'/train_group_0/{sample}')  # <- this line breaks
            img = Image.fromarray(np.uint8(node.read()))  # read the array into memory
            if 'TITLE' not in node._v_attrs:
                print('Attribute Read Error: TITLE not found')
                return img, None
            else:
                target = node._v_attrs['TITLE']
                if isinstance(target, bytes):  # PyTables may return str or bytes here
                    target = target.decode('utf-8')
                return img, int(target.strip())