Tags: python, duplicates, python-dedupe

Dedupe Python - "Records do not line up with data model"


I am stuck setting up Python and the dedupe library from dedupe.io to deduplicate a set of entries in a Postgres database. The error is "Records do not line up with data model", which should be easy to solve, but I just do not see why I get this message.

What I have now (code trimmed to the relevant parts):

# ## Setup
settings_file = 'lead_dedupe_settings'
training_file = 'lead_dedupe_training.json'

start_time = time.time()

...

def training():
    # We'll be using variations on the following select statement to
    # pull in the lead info.
    """ Define Lead Query """
    sql = "select id, phone, mobilephone, postalcode, email from dev_manuel.somedata"

    # ## Training

    if os.path.exists(settings_file):
        print('reading from ', settings_file)
        with open(settings_file, 'rb') as sf:
            deduper = dedupe.StaticDedupe(sf, num_cores=4)
    else:

        # Define the fields dedupe will pay attention to
        #
        # The phone, mobilephone, postalcode, and email fields are often
        # missing, so we'll tell dedupe that, and we'll learn a model
        # that takes that into account
        fields = [
                {'field': 'id', 'type': 'ShortString'},
                {'field': 'phone', 'type': 'String', 'has missing': True},
                {'field': 'mobilephone', 'type': 'String', 'has missing': True},
                {'field': 'postalcode', 'type': 'ShortString', 'has missing': True},
                {'field': 'email', 'type': 'String', 'has missing': True}
                ]

        # Create a new deduper object and pass our data model to it.
        deduper = dedupe.Dedupe(fields, num_cores=4)


        # connect to db and execute
        conn = None
        try:
            # read the connection parameters
            params = config()
            # connect to the PostgreSQL server
            conn = psycopg2.connect(**params)
            print('Connecting to the PostgreSQL database...')

            cur = conn.cursor()
            # execute sql
            cur.execute(sql)

            temp_d = dict((i, row) for i, row in enumerate(cur))

            print(temp_d)

            deduper.sample(temp_d, 10000)

            print('Done stage 1')

            del temp_d

            # close communication with the PostgreSQL database server
            cur.close()

        except (Exception, psycopg2.DatabaseError) as error:
            print(error)
        finally:
            if conn is not None:
                conn.close()
                print('Closed Connection')

        # If we have training data saved from a previous run of dedupe,
        # look for it and load it in.
        #
        # __Note:__ if you want to train from
        # scratch, delete the training_file
        if os.path.exists(training_file):
            print('reading labeled examples from ', training_file)
            with open(training_file) as tf:
                deduper.readTraining(tf)

        # ## Active learning

        print('starting active labeling...')
        # Starts the training loop. Dedupe will find the next pair of records
        # it is least certain about and ask you to label them as duplicates
        # or not.

        # debug
        print(deduper)
        # vars(deduper)

        # use 'y', 'n' and 'u' keys to flag duplicates
        # press 'f' when you are finished
        dedupe.convenience.consoleLabel(deduper)
        # When finished, save our labeled, training pairs to disk
        with open(training_file, 'w') as tf:
            deduper.writeTraining(tf)

        # Notice our argument here
        #
        # `recall` is the proportion of true dupe pairs that the learned
        # rules must cover. You may want to reduce this if you are making
        # too many blocks and too many comparisons.
        deduper.train(recall=0.90)

        with open(settings_file, 'wb') as sf:
            deduper.writeSettings(sf)

        # We can now remove some of the memory-hogging objects we used
        # for training
        deduper.cleanupTraining()

The error message is "Records do not line up with data model. The field 'id' is in data_model but not in a record". As you can see, I define five fields to be "learned". My query returns exactly these five columns with the data in them. The output of

print(temp_d)

is

{0: ('00Q1o00000OjmQmEAJ', '+4955555555', None, '01561', None), 1: ('00Q1o00000JhgSUEAZ', None, '+4915555555', '27729', '[email protected]')}

Which looks to me like valid input for the dedupe library.

What I tried

  • I checked whether it had already written a training file that might somehow get read and used; this is not the case (the code would even say so)
  • I debugged the "deduper" object that the field definitions and such go into, and I can see the field definitions there
  • I looked at other examples, such as the csv and mysql ones, which do pretty much the same thing I do.

Please point me in the direction of where I am going wrong.


Solution

  • It looks like the issue is that your temp_d is a dictionary of tuples, whereas dedupe expects a dictionary of dictionaries. I just started working with this package and found an example that works for my purposes; it provides this function for setting up the dictionary, albeit from a csv instead of the database pull you have:

    def readData(filename):
        data_d = {}
        with open(filename) as f:
            reader = csv.DictReader(f)
            for row in reader:
                # preProcess comes from the same example; it strips
                # whitespace/punctuation and turns '' into None
                clean_row = [(k, preProcess(v)) for (k, v) in row.items()]
                row_id = int(row['Id'])
                data_d[row_id] = dict(clean_row)
        return data_d
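    Applied to the database pull in the question, the same shape can be produced by zipping the column names from the select with each row. This is only a sketch: the field names and sample rows below are taken from the question, and the rows stand in for what the psycopg2 cursor would yield.

    ```python
    # Column names in the same order as the SELECT in the question
    field_names = ['id', 'phone', 'mobilephone', 'postalcode', 'email']

    # Two sample rows shaped like psycopg2 cursor output (tuples)
    rows = [
        ('00Q1o00000OjmQmEAJ', '+4955555555', None, '01561', None),
        ('00Q1o00000JhgSUEAZ', None, '+4915555555', '27729', None),
    ]

    # Build a dict of dicts, which is the record format dedupe expects
    temp_d = {i: dict(zip(field_names, row)) for i, row in enumerate(rows)}
    ```

    With psycopg2 you could also pass `cursor_factory=psycopg2.extras.RealDictCursor` to `conn.cursor()` and get dict rows from the cursor directly, which removes the zipping step.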