Tags: python, sparql, sparqlwrapper

Opening the connection and getting the response take too much time


I wrote a Python script that queries this endpoint with SPARQL to retrieve some information about genes. This is how the script works (a rough Python sketch follows the outline):

Get genes
Foreach gene:
    Get proteins
        Foreach protein:
            Get the protein function
            .....
    Get Taxons
    ....
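
In Python terms, the structure is roughly the following; the helper names here are only illustrative, but each one runs one SPARQL query against the endpoint:

    genes = get_genes()                         # 1 query
    for gene in genes:
        proteins = get_proteins(gene)           # 1 query per gene
        for protein in proteins:
            func = get_prot_func(protein)       # 1 query per protein
            # ... more per-protein queries
        taxons = get_taxons(gene)               # 1 query per gene
        # ... more per-gene queries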

However, the script takes too long to execute. I profiled it with pyinstrument and got the following results:

  39.481 <module>  extracting_genes.py:10
  `- 39.282 _main  extracting_genes.py:750
     |- 21.629 create_prot_func_info_dico  extracting_genes.py:613
     |  `- 21.609 get_prot_func_info  extracting_genes.py:216
     |     `- 21.596 query  build/bdist.linux-x86_64/egg/SPARQLWrapper/Wrapper.py:780
     |        `- 21.596 _query  build/bdist.linux-x86_64/egg/SPARQLWrapper/Wrapper.py:750
     |           `- 21.588 urlopen  urllib2.py:131
     |              `- 21.588 open  urllib2.py:411
     |                 `- 21.588 _open  urllib2.py:439
     |                    `- 21.588 _call_chain  urllib2.py:399
     |                       `- 21.588 http_open  urllib2.py:1229
     |                          `- 21.588 do_open  urllib2.py:1154
     |                             |- 11.207 request  httplib.py:1040
     |                             |  `- 11.207 _send_request  httplib.py:1067
     |                             |     `- 11.205 endheaders  httplib.py:1025
     |                             |        `- 11.205 _send_output  httplib.py:867
     |                             |           `- 11.205 send  httplib.py:840
     |                             |              `- 11.205 connect  httplib.py:818
     |                             |                 `- 11.205 create_connection  socket.py:541
     |                             |                    `- 9.552 meth  socket.py:227
     |                             `- 10.379 getresponse  httplib.py:1084
     |                                `- 10.379 begin  httplib.py:431
     |                                   `- 10.379 _read_status  httplib.py:392
     |                                      `- 10.379 readline  socket.py:410
     |- 6.045 create_gene_info_dico  extracting_genes.py:323
     |  `- 6.040 ...
     |- 3.957 create_prots_info_dico  extracting_genes.py:381
     |  `- 3.928 ...
     |- 3.414 create_taxons_info_dico  extracting_genes.py:668
     |  `- 3.414 ...
     |- 3.005 create_prot_parti_info_dico  extracting_genes.py:558
     |  `- 2.999 ...
     `- 0.894 create_prot_loc_info_dico  extracting_genes.py:504
        `- 0.893 ...

Basically, I'm executing multiple queries many times over (more than 60,000 in total), and as I understand it, the connection is opened and the response is read anew for every single query, which is what slows down the execution.
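
To make that concrete, each helper currently builds a fresh SPARQLWrapper and sends its query over a brand-new urllib2 connection, roughly like this (the signature and the query are placeholders, not my real ones):

    from SPARQLWrapper import SPARQLWrapper, JSON

    def get_prot_func_info(endpoint, protein_uri):
        # A new SPARQLWrapper (and, underneath, a new HTTP connection)
        # is created for every single call.
        sparql = SPARQLWrapper(endpoint)
        sparql.setReturnFormat(JSON)
        sparql.setQuery("""
            SELECT ?function WHERE { <%s> ?p ?function }  # placeholder query
        """ % protein_uri)
        return sparql.query().convert()

So with 60,000+ calls, the TCP connection setup and the wait for the first response byte are paid 60,000+ times, which is where create_connection and _read_status spend their time in the profile above.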

Does anyone have an idea how to tackle this issue?


Solution

  • As @Stanislav mentioned, urllib2, which is used by SPARQLWrapper, doesn't support persistent connections, but I found a way to keep the connection alive using the setUseKeepAlive() function defined in SPARQLWrapper/Wrapper.py.

    I had to install the keepalive package first:

    pip install keepalive
    

    It reduced the execution time by almost 40%.

    def get_all_genes_uri(endpoint, the_offset):
        sparql = SPARQLWrapper(endpoint)
        sparql.setUseKeepAlive() # <--- Added this line
        sparql.setQuery("""
            #My_query
        """)
        ....
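
    For reference, here is the same helper as a self-contained sketch; the endpoint and the query are placeholders (my real query is omitted above):

    from SPARQLWrapper import SPARQLWrapper, JSON

    def get_all_genes_uri(endpoint, the_offset):
        sparql = SPARQLWrapper(endpoint)
        sparql.setUseKeepAlive()          # reuse the underlying HTTP connection
        sparql.setReturnFormat(JSON)
        sparql.setQuery("""
            SELECT ?gene WHERE { ?gene a ?type }  # placeholder query
            LIMIT 1000 OFFSET %d
        """ % the_offset)
        return sparql.query().convert()

    As far as I can tell, setUseKeepAlive() installs the keepalive handler for urllib2, so subsequent requests to the same host can reuse the already-open socket; that is the _reuse_connection call that shows up in the new profile below.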
    

    And got the following results:

      24.673 <module>  extracting_genes.py:10
      `- 24.473 _main  extracting_genes.py:750
         |- 12.314 create_prot_func_info_dico  extracting_genes.py:613
         |  `- 12.068 get_prot_func_info  extracting_genes.py:216
         |     |- 11.428 query  build/bdist.linux-x86_64/egg/SPARQLWrapper/Wrapper.py:780
         |     |  `- 11.426 _query  build/bdist.linux-x86_64/egg/SPARQLWrapper/Wrapper.py:750
         |     |     `- 11.353 urlopen  urllib2.py:131
         |     |        `- 11.353 open  urllib2.py:411
         |     |           `- 11.339 _open  urllib2.py:439
         |     |              `- 11.338 _call_chain  urllib2.py:399
         |     |                 `- 11.338 http_open  keepalive/keepalive.py:343
         |     |                    `- 11.338 do_open  keepalive/keepalive.py:213
         |     |                       `- 11.329 _reuse_connection  keepalive/keepalive.py:264
         |     |                          `- 11.280 getresponse  httplib.py:1084
         |     |                             `- 11.262 begin  httplib.py:431
         |     |                                `- 11.207 _read_status  httplib.py:392
         |     |                                   `- 11.204 readline  socket.py:410
         |     `- 0.304 __init__  build/bdist.linux-x86_64/egg/SPARQLWrapper/Wrapper.py:261
         |        `- 0.292 resetQuery  build/bdist.linux-x86_64/egg/SPARQLWrapper/Wrapper.py:301
         |           `- 0.288 setQuery  build/bdist.linux-x86_64/egg/SPARQLWrapper/Wrapper.py:516
         |- 4.894 create_gene_info_dico  extracting_genes.py:323
         |  `- 4.880 ...
         |- 2.631 create_prots_info_dico  extracting_genes.py:381
         |  `- 2.595 ...
         |- 1.933 create_taxons_info_dico  extracting_genes.py:668
         |  `- 1.923 ...
         |- 1.804 create_prot_parti_info_dico  extracting_genes.py:558
         |  `- 1.780 ...
         `- 0.514 create_prot_loc_info_dico  extracting_genes.py:504
            `- 0.510 ...
    

    Honestly, the execution time is still not as fast as I would like; I'll see if there is something else I can do.