Tags: python, grpc, grpc-python

Python gRPC "13 INTERNAL" error when trying to yield a response


When I print the response, everything seems to be correct, and the type is also correct.

Assertion: True
Response type: <class 'scrape_pb2.ScrapeResponse'>

But in Postman I get "13 INTERNAL" with no additional information:

[Error screenshot]

I can't figure out what the issue is, and I can't find a way to log or print the error on the server side.
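One way to make sure server-side failures are visible is to catch and log them in the handler yourself and return an explicit status. A sketch of that (the try/except wrapper and the logger are illustrative additions around the handler shown further below, not my actual code; ScrapeServiceServicer is the base class generated into scrape_pb2_grpc):

import logging
import traceback

import grpc
from scrape_pb2_grpc import ScrapeServiceServicer

logger = logging.getLogger("scrape_server")

class ScrapeService(ScrapeServiceServicer):
    def ScrapeSearch(self, request, context):
        try:
            for response in get_all_search_products(
                    search_url=request.url, class_keyword=request.keyword):
                yield response
        except Exception:
            # Log the full traceback on the server, then return a readable
            # status instead of an anonymous INTERNAL error.
            logger.error(traceback.format_exc())
            context.abort(grpc.StatusCode.INTERNAL,
                          "ScrapeSearch failed; see server log")

Setting the GRPC_VERBOSITY=debug environment variable also makes the gRPC runtime print more diagnostics.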

Relevant proto parts:

syntax = "proto3";

service ScrapeService {
  rpc ScrapeSearch(ScrapeRequest) returns (stream ScrapeResponse) {}
}

message ScrapeRequest {
  string url = 1;
  string keyword = 2;
}

message ScrapeResponse {
  oneof result {
    ScrapeSearchProgress search_progress = 1;
    ScrapeProductsProgress products_progress = 2;
    FoundProducts found_products = 3;
  }
}


message ScrapeSearchProgress {
  int32 page = 1;
  int32 total_products = 2;
  repeated string product_links = 3;
}
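For reference, assuming the file is named scrape.proto (an assumption based on the scrape_pb2 imports), the Python stubs are generated with grpcio-tools:

python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. scrape.proto

This produces scrape_pb2.py with the message classes and scrape_pb2_grpc.py with ScrapeServiceStub and ScrapeServiceServicer.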

scraper.py

from selenium import webdriver

from scrape_pb2 import ScrapeResponse, ScrapeSearchProgress

# `options`, `service`, `scrape_search`, and `go_to_next_page` are defined
# elsewhere in scraper.py.

def get_all_search_products(search_url: str, class_keyword: str):
    search_driver = webdriver.Firefox(options=options, service=service)
    search_driver.maximize_window()
    search_driver.get(search_url)
    # scrape the first page
    product_links = scrape_search(driver=search_driver, class_keyword=class_keyword)
    page = 1
    search_progress = ScrapeSearchProgress(
        page=page, total_products=len(product_links), product_links=product_links)

    # scrape next pages
    while go_to_next_page(search_driver):
        page += 1
        print(f'Scraping page=>{page}')
        new_links = scrape_search(driver=search_driver, class_keyword=class_keyword)
        product_links.extend(new_links)
        print(f'Number of products scraped=>{len(product_links)}')

        # Extend with only this page's links; extending with the full
        # accumulated list would duplicate earlier pages in the message.
        search_progress.page = page
        search_progress.total_products = len(product_links)
        search_progress.product_links.extend(new_links)

        # TODO: remove this line
        if page == 6:
            break

        search_progress_response = ScrapeResponse(search_progress=search_progress)

        yield search_progress_response
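A note on the oneof: passing search_progress= to the ScrapeResponse constructor selects that branch of the result oneof, which is what the HasField check in the server below relies on. A standalone illustration using only the generated message classes (the example values are made up):

from scrape_pb2 import ScrapeResponse, ScrapeSearchProgress

progress = ScrapeSearchProgress(page=1, total_products=2,
                                product_links=["https://a", "https://b"])
response = ScrapeResponse(search_progress=progress)

print(response.WhichOneof("result"))         # -> "search_progress"
print(response.HasField("search_progress"))  # -> True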

Server:

from scrape_pb2_grpc import ScrapeServiceServicer

from scraper import get_all_search_products

class ScrapeService(ScrapeServiceServicer):
    def ScrapeSearch(self, request, context):
        print(f"Request received: {request}")
        scrape_responses = get_all_search_products(search_url=request.url, class_keyword=request.keyword)

        for response in scrape_responses:
            print(f"Assertion: {response.HasField('search_progress')}")
            print(f"Response type: {type(response)}")
            yield response
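For completeness, a standard bootstrap for this servicer looks roughly like the following sketch (add_ScrapeServiceServicer_to_server is the registration helper generated into scrape_pb2_grpc; the port is an arbitrary choice):

from concurrent import futures

import grpc
from scrape_pb2_grpc import add_ScrapeServiceServicer_to_server

def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    add_ScrapeServiceServicer_to_server(ScrapeService(), server)
    server.add_insecure_port("[::]:50051")
    server.start()
    server.wait_for_termination()

if __name__ == "__main__":
    serve()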

Solution

  • Turns out it was just an issue with Postman. I set up a Python client and it worked.
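A minimal Python client for this streaming RPC looks roughly like this (the server address and the request values are placeholders):

import grpc

from scrape_pb2 import ScrapeRequest
from scrape_pb2_grpc import ScrapeServiceStub

with grpc.insecure_channel("localhost:50051") as channel:
    stub = ScrapeServiceStub(channel)
    request = ScrapeRequest(url="https://example.com/search", keyword="product-card")
    # ScrapeSearch is server-streaming, so the call returns an iterator.
    for response in stub.ScrapeSearch(request):
        if response.HasField("search_progress"):
            progress = response.search_progress
            print(f"page={progress.page}, total={progress.total_products}")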