Tags: python, text, google-api

How to use Google Perspective API to check toxicity level in comments?


I am working on a project that analyzes the toxicity of text using the Google Perspective API. While I have a basic understanding of APIs, I am unsure about the specific steps needed to integrate the Perspective API into my code.

How do I use the Google Perspective API effectively? I have an API key now, but I do not know how to use it for toxicity detection. I would greatly appreciate any code examples or step-by-step instructions to get started.


Solution

  • Assuming you have your API key set up on Google Cloud Platform, just follow the docs here:

    from googleapiclient import discovery
    import json
    
    API_KEY = 'copy-your-api-key-here'
    
    # Build a client for the Comment Analyzer (Perspective) API.
    # static_discovery=False makes the client fetch the discovery
    # document from the URL above at runtime.
    client = discovery.build(
      "commentanalyzer",
      "v1alpha1",
      developerKey=API_KEY,
      discoveryServiceUrl="https://commentanalyzer.googleapis.com/$discovery/rest?version=v1alpha1",
      static_discovery=False,
    )
    
    # The request body: the text to score and the attributes you want back.
    analyze_request = {
      'comment': { 'text': 'friendly greetings from python' },
      'requestedAttributes': {'TOXICITY': {}}
    }
    
    response = client.comments().analyze(body=analyze_request).execute()
    print(json.dumps(response, indent=2))
    

    The `analyze_request` dictionary is what you want to focus on. Pass the comment you want scored under the 'comment' field, in the format the API specifies. From there, you can request whichever attributes you want, e.g. TOXICITY. The toxicity threshold you want to allow is up to you to implement, and I would suggest testing it yourself.
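    As a sketch of that last step, the response nests the toxicity probability under `attributeScores -> TOXICITY -> summaryScore -> value`, per the documented response format. The helper below pulls that score out and compares it to a threshold; the function name, the 0.7 cutoff, and the mock response are all hypothetical choices for illustration, not part of the API.

```python
def is_toxic(response, threshold=0.7):
    """Return (score, flag) for a Perspective API analyze response.

    The score is a probability in [0.0, 1.0]; the 0.7 threshold here
    is only an example - tune it against your own data.
    """
    score = response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    return score, score >= threshold

# Mock response trimmed to the fields this helper reads:
mock_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.03, "type": "PROBABILITY"}}
    }
}

score, toxic = is_toxic(mock_response)
print(score, toxic)  # a low score like 0.03 is not flagged
```

    In a real integration you would pass the `response` returned by `client.comments().analyze(...).execute()` straight into the helper.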

    For future reference, please review the documentation before asking questions like this, so you can bring up specifics that you do not understand.