Understanding Image Moderation API

Azure Content Safety provides the /contentsafety/image:analyze API for image analysis and moderation purposes. It’s similar to Azure’s text moderation API in a number of ways.

It takes three input parameters in the request body:

  • image (required): This is the main parameter of the API. You provide the image data that you want to analyze, either as Base64-encoded content or as the blobUrl of the image.
  • categories (optional): Similar to the text analysis API, you can use this parameter to specify the list of harm categories you want the image analyzed for. By default, the API analyzes the image against all the default categories provided by the Azure Content Safety team.
  • outputType (optional): This refers to the number of severity levels the categories will have in the analysis results. This API only supports FourSeverityLevels. That is, the severity value for any category will be 0, 2, 4, or 6.

A sample request body for image analysis can look something like this:

{
  "image": {
    "content": "Y29udGVudDE="
  },
  "categories": ["Hate", "SelfHarm", "Violence", "Sexual"],
  "outputType": "FourSeverityLevels"
}

Upon a successful API call, the response body can look something like this:

{
  "categoriesAnalysis": [
    {
      "category": "Hate",
      "severity": 0
    },
    {
      "category": "SelfHarm",
      "severity": 0
    },
    {
      "category": "Sexual",
      "severity": 0
    },
    {
      "category": "Violence",
      "severity": 2
    }
  ]
}

The returned response will contain categoriesAnalysis, which is a list of ImageCategoriesAnalysis JSON objects that include the category and its severity level, as determined by the moderation API.
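
If you want to see the endpoint in action before moving on to the SDK, here is a minimal sketch of a raw REST call using Python's requests library. Treat the endpoint, key, image path, and api-version value as placeholder assumptions for your own resource:

import base64
import requests

# Placeholder values -- replace with your own resource details.
endpoint = "https://<my-custom-subdomain>.cognitiveservices.azure.com"
api_key = "<api_key>"

# Base64-encode the image file, as expected by the "content" field.
with open("<path-to-your-image>", "rb") as f:
    encoded_image = base64.b64encode(f.read()).decode("utf-8")

response = requests.post(
    f"{endpoint}/contentsafety/image:analyze",
    params={"api-version": "2023-10-01"},  # assumed API version
    headers={
        "Ocp-Apim-Subscription-Key": api_key,
        "Content-Type": "application/json",
    },
    json={
        "image": {"content": encoded_image},
        "categories": ["Hate", "SelfHarm", "Violence", "Sexual"],
        "outputType": "FourSeverityLevels",
    },
)
response.raise_for_status()

# Each entry in categoriesAnalysis carries a category name and its severity.
for item in response.json()["categoriesAnalysis"]:
    print(f'{item["category"]}: severity {item["severity"]}')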

You can learn more about the API, error responses, and other definitions at Image Operations - Analyze Image.

Since this module will use the Python SDK provided by the Azure team instead of making raw API calls, let’s quickly cover everything you need to know about the SDK for image moderation.
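
The SDK is published on PyPI as the azure-ai-contentsafety package, so if it isn't already in your environment, you can typically install it with pip install azure-ai-contentsafety.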

Understanding Azure AI Content Safety Python Library for Image Moderation

The first step in creating an image moderation system using Azure's Python SDK is to create an instance of ContentSafetyClient, similar to what you did for text moderation.

from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData

# Create an Azure AI Content Safety client
endpoint = "https://<my-custom-subdomain>.cognitiveservices.azure.com/"
credential = AzureKeyCredential("<api_key>")
client = ContentSafetyClient(endpoint, credential)

# Build the request from a local image file
image_path = "<path-to-your-image>"
with open(image_path, "rb") as file:
    request = AnalyzeImageOptions(image=ImageData(content=file.read()))

# Analyze the image
response = client.analyze_image(request)

Understanding AnalyzeImageOptions

Similar to AnalyzeTextOptions, the AnalyzeImageOptions object is used to construct the request for image analysis. It takes the same three properties as the REST API request body: image, categories, and output_type. For example:

from azure.ai.contentsafety.models import (
    AnalyzeImageOptions, AnalyzeImageOutputType, ImageCategory, ImageData
)

analyze_image_request = AnalyzeImageOptions(
    image=ImageData(blob_url="<your-blob-url>"),
    categories=[ImageCategory.HATE, ImageCategory.VIOLENCE],
    output_type=AnalyzeImageOutputType.FOUR_SEVERITY_LEVELS
)
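
You then pass this options object to the client's analyze_image method, just as with the file-based request shown earlier:

response = client.analyze_image(analyze_image_request)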

Processing Analysis Response

Once the image analysis is finished, you can use the response received from the method client.analyze_image to decide whether to approve the image or block it.

from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import ImageCategory

# 1. Analyze image
try:
    response = client.analyze_image(request)
except HttpResponseError as e:
    print("Analyze image failed.")
    if e.error:
        print(f"Error code: {e.error.code}")
        print(f"Error message: {e.error.message}")
        raise
    print(e)
    raise

# 2. Extract the result for each category
hate_result = next((item for item in response.categories_analysis
    if item.category == ImageCategory.HATE), None)
self_harm_result = next((item for item in response.categories_analysis
    if item.category == ImageCategory.SELF_HARM), None)
sexual_result = next((item for item in response.categories_analysis
    if item.category == ImageCategory.SEXUAL), None)
violence_result = next((item for item in response.categories_analysis
    if item.category == ImageCategory.VIOLENCE), None)

# 3. Print the severity of each harmful category found in the image content
if hate_result:
    print(f"Hate severity: {hate_result.severity}")
if self_harm_result:
    print(f"SelfHarm severity: {self_harm_result.severity}")
if sexual_result:
    print(f"Sexual severity: {sexual_result.severity}")
if violence_result:
    print(f"Violence severity: {violence_result.severity}")