Understanding Azure Content Safety Text Moderation API

In this segment, you'll explore the Azure AI Content Safety Text Moderation API in detail and learn how to use it from Python with the client SDK. You'll also learn about severity levels in moderation and how to add custom blocklist phrases so they're taken into account when the API moderates text.

Understanding the Azure AI Content Safety Text Moderation API

For text moderation, Azure AI Content Safety provides two types of APIs. A typical analyze-text request body, followed by a sample response, looks like this:

{
  "text": "A sample example",
  "blocklistNames": ["block_terms"],
  "categories": ["Hate", "SelfHarm", "Violence"],
  "haltOnBlocklistHit": false,
  "outputType": "EightSeverityLevels"
}

{
  "blocklistsMatch": [],
  "categoriesAnalysis": [
    {
      "category": "Hate",
      "severity": 0
    },
    {
      "category": "SelfHarm",
      "severity": 3
    },
    {
      "category": "Sexual",
      "severity": 0
    },
    {
      "category": "Violence",
      "severity": 2
    }
  ]
}
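
Before moving to the SDK, it can help to see how the request body above maps onto a raw HTTP call. Here's a minimal sketch using the requests library; the api-version value (2023-10-01) and the endpoint and key placeholders are assumptions you should replace with your own resource details:

import requests

# Placeholder resource details -- replace with your own.
endpoint = "https://<my-custom-subdomain>.cognitiveservices.azure.com"
api_key = "<api_key>"

# The text:analyze operation; api-version 2023-10-01 is assumed here.
url = f"{endpoint}/contentsafety/text:analyze?api-version=2023-10-01"
headers = {
    "Ocp-Apim-Subscription-Key": api_key,
    "Content-Type": "application/json",
}
body = {
    "text": "A sample example",
    "blocklistNames": ["block_terms"],
    "categories": ["Hate", "SelfHarm", "Violence"],
    "haltOnBlocklistHit": False,
    "outputType": "EightSeverityLevels",
}

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()
print(response.json())  # same shape as the sample response above

The client SDK shown next wraps this same REST operation, so you rarely need to build the call yourself.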

Understanding the Azure AI Content Safety Client Python Library

The first step in using a content safety client is to create an instance of it. You can use this client to create requests that analyze both text and images.

from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions

# Create an Azure AI Content Safety client
endpoint = "https://<my-custom-subdomain>.cognitiveservices.azure.com/"
credential = AzureKeyCredential("<api_key>")
client = ContentSafetyClient(endpoint, credential)

# Construct request
request = AnalyzeTextOptions(text="Your input text")

# Analyze text
response = client.analyze_text(request)
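
Hard-coding the endpoint and key works for experimenting, but in a real app you'd typically read them from the environment instead. Here's a small sketch; the CONTENT_SAFETY_ENDPOINT and CONTENT_SAFETY_KEY variable names are just an example, not anything the SDK requires:

import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient

# Example environment variable names -- use whatever fits your setup.
endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
key = os.environ["CONTENT_SAFETY_KEY"]

client = ContentSafetyClient(endpoint, AzureKeyCredential(key))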

Understanding AnalyzeTextOptions

The AnalyzeTextOptions object is used to construct the request for text analysis, and it lets you customize the request to suit your specific needs. It has the following properties:

- text: the text to analyze (required).
- categories: an optional list of harm categories (such as TextCategory.HATE or TextCategory.VIOLENCE) to analyze; if omitted, all categories are analyzed.
- blocklist_names: optional names of custom blocklists to match against the text.
- halt_on_blocklist_hit: when True, skips further category analysis if a blocklist term is matched.
- output_type: the severity granularity, either FourSeverityLevels (0, 2, 4, 6) or EightSeverityLevels (0 through 7).

from azure.ai.contentsafety.models import (
    AnalyzeTextOptions,
    AnalyzeTextOutputType,
    TextCategory,
)

# Create AnalyzeTextOptions
analyze_text_request = AnalyzeTextOptions(
    text="This is the text to analyze.",
    categories=[TextCategory.HATE, TextCategory.VIOLENCE],
    blocklist_names=["block_list"],
    halt_on_blocklist_hit=True,
    output_type=AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS
)
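
You'd then pass this options object to the same analyze_text method you saw earlier:

# Send the customized request with the content safety client
response = client.analyze_text(analyze_text_request)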

Processing the Analysis Response

Once the analysis of the text content is finished, you can use the response returned by client.analyze_text to decide whether to approve or block the content.

from azure.core.exceptions import HttpResponseError

# 1. Analyze text
try:
    response = client.analyze_text(request)
except HttpResponseError as e:
    print("Analyze text failed.")
    if e.error:
        print(f"Error code: {e.error.code}")
        print(f"Error message: {e.error.message}")
        raise
    print(e)
    raise

# 2. Extract the result for each category
hate_result = next(
    (item for item in response.categories_analysis
     if item.category == TextCategory.HATE), None)
self_harm_result = next(
    (item for item in response.categories_analysis
     if item.category == TextCategory.SELF_HARM), None)
sexual_result = next(
    (item for item in response.categories_analysis
     if item.category == TextCategory.SEXUAL), None)
violence_result = next(
    (item for item in response.categories_analysis
     if item.category == TextCategory.VIOLENCE), None)

# 3. Print the severity of each harmful category found in the text content
if hate_result:
    print(f"Hate severity: {hate_result.severity}")
if self_harm_result:
    print(f"SelfHarm severity: {self_harm_result.severity}")
if sexual_result:
    print(f"Sexual severity: {sexual_result.severity}")
if violence_result:
    print(f"Violence severity: {violence_result.severity}")

Adding Custom Blocklist Phrases

If required, you can further customize the text moderation results to detect blocklist terms specific to your platform's needs. First, you need to add the blocklist terms to your moderation resource. Once they're added, you can use the blocklist during moderation simply by passing its name in the blocklist_names argument of AnalyzeTextOptions; a short sketch at the end of this section shows this in action.

from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import BlocklistClient
from azure.ai.contentsafety.models import TextBlocklist

# Create an Azure AI blocklist client
endpoint = "https://<my-custom-subdomain>.cognitiveservices.azure.com/"
credential = AzureKeyCredential("<api_key>")
client = BlocklistClient(endpoint, credential)

# 1. Define the blocklist name and description
blocklist_name = "TestBlocklist"
blocklist_description = "Test blocklist management."

# 2. Call create_or_update_text_blocklist to create the blocklist
blocklist = client.create_or_update_text_blocklist(
    blocklist_name=blocklist_name,
    options=TextBlocklist(blocklist_name=blocklist_name,
      description=blocklist_description),
)

# 3. If the blocklist was created successfully, notify the user
if blocklist:
    print("\nBlocklist created or updated: ")
    print(f"Name: {blocklist.blocklist_name}, Description: {blocklist.description}")
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import (
    TextBlocklistItem,
    AddOrUpdateTextBlocklistItemsOptions,
)

# 1. Define the blocklist name and the block items
#    (terms that need to be screened in text)
blocklist_name = "TestBlocklist"
block_item_text_1 = "k*ll"
block_item_text_2 = "h*te"

# 2. Create the list of block items that can be passed to the client
block_items = [TextBlocklistItem(text=block_item_text_1),
  TextBlocklistItem(text=block_item_text_2)]

# 3. Add the block items to the blocklist using add_or_update_blocklist_items
#    and AddOrUpdateTextBlocklistItemsOptions
try:
    result = client.add_or_update_blocklist_items(
        blocklist_name=blocklist_name,
        options=AddOrUpdateTextBlocklistItemsOptions(blocklist_items=block_items),
    )
    # 4. Print the response returned by the server on successful addition
    for block_item in result.blocklist_items:
        print(
            f"BlockItemId: {block_item.blocklist_item_id}, "
            f"Text: {block_item.text}, "
            f"Description: {block_item.description}"
        )
# 5. Catch the exception and notify the user if any error happens while
#    adding the block terms
except HttpResponseError as e:
    print("\nAdd block items failed: ")
    if e.error:
        print(f"Error code: {e.error.code}")
        print(f"Error message: {e.error.message}")
        raise
    print(e)
    raise
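
To close the loop, here's a sketch of how the blocklist you just created could be used during analysis: pass its name in blocklist_names and inspect blocklists_match in the result. The sample input text is illustrative, and note that newly added blocklist items can take a short while to become effective:

from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions

# Text analysis is done with a ContentSafetyClient, not the BlocklistClient.
content_safety_client = ContentSafetyClient(
    "https://<my-custom-subdomain>.cognitiveservices.azure.com/",
    AzureKeyCredential("<api_key>"),
)

# Pass the blocklist name so its terms are matched during analysis.
request = AnalyzeTextOptions(
    text="I will k*ll you!",  # illustrative input containing a blocked term
    blocklist_names=["TestBlocklist"],
    halt_on_blocklist_hit=False,
)
response = content_safety_client.analyze_text(request)

# Any matched blocklist terms show up in blocklists_match.
if response.blocklists_match:
    for match in response.blocklists_match:
        print(
            f"Blocklist: {match.blocklist_name}, "
            f"Item ID: {match.blocklist_item_id}, "
            f"Matched text: {match.blocklist_item_text}"
        )
else:
    print("No blocklist terms matched.")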