Azure Content Safety

Nov 15 2024 · Python 3.12, Microsoft Azure, JupyterLab

Lesson 04: Advanced Content Moderation Strategies

Implement Multi-Modal Content Moderation


In this segment, you’ll implement multi-modal content moderation for your Fooder app and see it in action.

Explore the Starter Project

The starter project contains two files: app.py, the Streamlit interface of the Fooder app, and business_logic.py, where you’ll write the moderation logic in this lesson.
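
For orientation, here’s one plausible layout of the project once you finish this lesson. Treat the folder name and exact placement as assumptions; your starter download may organize things slightly differently:

starter/                 # working directory for this lesson
├── app.py               # Streamlit UI; launched with `streamlit run app.py`
├── business_logic.py    # check_content_safety and the moderation client
├── text_analysis.py     # you’ll create this file below
├── image_analysis.py    # you’ll create this file below
└── .env                 # CONTENT_SAFETY_KEY and CONTENT_SAFETY_ENDPOINT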

Create Azure AI Content Safety Client

It’s time to build the app. In starter/business_logic.py, above the check_content_safety function, add the following code:

# 1
import os
from dotenv import load_dotenv
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient

# 2. Load your Azure Safety API key and endpoint
load_dotenv()

key = os.environ["CONTENT_SAFETY_KEY"]
endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]

# 3. Create a Content Safety client
moderator_client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
If you haven’t set up the project yet, install the required packages and create a .env file in the project root that holds your Content Safety key and endpoint:

pip install azure-ai-contentsafety python-dotenv streamlit

CONTENT_SAFETY_KEY=<your-content-safety-key>
CONTENT_SAFETY_ENDPOINT=<your-endpoint>

Add Text and Image Analysis Code

With the moderation client in place, the next step is to write the code that analyzes text and image content. Open the starter/business_logic.py file again and replace # TODO: Check for the content safety with the following code:

# 1. Check for the content safety
text_analysis_result = analyze_text(client=moderator_client, text=text)
image_analysis_result = analyze_image(client=moderator_client, image_data=image_data)

# 2
## TODO: Logic to evaluate the content
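
Both helper functions, which you’ll write next, return a dictionary of violations, so the evaluation logic that replaces the second TODO only needs to check whether those dictionaries are empty. As a purely hypothetical example of the shapes involved:

# Hypothetical results for a post with an aggressive caption and a harmless photo
text_analysis_result = {"hate speech": "yes", "violent references": "yes"}
image_analysis_result = {}  # an empty dict means the image passed every category check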

Add analyze_text Function

To keep the code clean and easy to understand, you’ll put the text and image analysis functions in their own files. Create a text_analysis.py file inside the root folder and add the following code:

# 1. Import packages
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import (
    AnalyzeTextOptions,
    TextCategory,
    AnalyzeTextOutputType,
)

# 2. Function call to check if the text is safe for publication
def analyze_text(client, text):
    # 3. Construct a request
    request = AnalyzeTextOptions(
        text=text,
        output_type=AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS
    )

    # 4. Analyze text
    try:
        response = client.analyze_text(request)
    except HttpResponseError as e:
        print("Analyze text failed.")
        if e.error:
            print(f"Error code: {e.error.code}")
            print(f"Error message: {e.error.message}")
            raise
        print(e)
        raise

    # 5. Extract results
    categories = {
        TextCategory.HATE: None,
        TextCategory.SELF_HARM: None,
        TextCategory.SEXUAL: None,
        TextCategory.VIOLENCE: None
    }

    for item in response.categories_analysis:
        if item.category in categories:
            categories[item.category] = item

    hate_result = categories[TextCategory.HATE]
    self_harm_result = categories[TextCategory.SELF_HARM]
    sexual_result = categories[TextCategory.SEXUAL]
    violence_result = categories[TextCategory.VIOLENCE]

    # 6. Check for inappropriate content
    violations = {}
    if hate_result and hate_result.severity > 2:
        violations["hate speech"] = "yes"
    if self_harm_result and self_harm_result.severity > 4:
        violations["self-harm"] = "yes"
    if sexual_result and sexual_result.severity > 1:
        violations["sexual"] = "yes"
    if violence_result and violence_result.severity > 2:
        violations["violent references"] = "yes"

    return violations
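
If you’d like to sanity-check analyze_text before wiring it into the app, a small throwaway script like the one below does the job. It’s a minimal sketch: the file name quick_text_check.py and the sample sentence are placeholders, and it reuses the same .env variables as business_logic.py.

# quick_text_check.py: a minimal sketch for trying analyze_text on its own
import os
from dotenv import load_dotenv
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
from text_analysis import analyze_text

# Load the same credentials the app uses
load_dotenv()
client = ContentSafetyClient(
    os.environ["CONTENT_SAFETY_ENDPOINT"],
    AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

# An empty dict means the text passed every category check
violations = analyze_text(client=client, text="I love this pasta recipe!")
print(violations or "No violations found")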

Add analyze_image Function

Now, it’s time to create the analyze_image function. Create an image_analysis.py file inside the root folder of the project and add the following code:

# 1. Import the packages
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import (
    AnalyzeImageOptions,
    ImageData,
    AnalyzeImageOutputType,
    ImageCategory,
)

# 2
def analyze_image(client, image_data):
    # 3. Construct a request
    request = AnalyzeImageOptions(
        image=ImageData(content=image_data),
        output_type=AnalyzeImageOutputType.FOUR_SEVERITY_LEVELS
    )

    # 4. Analyze image
    try:
        response = client.analyze_image(request)
    except HttpResponseError as e:
        print("Analyze image failed.")
        if e.error:
            print(f"Error code: {e.error.code}")
            print(f"Error message: {e.error.message}")
            raise
        print(e)
        raise

    # 5. Extract results
    categories = {
        ImageCategory.HATE: None,
        ImageCategory.SELF_HARM: None,
        ImageCategory.SEXUAL: None,
        ImageCategory.VIOLENCE: None
    }

    for item in response.categories_analysis:
        if item.category in categories:
            categories[item.category] = item

    hate_result = categories[ImageCategory.HATE]
    self_harm_result = categories[ImageCategory.SELF_HARM]
    sexual_result = categories[ImageCategory.SEXUAL]
    violence_result = categories[ImageCategory.VIOLENCE]

    # 6. Check for inappropriate content
    violations = {}
    if hate_result and hate_result.severity > 2:
        violations["hate speech"] = "yes"
    if self_harm_result and self_harm_result.severity > 4:
        violations["self-harm references"] = "yes"
    if sexual_result and sexual_result.severity > 0:
        violations["sexual references"] = "yes"
    if violence_result and violence_result.severity > 2:
        violations["violent references"] = "yes"

    return violations
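
In the Fooder app, the image bytes arrive through the app’s UI, but if you want to try analyze_image on its own, image_data just needs to be the raw bytes of an image file. A minimal sketch, with a hypothetical file name, assuming a client built the same way as moderator_client:

# Read the raw bytes of a local image; the file name is just a placeholder
with open("sample_post.jpg", "rb") as image_file:
    image_data = image_file.read()  # exactly what ImageData(content=...) expects

violations = analyze_image(client=moderator_client, image_data=image_data)
print(violations or "Image looks safe")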

Implement the Logic to Evaluate the Content

Now, you’re ready to integrate everything and finalize your moderation function for the app. Head back to starter/business_logic.py and replace ## TODO: Logic to evaluate the content with the following:

# 1. If both analyses came back clean, the post is safe to publish
if len(text_analysis_result) == 0 and len(image_analysis_result) == 0:
    return None

# 2. Build a human-readable summary of every violation found
status_detail = 'Your post contains references that violate our community guidelines.'

if text_analysis_result:
    status_detail = status_detail + '\n' + f"Violation found in text: {', '.join(text_analysis_result)}"
if image_analysis_result:
    status_detail = status_detail + '\n' + f"Violation found in image: {', '.join(image_analysis_result)}"

status_detail = status_detail + '\n' + 'Please modify your post to adhere to community guidelines.'


# 3. Flag the post, including details about each violation
return {'status': "violations found", 'details': status_detail}

Finally, add the following imports at the top of business_logic.py so it can find the two helper functions you just created:

from text_analysis import analyze_text
from image_analysis import analyze_image
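
For reference, here’s roughly how the finished moderation function fits together. The exact signature of check_content_safety comes from the starter project, so treat this as a sketch rather than a line-for-line match:

def check_content_safety(text, image_data):
    # 1. Check for the content safety
    text_analysis_result = analyze_text(client=moderator_client, text=text)
    image_analysis_result = analyze_image(client=moderator_client, image_data=image_data)

    # 2. Logic to evaluate the content
    if len(text_analysis_result) == 0 and len(image_analysis_result) == 0:
        return None

    status_detail = 'Your post contains references that violate our community guidelines.'
    if text_analysis_result:
        status_detail = status_detail + '\n' + f"Violation found in text: {', '.join(text_analysis_result)}"
    if image_analysis_result:
        status_detail = status_detail + '\n' + f"Violation found in image: {', '.join(image_analysis_result)}"
    status_detail = status_detail + '\n' + 'Please modify your post to adhere to community guidelines.'

    # 3. Flag the post with details about every violation found
    return {'status': "violations found", 'details': status_detail}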

Test the Moderation System

Enough with the coding for now — let’s run the app and see the code in action!

streamlit run app.py
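
app.py is responsible for collecting the caption and image and calling check_content_safety. The snippet below is only a sketch of how that wiring might look, not the actual starter code:

# A sketch of how app.py might call the moderation function (hypothetical, not the starter code)
import streamlit as st
from business_logic import check_content_safety

caption = st.text_input("Caption")
photo = st.file_uploader("Photo", type=["jpg", "jpeg", "png"])

if st.button("Post") and photo is not None:
    result = check_content_safety(text=caption, image_data=photo.read())
    if result is None:
        st.success("Post published!")
    else:
        st.error(result["details"])

Try submitting both a harmless post and one with unsafe text or imagery to confirm that each path behaves as expected.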