Azure Content Safety

Nov 15 2024 · Python 3.12, Microsoft Azure, JupyterLabs

Lesson 03: Image Moderation Using Azure Content Safety

Implementing Image Moderation Using Azure Content Safety API


In this segment, you’ll implement image moderation for your sample app, Fooder.

# install the packages
%pip install azure-ai-contentsafety
%pip install python-dotenv
# 1 Read environment variables through os
import os

# 2 Load environment variables from a .env file
from dotenv import load_dotenv

# 3 Azure Content Safety client, request/response models, credential, and error types
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageCategory, ImageData
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError

Creating Content Safety Client

Next, you’ll create a content safety client, which sends API requests to your Azure Content Safety resource. Replace the # TODO: Create content safety client comment with the following code:

# 1 Load your Azure Safety API key and endpoint
load_dotenv()

# 2
key = os.environ["CONTENT_SAFETY_KEY"]
endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]

# 3 Create a Content Safety client
client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
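
For load_dotenv() to pick up your credentials, the notebook’s folder needs a .env file that defines both variables. Here’s a minimal sketch, with placeholder values you’d swap for the key and endpoint from your resource’s page in the Azure portal:

# .env - placeholder values, not real credentials
CONTENT_SAFETY_KEY=your-content-safety-key
CONTENT_SAFETY_ENDPOINT=https://your-resource-name.cognitiveservices.azure.com/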

Creating Moderate Image Function

Next, create the moderate_image function. It sends the image to the API for analysis and then processes the response to decide whether the image should be allowed or rejected for posting. Replace # TODO: Implement moderate image function with the following code:

# 1
def moderate_image(image_data):
    # 2 Construct a request
    request = AnalyzeImageOptions(image=ImageData(content=image_data))

    # 3 Analyze image
    try:
        response = client.analyze_image(request)
    except HttpResponseError as e:
        print("Analyze image failed.")
        if e.error:
            print(f"Error code: {e.error.code}")
            print(f"Error message: {e.error.message}")
            raise
        print(e)
        raise

    ## TODO: Process moderation response to determine if the image
    #        is approved or rejected

    #  4 If content is appropriate
    return "Post successful"

Now, process the moderation response to determine whether the image is approved or rejected. Inside moderate_image, replace the ## TODO: Process moderation response comment with the following code:

    # 1 Extract results
    categories = {
        ImageCategory.HATE: None,
        ImageCategory.SELF_HARM: None,
        ImageCategory.SEXUAL: None,
        ImageCategory.VIOLENCE: None
    }

    # 2 Map each analyzed category to its result
    for item in response.categories_analysis:
        if item.category in categories:
            categories[item.category] = item

    # 3 Pull out the per-category results
    hate_result = categories[ImageCategory.HATE]
    self_harm_result = categories[ImageCategory.SELF_HARM]
    sexual_result = categories[ImageCategory.SEXUAL]
    violence_result = categories[ImageCategory.VIOLENCE]

    # 4 Check for inappropriate content
    violations = []
    if hate_result and hate_result.severity > 2:
        violations.append("hate speech")
    if self_harm_result and self_harm_result.severity > 3:
        violations.append("self-harm references")
    if sexual_result and sexual_result.severity > 0:
        violations.append("sexual references")
    if violence_result and violence_result.severity > 2:
        violations.append("violent references")

    # 5 Reject the image if any violations were found
    if violations:
        return (
            f"Your shared image contains {', '.join(violations)} that violate "
            "our community guidelines. Please modify your image to adhere to "
            "community guidelines."
        )
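
If you want to see the raw scores these thresholds act on, you can temporarily log each category’s severity inside moderate_image before the checks run. Here’s a small sketch using the same response object (at the time of writing, image analysis reports four severity levels: 0, 2, 4, and 6):

    # Optional: log raw severities before applying the threshold checks
    for item in response.categories_analysis:
        print(f"{item.category}: severity {item.severity}")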

Exploring Image Moderation Function

Here comes the fun part! You can now pass images to moderate_image to see whether they’re approved or rejected. Open an image file in binary mode and send its contents to the function (the filename below is a placeholder; use any image from the course materials):

with open('sample_image.jpg', 'rb') as file:
    image_data = file.read()

moderation_response = moderate_image(image_data)
print(moderation_response)
Post successful
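
In a real app, you’d branch on this result before publishing. Here’s a sketch of a hypothetical helper, try_post_image (not part of the course code), showing one way Fooder could gate a post on moderation:

# Hypothetical helper - not part of the course materials
def try_post_image(image_path):
    with open(image_path, 'rb') as file:
        result = moderate_image(file.read())
    if result == "Post successful":
        print("Image approved. Publishing post...")
    else:
        # The moderation message lists the violated guidelines
        print(result)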