Azure Content Safety provides the /contentsafety/image:analyze API for image analysis and moderation purposes. It’s similar to Azure’s text moderation API in a number of ways.
It takes three input parameters in the request body:
image (required): This is the main parameter of the API. You provide the image data that you want to analyze. You can give either the Base64-encoded image content or the blobUrl of the image.
categories (optional): Similar to the text analysis API, you can use this parameter to specify the list of harm categories for which you want your image to be analyzed. By default, the API will test the image against all default categories provided by the Azure Content Safety team.
outputType (optional): This refers to the number of severity levels the categories will have in the analysis results. This API only supports FourSeverityLevels. That is, severity values for any category will be 0, 2, 4, or 6.
A sample request body for image analysis, with the Base64 image content reduced to a placeholder, can look something like this:
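{
  "image": {
    "content": "<base64-encoded-image>"
  },
  "categories": ["Hate", "SelfHarm", "Sexual", "Violence"],
  "outputType": "FourSeverityLevels"
}
Both categories and outputType could be omitted here, since they fall back to the defaults described above.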
The returned response will contain categoriesAnalysis, which is a list of ImageCategoriesAnalysis JSON objects that include the category and its severity level, as determined by the moderation API.
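For instance, a response for a benign image might look like the following illustrative sketch (severity values depend on the image being analyzed):
{
  "categoriesAnalysis": [
    { "category": "Hate", "severity": 0 },
    { "category": "SelfHarm", "severity": 0 },
    { "category": "Sexual", "severity": 0 },
    { "category": "Violence", "severity": 0 }
  ]
}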
Since this module will use the Python SDK provided by the Azure team instead of making raw API calls, let’s quickly cover everything you need to know about the SDK for image moderation.
Understanding Azure AI Content Safety Python Library for Image Moderation
The first step in creating an image moderation system using Azure's Python SDK is to create an instance of ContentSafetyClient, similar to what you did for text moderation.
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
# Create an Azure AI Content Safety client
endpoint = "https://<my-custom-subdomain>.cognitiveservices.azure.com/"
credential = AzureKeyCredential("<api_key>")
client = ContentSafetyClient(endpoint, credential)
The above code is the same as it was in the text lesson. If you want to understand it in detail, you can revisit the Understanding Text Moderation API section.
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData

# Build request; image_path points to the image file you want to analyze
with open(image_path, "rb") as file:
    request = AnalyzeImageOptions(image=ImageData(content=file.read()))

# Analyze image
response = client.analyze_image(request)
In the code above, you're sending your request to the client using an AnalyzeImageOptions object.
Understanding AnalyzeImageOptions
Similar to AnalyzeTextOptions, the AnalyzeImageOptions object is used to construct the request for image analysis. It has the following properties:
image (required): This contains the information about the image that needs to be analyzed. It accepts ImageData as the data type. An ImageData object accepts two types of values - content and blob_url. You're allowed to provide only one of these. When providing image data as content, the image should be in Base64-encoded format, its dimensions should be between 50 x 50 pixels and 2048 x 2048 pixels, and it should not exceed 4 MB. (A client-side check for these limits is sketched after this list.)
categories (optional): You can use this property to specify particular categories for which you want to analyze your image. If not specified, the moderation API will analyze the content for all categories. It accepts a list of ImageCategory. When writing this module, the possible values include ImageCategory.HATE, ImageCategory.SEXUAL, ImageCategory.VIOLENCE, and ImageCategory.SELF_HARM.
output_type (optional): This refers to the number of severity levels the categories will have in analysis results. At the time of writing this module, it only accepts the FourSeverityLevels value, which is also its default value if not specified.
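Since the service enforces these limits server-side, you can save a round trip by validating an image client-side first. Here's a minimal sketch, assuming the limits above and that the Pillow library is installed; is_image_acceptable is a hypothetical helper, not part of the Azure SDK:
import os

from PIL import Image

# Hypothetical helper: pre-check an image against the documented limits.
MAX_BYTES = 4 * 1024 * 1024  # 4 MB upload limit

def is_image_acceptable(path):
    # Reject files over the size limit without opening them.
    if os.path.getsize(path) > MAX_BYTES:
        return False
    # Reject images outside the 50 x 50 to 2048 x 2048 pixel range.
    with Image.open(path) as img:
        width, height = img.size
    return 50 <= width <= 2048 and 50 <= height <= 2048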
A sample AnalyzeImageOptions definition can look like this (the file path and category list below are illustrative):
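from azure.ai.contentsafety.models import (
    AnalyzeImageOptions,
    ImageCategory,
    ImageData,
)

# Read the image as raw bytes and request analysis for two
# specific categories, with the default output type made explicit.
with open("sample_image.jpg", "rb") as file:
    request = AnalyzeImageOptions(
        image=ImageData(content=file.read()),
        categories=[ImageCategory.HATE, ImageCategory.VIOLENCE],
        output_type="FourSeverityLevels",
    )
If the image is already hosted in Azure Blob Storage, you can pass ImageData(blob_url="<blob-url>") instead of the raw bytes.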
Once the image analysis is finished, you can use the response received from the method client.analyze_image to decide whether to approve the image or block it.
The analyze_image method returns an AnalyzeImageResult. AnalyzeImageResult only contains one property - categories_analysis, which is a list of ImageCategoriesAnalysis. ImageCategoriesAnalysis contains the category analysis determined by the image analysis API.
It contains the following values:
category: Category name for which the moderation API has analyzed the image.
severity: Severity level of that category, as determined by the API.
You can process the AnalyzeImageResult response in the following way:
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import ImageCategory

# 1. Analyze image
try:
    response = client.analyze_image(request)
except HttpResponseError as e:
    print("Analyze image failed.")
    if e.error:
        print(f"Error code: {e.error.code}")
        print(f"Error message: {e.error.message}")
        raise
    print(e)
    raise

# 2. Extract the result for each category
hate_result = next(
    (item for item in response.categories_analysis
     if item.category == ImageCategory.HATE), None)
self_harm_result = next(
    (item for item in response.categories_analysis
     if item.category == ImageCategory.SELF_HARM), None)
sexual_result = next(
    (item for item in response.categories_analysis
     if item.category == ImageCategory.SEXUAL), None)
violence_result = next(
    (item for item in response.categories_analysis
     if item.category == ImageCategory.VIOLENCE), None)

# 3. Print the harmful categories found in the image content
if hate_result:
    print(f"Hate severity: {hate_result.severity}")
if self_harm_result:
    print(f"SelfHarm severity: {self_harm_result.severity}")
if sexual_result:
    print(f"Sexual severity: {sexual_result.severity}")
if violence_result:
    print(f"Violence severity: {violence_result.severity}")
Here's the breakdown of the previous code:
You send the analyze request and store the result in the response variable. If any error happens while performing the analysis, the try-except block handles it.
The next function retrieves the first matching item from the categories_analysis list in the response for each category of interest; the None default keeps the lookup safe when a category is absent from the results.
It checks if results for each harmful category were found and prints their severity levels.
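To turn these severity scores into a moderation decision, you could block an image whenever any category reaches a chosen threshold. The sketch below assumes the response from the previous snippet; should_block and the threshold of 2 are illustrative assumptions, not part of the SDK:
def should_block(response, threshold=2):
    # Block when any analyzed category reports a severity at or
    # above the chosen threshold.
    return any(
        item.severity is not None and item.severity >= threshold
        for item in response.categories_analysis
    )

if should_block(response):
    print("Image rejected.")
else:
    print("Image approved.")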
Next, you'll use the Azure image moderation API to implement and try out image moderation in your app.