# 1 Import os to read environment variables
import os
# 2 Import load_dotenv to load variables from a .env file
from dotenv import load_dotenv
# 3 Import the Azure AI Content Safety client, credential, and request/response models
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageCategory, ImageData
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError
Here’s the explanation of the above code:
You imported the os module. You’ll use this module to read the environment variables that hold the content safety key and endpoint. You also imported load_dotenv to load those variables from a .env file, along with the Azure Content Safety client, credential, and model classes.
Run the cell to successfully import everything, and wait for the cell execution to finish.
Creating Content Safety Client
Next, you’ll create a content safety client, which will be used to send API requests to your Azure Content Safety resource. Replace # TODO: Create content safety client with the following code:
# 1 Load your Azure Content Safety API key and endpoint from the .env file
load_dotenv()
# 2 Read the key and endpoint from the environment
key = os.environ["CONTENT_SAFETY_KEY"]
endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
# 3 Create a Content Safety client
client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
Make sure the .env file that you created earlier in this lesson is in the project directory. Then, run the cell to create the content safety client.
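In case you want to double-check it, here’s a minimal sketch of what that .env file is expected to contain. The variable names match the ones the code reads; the values are placeholders that you replace with the key and endpoint from your own Azure Content Safety resource (the endpoint typically looks like the URL shown below):
CONTENT_SAFETY_KEY=<your-content-safety-key>
CONTENT_SAFETY_ENDPOINT=https://<your-resource-name>.cognitiveservices.azure.com/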
Creating Moderate Image Function
Next, create the moderate_image function. It sends the image for analysis and then processes the response to determine whether the image can be allowed or rejected for posting. Replace # TODO: Implement moderate image function with the following code:
# 1
def moderate_image(image_data):
    # 2 Construct a request
    request = AnalyzeImageOptions(image=ImageData(content=image_data))
    # 3 Analyze image
    try:
        response = client.analyze_image(request)
    except HttpResponseError as e:
        print("Analyze image failed.")
        if e.error:
            print(f"Error code: {e.error.code}")
            print(f"Error message: {e.error.message}")
            raise
        print(e)
        raise

    ## TODO: Process moderation response to determine if the image
    # is approved or rejected

    # 4 If content is appropriate
    return "Post successful"
Here’s what the preceding code does:
You create the function moderate_image, which takes image_data as an argument and returns whether the shared image is approved or rejected for posting. The function wraps the image bytes in an AnalyzeImageOptions request, sends it to the service with client.analyze_image, and prints the error details before re-raising if the call fails.
Next, it’s time to implement the logic that determines whether the image is safe, and whether the request should be approved or rejected when the content crosses the severity thresholds. Replace ## TODO: Process moderation response to determine if the image is approved or rejected with the following code:
# 1 Extract results
categories = {
    ImageCategory.HATE: None,
    ImageCategory.SELF_HARM: None,
    ImageCategory.SEXUAL: None,
    ImageCategory.VIOLENCE: None
}
# 2
for item in response.categories_analysis:
    if item.category in categories:
        categories[item.category] = item
# 3
hate_result = categories[ImageCategory.HATE]
self_harm_result = categories[ImageCategory.SELF_HARM]
sexual_result = categories[ImageCategory.SEXUAL]
violence_result = categories[ImageCategory.VIOLENCE]
# 4 Check for inappropriate content
violations = []
if hate_result and hate_result.severity > 2:
    violations.append("hate speech")
if self_harm_result and self_harm_result.severity > 3:
    violations.append("self-harm references")
if sexual_result and sexual_result.severity > 0:
    violations.append("sexual references")
if violence_result and violence_result.severity > 2:
    violations.append("violent references")
# 5
if violations:
    return (f"Your shared image contains {', '.join(violations)} that violate "
            "our community guidelines. Please modify your image to adhere to "
            "community guidelines.")
Make sure to fix the indentation of the code by selecting the code above and pressing the Tab key if it’s not indented correctly with respect to the enclosing function.
Here’s the explanation for the code:
You created a dictionary of ImageCategory values, which will be iterated over to extract the moderation results for the respective categories.
You’re then iterating through response.categories_analysis to extract the output category results and storing them in the appropriate entries of the category dictionary for the shared image.
You then extract each category’s result into a separate variable from the category dictionary.
This is the crucial part of the processing logic. Here, you’re determining whether the image that was requested for moderation is found inappropriate in any of the following categories: hate, self-harm, violence, and sexual, based on the severity thresholds defined. If the output severity level is found to be more than the respective category’s threshold value, you append the category name to the violations list.
Finally, if any violation exists in the violations list, you inform the user by returning details about the categories that the image violated and ask them to change the image so that it adheres to the community guidelines. If no violation is found, the user is notified that the post is successful.
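As a side note, if you’d rather keep all the thresholds in one place, the four severity checks can be collapsed into a single lookup table. The sketch below is not part of the lesson code, just a hypothetical, equivalent variant that reuses the categories dictionary built above and produces the same violations list:
# Hypothetical variant: drive the checks from a threshold table.
# A category is flagged when its severity is strictly greater than its threshold.
SEVERITY_THRESHOLDS = {
    ImageCategory.HATE: (2, "hate speech"),
    ImageCategory.SELF_HARM: (3, "self-harm references"),
    ImageCategory.SEXUAL: (0, "sexual references"),
    ImageCategory.VIOLENCE: (2, "violent references"),
}

violations = []
for category, (threshold, label) in SEVERITY_THRESHOLDS.items():
    result = categories.get(category)
    if result and result.severity > threshold:
        violations.append(label)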
Now, run the cell to ensure the defined function is error-free.
Exploring Image Moderation Function
Here comes the fun part! You can now pass images to moderate_image to see whether they’re approved or rejected.
# Read the image file in binary mode (the path below is a placeholder; use the image from the sample data folder)
with open("<path-to-sample-image>", 'rb') as file:
    image_data = file.read()
moderation_response = moderate_image(image_data)
print(moderation_response)
In the above code, you used an image stored in the sample data folder of your project. Then, it’s passed to moderate_image for analysis. Finally, you print the output received from the moderate_image function.
Run the cell to see it in action:
Post successful
Congratulations! You’ve successfully implemented the Image Moderation workflow. Feel free to test the moderation function with other sample images to see how the response varies.
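For example, if you want to run a whole batch of images through the function, a small loop like this would do the trick. The folder path and extension below are only placeholders; point glob at wherever your sample images actually live:
import glob

# Moderate every .jpg in a folder (the path is a placeholder) and print each verdict.
for image_path in glob.glob("sample_data/*.jpg"):
    with open(image_path, "rb") as file:
        image_data = file.read()
    print(f"{image_path}: {moderate_image(image_data)}")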
That’s it for this segment. Continue to the next segment to complete the lesson.