Finally, you're creating the moderator_client that will be used to analyze the posts, by passing the endpoint and the Azure key credential to the ContentSafetyClient object.
If you observe closely, the editor might have shown wavy error lines below the code you just added. It shows a warning that "Import "X" could not be resolved." To fix this, run the following command in the terminal and hit Enter:
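In case the command isn't visible in your copy of the materials, the package you need is the Content Safety SDK, which provides the azure.ai.contentsafety imports used throughout this lesson:

pip install azure-ai-contentsafety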
Make sure to replace <your-endpoint> and <your-content-safety-key> with the endpoint and safety key that Azure assigned to you when you created the resource. You gathered these values a while ago to test your code with. These values will get your requests through to your Azure Content Safety resource.
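For reference, the client setup that those placeholders belong to looks roughly like the sketch below. This is only a reminder of the pattern; the variable names in your starter/business_logic.py may differ slightly:

from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential

# Placeholder values -- swap in the endpoint and key from your Azure resource.
moderator_client = ContentSafetyClient(
    "<your-endpoint>",
    AzureKeyCredential("<your-content-safety-key>")
)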
Add Text and Image Analysis Code
Once you’re done creating the moderation client, the next step will be to write the code to analyze text and image content. Open the starter/business_logic.py file again and replace # TODO: Check for the content safety with the following code:
# 1. Check for the content safety
text_analysis_result = analyze_text(client=moderator_client, text=text)
image_analysis_result = analyze_image(client=moderator_client, image_data=image_data)
# 2
## TODO: Logic to evaluate the content
You're calling the functions analyze_text and analyze_image to analyze the text and the image respectively. Both functions accept the following arguments: a) client - the moderation client used to create the request, b) text or image_data - the data that needs to be analyzed.
Finally, you added the TODO comment, where you'll write the actual logic for evaluating the content soon.
Now, add code to create the analyze_text and analyze_image functions.
Add analyze_text Function
To keep the code clean and easy to understand, you'll move the text and image analysis functions into their own files. Create a text_analysis.py file inside the root folder and add the following code:
# 1. Import packages
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory, AnalyzeTextOutputType

# 2. Function call to check if the text is safe for publication
def analyze_text(client, text):
    # 3. Construct a request
    request = AnalyzeTextOptions(
        text=text,
        output_type=AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS
    )

    # 4. Analyze text
    try:
        response = client.analyze_text(request)
    except HttpResponseError as e:
        print("Analyze text failed.")
        if e.error:
            print(f"Error code: {e.error.code}")
            print(f"Error message: {e.error.message}")
            raise
        print(e)
        raise

    # 5. Extract results
    categories = {
        TextCategory.HATE: None,
        TextCategory.SELF_HARM: None,
        TextCategory.SEXUAL: None,
        TextCategory.VIOLENCE: None
    }
    for item in response.categories_analysis:
        if item.category in categories:
            categories[item.category] = item

    hate_result = categories[TextCategory.HATE]
    self_harm_result = categories[TextCategory.SELF_HARM]
    sexual_result = categories[TextCategory.SEXUAL]
    violence_result = categories[TextCategory.VIOLENCE]

    # 6. Check for inappropriate content
    violations = {}
    if hate_result and hate_result.severity > 2:
        violations["hate speech"] = "yes"
    if self_harm_result and self_harm_result.severity > 4:
        violations["self-harm"] = "yes"
    if sexual_result and sexual_result.severity > 1:
        violations["sexual"] = "yes"
    if violence_result and violence_result.severity > 2:
        violations["violent references"] = "yes"

    return violations
This code might seem like too much to understand in a single go, but you wrote almost the same code back in Lesson 7!
Now, the threshold values you see above were chosen purely for this sample project and may not represent actual publication standards. So, feel free to test and explore the threshold values over here, if you wish :]
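If you do decide to experiment, one simple option is to lift the severity cutoffs into named constants so they all live in one place. The sketch below isn't part of the starter project and the constant names are made up; it only restructures the checks from step 6:

# Hypothetical constants for experimenting with the cutoffs (not in the starter project).
HATE_THRESHOLD = 2
SELF_HARM_THRESHOLD = 4
SEXUAL_THRESHOLD = 1
VIOLENCE_THRESHOLD = 2

# The checks from step 6, rewritten against the constants above.
violations = {}
if hate_result and hate_result.severity > HATE_THRESHOLD:
    violations["hate speech"] = "yes"
if self_harm_result and self_harm_result.severity > SELF_HARM_THRESHOLD:
    violations["self-harm"] = "yes"
if sexual_result and sexual_result.severity > SEXUAL_THRESHOLD:
    violations["sexual"] = "yes"
if violence_result and violence_result.severity > VIOLENCE_THRESHOLD:
    violations["violent references"] = "yes"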
Add analyze_image Function
Now, it's time to move ahead and create the analyze_image function. Create an image_analysis.py file inside the root folder of the project and add the following code:
# 1. Import the packages
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData, AnalyzeImageOutputType, ImageCategory

# 2
def analyze_image(client, image_data):
    # 3. Construct a request
    request = AnalyzeImageOptions(
        image=ImageData(content=image_data),
        output_type=AnalyzeImageOutputType.FOUR_SEVERITY_LEVELS
    )

    # 4. Analyze image
    try:
        response = client.analyze_image(request)
    except HttpResponseError as e:
        print("Analyze image failed.")
        if e.error:
            print(f"Error code: {e.error.code}")
            print(f"Error message: {e.error.message}")
            raise
        print(e)
        raise

    # 5. Extract results
    categories = {
        ImageCategory.HATE: None,
        ImageCategory.SELF_HARM: None,
        ImageCategory.SEXUAL: None,
        ImageCategory.VIOLENCE: None
    }
    for item in response.categories_analysis:
        if item.category in categories:
            categories[item.category] = item

    hate_result = categories[ImageCategory.HATE]
    self_harm_result = categories[ImageCategory.SELF_HARM]
    sexual_result = categories[ImageCategory.SEXUAL]
    violence_result = categories[ImageCategory.VIOLENCE]

    # 6. Check for inappropriate content
    violations = {}
    if hate_result and hate_result.severity > 2:
        violations["hate speech"] = "yes"
    if self_harm_result and self_harm_result.severity > 4:
        violations["self-harm references"] = "yes"
    if sexual_result and sexual_result.severity > 0:
        violations["sexual references"] = "yes"
    if violence_result and violence_result.severity > 2:
        violations["violent references"] = "yes"

    return violations
Quickly going through the code:
You import the required packages and modules that you'll need to analyze the image.
Next, you define the function that will be used to analyze the image and check whether the provided image is safe for publication or not.
Then, you construct the request using AnalyzeImageOptions and supply the required arguments. You'll use this request variable to talk to the API to analyze the image.
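If you'd like to try image_analysis.py on its own before wiring it into the app, a quick check could look like the sketch below. The endpoint, key, and sample.jpg path are placeholders for illustration rather than values from the starter project:

from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential

from image_analysis import analyze_image

# Placeholder credentials and image path -- substitute your own.
client = ContentSafetyClient("<your-endpoint>", AzureKeyCredential("<your-content-safety-key>"))

with open("sample.jpg", "rb") as image_file:
    image_bytes = image_file.read()

# Prints a dictionary of violations; an empty dictionary means the image passed every check.
print(analyze_image(client=client, image_data=image_bytes))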
Now, you're ready to integrate everything and finalize your moderation function for the app. Head back to the file starter/business_logic.py and replace ## TODO: Logic to evaluate the content with the following:
# 1
if len(text_analysis_result) == 0 and len(image_analysis_result) == 0:
    return None

# 2
status_detail = 'Your post contains references that violate our community guidelines.'
if text_analysis_result:
    status_detail = status_detail + '\n' + f'Violation found in text: {",".join(text_analysis_result)}'
if image_analysis_result:
    status_detail = status_detail + '\n' + f'Violation found in image: {",".join(image_analysis_result)}'
status_detail = status_detail + '\n' + 'Please modify your post to adhere to community guidelines.'

# 3
return {'status': "violations found", 'details': status_detail}
Here's an explanation of the code:
You're checking whether the violations data received from the text and image analysis is empty. If both are found empty, no harmful content was detected that could potentially violate the community guidelines, meaning the content is safe.
If either of the analysis results returns harmful content, then the rest of the code is executed. You've defined a new variable, status_detail, and appended the harmful categories to the string in a human-readable manner when detected, so that the user can be informed about them. You also requested that the post be revised to adhere to community guidelines.
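To picture how that return value gets used, remember the moderation logic now hands back either None (nothing to flag) or a dictionary carrying the status and the details string you built. A caller could branch on it roughly like this; the function names below are placeholders rather than the starter project's actual API:

# Placeholder function names, for illustration only.
moderation_result = moderate_post(post_text, post_image_bytes)

if moderation_result is None:
    publish_post(post_text, post_image_bytes)  # No violations were found.
else:
    reject_post(reason=moderation_result['details'])  # Show the feedback to the user.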