In this segment, you'll explore the Azure AI Content Safety text moderation API in detail and learn how to use it from Python via the client SDK. You'll also learn more about severity levels in moderation and how to add custom blocklist phrases to your moderation resource so they're also considered while moderating text.
Understanding the Azure AI Content Safety Text Moderation API
For text moderation, Azure AI Content Safety provides two types of APIs:
contentsafety/text:analyze: This is a synchronous API for analyzing potentially harmful text content. At the time of writing, it supports four categories: Hate, Self-Harm, Sexual, and Violence.
contentsafety/text/blocklists/*: These are also a set of synchronous APIs that allow you to create, update, and delete blocklist terms that can be used with the text API. Usually, the default AI classifiers are sufficient for most content safety needs, but if you need to screen some terms specific to your use case, you can make use of these as well.
Consider the basic task of analyzing text with the Text API — it takes some input parameters in the request body:
text: This is the required parameter and contains the text you want to analyze. A single request can process up to 10k characters. Longer text needs to be split across multiple requests.
blocklistNames: This is an optional parameter. Using this parameter, you can also supply a list of blocklist names that you created.
categories: If you want to analyze your text in specific categories, you can provide those categories as a list here. This is also optional, and if it is not included, a default set of analysis results for the categories will be returned.
haltOnBlocklistHit: This is an optional parameter. When set to true, further analysis of harmful content will be skipped in cases where blocklists are hit; otherwise, the analysis will be completed even if the blocklists are hit.
outputType: This is an optional parameter. It allows you to define the granularity of the severity scale. By default, its value is FourSeverityLevels, that is, output severities with 4 possible levels - 0, 2, 4, 6. If instead the EightSeverityLevels value is specified, the output severities will have 8 possible levels - 0, 1, 2, 3, 4, 5, 6, 7.
Together, these parameters make up the request body of a text-analysis call.
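As a concrete illustration, a request body combining the parameters above might look like the following (the values are placeholders; field names follow the REST API's JSON schema):

```json
{
  "text": "This is the text to analyze.",
  "categories": ["Hate", "SelfHarm", "Sexual", "Violence"],
  "blocklistNames": ["block_list"],
  "haltOnBlocklistHit": false,
  "outputType": "FourSeverityLevels"
}
```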
Understanding the Azure AI Content Safety Client Python Library
The first step to using a content safety client is to create an instance of it. You can create requests to analyze both text and images using this client.
A sample snippet to create the safety client looks like this:
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
# Create an Azure AI Content Safety client
endpoint = "https://<my-custom-subdomain>.cognitiveservices.azure.com/"
credential = AzureKeyCredential("<api_key>")
content_safety_client = ContentSafetyClient(endpoint, credential)
To create a ContentSafetyClient, you need two things: the endpoint URL of your Content Safety resource and an AzureKeyCredential wrapping its API key.
AnalyzeTextOptions object is used to construct the request for text analyses. It also allows you to customize text analysis requests to suit your specific needs. It has the following properties:
text (required): This holds the text that needs to be analyzed. The text size should not exceed 10k characters. In case you have longer text, you can split it and make separate calls for each chunk.
categories (optional): You can use this property to specify particular categories for which you want to analyze your content. If not specified, the moderation API analyzes content for all categories. It accepts a list of TextCategory values. At the time of writing, the possible values include TextCategory.HATE, TextCategory.SEXUAL, TextCategory.VIOLENCE, and TextCategory.SELF_HARM.
blocklist_names (optional): You can provide the names of the blocklists you created to screen specific words and phrases for your use case. It accepts the blocklist names as a list of strings.
halt_on_blocklist_hit (optional): Similar to the Text API's haltOnBlocklistHit. When set to true, it halts further analysis of the text in cases where blocklists are hit.
output_type (optional): This allows you to define the granularity of the severity scale. If no value is provided, the default is "FourSeverityLevels". It can take either a string or an object of type AnalyzeTextOutputType. At the time of writing, the possible values of AnalyzeTextOutputType include AnalyzeTextOutputType.FOUR_SEVERITY_LEVELS and AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS.
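To make the two granularities concrete: each four-level score corresponds to a pair of adjacent eight-level scores. Here is a minimal, service-free sketch of that bucketing (the pairing is an assumption based on the 0, 2, 4, 6 versus 0-7 scales described above, not an official Azure conversion function):

```python
def to_four_level(severity):
    """Collapse an eight-level severity (0-7) onto the four-level
    scale (0, 2, 4, 6) by pairing adjacent scores.

    Note: the pairing is an assumption derived from the two documented
    scales; verify against the current Azure docs before relying on it.
    """
    if not 0 <= severity <= 7:
        raise ValueError("severity must be in the range 0-7")
    return (severity // 2) * 2

print([to_four_level(s) for s in range(8)])  # [0, 0, 2, 2, 4, 4, 6, 6]
```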
A sample AnalyzeTextOptions definition for your reference looks like this:
from azure.ai.contentsafety.models import (
    AnalyzeTextOptions, AnalyzeTextOutputType, TextCategory
)

# Create AnalyzeTextOptions
analyze_text_request = AnalyzeTextOptions(
    text="This is the text to analyze.",
    categories=[TextCategory.HATE, TextCategory.VIOLENCE],
    blocklist_names=["block_list"],
    halt_on_blocklist_hit=True,
    output_type=AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS
)
Processing the Analysis Response
Once the analysis of the text content is finished, you can use the response received from the method client.analyze_text to decide whether to approve the content or block it.
analyze_text has a return type of AnalyzeTextResult. Since it's a JSON response converted into an object, the class gives you access to the following properties:
blocklists_match: It holds a value of type list[TextBlocklistMatch], where TextBlocklistMatch gives you access to the following values:
blocklist_name: Name of the blocklist that was matched.
blocklist_item_id: ID of the matched item within the blocklist.
blocklist_item_text: Text that was matched in the analyzed text content.
categories_analysis: It holds a list of TextCategoriesAnalysis objects, each carrying the detected category (a TextCategory value) and its severity.
You can process the AnalyzeTextResult response in the following way:
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import TextCategory

# 1. Analyze text
try:
    response = client.analyze_text(analyze_text_request)
except HttpResponseError as e:
    print("Analyze text failed.")
    if e.error:
        print(f"Error code: {e.error.code}")
        print(f"Error message: {e.error.message}")
        raise
    print(e)
    raise

# 2. extract the result for each category
hate_result = next((item for item in response.categories_analysis
                    if item.category == TextCategory.HATE), None)
self_harm_result = next((item for item in response.categories_analysis
                         if item.category == TextCategory.SELF_HARM), None)
sexual_result = next((item for item in response.categories_analysis
                      if item.category == TextCategory.SEXUAL), None)
violence_result = next((item for item in response.categories_analysis
                        if item.category == TextCategory.VIOLENCE), None)

# 3. print the found harmful category in the text content
if hate_result:
    print(f"Hate severity: {hate_result.severity}")
if self_harm_result:
    print(f"SelfHarm severity: {self_harm_result.severity}")
if sexual_result:
    print(f"Sexual severity: {sexual_result.severity}")
if violence_result:
    print(f"Violence severity: {violence_result.severity}")
Using the next function, we iterate through response.categories_analysis to extract the TextCategoriesAnalysis for each individual harmful category.
Whenever any of the category results is non-empty, we print the corresponding category name along with its severity level.
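How you act on these severities is up to your platform. As a minimal, service-free sketch, a helper like the hypothetical should_block below could turn the per-category severities into an approve/block decision (the threshold of 2 is an arbitrary policy choice, not an Azure default):

```python
def should_block(severities, threshold=2):
    """Return True if any category's severity meets or exceeds the threshold.

    severities is a dict mapping category names to severity scores, like
    the values printed above. The threshold is a policy choice you tune
    for your platform; 2 here is only an example.
    """
    return any(score >= threshold for score in severities.values())

decision = should_block({"Hate": 0, "SelfHarm": 0, "Sexual": 0, "Violence": 4})
print("Blocked" if decision else "Approved")  # Blocked
```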
Add custom blocklist phrases
If required, you can further customize the text moderation API results to detect blocklist terms that meet your platform needs. You'll first need to add the blocklist terms to your moderation resource. Once they're added, you can use the blocklist during moderation simply by providing its name in the blocklist_names argument of AnalyzeTextOptions.
To add a blocklist, you'll first have to create a blocklist client, quite similar to a content safety client:
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import BlocklistClient
# Create an Azure AI blocklist client
endpoint = "https://<my-custom-subdomain>.cognitiveservices.azure.com/"
credential = AzureKeyCredential("<api_key>")
client = BlocklistClient(endpoint, credential)
Next, to add the blocklist you can use the following code:
from azure.ai.contentsafety.models import TextBlocklist

# 1. define blocklist name and description
blocklist_name = "TestBlocklist"
blocklist_description = "Test blocklist management."

# 2. call create_or_update_text_blocklist to create the blocklist
blocklist = client.create_or_update_text_blocklist(
    blocklist_name=blocklist_name,
    options=TextBlocklist(
        blocklist_name=blocklist_name,
        description=blocklist_description,
    ),
)

# 3. if the blocklist was created successfully, notify the user
if blocklist:
    print("\nBlocklist created or updated: ")
    print(f"Name: {blocklist.blocklist_name}, Description: {blocklist.description}")
Then, you'll also have to add some words and phrases to your blocklist so they can be used to flag text content appropriately during text moderation, as in the following code snippet:
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import (
    AddOrUpdateTextBlocklistItemsOptions, TextBlocklistItem
)

# 1. define the blocklist_name and block items
#    (terms that need to be screened in text)
blocklist_name = "TestBlocklist"
block_item_text_1 = "k*ll"
block_item_text_2 = "h*te"

# 2. create the block item list that can be passed to the client
block_items = [TextBlocklistItem(text=block_item_text_1),
               TextBlocklistItem(text=block_item_text_2)]

# 3. add the block item list to blocklist_name using
#    AddOrUpdateTextBlocklistItemsOptions
try:
    result = client.add_or_update_blocklist_items(
        blocklist_name=blocklist_name,
        options=AddOrUpdateTextBlocklistItemsOptions(blocklist_items=block_items),
    )
    # 4. print the response received from the server on successful addition
    for block_item in result.blocklist_items:
        print(
            f"BlockItemId: {block_item.blocklist_item_id}, "
            f"Text: {block_item.text}, "
            f"Description: {block_item.description}"
        )
# 5. catch the exception and notify the user if any error happened
#    while adding the block terms
except HttpResponseError as e:
    print("\nAdd block items failed: ")
    if e.error:
        print(f"Error code: {e.error.code}")
        print(f"Error message: {e.error.message}")
        raise
    print(e)
    raise
You can learn more about adding block terms and other text blocklist management APIs in the Azure text blocklist documentation.
This content was released on Nov 15 2024. The official support period is 6 months from this date.
This segment explores the Azure AI Content Safety text moderation API in detail to understand how it can be customized and used.