Understanding Azure Content Safety

Microsoft’s Azure Content Safety platform uses AI to detect hateful, violent, sexual, and self-harm content, and assigns it a severity score. This allows businesses to have better control over content moderation — they can also set up a system to decide when to involve human moderators in the process. In addition, the platform claims that its AI-based models are able to handle nuances and context effectively, unlike most other solutions.

To better understand what content moderation looks like in the Azure platform, take a look at the reference sample result shown below:

It showcases the four categories provided by Azure Content Safety (hate, violence, self-harm, and sexual) and the severity level the service assigned to the content in each category.

The content was rated low for the 'self-harm', 'sexual', and 'hate' categories. However, the severity level for 'violence' was detected as medium, which caused the content to be blocked/rejected.
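If you want to reproduce a result like this yourself, the sketch below shows one way to analyze a piece of text and print the severity reported for each category, assuming the azure-ai-contentsafety Python SDK (1.x). The endpoint, key, and sample text are placeholders you would replace with your own values.

```python
# A minimal sketch of text analysis with the Azure AI Content Safety Python SDK
# (azure-ai-contentsafety). Endpoint, key, and the sample text are placeholders.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Hypothetical resource details: replace with your own endpoint and key.
client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

response = client.analyze_text(
    AnalyzeTextOptions(text="Sample user comment to moderate")
)

# Each entry reports a category (hate, self-harm, sexual, violence)
# together with the severity the service assigned to it.
for result in response.categories_analysis:
    print(f"{result.category}: severity {result.severity}")
```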

You can customize these basic moderation policies based on the domain and your business requirements, so that your final content moderation solution adapts accordingly.

For example, maybe your business does not tolerate self-harm, violence, or hate content at all, but a little bit of flirty talk is fine, up to a point. Or maybe your platform is about sharing stories of how individuals overcame hard times: you might want to allow medium-level self-harm content where people share their recovery stories, but you cannot accept any hate, violent, or sexual content.
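To make these per-category thresholds concrete, here is a small, hypothetical policy helper. This is application-side code rather than an Azure SDK feature, and the limits and the is_allowed name are made up for illustration. It assumes you have collected the per-category severities into a plain dictionary, for example from the analysis response shown earlier.

```python
# Hypothetical application-side policy check -- not part of the Azure SDK.
# The severity limits are illustrative; tune them for your own platform.
POLICY_LIMITS = {
    "Hate": 0,       # reject any detected hate content
    "SelfHarm": 4,   # allow up to medium severity, e.g. for recovery stories
    "Sexual": 2,
    "Violence": 0,
}

def is_allowed(severities: dict[str, int]) -> bool:
    """Return True only if every category stays within its configured limit.

    `severities` maps a category name (e.g. "Hate") to the severity the
    service reported for it, e.g. {"Hate": 0, "SelfHarm": 2, ...}.
    """
    return all(
        severities.get(category, 0) <= limit
        for category, limit in POLICY_LIMITS.items()
    )

# Example: medium-severity violence is rejected under this policy.
print(is_allowed({"Hate": 0, "SelfHarm": 0, "Sexual": 0, "Violence": 4}))  # False
```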

Features and Capabilities of Azure Content Safety

Azure Content Safety aims to provide a robust content moderation solution, even considering the vast amount of data generated in today's generative AI-obsessed world.

Some of its noteworthy features include:

  • Multi-Modal Analysis: It can analyze both text and image content for moderation, providing comprehensive coverage of a large number of applications used today (see the image-analysis sketch after this list).
  • Multi-Lingual Support: It supports various languages besides English — thanks to its multi-lingual models, which enable it to understand many languages simultaneously.
  • Customizable Policies: It allows developers and organizations to customize moderation policies based on their specific needs: you can set different thresholds for different severity categories and turn any severity category on or off. You can also supply a blocklist of keywords and phrases to be flagged, giving you even more control over your content moderation policies.
  • Build Your Own Custom Categories: Apart from the pre-built categories provided for content moderation, you can also build custom categories by training custom content classification models.
  • LLM-Ready Content Moderation Solution: For users who are building LLM-enabled solutions, the platform is working on several LLM-specific capabilities, including checking whether text responses generated by LLMs are grounded in the source material provided by users, checking whether generated text contains protected (copyrighted) material, and detecting jailbreak attempts in user prompts as well as attacks embedded in documents.
  • Scalability: It is designed to scale with the immense volume of content generated daily, making it suitable for both small businesses and large enterprises.
  • Real-time Moderation: It provides low-latency APIs for analyzing content, so it can meet real-time moderation demands in domains like social media and live streaming. This ensures effective moderation without impacting the user experience.
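As an example of the multi-modal analysis mentioned above, the sketch below sends an image to the service and prints the severity for each category, again assuming the azure-ai-contentsafety Python SDK (1.x). The endpoint, key, and image path are placeholders.

```python
# A minimal sketch of image moderation with the azure-ai-contentsafety SDK.
# Endpoint, key, and the image path are placeholders.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

# Read the image bytes and send them for analysis.
with open("recipe-photo.jpg", "rb") as image_file:
    request = AnalyzeImageOptions(image=ImageData(content=image_file.read()))

response = client.analyze_image(request)

# Image analysis reports the same four categories as text analysis.
for result in response.categories_analysis:
    print(f"{result.category}: severity {result.severity}")
```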

Understanding Azure Content Safety Pricing

At the time of writing this course, Azure offers two pricing tiers: Free and Standard. Both offer the same features, but the Free tier is rate-limited, allowing up to five thousand text records and images to be processed per month.

If you are just starting to build and test your content moderation solution using Azure AI Content Safety, you'll likely find the Free tier sufficient for your needs. For production applications, the Standard tier will probably make more sense. You can learn more about pricing in the Azure section of Microsoft's website.

Ethical Considerations in Building Automated Content Moderation Solutions

While Azure Content Safety and similar platforms offer powerful tools for content moderation by using artificial intelligence, their use raises crucial ethical considerations that need to be addressed.

A few considerations are:

  • Bias and Fairness: If left unchecked, AI models can inadvertently introduce or even amplify societal biases. It's crucial to audit your content moderation system regularly and, if required, mitigate bias to ensure fairness for platform users.
  • Freedom of Expression: Stricter content moderation policies can infringe upon an individual’s freedom of expression. It’s essential to balance preventing harm with allowing diverse viewpoints — by creating and regularly updating thoughtful content moderation policies.
  • Transparency: For a platform to retain the trust of its users, they must be informed about how and why their content is being moderated. Providing clear guidelines and explanations for moderation decisions helps the platform build trust and accountability among users.
  • Cultural Sensitivity: When your platform operates globally, your content moderation may allow content that is acceptable in one culture but offensive in another. Although striking a balance can be hard, a moderation system needs to recognize and adapt to different cultural contexts and norms.
  • Human Oversight: While AI-based moderation systems are effective and can understand nuance and context well, they are still not foolproof. Human moderators should always be part of the moderation process, both for shaping content moderation policy and for making the final call on complex or borderline cases.
  • Appeals Process: When using automated moderation tools, it is essential to have an appeal system. This system ensures that a user can report and appeal if they believe that their content was incorrectly moderated or blocked by the system. This not only maintains user trust — it also shows respect for their rights and opinions.

By carefully considering the above ethical aspects while implementing the automated content moderation solution, platforms and organizations can strive to build a safer and more trusting environment among users.

Going ahead, you'll be working towards building a content moderation solution using Azure AI Content Safety for your Fooder app (a social recipe app) that enables users to add recipes and share their cooking and dining experiences with others in the community.
