Moderate content and detect harm with Azure AI Content Safety Studio
Learn how to choose and build a content moderation system in Azure AI Content Safety Studio.
Learning objectives:
- Configure filters and threshold levels to block harmful content.
- Perform text and image moderation for harmful content.
- Analyze and improve the Precision, Recall, and F1 score metrics.
- Detect groundedness in a model's output.
- Identify and block AI-generated copyrighted content.
- Mitigate direct and indirect prompt injections.
- Send filter configurations as output to code.
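The metrics objective above can be made concrete with a small worked example. The sketch below (illustrative only; the function name and sample data are assumptions, not part of the module) computes Precision, Recall, and F1 for a harmful-content filter's decisions against human-labeled ground truth:

```python
# Illustrative sketch: evaluating a content filter's decisions against
# human-labeled ground truth. True = harmful, False = safe.
def moderation_metrics(predicted, actual):
    """Compute precision, recall, and F1 for a harmful-content filter."""
    tp = sum(p and a for p, a in zip(predicted, actual))      # harmful, correctly blocked
    fp = sum(p and not a for p, a in zip(predicted, actual))  # safe, wrongly blocked
    fn = sum(a and not p for p, a in zip(predicted, actual))  # harmful, missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical evaluation set: the filter blocked 3 items, 2 truly harmful,
# and it missed 1 harmful item.
predicted = [True, True, True, False, False, False]
actual    = [True, True, False, True, False, False]
p, r, f1 = moderation_metrics(predicted, actual)
# Here tp=2, fp=1, fn=1, so precision = recall = f1 = 2/3.
```

Raising a filter's severity threshold typically trades recall (fewer harmful items caught) for precision (fewer safe items blocked); F1 balances the two.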
Prerequisites:
- An Azure subscription (create one for free).
- Familiarity with Azure and the Azure portal.