Moderate Content and Detect Harm with Azure AI Content Safety
Learn how to moderate content and detect harm with Azure AI Content Safety.
Perform text and image moderation for harmful content., Detect groundedness in a models output., Identify and block AI-generated copyrighted content., Mitigate direct and indirect prompt injections.
An Azure subscription – Create one for free, Familiarity with Azure and the Azure portal, Ability to understand Python at a beginner level, Developing in browser:
GitHub account, GitHub account, Developing locally:
Docker desktop
Visual Studio Code
Dev Containers extension, Docker desktop, Visual Studio Code
Dev Containers extension, Dev Containers extension
There are no reviews yet.