AI & Robotics
Deep Learning with Python for Image Classification
Learn deep learning, machine learning, and computer vision for image classification with PyTorch, using convolutional neural networks (CNNs) and transfer learning.
Deep Learning with PyTorch Step-by-Step: Part I – Fundamentals
This course is ideal for anyone who wants to learn PyTorch, starting from PyTorch basics and expanding to use PyTorch for deep learning.
Deep Learning with TensorFlow: Classification
Build deep learning models to classify data.
Deep Learning with TensorFlow: Image Classification
Classify image data with deep learning.
Deep Reinforcement Learning
Learn about deep reinforcement learning algorithms and their applications in decision-making and control problems.
Deep Reinforcement Learning in Python
Learn and use powerful Deep Reinforcement Learning algorithms, including refinement and optimization techniques.
Defending Against AI-Generated Attacks
Dive into the dangerous world of AI phishing and learn how to protect yourself from attacks and misinformation.
Deploy a pre-built module to an IoT Edge device
Deploy a pre-built temperature simulator module to an IoT Edge device using a container. Verify that the module was created and deployed successfully, and view the simulated data.
Deploy Azure AI services in containers
Learn how container support in Azure AI services lets you use the same APIs available in Azure while giving you the flexibility to deploy and host the services in Docker containers.
Deploy model to NVIDIA Triton Inference Server
NVIDIA Triton Inference Server is multi-framework, open-source software optimized for inference. It supports popular machine learning frameworks such as TensorFlow, Open Neural Network Exchange (ONNX) Runtime, PyTorch, and NVIDIA TensorRT, and it can serve both CPU and GPU workloads. In this module, you deploy your production model to NVIDIA Triton Inference Server to perform inference on a cloud-hosted virtual machine.
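As a sketch of the deployment step this module describes: Triton serves models from a model repository, where each model directory contains versioned model files plus a `config.pbtxt` describing the model. The model name, tensor names, and shapes below are hypothetical placeholders for an ONNX image classifier, not values from this course.

```
# Hypothetical config.pbtxt for a model laid out as:
#   model_repository/image_classifier/1/model.onnx
#   model_repository/image_classifier/config.pbtxt
name: "image_classifier"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [
  {
    name: "input"            # must match the ONNX graph's input tensor name
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]    # per-sample shape; batch dim is implied by max_batch_size
  }
]
output [
  {
    name: "output"           # must match the ONNX graph's output tensor name
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```

Pointing the server at the repository (e.g. `tritonserver --model-repository=/path/to/model_repository`) loads the model and exposes it over Triton's HTTP and gRPC inference endpoints.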