1. Navigate to your resource group and Azure Machine Learning workspace
2. Open the Azure Machine Learning studio interface
3. Access the compute management section
4. Initiate creation of a new compute instance
5. Request GPU quota when needed
6. Choose key options for compute setup, including virtual machine type and settings
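The steps above can also be performed in code. As a rough sketch (not the only way to do this), the Azure Machine Learning Python SDK v2 (`azure-ai-ml` package) can create a compute instance; the subscription, resource group, workspace, instance name, and VM size below are placeholder values you would replace with your own.

```python
def create_compute_instance(subscription_id: str,
                            resource_group: str,
                            workspace_name: str,
                            instance_name: str = "my-ci",
                            vm_size: str = "Standard_DS11_v2"):
    """Provision a compute instance in an Azure ML workspace (SDK v2).

    Imports are kept inside the function so the sketch can be read and
    imported even without the azure-ai-ml package installed.
    """
    from azure.ai.ml import MLClient
    from azure.ai.ml.entities import ComputeInstance
    from azure.identity import DefaultAzureCredential

    # Authenticate and bind the client to one workspace.
    ml_client = MLClient(
        DefaultAzureCredential(),
        subscription_id=subscription_id,
        resource_group_name=resource_group,
        workspace_name=workspace_name,
    )

    # begin_create_or_update returns a poller; .result() blocks
    # until the instance is provisioned and ready.
    instance = ComputeInstance(name=instance_name, size=vm_size)
    return ml_client.compute.begin_create_or_update(instance).result()
```

Running this requires an authenticated Azure session and an existing workspace; the studio UI walkthrough in this lesson accomplishes the same thing interactively.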
Setting up a compute instance is a key part of developing AI models in Azure Machine Learning. This lesson covers where to locate and launch the workspace you created earlier, and how to manage the compute resources critical for running training and evaluation processes. You’ll see why compute instances are the backbone of practical machine learning workflows on Azure: they handle the heavy lifting for tasks that would overwhelm most personal computers.
You'll also encounter Azure's quota system, which restricts certain powerful resources (like GPU machines) on new or free accounts until requested. Knowing how to request and monitor quotas helps you avoid roadblocks and keeps your workflow efficient.
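Whether a quota request is needed generally depends on the VM family you pick: GPU families such as NC, ND, and NV often start at zero quota on new or free subscriptions. A minimal helper sketch, using an illustrative (not exhaustive) list of GPU family prefixes:

```python
# GPU VM families that commonly require a quota request on new
# subscriptions. Illustrative only -- not an exhaustive Azure list.
GPU_FAMILIES = ("NC", "ND", "NV")

def needs_gpu_quota(vm_size: str) -> bool:
    """Heuristic check: Azure GPU sizes are named Standard_N<family>..."""
    family = vm_size.upper().removeprefix("STANDARD_")
    return family.startswith(GPU_FAMILIES)

print(needs_gpu_quota("Standard_NC6s_v3"))   # GPU size -> True
print(needs_gpu_quota("Standard_DS11_v2"))   # CPU size -> False
```

In practice you would confirm current limits in the portal under your subscription's usage and quotas page before requesting an increase.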
This lesson is important whether you’re working with classical machine learning or deep learning. Whether you’re a data scientist preparing environments, an educator setting up classroom labs, or a developer experimenting with new ideas, being able to spin up purpose-built compute on demand is foundational to modern machine learning projects. For instance, if you need to train neural networks faster or run interactive coding sessions, launching the right compute instance is your starting point.
Configuring compute resources in Azure helps a range of users, from data scientists to educators and developers, speed up and scale their machine learning work.
You’ll create a compute instance after setting up your Azure Machine Learning workspace and before running any training jobs or notebooks. This step ensures you have the necessary resources waiting for your code and data, rather than competing for local machine power or contending with limited local compute.
A compute instance is especially useful if you want to use the Azure Machine Learning studio interface for interactive work, such as building and testing models in notebooks or scripts. For example, after launching the studio, you might load a dataset, preprocess the data, and start training—all using the resources of your newly created virtual machine. Without this step, most advanced features in Azure ML aren’t accessible.
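As a toy sketch of that interactive loop, the snippet below loads a tiny hand-made dataset, preprocesses it (centering the feature), and fits a one-variable linear model in plain Python, standing in for the heavier training a compute instance would actually run.

```python
# Toy end-to-end loop: load -> preprocess -> train, the kind of
# cycle you might run in a notebook on a compute instance.
xs = [1.0, 2.0, 3.0, 4.0]          # "dataset": feature values
ys = [2.1, 4.0, 6.2, 7.9]          # targets, roughly y = 2x

# Preprocess: center the feature around its mean.
mean_x = sum(xs) / len(xs)
xc = [x - mean_x for x in xs]

# Train: closed-form least squares for slope and intercept.
slope = sum(x * y for x, y in zip(xc, ys)) / sum(x * x for x in xc)
intercept = sum(ys) / len(ys) - slope * mean_x

print(round(slope, 2), round(intercept, 2))  # -> 1.96 0.15
```

The same loop scales up naturally: swap the lists for a real dataset and the closed-form fit for a framework training call, and the compute instance supplies the memory and (optionally) GPU to run it.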
Before cloud platforms, training models required strong local hardware or waiting in shared queues for cluster time. Azure Machine Learning’s compute instances give you dedicated, on-demand virtual machines tailored to your needs. The process is simple: request what you need, wait for quick approval (when quota applies), and launch.
With this method, users can avoid common issues like running out of memory, slow training runs, or conflicting package setups on a laptop. In practical terms, a GPU-backed instance can turn a multi-hour training session into a much shorter one. This is especially beneficial when iterating frequently, sharing work with a team, or running complex notebooks that need consistent, scalable environments. The quota request workflow ensures you can access advanced compute securely as your needs grow.
To reinforce these steps, try setting up your own compute instance in your Azure ML workspace by following the steps outlined at the start of this lesson.
Reflection: How did using the Azure quota request system differ from your expectations? If you only have access to CPU machines, consider how that could affect your model training time.
This lesson builds on your earlier setup work by helping you create the compute power behind your AI experiments. Now that you have a workspace and know how to launch Azure ML Studio, adding a compute instance unlocks the full functionality for training models and running code interactively. The next lessons will guide you through using this compute resource to run your first training jobs and start building your AI models. Continue with the course to keep progressing in your cloud ML learning journey.