Explore the role hyperparameters play in designing effective machine learning models.
![[Featured Image] Two aspiring data scientists search “what are hyperparameters” as they complete an assignment collaborating on a machine-learning project.](https://d3njjcbhbojbot.cloudfront.net/api/utilities/v1/imageproxy/https://images.ctfassets.net/wp1lcwdav1p1/2DfVfcTuvsfhwYkknntgVY/86a32f6c29d6c510788e899c3f922dc2/GettyImages-2099345768-converted-from-jpg.webp?w=1500&h=680&q=60&fit=fill&f=faces&fm=jpg&fl=progressive&auto=format%2Ccompress&dpr=1&w=1000)
Hyperparameters are important variables in the machine learning process that control how the model learns from data.
The global machine learning (ML) market is projected to reach $282.13 billion by 2030 [1].
Hyperparameters differ from parameters in that hyperparameter settings are predetermined, whereas parameter values are continuously updated during training.
You can work with hyperparameters in machine learning careers such as data scientist or machine learning engineer.
Discover what hyperparameters are and the role they play in developing accurate machine learning models. If you’re ready to start building your machine learning skills, the Machine Learning Specialization from Stanford and DeepLearning.AI can help you develop practical skills in fundamental areas such as building and training neural networks, and give you practice with techniques like supervised and unsupervised learning.
Hyperparameters are a type of configuration variable used in machine learning to train models effectively. You set these variables before training your model, meaning they control the learning process and how it learns from the data. By predefining model hyperparameters, you can guide your model to optimize its development for your specific goals.
Hyperparameters often relate to your model’s architecture, learning rate, and complexity. They may govern the rate at which your algorithm updates its estimates based on new information, the number of layers in a neural network, and how the model decides its next step based on previous information.
Hyperparameters are settings you configure before the model training process begins to optimize learning. Conversely, parameters are continually updated and changed during training as your model finds the best settings to fit your data.
For example, consider that you are building a football team. Before the season starts, you should decide on certain predefined settings that affect how your team will function and train. This might include your draft strategy, which players you pick for the team, how many substitutions you get per game, and how many games each player can play in a season. These are your hyperparameters, and they affect the performance and growth of your team throughout the season while remaining relatively fixed.
In this case, your parameters would change week to week based on your team’s performance. Parameters in this case might be the points scored by players and the subsequent ranking of which players “start” versus which players stay on the bench. Similar to how a model might redefine the weights of specific variables to find the optimal combination, you would analyze new data each week to see the “optimal” combination of players to start on the field.
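The same distinction shows up directly in code. The sketch below is a minimal gradient-descent loop in plain Python (no ML library): the learning rate and epoch count are hyperparameters fixed before training begins, while the weight is a parameter the training loop updates continuously. The data and constants are illustrative, not from any real project.

```python
# Hyperparameters: chosen before training and held fixed throughout.
LEARNING_RATE = 0.1
EPOCHS = 50

# Toy data following y = 2x, so the ideal weight is 2.0.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

# Parameter: updated continuously during training.
weight = 0.0

for _ in range(EPOCHS):
    for x, y in data:
        prediction = weight * x
        error = prediction - y
        # Gradient-descent step on the squared error.
        weight -= LEARNING_RATE * error * x

# After training, the parameter has converged close to 2.0,
# while the hyperparameters above never changed.
```

Changing `LEARNING_RATE` or `EPOCHS` changes how the loop learns; changing `weight` by hand would be pointless, because training immediately overwrites it.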
Hyperparameter tuning improves the accuracy and efficiency of your machine learning model. This process, also known as hyperparameter optimization, helps you find the right configuration to maximize the performance and structure of your model. You can tune automatically or manually, and you’ll generally start with accuracy as your primary target. You then iteratively run your model, changing or “tuning” specific hyperparameters until you find the right fit.
Often, when finding the right hyperparameter combination, you’ll need to make trade-offs between certain aspects of your model. Depending on your goals and available resources, you might prioritize different things; for example, you may want to minimize your computational power requirements. You’ll also need to balance how sensitive your model is to fluctuations in the training data (variance) against how far its predictions systematically miss the true values (bias).
Different projects and algorithms favor different hyperparameters. You won’t necessarily try to maximize every type. Instead, you’ll tailor your model’s hyperparameters to your goals.
Hyperparameter tuning can require high computational power. It’s also time-consuming, especially when working with deep learning models that have high dimensionality. If your data is noisy, the tuning process may struggle to find the ideal configuration (known as the “global optimum”), so it’s vital to prepare your data well to increase your chances of success.
You can choose between various established techniques to find the best set of hyperparameters. Four of the most common techniques are described below.
In a grid search, the model works through all combinations of hyperparameters and performance metrics until it finds the optimal combination. This method is typically effective but can be relatively slow and computationally expensive.
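A grid search can be sketched in a few lines of plain Python. Here, `validation_score` is a hypothetical stand-in for training and evaluating a model; in practice, each combination would trigger a full training run, which is why grid search gets expensive as the grid grows.

```python
import itertools

def validation_score(learning_rate, batch_size):
    # Hypothetical score peaking at learning_rate=0.1, batch_size=32;
    # a real implementation would train and evaluate a model here.
    return -((learning_rate - 0.1) ** 2) - ((batch_size - 32) ** 2) / 1000

grid = {
    "learning_rate": [0.001, 0.01, 0.1, 1.0],
    "batch_size": [16, 32, 64],
}

best_score, best_params = float("-inf"), None
# Exhaustively evaluate every combination in the grid (4 x 3 = 12 runs).
for lr, bs in itertools.product(grid["learning_rate"], grid["batch_size"]):
    score = validation_score(lr, bs)
    if score > best_score:
        best_score, best_params = score, {"learning_rate": lr, "batch_size": bs}

print(best_params)  # {'learning_rate': 0.1, 'batch_size': 32}
```

Note that the cost multiplies with each added hyperparameter: a third axis with five values would turn 12 runs into 60.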
Bayesian optimization uses probabilistic modeling to set the hyperparameters in a way that is most likely to optimize a specific metric. The probability model applies Bayes’ theorem, combining prior knowledge with observed results to make informed guesses, and then uses a surrogate regression model to decide which values to try next.
Random search tests random combinations of hyperparameters and continues testing for a predefined number of runs. When a relatively small number of hyperparameters drives most of the performance, this can be an effective method for finding a strong combination of hyperparameter values.
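A random search follows the same shape as the grid search sketch but samples configurations instead of enumerating them. The sampling ranges and the `validation_score` function below are illustrative assumptions, not a real model.

```python
import random

random.seed(0)

def validation_score(learning_rate, batch_size):
    # Hypothetical stand-in for training and evaluating a model.
    return -((learning_rate - 0.1) ** 2) - ((batch_size - 32) ** 2) / 1000

N_TRIALS = 20  # predefined number of runs

best_score, best_params = float("-inf"), None
for _ in range(N_TRIALS):
    params = {
        # Log-uniform sampling is common for learning rates.
        "learning_rate": 10 ** random.uniform(-4, 0),
        "batch_size": random.choice([16, 32, 64, 128]),
    }
    score = validation_score(**params)
    if score > best_score:
        best_score, best_params = score, params

print(best_params)
```

Unlike grid search, the budget here is fixed at `N_TRIALS` no matter how many hyperparameters you add, which is why random search scales better to larger search spaces.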
Hyperband is an improvement on the random search algorithm that focuses on allocating resources intelligently through early stopping. This technique stops poorly performing models early and prioritizes configurations that produce the strongest results in each iteration.
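The successive-halving idea at the core of Hyperband can be sketched in plain Python: start many configurations with a small budget, drop the weaker half each round, and double the budget for the survivors. The `score_after_budget` function below is a hypothetical proxy for validation accuracy; full Hyperband also sweeps over several halving schedules, which this sketch omits.

```python
import random

random.seed(42)

def score_after_budget(lr, budget):
    # Hypothetical proxy for validation accuracy after `budget` training
    # steps: configurations near lr = 0.1 improve fastest.
    return budget * (1.0 - abs(lr - 0.1))

# Sample many candidate configurations up front.
candidates = [10 ** random.uniform(-3, 0) for _ in range(8)]

configs = list(candidates)
budget = 1
while len(configs) > 1:
    # Evaluate every surviving configuration at the current budget...
    ranked = sorted(configs, key=lambda lr: score_after_budget(lr, budget),
                    reverse=True)
    # ...early-stop the weaker half, and double the budget for the rest.
    configs = ranked[: len(ranked) // 2]
    budget *= 2

best_lr = configs[0]
```

The payoff is in resource allocation: most configurations are discarded after cheap, low-budget evaluations, so expensive long training runs are reserved for the most promising candidates.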
Professionals involved in building and training machine learning models, such as data scientists, machine learning engineers, and scientific researchers, use hyperparameters. Because of the rise of machine learning applications across industries, roles in these fields are likely to see fast growth and attractive compensation. The global machine learning market was valued at $55.80 billion in 2024 and is anticipated to continue growing to reach $282.13 billion by 2030 [1]. According to the US Bureau of Labor Statistics, data scientists earn a median annual wage of $112,590 as of May 2024 and have a projected job growth of 34 percent between 2024 and 2034 [2].
Learn more: Machine Learning Career Path: Charting Your Journey in a Dynamic Field
As a data scientist, you might start with a large, unstructured data set and need to develop machine learning algorithms to analyze and accurately predict based on this information. Hyperparameters are essential to model development, making it critical to understand how to use them in this professional field.
As you continue learning about machine learning models and hyperparameters, consider exploring different hyperparameter types. This can help strengthen your understanding of leveraging different combinations of hyperparameter values to optimize your model performance. A few to start exploring include:
Learning rate: How often the algorithm updates its estimates
Learning rate decay: How long it takes for the learning rate to drop over time
Neural network hidden layers: Number of hidden layers in a neural network
Neural network nodes: Number of nodes in each neural network hidden layer
Mini-batch size: Batch size of the training data
Momentum: How strongly the model updates parameters in the same direction as the previous iteration
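Two of the hyperparameters above, learning rate decay and momentum, interact inside a single training step. The sketch below uses one common (assumed) formulation of each, time-based decay and classic momentum, applied to a hypothetical gradient of the loss 0.5 * w**2; other formulations exist.

```python
# Hyperparameters (illustrative values).
INITIAL_LR = 0.1
DECAY = 0.5      # learning rate decay
MOMENTUM = 0.9   # momentum
EPOCHS = 5

velocity = 0.0
weight = 5.0

def gradient(w):
    # Hypothetical gradient of the loss 0.5 * w**2.
    return w

for epoch in range(EPOCHS):
    # Time-based decay: the step size shrinks as epochs accumulate.
    lr = INITIAL_LR / (1 + DECAY * epoch)
    # Momentum: blend the previous update direction into the current one.
    velocity = MOMENTUM * velocity - lr * gradient(weight)
    weight += velocity
    print(f"epoch {epoch}: lr={lr:.3f}, weight={weight:.3f}")
```

Tuning these together matters: a high learning rate paired with high momentum can overshoot, while aggressive decay can stall progress before the model converges.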
Subscribe to our weekly LinkedIn newsletter, Career Chat, for insights into in-demand skills and certifications. Then, check out some of our other free resources to continue learning more about machine learning.
Watch on YouTube: Your Machine Learning Roadmap: What to Learn and When
If you want to develop a new skill, get comfortable with an in-demand technology, or advance your abilities, you can keep growing with a Coursera Plus subscription. You’ll get access to over 10,000 flexible courses.
Grand View Research. “Machine Learning Market (2025-2030), https://www.grandviewresearch.com/industry-analysis/machine-learning-market.” Accessed April 5, 2026.
US Bureau of Labor Statistics. “Data Scientists, https://www.bls.gov/ooh/math/data-scientists.htm#tab-1.” Accessed April 5, 2026.
Editorial Team
This content has been made available for informational purposes only. Learners are advised to conduct additional research to ensure that courses and other credentials pursued meet their personal, professional, and financial goals.