Best Practices | October 21, 2024 | 8 min read

Understanding Performance Metrics: Making Sense of Forecast Errors

Olaf Jacobson
Founder & Business Development, Soon

Picture this: You’re trying to measure how well your new forecasting model is doing. You've got all sorts of numbers, but it's hard to know if they're actually good or bad. That’s where performance metrics come in! They help you figure out if your predictions are spot-on or if they’re as off as a weather forecast that promised sunshine while it’s pouring rain. Let’s dive into the most common performance metrics, broken down in a simple way.

MAPE (Mean Absolute Percentage Error)

MAPE is like a report card for your forecasts. It tells you, on average, how far off your predictions were, using percentages. Imagine the actual value was 100 units, but you predicted 90. The error here is 10 / 100 = 10%, and MAPE averages these percentage errors across all your predictions to give you an overall score.

  • Good for: Understanding the average size of your mistakes in percentage terms.
  • Watch out for: MAPE struggles when your actual values are very close to zero. Even a small error can make the percentage skyrocket!

In short, MAPE is easy to understand because it uses percentages. It gives you a quick sense of how far off your predictions are, but it’s not perfect for every situation. It works well when your values are reasonably large and stay well away from zero. However, when dealing with data that has a lot of zeros or is extremely volatile, it might not give the full picture.

For instance, if you are forecasting sales for a store that sometimes has zero-sales days, MAPE gets very tricky: on a day with zero actual sales, the percentage error is undefined, because you’d be dividing by zero. And even a small mistake on a day when sales are low can make your error percentage shoot up dramatically, giving you a misleading impression of your model’s performance.
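To make that concrete, here’s a minimal sketch of the calculation in Python. The function name and the use of NumPy are just illustrative choices, not anything prescribed by a particular library:

import numpy as np

def mape(actual, forecast):
    # Mean Absolute Percentage Error, in percent.
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100

print(mape([100], [90]))  # 10.0 -- the example from above

# The weakness in action: a zero actual value means dividing by zero,
# so NumPy warns and the result blows up to infinity.
print(mape([0, 100], [5, 100]))  # inf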

WAPE (Weighted Absolute Percentage Error)

WAPE takes MAPE and gives it a little twist by adding weights. It's like MAPE's smarter cousin, especially when your data is imbalanced. Instead of averaging every point’s percentage error equally, WAPE sums up all the absolute errors and divides by the sum of the actual values, so errors on bigger values carry more weight.

  • Good for: Handling situations where some data points are more important than others. WAPE makes sure larger values don’t get lost in the mix.
  • Watch out for: If all your data is similar, WAPE might not be much different from MAPE. It really shines when you have a big range in your data values.

WAPE is particularly useful when dealing with data sets where the importance of each point varies. Think of it as prioritizing the important stuff while still accounting for everything else. For example, if you’re forecasting product demand for various products in a store, some products may have a significantly higher sales volume compared to others. In such cases, WAPE gives more weight to the errors on high-selling items, which ultimately has a bigger impact on your overall success.
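Here’s a sketch of WAPE in the same illustrative Python style (again, the function name is my own choice):

import numpy as np

def wape(actual, forecast):
    # Total absolute error divided by total actual volume, in percent,
    # so high-volume items dominate the score.
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.abs(actual - forecast).sum() / np.abs(actual).sum() * 100

# A high-seller missed by 100 units and a slow item missed by 10:
# MAPE would average 10% and 100% into 55%, while WAPE lands near 11%
# because the high-volume item dominates the totals.
print(wape([1000, 10], [900, 20]))  # ~10.89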

MAE (Mean Absolute Error)

MAE is all about keeping things simple. It tells you, on average, how far your predictions were from the actual values. If you like numbers that are straightforward, MAE is your friend.

  • Good for: Giving you an easy-to-understand average error. It’s just plain numbers, no percentages, which makes it handy for a lot of different situations.
  • Watch out for: MAE doesn’t tell you how big the errors are relative to the size of the values. A 10-unit error might be small or huge, depending on the context.

MAE is great when you just need a quick average of how much you were off without worrying about any percentages or fancy scaling. If you’re comparing different models and want a simple error measurement, MAE is perfect. It’s also useful because, unlike MAPE, it doesn't exaggerate errors when dealing with smaller numbers.

However, remember that MAE doesn’t care about the size of the error in relation to the magnitude of the actual values. For example, a 10-unit error might be okay if the actual value was 1000, but it could be a huge problem if the actual value was only 20. MAE doesn’t give that context, so you need to interpret it carefully.
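In code it’s the simplest of the bunch (same illustrative Python sketch style as before):

import numpy as np

def mae(actual, forecast):
    # Average absolute error, in the same units as the data.
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.mean(np.abs(actual - forecast))

# The same 10-unit error at very different scales -- MAE can't tell them apart:
print(mae([1000], [990]))  # 10.0
print(mae([20], [10]))     # 10.0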

RMSE (Root Mean Square Error)

RMSE is like the more precise, math-heavy version of MAE. It takes each error, squares it (which makes big errors stand out even more), averages those squared errors, and then takes the square root. Basically, RMSE really cares about big mistakes and makes sure you see them.

  • Good for: Highlighting big errors. If you care more about large mistakes and want to be super careful with them, RMSE is a great choice.
  • Watch out for: RMSE can be sensitive to big outliers. Those large errors get even bigger when squared, which might make your performance look worse than it actually is if you have just one or two big misses.

If you’re trying to make sure those rare but huge mistakes don’t sneak by unnoticed, RMSE is the metric for you. It’s particularly useful when you’re dealing with applications where large errors have serious consequences. For example, if you're forecasting hospital bed occupancy, a large error could mean not enough beds are available for patients, which is a major problem.

The squaring process makes RMSE more sensitive to outliers, which can be both a good and a bad thing. It’s a good thing because it helps highlight those large errors that could be problematic, but it’s a downside if your data has outliers that aren’t really important. You need to think about whether those big errors are something you need to pay extra attention to.
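A quick sketch makes that outlier sensitivity easy to see (illustrative Python again):

import numpy as np

def rmse(actual, forecast):
    # Square the errors, average them, then take the square root.
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.sqrt(np.mean((actual - forecast) ** 2))

# Four errors of 1, 1, 1 and 10 units: MAE would report 3.25,
# but RMSE reports about 5.07 because the single big miss
# dominates once the errors are squared.
print(rmse([10, 10, 10, 10], [9, 11, 9, 20]))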

Coverage Metrics for Uncertainty Intervals

Sometimes, predictions come with uncertainty intervals, meaning there's a range of possible values instead of just one number. Coverage metrics help you check if those uncertainty ranges are doing their job.

  • Good for: Making sure your uncertainty intervals actually cover the true values as often as they claim. If you publish 90% intervals, a coverage metric checks whether the true values land inside them about 90% of the time.
  • Watch out for: If your intervals are too wide, you might cover the actual values every time, but you lose precision. The goal is to be accurate without being overly cautious.

Coverage metrics are extremely useful when you're dealing with forecasts that inherently have a lot of uncertainty. For example, predicting the weather or the stock market comes with many uncertainties. In these cases, instead of making a single prediction, it’s more realistic to give a range. Coverage metrics help make sure your model isn’t overconfident or too uncertain.

If your uncertainty intervals are too narrow, you’ll end up being wrong more often, which means your model is too confident and underestimates risk. On the other hand, if your intervals are too wide, they might not be useful because they aren’t giving precise enough information. The goal is to strike a balance—provide useful information while still being realistic about the uncertainty.
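An empirical coverage check is short to write. Here’s a minimal sketch, assuming your model outputs a lower and upper bound per period (all names here are illustrative):

import numpy as np

def coverage(actual, lower, upper):
    # Fraction of true values that fall inside the predicted interval.
    # For well-calibrated 90% intervals this should come out near 0.90.
    actual = np.asarray(actual, dtype=float)
    inside = (np.asarray(lower, dtype=float) <= actual) & (actual <= np.asarray(upper, dtype=float))
    return inside.mean()

# 3 of 4 true values fall inside -> 75% coverage. If the model claimed
# 90% intervals, they are too narrow (overconfident).
print(coverage([100, 120, 80, 95], [90, 100, 85, 90], [110, 130, 95, 105]))  # 0.75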

Wrapping It All Up

These metrics are like tools in a toolbox, and each one has its specialty. Whether you want to measure the average error, focus on big mistakes, or check how well your uncertainty intervals are working, there’s a metric that fits the bill:

  • MAPE for percentage-based error reporting.
  • WAPE for giving more weight to important values.
  • MAE for simple, average errors.
  • RMSE for highlighting big errors.
  • Coverage metrics for assessing uncertainty.
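To see how the error metrics behave side by side, here’s a self-contained toy run in Python. The numbers are invented purely for illustration:

import numpy as np

actual   = np.array([1000.0, 10.0, 500.0, 50.0])
forecast = np.array([ 900.0, 20.0, 505.0, 45.0])
err = np.abs(actual - forecast)  # [100, 10, 5, 5]

print("MAPE:", np.mean(err / actual) * 100)    # ~30.25 -- inflated by the tiny-volume miss
print("WAPE:", err.sum() / actual.sum() * 100) # ~7.69  -- weighted toward high volumes
print("MAE: ", err.mean())                     # 30.0   -- plain average, no scale context
print("RMSE:", np.sqrt(np.mean(err ** 2)))     # ~50.37 -- pulled up by the one big miss

Same forecasts, four quite different verdicts, which is exactly why it pays to look at more than one metric.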

Think of it this way: if your forecast is like baking a cake, these metrics are your taste-testers. MAPE might tell you that you added a bit too much sugar on average, while RMSE will point out that one time you dumped the whole bag of sugar by mistake. WAPE helps you focus on the most important parts of the recipe, and coverage metrics tell you if you’re confident enough to serve your cake at a fancy party.

Next time someone asks how good your forecasts are, you’ll know which metric to pull out of your toolbox. Just remember: each one has its strengths and weaknesses, so pick the one that best fits your data and goals.

P.S. Forecasting is all about learning from the past to predict the future. Mistakes are part of the game, but with the right metrics, you can make sure you're always improving! And just like any skill, the more you practice and understand your metrics, the better you’ll get at making accurate predictions.

P.P.S. Don't be afraid to use multiple metrics! Sometimes, one metric alone doesn’t give you the full picture, but combining a few can provide a more complete understanding of how your forecasts are doing. It’s like using different angles to take a picture—you get a better sense of the whole scene. Happy forecasting!
