Juliana O’Rourke, Programme Director for Modelling World and Tom van Vuren, Modelling World Chair, interviewed Warren Hatch of Good Judgment, who will be speaking and giving demos at Modelling World on 16 October
Compared with other forecasting problems, in transport modelling we generally face the challenge that the long-term inputs to our models are provided by outside experts and taken as given. Does that make super-forecasting irrelevant in our field?
Backward-looking models, for example from government statisticians, can do a lot of the heavy lifting, and I would not seek to supplement those with super-forecasters: the machines (the computers and the formal algorithmic models) can do that work better than humans. But where does human judgment start to become more and more important? Typically in areas where there is a lot of information, but it is incomplete and can sometimes be contradictory.
The dominant way of dealing with those situations is to rely on a hunch, and the use of imprecise language. How much better would it be to quantify that subjective hunch, that subjective estimate, into a number: a number that can then be updated and tracked, that can be compared with other such numbers, and that can be put into more formal models as an external input? I think the strength is where there's no readily available data, and humans can step in with best-practice methods to crowdsource an estimate for those variables. And the idea really is hybrid: to have a mix of hard data driven by machines and computer algorithms, combined with human judgment, relying on the best of both to come up with a more robust modelling approach, especially if it's something that needs to be monitored on an ongoing basis.
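The step from hunch to trackable number could be sketched as follows. This is a minimal illustration, not Good Judgment's actual method: the trimmed-mean aggregation rule and the example probabilities are assumptions, chosen only to show how several quantified hunches can be combined into one crowd estimate that a formal model could take as an external input.

```python
# Combine several subjective probability estimates into one crowd estimate.
# The trimmed mean and the example numbers are illustrative assumptions.

def crowd_estimate(probabilities, trim=1):
    """Average the estimates after dropping the `trim` highest and lowest."""
    ranked = sorted(probabilities)
    kept = ranked[trim:len(ranked) - trim] if trim else ranked
    return sum(kept) / len(kept)

# Five analysts' hunches, quantified as probabilities instead of vague words:
estimates = [0.55, 0.60, 0.70, 0.40, 0.95]
print(round(crowd_estimate(estimates), 3))  # trimmed mean of the middle three
```

Trimming the extremes before averaging is one simple way to blunt the effect of a single outlying judge; the resulting number can then be updated, tracked, and compared as the interview describes.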
It's not just the forecasts that can be crowdsourced in this way. It's also what matters to the models, the model inputs if you like. You can crowdsource from the people involved the key variables that might be getting missed in the formal model so far, something they want to keep an eye on to improve the forecasts.
Related to that first question, we also face the issue of relying on off-the-shelf software, with many of the mathematical relationships already embedded. Our hands are tied. What is your super-forecasting suggestion?
OK, you may not have much control over every component in your forecasts, for example if you're stuck with an off-the-shelf software model. But you want to reduce error, which you can decompose into two different elements. One is systematic error: predictable error that occurs with regularity. If you can identify it on a regular basis, you can use an algorithm to smooth it out. The other kind of error is non-systematic. It's noise, and that's what a lot of the process is about. It turns out half the improvement of the super-forecasters relative to the crowd comes from noise reduction alone, another quarter from bias mitigation, and the final quarter from improving information overall. So having a way to monitor the information that's coming in, and to sift out the useless stuff, is really helpful.
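The decomposition into systematic and non-systematic error can be shown with a few lines of arithmetic. This is a hedged sketch with invented forecast and outcome figures, not transport data: the mean error is the systematic bias an algorithm could smooth out, while the scatter of errors around that mean is the noise.

```python
# Decompose forecast error into a systematic part (bias) and noise.
# The forecasts and outcomes are invented purely for illustration.

def decompose_error(forecasts, outcomes):
    errors = [f - o for f, o in zip(forecasts, outcomes)]
    bias = sum(errors) / len(errors)               # systematic, predictable error
    noise = (sum((e - bias) ** 2 for e in errors)
             / len(errors)) ** 0.5                 # unsystematic scatter (std dev)
    return bias, noise

forecasts = [105, 98, 110, 102, 107]
outcomes = [100, 95, 104, 99, 101]
bias, noise = decompose_error(forecasts, outcomes)
print(f"bias={bias:.2f}, noise={noise:.2f}")
```

A consistent positive bias like the one here is correctable by subtracting it out; the remaining noise is what judgment-pooling techniques aim to reduce.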
But you're still stuck with software that's all pre-programmed. There are some things that might be useful to do, for example a pre-mortem: if this model is wrong, why will that be? You can crowdsource what those factors might be and rank them. Well, it's missing this, so this is something that should be watched more carefully. And if it's tractable, you can even treat it as a forecast question: what's the probability of this actually occurring? Monitor that on an ongoing basis, and if the probability is low, okay, we don't need to worry.
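Treating a pre-mortem factor as a monitored forecast question could look like the toy sketch below. The risk, the probability updates, and the 25% materiality threshold are all assumptions introduced for illustration: the point is simply that a tracked number, unlike a hunch, can trigger an alert the moment it stops being safely low.

```python
# Track the crowd's probability estimate for a pre-mortem risk over time
# and flag it once it crosses a chosen materiality threshold.
# The dates, probabilities, and 0.25 threshold are illustrative assumptions.

def monitor_risk(updates, threshold=0.25):
    """Return the first (date, probability) update that breaches the threshold."""
    for date, probability in updates:
        if probability >= threshold:
            return date, probability
    return None  # probability stayed low: no need to worry yet

updates = [("2020-01", 0.05), ("2020-04", 0.12), ("2020-07", 0.30)]
print(monitor_risk(updates))  # first breach of the 25% threshold, if any
```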
Given what we’ve just told you about transport modelling and forecasting, how would you summarise moving to super-forecasting?
You can put it all together into three elements:
the first is to ask how much confidence you should have in your model. You could even quantify that confidence and get feedback on it as well.
the second is to identify potential risks that aren't being adequately captured, or reflected at all, by that model.
and once you have identified those and have sufficient concern that they are material, the third element is to monitor those risks alongside the existing model.
It's not instead of, it's in addition to: it's hybrid. A lot of hybrid development starts with humans and adds machines. Here, what we're talking about is: well, the machines are there. Let's bring back the humans.
TransportXtra is part of Landor LINKS
© 2020 TransportXtra | Landor LINKS Ltd | All Rights Reserved