Operations Research Asked by Betty on July 27, 2020
In an applied project we are currently working on, we want to use robust or stochastic programming to enhance the performance of the system (with respect to certain metrics). As you may already know, robust/stochastic optimization makes it possible to model the impact of random factors on a system's behavior (in my case, its performance).
On the other hand, solving robust/stochastic problems is costly in time, resources, or both. My question is: how frequently does the problem need to be solved in order to provide up-to-date parameters for the system to operate at maximized/required performance? And what would be the best/most efficient interaction scenario for the system to trigger the solving process?
Any feedback on previous experience or similar work would be much appreciated.
This heavily depends on the application at hand and could vary all the way from milliseconds to months. It all comes down to rigorously defining the specs.
Many parameters are in play here.
Unfortunately, there's no silver bullet: your team will have to do bespoke engineering work to figure this out and build a solution that fits the physical system. You need to derive bounds on the uncertainty and then make sure the solution is robust enough for the specs at hand.
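To illustrate one possible trigger design (a sketch of my own, not something from the original answer): the system can monitor the realized parameters and re-solve only when they drift outside the uncertainty set that the last robust solution was certified against. In the sketch below, `nominal`, `radius`, `stream_of_observations`, and `solve_robust_problem` are hypothetical placeholders for the application's data and model.

```python
import numpy as np

def needs_resolve(observed, nominal, radius):
    """Re-solve trigger: fire when the observed parameters leave the
    uncertainty set (here: an infinity-norm box of half-width `radius`,
    centered at `nominal`) that the current solution was computed for."""
    return np.max(np.abs(observed - nominal)) > radius

def run(nominal, radius, stream_of_observations, solve_robust_problem):
    """Hypothetical event loop: re-solve on drift, not on a fixed clock.
    `stream_of_observations` and `solve_robust_problem` stand in for the
    application's telemetry feed and optimization model."""
    solution = solve_robust_problem(nominal, radius)
    for observed in stream_of_observations():
        if needs_resolve(observed, nominal, radius):
            nominal = observed                          # re-center the uncertainty set
            solution = solve_robust_problem(nominal, radius)
        yield solution                                  # operate with the current solution
```

The appeal of this event-driven pattern over a fixed re-solve period is that solver cost is only incurred when the robustness guarantee is actually at risk.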
Once you have the specs figured out, the rest is simply a matter of picking the right algorithms and tools to meet them.
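As a toy example of how cheap a robust solve can be once the specs are pinned down (my own illustration, assuming a linear model with box uncertainty and non-negative decision variables): the robust counterpart reduces to a deterministic LP, solved here with `scipy.optimize.linprog`.

```python
import numpy as np
from scipy.optimize import linprog

# Toy problem: max c^T x  s.t.  a^T x <= b,  x >= 0,
# where each coefficient a_i is only known up to +/- delta_i (box uncertainty).
c = np.array([3.0, 2.0])
a_nom = np.array([1.0, 1.0])   # nominal constraint coefficients
delta = np.array([0.1, 0.2])   # uncertainty half-widths
b = 10.0

# Because x >= 0, the worst case of (a_nom + u)^T x over |u_i| <= delta_i
# is attained at u = +delta, so the robust counterpart is a deterministic
# LP with inflated coefficients.
res = linprog(-c,                         # linprog minimizes, so negate c
              A_ub=[a_nom + delta],
              b_ub=[b],
              bounds=[(0, None)] * len(c))
print("robust x:", res.x, "worst-case objective:", -res.fun)
```

When the robust counterpart stays this cheap, re-solving on every trigger event (or even every cycle) may be affordable; when it does not, a drift-based trigger like the one sketched above becomes more important.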
Answered by Nikos Kazazakis on July 27, 2020