Operations Research, asked by Dirk Nachbar on August 19, 2021
I have a single worker who can work on N different tasks, but his total time T is limited. Assume time comes in steps of 10 minutes (1 step = 10 minutes). The payoff from a single job is 0 for the first step (a setup cost) and then $\log_{10}(t)$ after $t$ steps, e.g. the payoff is 0.778 after 60 minutes (6 steps). If the worker goes back to an old job and continues it, there is another setup cost of 1 step, so working twice for 60 minutes on the same job gives a payoff of $\log_{10}(12-1)$.
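Written out, one way to formalise this (it is just my reading of the setup above) is: a job worked for $t$ steps in total, spread over $k$ separate visits, pays

$$\text{payoff}(t, k) = \log_{10}\bigl(t - (k-1)\bigr), \qquad t - (k-1) \ge 1,$$

since each return visit burns one extra setup step. With $k=1$, $t=6$ this gives $\log_{10}(6) \approx 0.778$, and with $k=2$, $t=12$ it gives $\log_{10}(11) \approx 1.04$, matching the numbers above.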
We can then show that working on each job for 30 minutes (3 steps) and never going back to it is better than working on fewer jobs for 60 minutes each: it gives a higher total payoff. In fact, since the payoff per step, $\log_{10}(t)/t$, peaks at $t = e \approx 2.7$ steps (about 27 minutes), spreading the time over many jobs in short stints is roughly the optimal thing to do.
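A quick sketch to check these numbers (Python; the payoff function below encodes my reading of the model above, with a 60-step budget picked just for illustration):

```python
import math

def payoff(total_steps, sessions=1):
    """Payoff for a single job worked `total_steps` steps across
    `sessions` separate visits; each return visit costs one extra
    setup step (my reading of the model above)."""
    effective = total_steps - (sessions - 1)
    return math.log10(effective) if effective >= 1 else 0.0

T = 60  # total time budget: 60 steps = 10 hours

# Strategy A: many jobs, 3 steps (30 minutes) each, never revisited.
payoff_a = (T // 3) * payoff(3)

# Strategy B: fewer jobs, 6 steps (60 minutes) each, never revisited.
payoff_b = (T // 6) * payoff(6)

# Strategy C: one job visited twice for 6 steps each (extra setup on the return).
payoff_c = payoff(12, sessions=2)

print(f"3-step stints   : {payoff_a:.3f}")  # ~9.542
print(f"6-step stints   : {payoff_b:.3f}")  # ~7.782
print(f"2 x 6 on one job: {payoff_c:.3f}")  # ~1.041

# Payoff per step, log10(s)/s, peaks at s = e, i.e. about 2.7 steps (~27 minutes).
best = max(range(1, 13), key=lambda s: payoff(s) / s)
print("best integer stint length:", best, "steps")  # 3
```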
How can I make this more realistic? Imagine the worker is a human, and the jobs are meetings. Obviously you would not switch meetings all the time.
I appreciate this is not 100% well defined, but I would be grateful for any ideas.