Examples of machine learning applied to operations research?


21
















Can someone give me a few examples, if they exist, of problems in operations research that could be solved using machine learning?



I am aware that machine learning methods are data-driven and do not give exact solutions, so I am expecting heuristics, and possibly solutions that are specific to a particular instance of the problem.



I am looking for 'direct' approaches that use machine learning to find a solution of the actual problem, not just 'indirect' approaches that use machine learning to improve existing methods.



EDIT:
I am looking for examples in which the ML approach outperforms other methods.










modeling machine-learning

asked Jul 4 at 21:33 by klaus, edited Jul 4 at 22:21

  • Can you define what you mean by "outperform"? Obviously not more accurate, since (as you state) ML solutions mostly don't give exact solutions (especially if you forbid anything that looks like using ML to enhance a standard method). Do you mean faster? It is very easy to make a faster method if you don't also constrain it to be accurate (e.g. linear regression).
    – Lyndon White, Jul 6 at 9:19

  • As far as I understand, one heuristic is better than another if it gives better results in the same amount of time. If we consider the ML approach as a heuristic, I am asking for an example in which an ML heuristic is better than other non-ML heuristics.
    – klaus, Jul 9 at 16:38
















5 Answers


















15

















There are many recent and not so recent papers that use ML to "solve" optimization problems, like Learning Combinatorial Optimization Algorithms over Graphs. A very, very good entry to the subject is the survey Machine Learning for Combinatorial Optimization: a Methodological Tour d'Horizon.



In your last sentence you probably ask too much. For optimization problems there are basically two kinds of approaches, exact and heuristic, and for every optimization problem you can think of, both have been suggested. Of course (of course!) no algorithm can beat an exact approach in terms of solution quality, since exact methods, by definition, find the best possible solutions. This is not the case for heuristics, which can be of better or worse quality (but may beat the exact methods in terms of runtime, so there is a tradeoff). Therefore, when you ask for ML approaches that beat optimization algorithms, they can beat, at best, other heuristics. And again: an ML approach is (almost always) a heuristic approach, and I would add "yet another heuristic approach". You cannot expect it to beat existing heuristics, but you can get lucky, which is true for any other heuristic as well.
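As a concrete illustration of this quality/runtime tradeoff (a toy sketch, not taken from the answer or the cited survey), compare exhaustive enumeration, which is exact, with a greedy value-density heuristic on a small 0/1 knapsack instance:

    import random
    import time
    from itertools import combinations

    # Toy 0/1 knapsack instance (all numbers made up for illustration).
    random.seed(0)
    n = 18
    values = [random.randint(1, 100) for _ in range(n)]
    weights = [random.randint(1, 50) for _ in range(n)]
    capacity = sum(weights) // 3

    def exact_enumeration():
        """Check every subset: guaranteed optimal, exponential time."""
        best = 0
        for k in range(n + 1):
            for subset in combinations(range(n), k):
                if sum(weights[i] for i in subset) <= capacity:
                    best = max(best, sum(values[i] for i in subset))
        return best

    def greedy_heuristic():
        """Pack items by value/weight ratio: fast, but no optimality guarantee."""
        order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
        value, weight = 0, 0
        for i in order:
            if weight + weights[i] <= capacity:
                value += values[i]
                weight += weights[i]
        return value

    for name, solve in [("exact enumeration", exact_enumeration),
                        ("greedy heuristic", greedy_heuristic)]:
        start = time.perf_counter()
        obj = solve()
        print(f"{name}: objective {obj}, time {time.perf_counter() - start:.3f}s")

The heuristic finishes orders of magnitude faster on this instance but may leave some value on the table; that gap, versus the runtime saved, is exactly the tradeoff described above.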



Edit: re-reading your question, I conclude that I could not really contribute an answer.






answered Jul 5 at 8:32 by Marco Lübbecke, edited Jul 5 at 8:49










  • The paper "Machine Learning for Combinatorial Optimization: a Methodological Tour d'Horizon" that you provided answered my question. More specifically, section "3.2.1 End to end learning" was exactly what I was looking for.
    – klaus, Jul 12 at 2:09

  • @klaus great! I love that paper, too.
    – Marco Lübbecke, Jul 12 at 4:57


















11

















Bertsimas and Stellato just put up a new preprint which proposes a method to solve online mixed-integer optimization (MIO) problems at very high speed using machine learning. They benchmark their method against Gurobi and obtain speedups of two to three orders of magnitude on benchmarks with real-world data.



https://arxiv.org/abs/1907.02206
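In the same spirit, here is a rough, self-contained sketch of the underlying idea: learn a mapping from problem parameters to (near-)optimal solutions offline, so that online "solves" become cheap predictions. This is only a toy parametric knapsack with a scikit-learn classifier, not the method or the benchmarks of the preprint:

    import itertools
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_items, n_train, n_test = 10, 2000, 200
    weights = rng.integers(1, 20, size=n_items)

    # Enumerate all 2^10 selections once; "solving" an instance is then a lookup.
    all_x = np.array(list(itertools.product([0, 1], repeat=n_items)))
    subset_weights = all_x @ weights

    def solve_exact(values, capacity):
        """Return an optimal 0/1 selection for the given parameters (brute force)."""
        obj = all_x @ values
        obj[subset_weights > capacity] = -1   # mask infeasible selections
        return all_x[int(np.argmax(obj))]

    def sample_parameters():
        values = rng.integers(1, 50, size=n_items)
        capacity = int(rng.integers(20, 80))
        return values, capacity

    # Offline phase: solve many sampled instances and record (parameters -> solution).
    X, Y = [], []
    for _ in range(n_train + n_test):
        values, capacity = sample_parameters()
        X.append(np.concatenate([values, [capacity]]))
        Y.append(solve_exact(values, capacity))
    X, Y = np.array(X), np.array(Y)

    # Online phase: a multi-output classifier predicts the selection directly.
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[:n_train], Y[:n_train])
    pred = clf.predict(X[n_train:])

    # A real pipeline would also check feasibility and repair or fall back to a solver.
    optimal = np.mean(np.all(pred == Y[n_train:], axis=1))
    print(f"test instances where the predicted selection is exactly optimal: {optimal:.0%}")

The prediction step is essentially constant time per instance, which is where reported speedups of this kind come from; how well this scales to harder instances is the question raised in the comments below.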






answered Jul 6 at 6:50 by CMichael










  • Note that in this paper the computation times are really short for both Gurobi and their ML algorithm, so it is not clear whether the speedup would scale up or is just due to a higher "startup" time.
    – Michael Feldmeier, Jul 6 at 7:47

  • Thanks for pointing this out!
    – CMichael, Jul 6 at 7:48

  • I would also make a distinction between learning to solve and learning to represent the solution of a parameterized problem. What is done here is simply that a multi-parametric problem is solved by approximating the solution function, using sample solutions and a function approximator which happens to be a NN. ReLUs work very nicely, as the optimal solution to this parameterized MIQP is indeed piecewise affine. (We did a similar thing last year in a master's thesis project, learning the output of a QP-based MPC controller, resulting in close to MHz speed while Gurobi ran at 100 Hz or so.)
    – Johan Löfberg, Jul 9 at 10:19


















6

















Using OR in ML is a very popular approach, because of the optimization problems lying behind ML.

However, as you ask, there are also many (more recent) examples where ML is applied to solve OR problems. For example, for routing problems: https://arxiv.org/pdf/1803.08475.pdf

The list could be extended, but I think your question needs to be sharpened first.






answered Jul 4 at 21:53 by independentvariable










  • The paper you cited has quite a few examples in the related work section. However, they claim that "The goal of our method is not to outperform a non-learned, specialized TSP algorithm such as Concorde...". I edited my question to narrow my search to examples that do outperform non-learned algorithms.
    – klaus, Jul 4 at 22:30


















4

















There is a paper, Learning Fast Optimizers for Contextual Stochastic Integer Programs, in which the authors develop a "learnable local solver" for problems where MIP solvers did not scale.

I have not studied the paper yet, but it may fit the bill.



EDIT: From the abstract/introduction: The problems are two-stage stochastic optimization, where the learned local solver is applied to the first stage, after which the (deterministic) second stage is handed to a MIP solver. This performs better than handing the overall problem to a MIP solver (better objective within same time limit).
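For intuition only, here is a small sketch of that decomposition pattern on a toy uncapacitated facility-location problem with scenarios (this is not the paper's learnable local solver): a learned model proposes the first-stage open/close decisions, and the second stage is then evaluated exactly for each scenario.

    import itertools
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    n_fac, n_cust, n_scen, open_cost = 6, 8, 5, 3.0

    def sample_instance():
        fac = rng.uniform(0, 10, size=(n_fac, 2))             # facility locations
        cust = rng.uniform(0, 10, size=(n_scen, n_cust, 2))   # customers per scenario
        return fac, cust

    def second_stage_cost(open_mask, fac, cust):
        """Exact recourse: each customer is served by its nearest open facility."""
        if not open_mask.any():
            return np.inf
        dist = np.linalg.norm(cust[:, :, None, :] - fac[None, None, :, :], axis=-1)
        dist[:, :, ~open_mask] = np.inf
        return dist.min(axis=2).sum(axis=1).mean()            # expected assignment cost

    def total_cost(open_mask, fac, cust):
        return open_cost * open_mask.sum() + second_stage_cost(open_mask, fac, cust)

    all_masks = np.array(list(itertools.product([False, True], repeat=n_fac)))

    def solve_exact(fac, cust):
        costs = [total_cost(m, fac, cust) for m in all_masks]
        return all_masks[int(np.argmin(costs))], min(costs)

    # Offline: solve sampled instances exactly to get first-stage labels.
    instances = [sample_instance() for _ in range(300)]
    X = [np.concatenate([fac.ravel(), cust.mean(axis=(0, 1))]) for fac, cust in instances]
    Y = [solve_exact(fac, cust)[0] for fac, cust in instances]
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[:250], Y[:250])

    # Online: predicted first stage + exact second stage, versus the full exact solve.
    gaps = []
    for (fac, cust), feat in zip(instances[250:], X[250:]):
        opened = clf.predict([feat])[0].astype(bool)
        if not opened.any():
            opened[:] = True                                   # trivial feasibility repair
        _, optimum = solve_exact(fac, cust)
        gaps.append(total_cost(opened, fac, cust) / optimum - 1)
    print(f"mean optimality gap with the learned first stage: {np.mean(gaps):.1%}")

The exhaustive enumeration here merely stands in for a MIP solver; the point is only the division of labour described in the answer: learn a guess for the hard combinatorial first stage and keep the second stage exact.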






answered Jul 5 at 5:39 by Robert Schwarz, edited Jul 9 at 8:01










  • The problems are two-stage stochastic optimization, where the learned local solver is applied to the first stage, after which the (deterministic) second stage is handed to a MIP solver. This performs better than handing the overall problem to a MIP solver (better objective within the same time limit).
    – Robert Schwarz, Jul 5 at 5:42

  • Perhaps this comment would be better served as an edit to the answer, for the benefit of future visitors.
    – SecretAgentMan, Jul 8 at 15:39


















1

















These special issues can also give ideas:



Special issue: Combining optimization and machine learning: applications in vehicle routing, network design and crew scheduling



Special Issue "Machine Learning and Optimization with Applications of Power System"



Special Issue on Machine Learning and Optimization






answered Aug 23 at 7:57 by kur ag















