Linearize or approximate a square root constraint


I have encountered a nonlinear constraint that contains the square root of a sum of integer variables. Of course one could use nonlinear solvers and techniques, but I like linear programming. Are there any standard results on linearizing or approximating the square root of a sum of integer variables?

For example, the constraints look like this:

$$\sqrt{\sum_{i \in \mathcal{I}} a_{ij}x_{ij}} \leq \theta_j, \quad \forall j \in \mathcal{J}$$

where $x_{ij} \in \{0,1\}$ are binary variables, $\theta_j \in \mathbb{R}$ are continuous variables, and $a_{ij} \geq 0$ are parameters. $\mathcal{I}$ and $\mathcal{J}$ are given sets of polynomial size.

Of course, this constraint is part of a larger MIP, but since I am curious about general methods and results for this constraint, I do not believe the full model is of interest here.







Tags: linear-programming, nonlinear-programming, constraint, linearization






asked Jul 21 at 7:01 by Albert Schrotenboer; edited Sep 30 at 6:40 by TheSimpliFire










– Kevin Dalmeijer (Jul 21 at 9:25): Could you add some additional information about the constraint (maybe a simplified version)? Is the constraint convex?




4 Answers
This can be handled as an MISOCP, Mixed-Integer Second-Order Cone Problem. The leading commercial MILP solvers can also handle MISOCP.

Specifically, due to $x_{ij}$ being binary, $x_{ij}^2 = x_{ij}$. Therefore, the left-hand side is the two-norm of the vector over $i \in \mathcal{I}$ having elements $\sqrt{a_{ij}}\,x_{ij}$.

I don't know whether this is the best way to handle this constraint, but it is a way, and it is "exact".






answered Jul 21 at 10:44 by Mark L. Stone; edited Jul 21 at 15:21 by prubin
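As an illustration (my addition, not part of the original answer), here is a minimal sketch of this MISOCP reformulation with CVXPY on made-up data. It assumes CVXPY plus a solver that can handle mixed-integer SOCPs (e.g. Gurobi or MOSEK); the objective and the extra cardinality constraint are placeholders so the example is not trivially zero.

```python
# Sketch of the MISOCP reformulation: sqrt(sum_i a_ij x_ij) <= theta_j
# becomes the SOC constraint || sqrt(a_.j) * x_.j ||_2 <= theta_j (x binary).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_i, n_j = 5, 3
a = rng.uniform(0.0, 2.0, size=(n_i, n_j))   # parameters a_ij >= 0 (made up)

x = cp.Variable((n_i, n_j), boolean=True)    # binary x_ij
theta = cp.Variable(n_j)                     # continuous theta_j

constraints = []
for j in range(n_j):
    # Since x_ij is binary, x_ij^2 = x_ij, so the left-hand side is a 2-norm.
    constraints.append(cp.norm(cp.multiply(np.sqrt(a[:, j]), x[:, j]), 2) <= theta[j])
    constraints.append(cp.sum(x[:, j]) >= 2)  # hypothetical side constraint

prob = cp.Problem(cp.Minimize(cp.sum(theta)), constraints)
prob.solve()                                  # needs an MISOCP-capable solver
print(x.value, theta.value)
```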










– Ryan Cory-Wright (Jul 21 at 21:22): I agree with this answer. To add to this: if you really want to solve an LP rather than an SOCP, you can approximate the SOCP constraint with a polynomial number of LP constraints, as discussed here. AFAIK it's generally understood that solving the problem as one SOCP is faster.

– Kevin Dalmeijer (Jul 21 at 22:06): I like this answer a lot. To add: for a good reference on modeling SOCP, see this paper.

– Kevin Dalmeijer (Jul 22 at 9:26): @Daniel Duque If $x \in \{0,1\}$ then you can simply replace $x$ by $x^2$. This is due to the fact that $0^2 = 0$ and $1^2 = 1$, and thus $x^2 = x$ for all permitted values.

– Daniel Duque (Jul 22 at 12:04): I see now, thanks.

– Mark L. Stone (Oct 1 at 20:14): @Alexandre Frias "Convex Optimization" by Boyd and Vandenberghe, web.stanford.edu/~boyd/cvxbook . Also, the MOSEK Modeling Cookbook (chapter 3), docs.mosek.com/MOSEKModelingCookbook-letter.pdf












        Please also have a look at the very similar question in math.stackexchange. As @Mark L. Stone mentioned in his answer, all you need is a second-order cone model to solve your problem.







answered Jul 21 at 16:19 by Oguz Toragay
























Linearizing that constraint as it stands can be hard since it is non-convex. Assuming you still want to do that, you would need to introduce binary variables that allow you to characterize the function.

Focusing on a single $j$, first define $w_j=\sum_{i\in \mathcal{I}} a_{i,j} x_{i,j}$, with $w_j\geq 0$, and assume you have a bound such that $w_j\leq UB_j$. Now let $n$ be the number of pieces (linear inequalities) you want to use to describe $\sqrt{w_j}$, and for each piece, let $m_{k,j}$ and $b_{k,j}$ be the slope and intercept of the $k$th piece of the $j$th constraint, $k=1,\ldots,n$; these are tangent lines of $\theta_j=\sqrt{w_j}$ at (finite) points $w_{k,j}\in[0,UB_j]$, $k=1,\ldots,n+1$, which are the breakpoints in the $w_j$ space. Since the constraints are not convex, only one piece can be "on" in an optimal solution; hence, let $\lambda_{k,j}\in\{0,1\}$ be a binary variable that is one if the piece is "on" for constraint $j\in \mathcal{J}$, and zero otherwise. Putting it all together:

• Choose only one piece for constraint $j$: $$\sum_{k=1}^{n}\lambda_{k,j}=1 \quad\forall j\in \mathcal{J}$$

• $w_j$ needs to be in the right interval if you choose piece $k$: $$-M(1-\lambda_{k,j}) + w_{k,j}\le w_j \le w_{k+1,j} + M(1-\lambda_{k,j}) \quad \forall j \in \mathcal{J},\, k=1,\ldots,n$$

• Definition of $w_j$: $$w_j = \sum_{i\in \mathcal{I}} a_{i,j} x_{i,j} \quad\forall j \in \mathcal{J}$$

• This is the linearized constraint, where $\theta_j$ is greater than or equal to the piece that is selected: $$\theta_j\ge m_{k,j} w_j + b_{k,j} - M(1-\lambda_{k,j}) \quad\forall j\in \mathcal{J},\, k=1,\ldots,n$$

As a side note, you have to choose the breakpoints upfront. A plot of $\theta_j\ge \sqrt{w_j}$ (for a single $j$, this is a 2D plot) can help to clarify the linearization; see also the sketch after this answer.

If your constraints are convex (e.g., the inequality is $\ge$, or you treat it as an SOCP as described in the answer above), then you could implement Kelley's cutting-plane method [1], which is an outer approximation method. Those cuts are not cuts in the integer-programming sense, so don't add them as cuts. Rather, in branch and bound, add them as lazy constraints. Alternatively, if the MIP is easy to solve, generate a single (Kelley's) cut at a time and re-optimize.

Reference

[1] Kelley, J. E., Jr. (1960). The Cutting-Plane Method for Solving Convex Programs. Journal of the Society for Industrial and Applied Mathematics, 8(4):703–712.







answered Jul 21 at 17:36 by Daniel Duque; edited Sep 30 at 6:36 by TheSimpliFire
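To make the construction concrete (my addition, not part of the original answer), the sketch below computes the slopes $m_k$ and intercepts $b_k$ of tangent lines of $\sqrt{w}$ for a single $j$. The upper bound and the number of pieces are hypothetical; the resulting pieces would then be plugged into the big-M constraints above.

```python
# Tangent-line pieces of theta = sqrt(w) for one constraint j (hypothetical data).
import numpy as np

def tangent_pieces(ub: float, n_pieces: int):
    """Slopes m_k and intercepts b_k of tangents to sqrt(w) at n_pieces + 1 points.

    The points play the role of the breakpoints w_k; w = 0 is avoided because
    the derivative of sqrt(w) is unbounded there.
    """
    pts = np.linspace(ub / (n_pieces + 1), ub, n_pieces + 1)
    slopes = 1.0 / (2.0 * np.sqrt(pts))        # d/dw sqrt(w) = 1 / (2 sqrt(w))
    intercepts = np.sqrt(pts) - slopes * pts   # b = sqrt(w*) - m * w*
    return pts, slopes, intercepts

pts, m, b = tangent_pieces(ub=10.0, n_pieces=4)
for k, (mk, bk) in enumerate(zip(m, b), start=1):
    print(f"piece {k}: theta >= {mk:.3f} * w + {bk:.3f} (tangent at w = {pts[k-1]:.2f})")
```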










– dourouc05 (Jul 22 at 1:59): Is all this really needed? The square root function is concave, so you could just use a series of cutting planes: linearisation of the square root at $\sum_i a_i x_i = \text{sth} \le \theta_j$; with enough cutting planes, you are drawing an approximation that is good enough. (Or you can use an iterative process to add new cutting planes, to reduce the total number of constraints.) I guess that's what pubsonline.informs.org/doi/abs/10.1287/moor.26.2.193.10561 does.

– Daniel Duque (Jul 22 at 2:09): It is necessary due to the sense of the inequality. In the way it's written in the question, that constraint defines a non-convex feasible region. If the inequality is the other way around, then it would be much easier, as it would be a convex feasible region (ignoring the integrality of $x$).












You can manipulate this inequality as follows:

$$\sqrt{\sum_{i \in \mathcal{I}} a_{ij}x_{ij}} \leq \theta_j, \quad \forall j \in \mathcal{J}$$

$$\sum_{i \in \mathcal{I}} a_{ij}x_{ij} \leq \theta_j^2, \quad \forall j \in \mathcal{J}$$

Now, you need to linearize $\theta_j^2$ using McCormick envelopes. To do this, assume $-M_j\leq \theta_j \leq M_j$ and consider $w_j=\theta_j^2$:

$$
\begin{align}
0\leq (\theta_j + M_j)(\theta_j + M_j) &\implies -w_j - 2M_j\theta_j \leq M_j^2\\
0\leq (M_j - \theta_j)(M_j - \theta_j) &\implies -w_j + 2M_j\theta_j \leq M_j^2\\
0\leq (\theta_j + M_j)(M_j - \theta_j) &\implies w_j \leq M_j^2
\end{align}
$$

The final set of constraints is

$$
\begin{align}
\sum_{i \in \mathcal{I}} a_{ij}x_{ij} \leq w_j, \quad \forall j \in \mathcal{J}\\
-w_j - 2M_j\theta_j \leq M_j^2, \quad \forall j \in \mathcal{J}\\
-w_j + 2M_j\theta_j \leq M_j^2, \quad \forall j \in \mathcal{J}\\
0 \leq w_j \leq M_j^2, \quad \forall j \in \mathcal{J}\\
-M_j \leq \theta_j \leq M_j, \quad \forall j \in \mathcal{J}
\end{align}
$$

Note: please verify my calculations.







answered Sep 30 at 4:33 by Alexandre Frias; edited Sep 30 at 6:09
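As a quick numeric spot check of the derivation (my addition, not from the original answer, and not a proof), the three implied inequalities can be verified on a grid of $\theta$ values for a hypothetical bound $M$:

```python
# Spot check of the three inequalities derived above, with w = theta^2.
import numpy as np

M = 5.0                                  # hypothetical bound M_j
theta = np.linspace(-M, M, 201)
w = theta ** 2                           # w_j = theta_j^2
tol = 1e-9

print(np.all(-w - 2 * M * theta <= M**2 + tol))  # from 0 <= (theta + M)^2
print(np.all(-w + 2 * M * theta <= M**2 + tol))  # from 0 <= (M - theta)^2
print(np.all(w <= M**2 + tol))                   # from 0 <= (theta + M)(M - theta)
# Expected output: True True True
```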










– Alexandre Frias (Oct 1 at 19:36): McCormick envelopes provide a relaxation for real variables.











