Add cuts to MIP solver #599
base: main
Conversation
📝 Walkthrough

Adds a large cutting-plane subsystem: cut generation and integration, CSR/CSC and sparse-vector helpers, basis append-cuts, expanded branch-and-bound/MIP settings and timing propagation, enhanced infeasibility diagnostics, and several solver/API surface extensions across dual-simplex and MIP codepaths.

Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 4
🧹 Nitpick comments (4)
cpp/src/dual_simplex/basis_updates.hpp (1)
294-295: New `basis_update_mpf_t::append_cuts` API looks consistent

The new `append_cuts` declaration cleanly extends the basis update interface and matches the described usage from `solve_linear_program_with_cuts`. No header-level issues spotted; just ensure the implementation and callers clearly document and interpret the return code (0 vs. error) consistently with other `i_t`-returning methods.

cpp/src/dual_simplex/sparse_matrix.hpp (1)
139-140: CSR `append_rows` API is reasonable

Adding `csr_matrix_t::append_rows` here is a natural extension of the CSR interface and matches the intended use in the cut workflow. At the interface level it looks fine; just make sure callers only use it when `C.n` matches `this->n` (or clearly document it if a smaller `C.n` is intentionally supported).

cpp/src/dual_simplex/sparse_vector.cpp (1)
31-44: CSR-row constructor is correct; consider adding a bounds assert

The implementation correctly builds a length-`A.n` sparse vector whose indices are the column indices from row `row` of the CSR matrix. To match the existing defensive style (e.g., other methods use `assert` for dimension checks), it would be helpful to add a guard:

```diff
 template <typename i_t, typename f_t>
 sparse_vector_t<i_t, f_t>::sparse_vector_t(const csr_matrix_t<i_t, f_t>& A, i_t row)
 {
+  assert(row >= 0 && row + 1 <= A.m);
   const i_t row_start = A.row_start[row];
   const i_t row_end   = A.row_start[row + 1];
   const i_t nz        = row_end - row_start;
   n = A.n;
   i.reserve(nz);
   x.reserve(nz);
   for (i_t k = row_start; k < row_end; ++k) {
     i.push_back(A.j[k]);
     x.push_back(A.x[k]);
   }
 }
```

cpp/src/dual_simplex/sparse_matrix.cpp (1)
360-400: `append_rows` implementation is sound; consider stricter dimension checks

The `append_rows` logic (row_start updates, contiguous copy of `C.j`/`C.x`, and `m`/`nz_max` updates) looks correct and preserves CSR invariants for valid inputs. Two small improvements to consider:

- Enforce exact column compatibility instead of just `C.n > n`, unless you explicitly intend to support `C.n < n`:

```diff
-  if (C.n > n) {
-    return -1;
-  }
+  if (C.n != n) {
+    return -1;
+  }
```

- Document in the header that the function returns `0` on success and `-1` on dimension mismatch, so callers know they must check the return value (a call-site sketch follows below).
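A hypothetical call site illustrating that check-the-return-value convention (names follow the snippets in this review, not the PR itself):

```cpp
// Append the cut rows in C to the CSR matrix A; both must agree on column count.
if (A.append_rows(C) != 0) {
  settings.log.printf("append_rows failed: column mismatch (C.n=%d, A.n=%d)\n", C.n, A.n);
  return -1;  // propagate the error instead of continuing with a partial matrix
}
```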
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- cpp/src/dual_simplex/basis_updates.cpp (1 hunks)
- cpp/src/dual_simplex/basis_updates.hpp (1 hunks)
- cpp/src/dual_simplex/solve.cpp (2 hunks)
- cpp/src/dual_simplex/solve.hpp (1 hunks)
- cpp/src/dual_simplex/sparse_matrix.cpp (1 hunks)
- cpp/src/dual_simplex/sparse_matrix.hpp (1 hunks)
- cpp/src/dual_simplex/sparse_vector.cpp (1 hunks)
- cpp/src/dual_simplex/sparse_vector.hpp (1 hunks)
- cpp/tests/dual_simplex/unit_tests/solve.cpp (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (6)
cpp/src/dual_simplex/sparse_vector.cpp (2)
- cpp/src/dual_simplex/sparse_vector.hpp (6): sparse_vector_t (21-21), sparse_vector_t (23-23), sparse_vector_t (25-25), sparse_vector_t (27-27), sparse_vector_t (29-29), A (33-33)
- cpp/src/dual_simplex/sparse_matrix.hpp (4): row (87-87), nz (78-78), x (70-70), x (74-74)

cpp/src/dual_simplex/sparse_vector.hpp (1)
- cpp/src/dual_simplex/barrier.cu (2): A (912-1172), A (912-917)

cpp/src/dual_simplex/sparse_matrix.cpp (1)
- cpp/src/dual_simplex/sparse_matrix.hpp (1): C (139-139)

cpp/tests/dual_simplex/unit_tests/solve.cpp (2)
- cpp/src/dual_simplex/presolve.cpp (3): convert_user_problem (773-1000), convert_user_problem (773-777), convert_user_problem (1558-1563)
- cpp/src/dual_simplex/solve.cpp (7): basic_list (113-113), solve_linear_program_with_advanced_basis (128-294), solve_linear_program_with_advanced_basis (128-137), solve_linear_program_with_advanced_basis (803-812), solve_linear_program_with_cuts (297-444), solve_linear_program_with_cuts (297-307), solve_linear_program_with_cuts (814-825)

cpp/src/dual_simplex/solve.cpp (2)
- cpp/src/dual_simplex/phase2.cpp (6): settings (2119-2141), settings (2119-2119), basic_list (2176-2176), dual_phase2_with_advanced_basis (2197-2984), dual_phase2_with_advanced_basis (2197-2209), dual_phase2_with_advanced_basis (2999-3012)
- cpp/src/dual_simplex/phase2.hpp (1): dual_phase2_with_advanced_basis (61-73)

cpp/src/dual_simplex/basis_updates.cpp (2)
- cpp/src/dual_simplex/basis_updates.hpp (22): cuts_basic (294-294), L0_ (116-120), L0_ (367-371), rhs (59-59), rhs (62-62), rhs (66-68), rhs (72-74), rhs (77-77), rhs (79-80), rhs (83-83), rhs (86-86), rhs (89-89), rhs (92-92), rhs (95-95), rhs (98-98), rhs (101-101), rhs (104-104), rhs (307-307), rhs (308-308), nz (149-149), nz (397-397), nz (411-411)
- cpp/src/dual_simplex/vector_math.cpp (3): inverse_permutation (154-162), inverse_permutation (154-154), inverse_permutation (206-206)
🔇 Additional comments (1)
cpp/src/dual_simplex/sparse_vector.hpp (1)

28-29: CSR-row constructor declaration is consistent

The added constructor from a CSR row is a good complement to the existing CSC-column constructor and fits the class API. No issues at the declaration level.
@chris-maes Aspirational for 25.12?
This does not need to be part of 25.12. |
…eursitics and MIP presolve for experimentation
Actionable comments posted: 9
♻️ Duplicate comments (1)
cpp/src/dual_simplex/basis_updates.cpp (1)

1172-1172: Add template arguments to `sparse_vector_t` construction.

As previously noted, `sparse_vector_t` is a class template and requires explicit template arguments, since there is no deduction guide for the `(WT, h)` constructor.

```diff
-  sparse_vector_t rhs(WT, h);
+  sparse_vector_t<i_t, f_t> rhs(WT, h);
```
🧹 Nitpick comments (6)
cpp/src/dual_simplex/sparse_matrix.cpp (1)
402-427: Consider adding bounds validation for the sparse vector.

Unlike `append_rows`, this method doesn't validate that the sparse vector's indices are within the matrix's column bounds. Consider adding validation similar to lines 367-369 in `append_rows`:

```diff
 template <typename i_t, typename f_t>
 i_t csr_matrix_t<i_t, f_t>::append_row(const sparse_vector_t<i_t, f_t>& c)
 {
+  // Validate column indices are within bounds
+  for (i_t k = 0; k < c.i.size(); k++) {
+    if (c.i[k] >= this->n) {
+      return -1;
+    }
+  }
   const i_t old_m = this->m;
```

Alternatively, if callers are trusted, an `assert` would suffice for debug builds.

cpp/src/dual_simplex/branch_and_bound.cpp (3)
1131-1161: Debug logging should be gated by a compile-time flag or log level.

These logging statements print detailed information about every integer variable and fractional variable. Consider wrapping them in a debug macro or using a debug log level.

```diff
   if (num_fractional == 0) {
+#ifdef PRINT_INTEGER_VARS
     for (i_t j = 0; j < original_lp_.num_cols; j++) {
       if (var_types_[j] == variable_type_t::INTEGER) {
         settings_.log.printf("Variable %d type %d val %e\n", j, var_types_[j], root_relax_soln_.x[j]);
       }
     }
+#endif
     // ... rest of optimal case
   } else {
-    settings_.log.printf("Found %d fractional variables on cut pass %d\n", num_fractional, cut_pass);
-    for (i_t j : fractional) {
-      settings_.log.printf("Fractional variable %d lower %e value %e upper %e\n", j, original_lp_.lower[j], root_relax_soln_.x[j], original_lp_.upper[j]);
-    }
+    settings_.log.debug("Found %d fractional variables on cut pass %d\n", num_fractional, cut_pass);
```
1237-1249: Debug assertion with `exit(1)` should use assert or be removed.

This validation block checks that B^T * u_bar = e_i. While important for debugging, using `exit(1)` in production code is inappropriate.

```diff
+#ifndef NDEBUG
   std::vector<f_t> BTu_bar(original_lp_.num_rows);
   b_transpose_multiply(original_lp_, basic_list, u_bar_dense, BTu_bar);
   for (i_t k = 0; k < original_lp_.num_rows; k++) {
     if (k == i) {
-      if (std::abs(BTu_bar[k] - 1.0) > 1e-6) {
-        settings_.log.printf("BTu_bar[%d] = %e i %d\n", k, BTu_bar[k], i);
-        exit(1);
-      }
+      assert(std::abs(BTu_bar[k] - 1.0) <= 1e-6 && "BTu_bar diagonal should be 1");
     } else {
-      if (std::abs(BTu_bar[k]) > 1e-6) {
-        settings_.log.printf("BTu_bar[%d] = %e i %d\n", k, BTu_bar[k], i);
-        exit(1);
-      }
+      assert(std::abs(BTu_bar[k]) <= 1e-6 && "BTu_bar off-diagonal should be 0");
     }
   }
+#endif
```
1218-1218: Debug print should be gated.

This debug print for cut generation should be wrapped in a debug macro.

```diff
-  settings_.log.printf("Generating cut for variable %d relaxed value %e row %d\n", j, x_j, i);
+  settings_.log.debug("Generating cut for variable %d relaxed value %e row %d\n", j, x_j, i);
```

cpp/src/dual_simplex/solve.cpp (2)
315-327: Consider removing or guarding debug verification code.

This block performs a sanity check on the basis factorization and calls `exit(1)` on failure. Hard process termination is inappropriate for library code: it prevents callers from handling errors gracefully. Consider:

- Converting to `CUOPT_ASSERT` for debug builds only
- Returning an error status instead of terminating
- Wrapping with a `constexpr bool debug_verify = false;` guard similar to `write_out_matlab`

```diff
+  constexpr bool verify_basis = false;
+  if (verify_basis) {
   csc_matrix_t<i_t, f_t> Btest(lp.num_rows, lp.num_rows, 1);
   basis_update.multiply_lu(Btest);
   csc_matrix_t<i_t, f_t> B(lp.num_rows, lp.num_rows, 1);
   form_b(lp.A, basic_list, B);
   csc_matrix_t<i_t, f_t> Diff(lp.num_rows, lp.num_rows, 1);
   add(Btest, B, 1.0, -1.0, Diff);
   const f_t err = Diff.norm1();
   settings.log.printf("Before || B - L*U || %e\n", err);
-  if (err > 1e-6) {
-    exit(1);
-  }
+  CUOPT_ASSERT(err <= 1e-6);
+  }
```
435-449: Second debug block should also be guarded.

Same concern as the earlier verification block: `exit(1)` is inappropriate for library code. Additionally, there's inconsistent indentation at line 448.

```diff
   // Check the basis update
+  if (verify_basis) {
   csc_matrix_t<i_t, f_t> Btest(lp.num_rows, lp.num_rows, 1);
   basis_update.multiply_lu(Btest);
   // ...
   if (err > 1e-6) {
     Diff.print_matrix();
-    exit(1);
+    CUOPT_ASSERT(false);
   }
+  }
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- cpp/src/dual_simplex/basis_updates.cpp (1 hunks)
- cpp/src/dual_simplex/branch_and_bound.cpp (4 hunks)
- cpp/src/dual_simplex/phase2.cpp (1 hunks)
- cpp/src/dual_simplex/solve.cpp (3 hunks)
- cpp/src/dual_simplex/sparse_matrix.cpp (1 hunks)
- cpp/src/dual_simplex/sparse_matrix.hpp (1 hunks)
- cpp/src/mip/solver.cu (3 hunks)
✅ Files skipped from review due to trivial changes (1)
- cpp/src/dual_simplex/phase2.cpp
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-10-22T14:25:22.899Z
Learnt from: aliceb-nv
Repo: NVIDIA/cuopt PR: 527
File: cpp/src/mip/diversity/lns/rins.cu:167-175
Timestamp: 2025-10-22T14:25:22.899Z
Learning: In MIP (Mixed Integer Programming) problems in the cuOPT codebase, `n_integer_vars == 0` is impossible by definition—MIP problems must have at least one integer variable. If there are no integer variables, it would be a pure Linear Programming (LP) problem, not a MIP problem.
Applied to files:
- cpp/src/mip/solver.cu
- cpp/src/dual_simplex/branch_and_bound.cpp
🧬 Code graph analysis (3)
cpp/src/dual_simplex/sparse_matrix.cpp (1)
- cpp/src/dual_simplex/sparse_matrix.hpp (2): C (140-140), c (143-143)

cpp/src/mip/solver.cu (1)
- cpp/tests/mip/elim_var_remap_test.cu (1): sol (110-110)

cpp/src/dual_simplex/branch_and_bound.cpp (3)
- cpp/src/dual_simplex/solve.cpp (16): basic_list (114-114), solve_linear_program_with_advanced_basis (129-295), solve_linear_program_with_advanced_basis (129-138), solve_linear_program_with_advanced_basis (847-856), compute_user_objective (89-94), compute_user_objective (89-89), compute_user_objective (97-101), compute_user_objective (97-97), compute_user_objective (834-835), compute_user_objective (837-837), solve_linear_program_with_cuts (298-488), solve_linear_program_with_cuts (298-308), solve_linear_program_with_cuts (858-869), compute_objective (77-86), compute_objective (77-77), compute_objective (831-832)
- cpp/src/dual_simplex/solve.hpp (5): solve_linear_program_with_advanced_basis (53-62), compute_user_objective (37-37), compute_user_objective (40-40), solve_linear_program_with_cuts (65-75), compute_objective (34-34)
- cpp/src/dual_simplex/presolve.hpp (1): uncrush_primal_solution (160-163)
🔇 Additional comments (8)
cpp/src/dual_simplex/sparse_matrix.hpp (1)
139-143: LGTM! The new method declarations for `append_rows` and `append_row` follow the existing class conventions and provide a clean API for extending CSR matrices with additional rows.

cpp/src/dual_simplex/basis_updates.cpp (1)
1113-1315: Implementation of `append_cuts` looks structurally sound. The method correctly (see the sketch after this list):
- Solves U^T W^T = C_B^T row-by-row
- Computes V from W using the inverse transforms
- Extends L and U with the appropriate block structure
- Updates permutations and workspace sizes
Note: Ensure the template argument fix at line 1172 is applied (see previous comment).
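For readers following the bullet points above, the block structure they describe can be written out as a math sketch (notation inferred from this review, not taken from the implementation): with the current factorization B = LU and cut coefficients C_B on the basic columns, appending cuts whose slacks enter the basis with identity columns extends the factors as

$$
\begin{pmatrix} B & 0 \\ C_B & I \end{pmatrix}
=
\begin{pmatrix} L & 0 \\ V & I \end{pmatrix}
\begin{pmatrix} U & 0 \\ 0 & I \end{pmatrix},
\qquad V U = C_B \;\Longleftrightarrow\; U^T V^T = C_B^T,
$$

so each row of W^T comes from a triangular solve with U^T, and V is then recovered from W by undoing any update transforms folded into L.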
cpp/src/dual_simplex/sparse_matrix.cpp (1)
360-400: LGTM! The `append_rows` implementation correctly:
- Validates that the input matrix has compatible column dimensions
- Resizes storage appropriately
- Updates row pointers and copies nonzero data
cpp/src/dual_simplex/branch_and_bound.cpp (1)
1070-1075: Setting `scale_columns = false` may affect numerical stability. Disabling column scaling could impact numerical stability for certain problem instances. Consider adding a comment explaining why scaling is disabled here, or making this configurable.
cpp/src/mip/solver.cu (1)
130-130: Debug code: LP-concurrent path is disabled. The `0 &&` prefix disables the concurrent LP path when the problem is reduced to a pure LP.

```diff
-  if (0 && context.problem_ptr->n_integer_vars == 0) {
+  if (context.problem_ptr->n_integer_vars == 0) {
```

⛔ Skipped due to learnings
Learnt from: aliceb-nv. Repo: NVIDIA/cuopt, PR: 527, File: cpp/src/mip/diversity/lns/rins.cu:167-175, Timestamp: 2025-10-22T14:25:22.899Z. Learning: In MIP (Mixed Integer Programming) problems in the cuOPT codebase, `n_integer_vars == 0` is impossible by definition; MIP problems must have at least one integer variable. If there are no integer variables, it would be a pure Linear Programming (LP) problem, not a MIP problem.

cpp/src/dual_simplex/solve.cpp (3)
11-11: LGTM! The new include for `basis_solves.hpp` is required to support the `append_cuts` method used in the new function.
346-378: LGTM! The slack variable addition and LP dimension updates are implemented correctly. Each new cut constraint C_i * x <= d_i gets a slack variable s_i >= 0 such that C_i * x + s_i = d_i, with identity columns appended to the constraint matrix.
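In matrix form, this is the standard augmentation (a restatement of the sentence above, not code from the PR):

$$
\begin{pmatrix} A & 0 \\ C & I \end{pmatrix}
\begin{pmatrix} x \\ s \end{pmatrix}
=
\begin{pmatrix} b \\ d \end{pmatrix},
\qquad s \ge 0,
$$

where C stacks the cut rows and d their right-hand sides, so the enlarged problem keeps an equality-form constraint matrix.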
858-869: LGTM! The explicit template instantiation is correctly specified and follows the existing pattern in the file.
cpp/src/dual_simplex/solve.cpp (Outdated)

```cpp
lp_status_t lp_status;
if (status == dual::status_t::OPTIMAL) { lp_status = lp_status_t::OPTIMAL; }
if (status == dual::status_t::DUAL_UNBOUNDED) { lp_status = lp_status_t::INFEASIBLE; }
if (status == dual::status_t::TIME_LIMIT) { lp_status = lp_status_t::TIME_LIMIT; }
if (status == dual::status_t::ITERATION_LIMIT) { lp_status = lp_status_t::ITERATION_LIMIT; }
if (status == dual::status_t::CONCURRENT_LIMIT) { lp_status = lp_status_t::CONCURRENT_LIMIT; }
if (status == dual::status_t::NUMERICAL) { lp_status = lp_status_t::NUMERICAL_ISSUES; }
if (status == dual::status_t::CUTOFF) { lp_status = lp_status_t::CUTOFF; }
return lp_status;
```
Uninitialized lp_status leads to undefined behavior.
lp_status is declared without initialization. If status doesn't match any of the handled values, the function returns an indeterminate value, which is undefined behavior. Use a switch statement with a default case or initialize with a fallback value.
- lp_status_t lp_status;
- if (status == dual::status_t::OPTIMAL) { lp_status = lp_status_t::OPTIMAL; }
- if (status == dual::status_t::DUAL_UNBOUNDED) { lp_status = lp_status_t::INFEASIBLE; }
- if (status == dual::status_t::TIME_LIMIT) { lp_status = lp_status_t::TIME_LIMIT; }
- if (status == dual::status_t::ITERATION_LIMIT) { lp_status = lp_status_t::ITERATION_LIMIT; }
- if (status == dual::status_t::CONCURRENT_LIMIT) { lp_status = lp_status_t::CONCURRENT_LIMIT; }
- if (status == dual::status_t::NUMERICAL) { lp_status = lp_status_t::NUMERICAL_ISSUES; }
- if (status == dual::status_t::CUTOFF) { lp_status = lp_status_t::CUTOFF; }
+ lp_status_t lp_status = lp_status_t::NUMERICAL_ISSUES; // default fallback
+ switch (status) {
+ case dual::status_t::OPTIMAL: lp_status = lp_status_t::OPTIMAL; break;
+ case dual::status_t::DUAL_UNBOUNDED: lp_status = lp_status_t::INFEASIBLE; break;
+ case dual::status_t::TIME_LIMIT: lp_status = lp_status_t::TIME_LIMIT; break;
+ case dual::status_t::ITERATION_LIMIT: lp_status = lp_status_t::ITERATION_LIMIT; break;
+ case dual::status_t::CONCURRENT_LIMIT: lp_status = lp_status_t::CONCURRENT_LIMIT; break;
+ case dual::status_t::NUMERICAL: lp_status = lp_status_t::NUMERICAL_ISSUES; break;
+ case dual::status_t::CUTOFF: lp_status = lp_status_t::CUTOFF; break;
+ default: lp_status = lp_status_t::NUMERICAL_ISSUES; break;
+ }
   return lp_status;

🤖 Prompt for AI Agents
cpp/src/dual_simplex/solve.cpp lines 479-487: lp_status is declared
uninitialized and may be returned without being set; replace the chain of ifs
with a switch(status) that explicitly maps each dual::status_t case to the
corresponding lp_status_t and add a default case that assigns a sensible
fallback (e.g. lp_status_t::NUMERICAL_ISSUES or an appropriate "UNKNOWN"/error
status), and optionally log or assert on the default to catch unexpected enum
values.
```cpp
i_t num_cuts = cut_pool.get_best_cuts(cuts_to_add, cut_rhs, cut_types);
if (num_cuts == 0)
{
  //settings_.log.printf("No cuts found\n");
```
Remove commented code
```cpp
  //settings_.log.printf("No cuts found\n");
  break;
}
for (i_t k = 0; k < cut_types.size(); k++) {
```
Move to function
```cpp
}
#endif
// Check against saved solution
#if 1
```
Needs to be gated in final PR
```cpp
fractional.clear();
num_fractional = fractional_variables(settings_, root_relax_soln_.x, var_types_, fractional);

// TODO: Get upper bound from heuristics
```
Remove TODO that has been completed
```cpp
  }
}

if (num_gomory_cuts + num_mir_cuts + num_knapsack_cuts + num_cg_cuts > 0) {
```
Move to function
```cpp
settings_.log.printf("Size with cuts: %d constraints, %d variables, %d nonzeros\n", original_lp_.num_rows, original_lp_.num_cols, original_lp_.A.col_start[original_lp_.A.n]);
}

if (edge_norms_.size() != original_lp_.num_cols)
```
Move to function
cpp/src/dual_simplex/cuts.cpp (Outdated)

```cpp
needs_complement_ = false;
for (i_t j = 0; j < num_vars_; j++) {
  if (lp.lower[j] < 0) {
    settings_.log.printf("Variable %d has negative lower bound %e\n", j, lp.lower[j]);
```
Remove and handle negative lower bounds
```cpp
if (toc(start_time) > settings.time_limit) { return -1; }
if (settings.concurrent_halt != nullptr && *settings.concurrent_halt == 1) { return -1; }
if (toc(start_time) > settings.time_limit) {
  printf("initialize_steepest_edge time limit\n");
```
Remove debug printf
```cpp
  return -1;
}
if (settings.concurrent_halt != nullptr && *settings.concurrent_halt == 1) {
  printf("initialize_steepest_edge concurrent_halt\n");
```
Remove debug printf
```cpp
  }
}

if (primal_infeas > 10.0*settings.primal_tol)
```
Needs to be gated in final PR
```cpp
if (delta_y_steepest_edge[j] <= 0.0)
{
  fix_needed = true;
  //printf("Basic variable %d has a nonpositive steepest edge %e\n", j, delta_y_steepest_edge[j]);
```
Remove commented code
```cpp
  //printf("Basic variable %d has a nonpositive steepest edge %e\n", j, delta_y_steepest_edge[j]);
  delta_y_steepest_edge[j] = 1e-4;
}
if (fix_needed) {
```
Remove
```cpp
}

template <typename i_t, typename f_t>
f_t trial_branching(const lp_problem_t<i_t, f_t>& original_lp,
```
Can be removed in final PR
```cpp
  return branch_var;
}

template <typename i_t, typename f_t>
```
Can be removed in final PR
```cpp
  EXPECT_NEAR(solution.z[1], 0.0, 1e-6);
}

#if 0
```
Test no longer works after API change. Needs to be reworked to use the current API.
…sible with a bad cut
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/cuts.cpp`:
- Line 2367: The printf call uses "%d" for coefficients.size() which returns
size_t, causing undefined behavior on 64-bit; update the format specifier to
"%zu" (or cast coefficients.size() to int if intentionally narrowing) so the
call printf("Original inequality rhs %e nz %zu\n", rhs, coefficients.size());
uses the correct type for coefficients.size(); ensure any similar printf usages
in cuts.cpp referencing coefficients.size() are fixed the same way.
- Around line 2493-2497: The code logs append_rows failures but then uses
assert(append_status == 0), which gets removed in release builds; change this to
return an error instead of asserting: after calling new_A_row.append_rows(cuts)
check if append_status != 0, call settings.log.printf(...) as now, then return
or propagate the error (e.g., return append_status or another appropriate
non-zero code) from the current function instead of asserting; apply the same
replacement for the other identical append_rows checks that use append_status
and settings.log.printf.
- Around line 443-447: The loop computing ratios can divide by zero: in the
for-loop that sets ratios[i] = values[i] / weights[i], add a guard for
weights[i] == 0 and handle it deterministically (e.g., if weights[i] == 0 set
ratios[i] = std::numeric_limits<f_t>::infinity() when values[i] > 0, or
0/−infinity as appropriate for your logic) instead of performing the division;
also add `#include <limits>` if needed and mention the symbols to change: the
vector ratios, the loop over i (i_t i = 0; i < n; i++), and the arrays values
and weights.
- Around line 708-718: The code reads slack_map[i] into slack and
unconditionally uses xstar[slack], causing out-of-bounds when slack == -1;
modify the block that computes slack_value/slack_score in function/context
assigning score[i] so it first checks if slack >= 0 (i.e., valid index) and only
then reads xstar[slack] and computes slack_score, otherwise set slack_score to 0
(or another safe default) before combining nz_weight * nz_score + slack_weight *
slack_score; update references in this block (slack_map, slack, xstar,
slack_value, slack_score, score) to reflect the guarded path.
In `@cpp/src/dual_simplex/simplex_solver_settings.hpp`:
- Around line 90-99: Add Cython declarations and exports in
python/cuopt/cuopt/linear_programming/solver/solver_parameters.pyx for the new
constants so they are visible in the Python API: declare and cdef public the
following names matching the C++ constants—CUOPT_MIP_CUT_PASSES,
CUOPT_MIP_MIR_CUTS, CUOPT_MIP_MIXED_INTEGER_GOMORY_CUTS,
CUOPT_MIP_KNAPSACK_CUTS, CUOPT_MIP_STRONG_CHVATAL_GOMORY_CUTS, and
CUOPT_MIP_RELIABILITY_BRANCHING—ensuring the types match the definitions in
constants.h and the mappings used in solver_settings.cu (so Python users can set
max_cut_passes, mir_cuts, mixed_integer_gomory_cuts, knapsack_cuts,
strong_chvatal_gomory_cuts, and reliability_branching).
♻️ Duplicate comments (14)
cpp/src/dual_simplex/pseudo_costs.cpp (2)
407-413: Race condition: shared state read after mutex release.

Line 411 reads `pseudo_cost_sum_down[j]` and `pseudo_cost_num_down[j]` after the mutex is unlocked at line 410. Another thread could modify these values between the unlock and the read, causing inconsistent pseudo-cost calculations. Compare with the up-branch case (lines 431-435), where the read correctly occurs inside the mutex.

🔒 Proposed fix

```diff
 if (!std::isnan(obj)) {
   f_t change_in_obj = obj - current_obj;
   f_t change_in_x   = solution[j] - std::floor(solution[j]);
   mutex.lock();
   pseudo_cost_sum_down[j] += change_in_obj / change_in_x;
   pseudo_cost_num_down[j]++;
+  pseudo_cost_down[k] = pseudo_cost_sum_down[j] / pseudo_cost_num_down[j];
   mutex.unlock();
-  pseudo_cost_down[k] = pseudo_cost_sum_down[j] / pseudo_cost_num_down[j];
 }
```
445-457: Out-of-bounds access when `fractional` is empty, and debug print uses `printf`.

Three issues:
- Line 445: If `num_fractional` is 0, `fractional[0]` causes out-of-bounds access
- Line 457: If no variable has a better score, `select` remains `-1`, causing `score[select]` to access an invalid index (confirmed by static analysis)
- Line 456: Uses `printf` instead of `log.printf`

🛡️ Proposed fix

```diff
+  if (num_fractional == 0) {
+    log.printf("No fractional variables for reliable branching\n");
+    return -1;
+  }
+
   i_t branch_var = fractional[0];
   f_t max_score  = -1;
   i_t select     = -1;
   for (i_t k = 0; k < num_fractional; k++) {
     if (score[k] > max_score) {
       max_score  = score[k];
       branch_var = fractional[k];
       select     = k;
     }
   }
-  printf(
+  log.printf(
     "pc reliability branching on %d. Value %e. Score %e. Iter %d. Trial branches %d\n",
     branch_var, solution[branch_var], score[select], iter, trial_branches);
```

cpp/src/dual_simplex/branch_and_bound.cpp (7)
556-573: Avoid lock-order inversion between `mutex_upper_` and `mutex_original_lp_`.

`find_reduced_cost_fixings()` locks `mutex_original_lp_` while `mutex_upper_` is held here, but `set_new_solution()` locks in the opposite order (original→upper). This is a classic deadlock cycle. Move the call outside the `mutex_upper_` critical section (capture a snapshot first).

🔧 Suggested restructuring

```diff
-  if (is_feasible) {
-    mutex_upper_.lock();
-
-    if (repaired_obj < upper_bound_) {
-      upper_bound_ = repaired_obj;
-      incumbent_.set_incumbent_solution(repaired_obj, repaired_solution);
-      report_heuristic(repaired_obj);
-
-      if (settings_.solution_callback != nullptr) {
-        std::vector<f_t> original_x;
-        uncrush_primal_solution(original_problem_, original_lp_, repaired_solution, original_x);
-        settings_.solution_callback(original_x, repaired_obj);
-      }
-
-      find_reduced_cost_fixings(repaired_obj);
-    }
-
-    mutex_upper_.unlock();
-  }
+  if (is_feasible) {
+    bool do_rc_fix  = false;
+    f_t ub_snapshot = 0.0;
+    mutex_upper_.lock();
+
+    if (repaired_obj < upper_bound_) {
+      upper_bound_ = repaired_obj;
+      incumbent_.set_incumbent_solution(repaired_obj, repaired_solution);
+      report_heuristic(repaired_obj);
+
+      if (settings_.solution_callback != nullptr) {
+        std::vector<f_t> original_x;
+        uncrush_primal_solution(original_problem_, original_lp_, repaired_solution, original_x);
+        settings_.solution_callback(original_x, repaired_obj);
+      }
+      do_rc_fix   = true;
+      ub_snapshot = upper_bound_;
+    }
+
+    mutex_upper_.unlock();
+    if (do_rc_fix) { find_reduced_cost_fixings(ub_snapshot); }
+  }
```
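For code paths that genuinely need both mutexes at once, C++17's `std::scoped_lock` acquires multiple mutexes with a deadlock-avoidance algorithm, so correctness no longer depends on acquisition order. A minimal standalone sketch (names are illustrative, not from the PR):

```cpp
#include <mutex>

std::mutex mutex_upper;        // guards the incumbent / upper bound
std::mutex mutex_original_lp;  // guards the shared LP data

void update_both() {
  // scoped_lock locks both mutexes atomically (std::lock under the hood),
  // so this cannot deadlock against a thread locking them in the other order.
  std::scoped_lock guard(mutex_upper, mutex_original_lp);
  // ... mutate the upper bound and the LP bounds together ...
}
```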
786-812: Gate/remove the depth-check debug block from this hot path.

The block allocates O(n) vectors and prints to stdout when `depth > num_integer_variables_`. If it's diagnostic, wrap it in a debug macro and use `settings_.log.debug` instead of `printf`.
953-967: Gate noisy fixed-variable diagnostics.

Unconditional `printf` in the leaf update path can be very chatty and slow. Prefer `settings_.log.debug` under a compile-time or settings flag; one gating pattern is sketched below.
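One lightweight way to gate such diagnostics (an illustrative pattern, not an existing macro in this codebase):

```cpp
// Compile-time gate: debug logging compiles away entirely in release builds.
#ifdef DUAL_SIMPLEX_VERBOSE
#define DS_DEBUG_LOG(settings, ...) (settings).log.printf(__VA_ARGS__)
#else
#define DS_DEBUG_LOG(settings, ...) ((void)0)
#endif
```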
1679-1707: Verify `basic_list`/`nonbasic_list` sizing expectations for the advanced-basis solve.

`basic_list` is constructed with size `m` here, then passed into `solve_linear_program_with_advanced_basis`. Please confirm the callee expects pre-sized vectors (vs. empty + push_back), to avoid size mismatches.

```bash
#!/bin/bash
# Locate and inspect solve_linear_program_with_advanced_basis usage of basic_list/nonbasic_list
rg -n "solve_linear_program_with_advanced_basis" -S
rg -n "basic_list|nonbasic_list" -S
```
339-418: Reduced-cost fixings are expensive and currently a no-op; gate them or apply the bounds.

This allocates/copies O(n) vectors and runs bound strengthening, but never writes the tightened bounds back to any node/global LP. It also uses `printf` and holds `mutex_original_lp_` across heavy work. Please either gate it behind a setting (and return early when disabled) or actually apply the new bounds under the lock, and route output through the logger.
610-637: Don’t writesolution.daton production paths.
#if 1forces file I/O and stdout in normal solves, and uses rawFILE*+ a%xformat onsize_t. Please gate this behind a debug flag/settings knob, switch to RAII (std::ofstream), and fix the format specifier (e.g.,%zx). As per coding guidelines, ensure file handles are properly closed.🔧 Example gating + RAII
-#if 1 - if (settings_.sub_mip == 0) { - FILE* fid = NULL; - fid = fopen("solution.dat", "w"); - if (fid != NULL) { - printf("Writing solution.dat\n"); - ... - printf("Solution hash: %20x\n", seed); - fclose(fid); - } - } -#endif +#ifdef DEBUG_SOLUTION_DUMP + if (settings_.sub_mip == 0) { + std::ofstream fid("solution.dat"); + if (fid) { + settings_.log.debug("Writing solution.dat\n"); + ... + settings_.log.debug("Solution hash: %20zx\n", seed); + } + } +#endif
1801-1909: Cut verification file I/O should be debug-gated.

The unconditional `#if 1` blocks read saved solutions and print cut violations, introducing nondeterministic I/O and noise in production. Please gate them behind a debug flag/settings knob and use the logger.

🧰 Example gating

```diff
-#if 1
-  read_saved_solution_for_cut_verification(original_lp_, settings_, saved_solution);
-#endif
+#ifdef CUT_VERIFICATION
+  read_saved_solution_for_cut_verification(original_lp_, settings_, saved_solution);
+#endif
```

cpp/src/dual_simplex/phase2.cpp (2)
1234-1241: Use the logger (and gate) instead of `printf` in halt paths.

These stdout prints fire in tight loops; prefer `settings.log.debug` under a debug flag.
2095-2181: Gate/route the infeasibility-breakdown output through the logger.

The new diagnostic prints in `prepare_optimality()` are unguarded `printf`s. Please switch to `settings.log` and gate them (debug flag or verbosity).

cpp/src/dual_simplex/cuts.cpp (3)
96-98: Add an epsilon check for near-zero norms in `cut_orthogonality`.

As discussed in a previous review and per retrieved learnings, while zero-norm cuts are prevented from entering the pool, near-zero norms can still cause numerical instability in the division at line 98.

🔎 Suggested fix

```diff
 f_t norm_i = cut_norms_[i];
 f_t norm_j = cut_norms_[j];
+constexpr f_t kMinNorm = 1e-12;
+const f_t denom = norm_i * norm_j;
+if (denom < kMinNorm) {
+  return 0.0;  // Treat near-parallel or degenerate cuts as non-orthogonal
+}
-return 1.0 - std::abs(dot) / (norm_i * norm_j);
+return 1.0 - std::abs(dot) / denom;
```
63-66: Guard against division by a near-zero `cut_norm` in `cut_distance`.

Per the retrieved learnings and the coding guidelines for numerical stability, `cut_norm` can be near-zero for degenerate cuts. Line 65 performs `cut_violation / cut_norm` without checking for near-zero values.

🔎 Suggested fix

```diff
 cut_violation = rhs_storage_[row] - cut_x;
 cut_norm = std::sqrt(dot);
+constexpr f_t kMinNorm = 1e-12;
+if (cut_norm < kMinNorm) {
+  return 0.0;
+}
 const f_t distance = cut_violation / cut_norm;
 return distance;
```

Based on learnings: near-zero norms should be guarded against in cut-related computations.
2503-2520: `edge_norms` is not resized after adding new slack columns.

The function increases `lp.num_cols` by `p` (new slack columns) and resizes `lp.lower`, `lp.upper`, `lp.objective`, and `vstatus` accordingly, but does not resize `edge_norms`. This creates an inconsistency: if downstream code accesses `edge_norms[j]` for the new slack columns (indices `old_cols` to `old_cols + p - 1`), it causes out-of-bounds access. Either resize `edge_norms` here to match the pattern of the other data structures, or document that callers are responsible for managing it separately.

🔎 Suggested fix

```diff
 for (i_t j = lp.num_cols; j < lp.num_cols + p; j++) {
   new_A_col.col_start[j] = nz;
   new_A_col.i[nz] = k++;
   new_A_col.x[nz] = 1.0;
   nz++;
   lp.lower[j] = 0.0;
   lp.upper[j] = inf;
   lp.objective[j] = 0.0;
   new_slacks.push_back(j);
 }
+
+// Resize edge_norms for new slack columns (initialize to 1.0 for identity columns)
+edge_norms.resize(lp.num_cols + p, 1.0);
```
🧹 Nitpick comments (9)
cpp/src/utilities/timer.hpp (1)
81-84: Integer division truncates the result unnecessarily.

The variables `tv_sec` and `tv_usec` are declared as `double`, but the assignments use integer division (`/`) and modulo (`%`) on `int64_t` counts. While this works for the current use case (timestamps after the epoch), the explicit integer operations followed by double assignment are confusing and could be simplified. Consider using floating-point division directly for clarity:

♻️ Suggested simplification

```diff
-  // Populate the timeval struct
-  double tv_sec = us_since_epoch.count() / 1000000;
-  double tv_usec = us_since_epoch.count() % 1000000;
-
-  return tv_sec + 1e-6 * tv_usec;
+  return static_cast<double>(us_since_epoch.count()) * 1e-6;
```

cpp/src/dual_simplex/basis_updates.cpp (2)
1111-1154: Debug code contains `exit(1)` that should not reach production.

The `CHECK_W` block at lines 1148-1150 calls `exit(1)`, which terminates the process from library code. While this is gated by a debug macro, it's better practice to use assertions or return error codes even in debug paths, to avoid accidental termination if the macro is inadvertently enabled.

♻️ Suggested improvement

```diff
 #ifdef CHECK_W
   {
     for (i_t k = 0; k < cuts_basic.m; k++) {
       // ... validation code ...
       for (i_t h = 0; h < m; h++) {
         if (std::abs(CBT_col_dense[h] - CBT_col[h]) > 1e-6) {
           printf("W: col %d CBT_col_dense[%d] = %e CBT_col[%d] = %e\n", k, h, CBT_col_dense[h], h, CBT_col[h]);
-          exit(1);
+          assert(false && "CHECK_W validation failed");
         }
       }
     }
   }
 #endif
```
1203-1233: Debug code contains `exit(1)`; same concern as the CHECK_W block.

The `CHECK_V` block at line 1229 also calls `exit(1)`. Apply the same pattern of using assertions instead.

♻️ Suggested improvement

```diff
 #ifdef CHECK_V
   // ... validation code ...
   for (i_t l = 0; l < cuts_basic.m; l++) {
     if (std::abs(CB_col_dense[l] - CB_column[l]) > 1e-6) {
       printf("V: col %d CB_col_dense[%d] = %e CB_column[%d] = %e\n", k, l, CB_col_dense[l], l, CB_column[l]);
-      exit(1);
+      assert(false && "CHECK_V validation failed");
     }
   }
 #endif
```
82-91: Consider adding documentation comments for new public API fields.

Per the coding guidelines for public headers (cpp/include/cuopt/**/*), new public fields should have documentation comments (Doxygen format). These new MIP solver settings would benefit from brief descriptions explaining:

- What each option controls
- Valid value ranges (e.g., -1 for auto, 0 to disable, >0 to enable with a limit)
- Default behavior

📝 Suggested documentation

```diff
 f_t time_limit = std::numeric_limits<f_t>::infinity();
+/** Maximum number of branch-and-bound nodes to explore. Default: unlimited. */
 i_t node_limit = std::numeric_limits<i_t>::max();
+/** Enable reliability branching. -1: auto, 0: disable, >0: reliability threshold. */
 i_t reliability_branching = -1;
 bool heuristics_only = false;
 i_t num_cpu_threads = -1;  // -1 means use default number of threads in branch and bound
+/** Number of cutting plane passes at each node. 0: disabled. */
 i_t max_cut_passes = 0;  // number of cut passes to make
+/** Enable MIR cuts. -1: auto, 0: disable, >0: max cuts per pass. */
 i_t mir_cuts = -1;
+/** Enable mixed-integer Gomory cuts. -1: auto, 0: disable, >0: max cuts per pass. */
 i_t mixed_integer_gomory_cuts = -1;
+/** Enable knapsack cover cuts. -1: auto, 0: disable, >0: max cuts per pass. */
 i_t knapsack_cuts = -1;
+/** Enable strong Chvátal-Gomory cuts. -1: auto, 0: disable, >0: max cuts per pass. */
 i_t strong_chvatal_gomory_cuts = -1;
```

cpp/src/dual_simplex/branch_and_bound.hpp (1)
113-121: Consider grouping the root-relaxation outputs to reduce API sprawl.

The signature now has many out-params (solution, vstatus, basis, lists, norms). A small `root_relaxation_result_t` would reduce call-site coupling and make evolution easier; a sketch follows below. Based on learnings, reduce tight coupling between solver components.
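A rough sketch of what such an aggregate could look like. The struct name comes from the comment above; the field names and types are hypothetical placeholders, not declarations from the PR:

```cpp
#include <vector>

// Hypothetical grouping of the root-relaxation out-params; illustrative only.
template <typename i_t, typename f_t>
struct root_relaxation_result_t {
  std::vector<f_t> x, y, z;        // primal, dual, and reduced-cost vectors
  std::vector<i_t> vstatus;        // per-variable basis status
  std::vector<i_t> basic_list;     // indices of basic variables
  std::vector<i_t> nonbasic_list;  // indices of nonbasic variables
  std::vector<f_t> edge_norms;     // steepest-edge norms for warm starts
};
```

Passing one such struct by reference would replace the current list of output arguments with a single parameter at every call site.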
cpp/src/dual_simplex/phase2.cpp (1)

2560-2573: Avoid unconditional stdout in the no-leaving-variable path.

The diagnostic `printf` inside the infeasibility scan can spam logs; prefer `settings.log.debug` under a debug flag.

cpp/src/dual_simplex/cuts.cpp (3)
1349-1356: Consider propagating an error for negative lower bounds.

Lines 1351-1353 log a warning when encountering a negative lower bound but continue processing. Per the previous review and chris-maes's comment, negative lower bounds should be handled. Since this is a constructor, consider adding a validity flag or throwing an exception that the caller can handle.

🔎 Suggested approach

```diff
+valid_ = true;
 needs_complement_ = false;
 for (i_t j = 0; j < num_vars_; j++) {
   if (lp.lower[j] < 0) {
     settings_.log.printf("Variable %d has negative lower bound %e\n", j, lp.lower[j]);
+    valid_ = false;
+    continue;  // Skip processing this variable to avoid invalid state
```

Add a `bool valid_` member and an `is_valid()` accessor so callers can check initialization status.
2790-2815: Consider an RAII pattern for file handle management.

The file handle `fid` is managed with C-style `fopen`/`fclose`. While the current control flow closes the file properly, using RAII (e.g., `std::unique_ptr` with a custom deleter, or `std::ifstream`) would be more robust against future modifications that might add early returns. As per coding guidelines: ensure RAII patterns for file handle cleanup. A short sketch follows.
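A minimal illustration of the `std::ifstream` alternative (a generic sketch; the actual file format and parsing in cuts.cpp may differ):

```cpp
#include <fstream>
#include <string>
#include <vector>

// The stream closes itself when it goes out of scope, even on an early return,
// so there is no fclose bookkeeping to get wrong.
bool read_values(const std::string& path, std::vector<double>& out)
{
  std::ifstream in(path);
  if (!in) { return false; }
  double value;
  while (in >> value) { out.push_back(value); }
  return true;
}
```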
2346-2356: Use `settings.log.printf` instead of raw `printf` for consistency.

Lines 2346 and 2355 use raw `printf` for error messages while the rest of the codebase uses `settings.log.printf`. This inconsistency may cause logging issues in production environments where stdout is not captured.

🔎 Suggested fix

```diff
       } else {
-        printf("Error: p %d f_a_0 %e f_a_j %e alpha %e value %.16e\n", p, f_a_0, f_a_j, alpha, value);
+        settings.log.printf("Error: p %d f_a_0 %e f_a_j %e alpha %e value %.16e\n", p, f_a_0, f_a_j, alpha, value);
         return -1;
       }
     ...
 } else {
-  printf("Error: k %d lower %e f(a_0) %e upper %e\n", k, lower, f_a_0, upper);
+  settings.log.printf("Error: k %d lower %e f(a_0) %e upper %e\n", k, lower, f_a_0, upper);
   return -1;
 }
```
```cpp
std::vector<f_t> ratios;
ratios.resize(n);
for (i_t i = 0; i < n; i++) {
  ratios[i] = values[i] / weights[i];
}
```
Guard against division by zero when computing value/weight ratios.
Line 446 computes ratios[i] = values[i] / weights[i] without checking if weights[i] is zero. While knapsack constraints typically have positive weights, a degenerate case could cause division by zero.
🔎 Suggested fix
```diff
 std::vector<f_t> ratios;
 ratios.resize(n);
 for (i_t i = 0; i < n; i++) {
-  ratios[i] = values[i] / weights[i];
+  ratios[i] = (weights[i] > 1e-12) ? (values[i] / weights[i]) : 0.0;
 }
```
In `@cpp/src/dual_simplex/cuts.cpp` around lines 443 - 447, The loop computing
ratios can divide by zero: in the for-loop that sets ratios[i] = values[i] /
weights[i], add a guard for weights[i] == 0 and handle it deterministically
(e.g., if weights[i] == 0 set ratios[i] = std::numeric_limits<f_t>::infinity()
when values[i] > 0, or 0/−infinity as appropriate for your logic) instead of
performing the division; also add `#include <limits>` if needed and mention the
symbols to change: the vector ratios, the loop over i (i_t i = 0; i < n; i++),
and the arrays values and weights.
```cpp
f_t nz_score = lp.num_cols - row_nz;

const i_t slack = slack_map[i];
const f_t slack_value = xstar[slack];

f_t slack_score = -std::log10(1e-16 + std::abs(slack_value));

const f_t nz_weight = 1.0;
const f_t slack_weight = 1.0;

score[i] = nz_weight * nz_score + slack_weight * slack_score;
```
Out-of-bounds access when slack_map[i] is -1.
While slack_map is now initialized to -1 (addressing the previous concern), Lines 710-711 access xstar[slack] without first checking if slack == -1. For rows without a mapped slack, this causes an out-of-bounds read.
🔎 Suggested fix
```diff
 if (num_integer_in_row == 0)
 {
   score[i] = 0.0;
 } else {
+  const i_t slack = slack_map[i];
+  if (slack == -1) {
+    score[i] = 0.0;
+    continue;
+  }
   f_t nz_score = lp.num_cols - row_nz;
-  const i_t slack = slack_map[i];
   const f_t slack_value = xstar[slack];
   f_t slack_score = -std::log10(1e-16 + std::abs(slack_value));
```
In `@cpp/src/dual_simplex/cuts.cpp` around lines 708 - 718, The code reads
slack_map[i] into slack and unconditionally uses xstar[slack], causing
out-of-bounds when slack == -1; modify the block that computes
slack_value/slack_score in function/context assigning score[i] so it first
checks if slack >= 0 (i.e., valid index) and only then reads xstar[slack] and
computes slack_score, otherwise set slack_score to 0 (or another safe default)
before combining nz_weight * nz_score + slack_weight * slack_score; update
references in this block (slack_map, slack, xstar, slack_value, slack_score,
score) to reflect the guarded path.
```cpp
  }
}
printf("\n");
printf("Original inequality rhs %e nz %d\n", rhs, coefficients.size());
```
Format specifier mismatch: %d used for size_t.
coefficients.size() returns size_t but %d expects int. This is undefined behavior on 64-bit systems. Use %zu or cast to int.
🔎 Suggested fix
- printf("Original inequality rhs %e nz %d\n", rhs, coefficients.size());
+ printf("Original inequality rhs %e nz %zu\n", rhs, coefficients.size());📝 Committable suggestion
🤖 Prompt for AI Agents
In `@cpp/src/dual_simplex/cuts.cpp` at line 2367, The printf call uses "%d" for
coefficients.size() which returns size_t, causing undefined behavior on 64-bit;
update the format specifier to "%zu" (or cast coefficients.size() to int if
intentionally narrowing) so the call printf("Original inequality rhs %e nz
%zu\n", rhs, coefficients.size()); uses the correct type for
coefficients.size(); ensure any similar printf usages in cuts.cpp referencing
coefficients.size() are fixed the same way.
```cpp
i_t append_status = new_A_row.append_rows(cuts);
if (append_status != 0) {
  settings.log.printf("append_rows error: %d\n", append_status);
  assert(append_status == 0);
}
```
Replace assert with explicit error return for robustness.
Lines 2493-2497 use assert(append_status == 0) after logging the error. In release builds with NDEBUG defined, asserts are stripped, so the function would continue with an invalid state. Return an error code instead.
🔎 Suggested fix
```diff
 i_t append_status = new_A_row.append_rows(cuts);
 if (append_status != 0) {
   settings.log.printf("append_rows error: %d\n", append_status);
-  assert(append_status == 0);
+  return append_status;
 }
```

Apply similar changes at lines 2569-2573 and 2598-2601.
🤖 Prompt for AI Agents
In `@cpp/src/dual_simplex/cuts.cpp` around lines 2493 - 2497, The code logs
append_rows failures but then uses assert(append_status == 0), which gets
removed in release builds; change this to return an error instead of asserting:
after calling new_A_row.append_rows(cuts) check if append_status != 0, call
settings.log.printf(...) as now, then return or propagate the error (e.g.,
return append_status or another appropriate non-zero code) from the current
function instead of asserting; apply the same replacement for the other
identical append_rows checks that use append_status and settings.log.printf.
```cpp
max_cut_passes(0),
mir_cuts(-1),
mixed_integer_gomory_cuts(-1),
knapsack_cuts(-1),
strong_chvatal_gomory_cuts(-1),
num_bfs_workers(std::max(num_threads / 4, 1)),
random_seed(0),
inside_mip(0),
sub_mip(0),
reliability_branching(-1),
```
🧩 Analysis chain
🏁 Scripts executed against NVIDIA/cuopt (outputs elided):

```bash
# Search for exposure of these settings across the codebase
rg -n "max_cut_passes|mir_cuts|mixed_integer_gomory_cuts|knapsack_cuts|strong_chvatal_gomory_cuts|sub_mip|reliability_branching" --type-add 'src:include:*.{hpp,h,cpp,py,json,yaml}' -i

# Check settings header structure to understand how settings are typically exposed
head -50 cpp/src/dual_simplex/simplex_solver_settings.hpp

# Look for config/settings parsing patterns in the codebase
fd -e 'py' | head -20 | xargs grep -l "SimplexSolverSettings\|settings\|config" 2>/dev/null | head -10

# Check for JSON schema or settings parsing files
fd -e 'json' -o -e 'yaml' | xargs grep -l "cut\|simplex" 2>/dev/null

# Search for Python bindings that expose settings
fd -e 'pyx' -o -e 'pxd' | xargs grep -l "SimplexSolverSettings" 2>/dev/null

# Search for SimplexSolverSettings in the codebase
rg "SimplexSolverSettings" -A 5 -B 5 | head -100

# Look for settings parsing or exposure patterns
rg "max_cut_passes|mir_cuts|knapsack_cuts|strong_chvatal_gomory_cuts|reliability_branching" . --type cpp --type py | head -50

# Check if there's a settings factory or parser
find . -name "*settings*" -o -name "*config*" | grep -E '\.(cpp|hpp|py|h)$' | head -20

# Search for where settings are actually constructed/exposed to users
rg "struct.*Settings\|class.*Settings" -A 20 --type cpp | head -100

# Look for Python bindings files
find . -name "*.pyx" -o -name "*.pxd" -o -name "*binding*" | head -20

# Check the Python solver_settings file
cat python/cuopt/cuopt/linear_programming/solver_settings/solver_settings.py | head -200

# Check how constants are used in config parsing
rg "CUOPT_MIP_MIR_CUTS|CUOPT_MIP_KNAPSACK_CUTS|CUOPT_MIP_RELIABILITY_BRANCHING" -B 3 -A 3

# Check the server settings parser
cat python/cuopt_server/cuopt_server/utils/settings.py | head -150

# Search for JSON/config mapping of these settings
rg "mip_mir_cuts|mip_knapsack_cuts|mip_strong_chvatal_gomory_cuts|mip_reliability_branching" -B 2 -A 2

# Check how MIP solver settings are exposed in Python
rg "class.*Settings\|def.*settings" python/cuopt/cuopt/linear_programming/solver_settings/ -A 10 | head -100

# Check what constants are imported in solver_parameters.pyx
rg "CUOPT_MIP_MIR_CUTS|CUOPT_MIP_KNAPSACK_CUTS|CUOPT_MIP_RELIABILITY_BRANCHING|CUOPT_MIP_CUT_PASSES" python/

# Check the solver_parameters.pyx file to see all exported constants
cat python/cuopt/cuopt/linear_programming/solver/solver_parameters.pyx | grep -A 5 "CUOPT_MIP" | head -80

# Verify the constants.h file has all the settings defined
rg "CUOPT_MIP_" cpp/include/cuopt/linear_programming/constants.h

# Check solver_settings.cu to confirm all settings have mapping entries
rg "CUOPT_MIP_(MIR_CUTS|KNAPSACK_CUTS|RELIABILITY_BRANCHING|CUT_PASSES)" cpp/src/math_optimization/solver_settings.cu -B 1 -A 1

# Check if cut settings are exposed in server config handling
rg "mip_cut_passes|mip_mir_cuts|mip_knapsack_cuts|reliability_branching" python/cuopt_server/ --type py -B 2 -A 2

# Check complete solver_parameters.pyx for all CUOPT_MIP exports
grep -n "CUOPT_MIP_" python/cuopt/cuopt/linear_programming/solver/solver_parameters.pyx | tail -30

# Check if these constants are documented/listed anywhere
rg "cut_passes|mir_cuts|knapsack_cuts|reliability_branching" python/cuopt/cuopt/ --type py
```
Add Python bindings for new cut and MIP settings.
The new settings (max_cut_passes, mir_cuts, mixed_integer_gomory_cuts, knapsack_cuts, strong_chvatal_gomory_cuts, reliability_branching) are defined in constants.h and properly mapped in solver_settings.cu, but the Python bindings in solver_parameters.pyx do not expose these constants. Users cannot set these parameters via the Python API despite full C++ implementation.
Update python/cuopt/cuopt/linear_programming/solver/solver_parameters.pyx to add Cython declarations and exports for:
- CUOPT_MIP_CUT_PASSES
- CUOPT_MIP_MIR_CUTS
- CUOPT_MIP_MIXED_INTEGER_GOMORY_CUTS
- CUOPT_MIP_KNAPSACK_CUTS
- CUOPT_MIP_STRONG_CHVATAL_GOMORY_CUTS
- CUOPT_MIP_RELIABILITY_BRANCHING
Also applies to: 164-175
🤖 Prompt for AI Agents
In `@cpp/src/dual_simplex/simplex_solver_settings.hpp` around lines 90 - 99, Add
Cython declarations and exports in
python/cuopt/cuopt/linear_programming/solver/solver_parameters.pyx for the new
constants so they are visible in the Python API: declare and cdef public the
following names matching the C++ constants—CUOPT_MIP_CUT_PASSES,
CUOPT_MIP_MIR_CUTS, CUOPT_MIP_MIXED_INTEGER_GOMORY_CUTS,
CUOPT_MIP_KNAPSACK_CUTS, CUOPT_MIP_STRONG_CHVATAL_GOMORY_CUTS, and
CUOPT_MIP_RELIABILITY_BRANCHING—ensuring the types match the definitions in
constants.h and the mappings used in solver_settings.cu (so Python users can set
max_cut_passes, mir_cuts, mixed_integer_gomory_cuts, knapsack_cuts,
strong_chvatal_gomory_cuts, and reliability_branching).
```cpp
root_relax_soln_.z.resize(original_lp_.num_cols, 0.0);

// For now just clear the edge norms
edge_norms_.clear();
```
Should keep edge norms for the old basic variables.
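A sketch of one way to do that, assuming `edge_norms_` currently holds one entry per pre-existing column (illustrative, not code from the PR): keep the old entries and default-initialize only the new slack columns.

```cpp
// Preserve norms for the old basic variables; new slack columns get 1.0,
// which is exact for identity columns.
edge_norms_.resize(original_lp_.num_cols, 1.0);
```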
```cmake
# Uncomment to enable debug info
#set_source_files_properties(${DUAL_SIMPLEX_SRC_FILES} DIRECTORY ${CMAKE_SOURCE_DIR} PROPERTIES COMPILE_OPTIONS "-g1")
set_source_files_properties(${DUAL_SIMPLEX_SRC_FILES} DIRECTORY ${CMAKE_SOURCE_DIR} PROPERTIES COMPILE_OPTIONS "-g1")
```
Must be removed before merging
```cpp
lp.lower.resize(lp.num_cols + p);
lp.upper.resize(lp.num_cols + p);
lp.objective.resize(lp.num_cols + p);
i_t nz = new_A_col.col_start[lp.num_cols];
```
Resize edge_norms here and fill new variables with 1.0
| std::vector<f_t>& y, | ||
| std::vector<f_t>& z, | ||
| std::vector<i_t>& basic_list, | ||
| std::vector<i_t>& nonbasic_list, |
Add edge_norms here and adjust size
```cpp
std::vector<i_t> is_slack(lp.num_cols, 0);
for (i_t j : new_slacks) {
  is_slack[j] = 1;
  // Check that slack column length is 1
```
Add guard here so only active in debug code
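One common way to add that gate (a generic sketch using the names from the snippet above; the project may prefer its own debug macro):

```cpp
#ifndef NDEBUG
  // Debug-only check: every new slack column must contain exactly one nonzero
  // (its identity entry), so its column length must be 1.
  for (i_t j : new_slacks) {
    assert(new_A_col.col_start[j + 1] - new_A_col.col_start[j] == 1);
  }
#endif
```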
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/cuts.cpp`:
- Around line 986-1004: The loop currently only clears aggregated_rows and
aggregated_mark when add_cut is true, which leaves rows permanently marked after
a failed aggregation; always reset the aggregation bookkeeping at the end of
each outer iteration. After the existing block that sets score[i] = 0.0,
unconditionally iterate aggregated_rows to set score[row] = 0.0 and
aggregated_mark[row] = 0 (or, simpler, perform those clears before/after the
add_cut branch), then call aggregated_rows.clear(); ensure this uses the same
variables aggregated_rows, aggregated_mark, add_cut and respects existing
control flow so no rows remain marked when add_cut is false.
- Around line 935-980: The code currently skips aggregation when
max_off_bound_var == 0 because of the check "if (max_off_bound_var > 0)"; change
this to allow column 0 by using a sentinel-aware check (e.g., "if
(max_off_bound_var >= 0)" or "if (max_off_bound_var != -1)") so valid pivot
index 0 is processed; update the condition around the block that references
lp.A.col_start[max_off_bound_var], col_end, and col_len (the pivot
selection/aggregation logic using max_off_bound_var, potential_rows, and
mir.combine_rows) accordingly.
- Around line 2484-2488: The function currently asserts when cut_rhs.size() !=
static_cast<size_t>(p) which can crash in release builds; replace the assert
with returning a non-zero error code instead: after logging via
settings.log.printf change the assert(cut_rhs.size() == static_cast<size_t>(p))
to return a non-zero failure (e.g., return 1 or the function's existing error
convention) so callers can handle the invalid input; update the function
signature/return path if necessary to propagate this error from the caller that
invokes cuts, p, and cut_rhs.
♻️ Duplicate comments (6)
cpp/src/dual_simplex/cuts.cpp (6)
48-66: Guard against near‑zero norms in cut scoring.
`cut_distance` and `cut_orthogonality` divide by norms without an epsilon guard, so near-zero values can cause NaNs and destabilize sorting/scoring. Add a small threshold before division. Based on learnings, please guard near-zero norms for numerical stability.

🔧 Suggested fix

```diff
 f_t cut_pool_t<i_t, f_t>::cut_distance(i_t row,
                                        const std::vector<f_t>& x,
                                        f_t& cut_violation,
                                        f_t& cut_norm)
 {
+  constexpr f_t kMinNorm = 1e-12;
   ...
   cut_violation = rhs_storage_[row] - cut_x;
   cut_norm      = std::sqrt(dot);
+  if (!(cut_norm > kMinNorm)) {
+    cut_violation = 0.0;
+    cut_norm      = 0.0;
+    return 0.0;
+  }
   const f_t distance = cut_violation / cut_norm;
   return distance;
 }

 f_t cut_pool_t<i_t, f_t>::cut_orthogonality(i_t i, i_t j)
 {
+  constexpr f_t kMinNorm = 1e-12;
   ...
   f_t norm_i = cut_norms_[i];
   f_t norm_j = cut_norms_[j];
-  return 1.0 - std::abs(dot) / (norm_i * norm_j);
+  const f_t denom = norm_i * norm_j;
+  if (!(denom > kMinNorm)) { return 0.0; }
+  return 1.0 - std::abs(dot) / denom;
 }
```

Also applies to: 80-99
703-744: Guard slack_map lookups before indexing xstar.
`slack_map[i]` can be -1; using it to index `xstar` causes out-of-bounds access. Add a guard before any `xstar[slack]` read (both in scoring and later when selecting a row). As per coding guidelines, validate algorithm correctness and bounds safety.

🔧 Suggested fix

```diff
-  const i_t slack = slack_map[i];
-  const f_t slack_value = xstar[slack];
+  const i_t slack = slack_map[i];
+  if (slack < 0) {
+    score[i] = 0.0;
+    continue;
+  }
+  const f_t slack_value = xstar[slack];
```

Apply the same guard where `slack_value = xstar[slack]` is recomputed later for the chosen row.
443-447: Protect ratio computation from zero weights.
`ratios[i] = values[i] / weights[i]` will divide by zero if a weight is zero. Add a guard to avoid NaNs. As per coding guidelines, ensure numerical stability.

🔧 Suggested fix

```diff
 for (i_t i = 0; i < n; i++) {
-  ratios[i] = values[i] / weights[i];
+  ratios[i] = (weights[i] > 1e-12) ? (values[i] / weights[i]) : 0.0;
 }
```
2369-2372: Fix format specifier for size_t.
`coefficients.size()` returns `size_t`, but `%d` expects `int`. Use `%zu` (or cast) to avoid UB on 64-bit builds.

🔧 Suggested fix

```diff
- printf("Original inequality rhs %e nz %d\n", rhs, coefficients.size());
+ printf("Original inequality rhs %e nz %zu\n", rhs, coefficients.size());
```
2496-2500: Avoid assert-only handling for append_rows failures.
In release builds, the assert is stripped and the function continues with a corrupted matrix. Return the error instead. As per coding guidelines, propagate errors to callers.

🔧 Suggested fix

```diff
 i_t append_status = new_A_row.append_rows(cuts);
 if (append_status != 0) {
   settings.log.printf("append_rows error: %d\n", append_status);
-  assert(append_status == 0);
+  return append_status;
 }
```
2542-2640: Resize `edge_norms` after adding slack columns.

`lp.num_cols` increases by `p`, but `edge_norms` isn't resized. Any downstream access to the new columns will be out of bounds. Initialize new entries (e.g., 1.0). As per coding guidelines, validate algorithm state after structural changes.

🔧 Suggested fix

```diff
 i_t old_cols = lp.num_cols;
 lp.num_cols += p;
+ edge_norms.resize(lp.num_cols, 1.0);
```
```cpp
// The variable that is farthest from its bound is used as a pivot
if (max_off_bound_var > 0) {
  const i_t col_start = lp.A.col_start[max_off_bound_var];
  const i_t col_end   = lp.A.col_start[max_off_bound_var + 1];
  const i_t col_len   = col_end - col_start;
  const i_t max_potential_rows = 10;
  if (col_len > 1) {
    std::vector<i_t> potential_rows;
    potential_rows.reserve(col_len);

    const f_t threshold = 1e-4;
    for (i_t q = col_start; q < col_end; q++) {
      const i_t i   = lp.A.i[q];
      const f_t val = lp.A.x[q];
      // Can't use rows that have already been aggregated
      if (std::abs(val) > threshold && aggregated_mark[i] == 0) { potential_rows.push_back(i); }
      if (potential_rows.size() >= max_potential_rows) { break; }
    }

    if (!potential_rows.empty()) {
      std::sort(potential_rows.begin(), potential_rows.end(), [&](i_t a, i_t b) {
        return score[a] > score[b];
      });

      const i_t pivot_row = potential_rows[0];

      sparse_vector_t<i_t, f_t> pivot_row_inequality(Arow, pivot_row);
      f_t pivot_row_rhs = lp.rhs[pivot_row];
      // printf("\tCombining with %d\n", pivot_row);
      mir.combine_rows(lp,
                       Arow,
                       max_off_bound_var,
                       pivot_row_inequality,
                       pivot_row_rhs,
                       inequality,
                       inequality_rhs);
      aggregated_rows.push_back(pivot_row);
      aggregated_mark[pivot_row] = 1;
    } else {
      // printf("\tno potential rows to aggregate\n");
      break;
    }
  } else {
    settings.log.printf("Bad col len\n");
    assert(col_len > 1);
  }
```
Allow column 0 as a valid aggregation pivot.
`if (max_off_bound_var > 0)` skips aggregation when the best pivot is column 0. That's a valid variable index and should be allowed.

🔧 Suggested fix

```diff
- if (max_off_bound_var > 0) {
+ if (max_off_bound_var >= 0) {
```

🤖 Prompt for AI Agents
In `@cpp/src/dual_simplex/cuts.cpp` around lines 935 - 980, The code currently
skips aggregation when max_off_bound_var == 0 because of the check "if
(max_off_bound_var > 0)"; change this to allow column 0 by using a
sentinel-aware check (e.g., "if (max_off_bound_var >= 0)" or "if
(max_off_bound_var != -1)") so valid pivot index 0 is processed; update the
condition around the block that references lp.A.col_start[max_off_bound_var],
col_end, and col_len (the pivot selection/aggregation logic using
max_off_bound_var, potential_rows, and mir.combine_rows) accordingly.
```cpp
if (add_cut) {
  // We were successful in generating a cut.

  // Set the score of the aggregated rows to zero
  for (i_t row : aggregated_rows) {
    score[row] = 0.0;
  }

  // Clear the aggregated mark
  for (i_t row : aggregated_rows) {
    aggregated_mark[row] = 0;
  }
  // Clear the aggregated rows
  aggregated_rows.clear();
}

// Set the score of the current row to zero
score[i] = 0.0;
```
Reset aggregation bookkeeping even when no cut is found.
aggregated_rows / aggregated_mark are only cleared when add_cut is true, violating the stated invariant and permanently excluding rows after a failed aggregation attempt. Clear them every outer iteration. As per coding guidelines, validate algorithm correctness.
🔧 Suggested fix

```diff
-  if (add_cut) {
-    // We were successful in generating a cut.
-    ...
-    for (i_t row : aggregated_rows) {
-      aggregated_mark[row] = 0;
-    }
-    aggregated_rows.clear();
-  }
+  if (add_cut) {
+    // We were successful in generating a cut.
+    ...
+  }
+  // Always reset aggregation bookkeeping for the next iteration.
+  for (i_t row : aggregated_rows) {
+    aggregated_mark[row] = 0;
+  }
+  aggregated_rows.clear();
```
In `@cpp/src/dual_simplex/cuts.cpp` around lines 986 - 1004, The loop currently
only clears aggregated_rows and aggregated_mark when add_cut is true, which
leaves rows permanently marked after a failed aggregation; always reset the
aggregation bookkeeping at the end of each outer iteration. After the existing
block that sets score[i] = 0.0, unconditionally iterate aggregated_rows to set
score[row] = 0.0 and aggregated_mark[row] = 0 (or, simpler, perform those clears
before/after the add_cut branch), then call aggregated_rows.clear(); ensure this
uses the same variables aggregated_rows, aggregated_mark, add_cut and respects
existing control flow so no rows remain marked when add_cut is false.
```cpp
const i_t p = cuts.m;
if (cut_rhs.size() != static_cast<size_t>(p)) {
  settings.log.printf("cut_rhs must have the same number of rows as cuts\n");
  assert(cut_rhs.size() == static_cast<size_t>(p));
}
```
Return an error when cut_rhs size mismatches.
Asserting on user input in release builds can allow invalid state to proceed. Return a non‑zero error instead. As per coding guidelines, propagate errors to callers.
🔧 Suggested fix

```diff
 if (cut_rhs.size() != static_cast<size_t>(p)) {
   settings.log.printf("cut_rhs must have the same number of rows as cuts\n");
-  assert(cut_rhs.size() == static_cast<size_t>(p));
+  return -1;
 }
```
}🤖 Prompt for AI Agents
In `@cpp/src/dual_simplex/cuts.cpp` around lines 2484 - 2488, The function
currently asserts when cut_rhs.size() != static_cast<size_t>(p) which can crash
in release builds; replace the assert with returning a non-zero error code
instead: after logging via settings.log.printf change the assert(cut_rhs.size()
== static_cast<size_t>(p)) to return a non-zero failure (e.g., return 1 or the
function's existing error convention) so callers can handle the invalid input;
update the function signature/return path if necessary to propagate this error
from the caller that invokes cuts, p, and cut_rhs.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/cuts.cpp`:
- Around line 2505-2524: The code adds p new slack columns but forgets to resize
edge_norms, causing out-of-bounds access when later referencing edge_norms[j];
update the vector edge_norms (and any parallel per-column vectors if applicable)
to size lp.num_cols + p before the loop that sets new column data. Specifically,
locate the block that resizes lp.lower/upper/objective and new_A_col (the code
manipulating lp.num_cols, p, new_A_col, and new_slacks) and add a resize call
for edge_norms (edge_norms.resize(lp.num_cols + p)) so the new slack column
indices are valid.
- Around line 2668-2674: Replace the runtime-only assert in the column-length
check with a proper error path and a debug-only guard: instead of assert(col_len
== 1), if col_len != 1 return an error code (or throw) after logging the message
(using the existing printf/process logger) and wrap a debug-only assertion
(e.g., `#ifndef NDEBUG` / `assert(col_len == 1)` / `#endif`) so release builds don't
silently skip handling; additionally, when you remove a cut for slack index j
ensure you update/clear the associated edge_norms entry (reference the
edge_norms container and the col_start/col_end logic using lp.A.col_start[j] and
col_len) so internal state remains consistent.
♻️ Duplicate comments (8)
cpp/src/dual_simplex/cuts.cpp (8)
62-66: Add epsilon guard for near-zero cut_norm in distance calculation.

Line 65 divides by `cut_norm` without checking for near-zero values. While zero-norm cuts are prevented from entering the pool, near-zero norms from numerical issues can still cause instability.

Based on learnings: Guard numerical stability by guarding near-zero norms in cut-related computations.
96-98: Add epsilon guard for near-zero norms in orthogonality calculation.

Line 98 divides by `(norm_i * norm_j)` without checking for near-zero values. This was acknowledged in a previous review as needing a fix.

Based on learnings: Guard numerical stability by guarding near-zero norms in cut-related computations.
443-447: Guard against division by zero in value/weight ratio computation.

Line 446 divides by `weights[i]` without checking for zero. While typical knapsack constraints have positive weights, degenerate cases could cause issues.
708-718: Out-of-bounds access when `slack_map[i]` is -1.

While `slack_map` is initialized to -1 (Line 673), Lines 710-711 access `xstar[slack]` without first checking if `slack == -1`. For rows without a mapped slack, this causes an out-of-bounds read.
935-936: Allow column 0 as a valid aggregation pivot.

Line 936 uses `if (max_off_bound_var > 0)`, which incorrectly skips aggregation when the best pivot is column 0. Column 0 is a valid variable index and should be allowed. Change the check to `>= 0`.
986-1000: Reset aggregation bookkeeping even when no cut is found.

`aggregated_rows` and `aggregated_mark` are only cleared when `add_cut` is true (Lines 994-999). This violates the stated invariant (Lines 729-731) and permanently excludes rows after a failed aggregation attempt. The bookkeeping should be cleared at the end of every outer iteration regardless of success.
2370: Format specifier mismatch: `%d` used for `size_t`.

`coefficients.size()` returns `size_t` but `%d` expects `int`. Use `%zu` or cast to `int`.
2484-2500: Replace `assert` with explicit error return for input validation.

Lines 2487, 2499, and 2603 use `assert` for validation, which gets stripped in release builds with `NDEBUG`. These should return error codes to allow graceful error handling.
🧹 Nitpick comments (1)
cpp/src/dual_simplex/cuts.cpp (1)
1846-1853: Consider graceful error handling for slack invariant violations.

Lines 1846-1853 use `assert` for slack column validation. While these are invariant checks that shouldn't fail in correct usage, in a library context returning an error code would be more robust than crashing in debug builds and silently continuing in release builds.
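A hedged sketch of the error-return alternative; the -1 code and the logger call are illustrative, not the project's established convention:

```cpp
if (col_len != 1) {
  settings.log.printf("Slack %d has %d nonzeros in its column\n", j, col_len);
  return -1;  // let the caller decide how to recover
}
```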
```cpp
// Add in slacks variables for the new rows
lp.lower.resize(lp.num_cols + p);
lp.upper.resize(lp.num_cols + p);
lp.objective.resize(lp.num_cols + p);
i_t nz = new_A_col.col_start[lp.num_cols];
new_A_col.col_start.resize(lp.num_cols + p + 1);
new_A_col.i.resize(nz + p);
new_A_col.x.resize(nz + p);
i_t k = lp.num_rows;
for (i_t j = lp.num_cols; j < lp.num_cols + p; j++) {
  new_A_col.col_start[j] = nz;
  new_A_col.i[nz]        = k++;
  new_A_col.x[nz]        = 1.0;
  nz++;
  lp.lower[j]     = 0.0;
  lp.upper[j]     = inf;
  lp.objective[j] = 0.0;
  new_slacks.push_back(j);
}
settings.log.debug("Done adding slacks\n");
```
Resize edge_norms for new slack columns.
The function increases lp.num_cols by p (new slack columns) but does not resize edge_norms. The developer noted this should be fixed (Line 2509 comment). Downstream code accessing edge_norms[j] for new columns will cause out-of-bounds access.
🔧 Suggested fix

```diff
 // Add in slacks variables for the new rows
 lp.lower.resize(lp.num_cols + p);
 lp.upper.resize(lp.num_cols + p);
 lp.objective.resize(lp.num_cols + p);
+ edge_norms.resize(lp.num_cols + p, 1.0);
 i_t nz = new_A_col.col_start[lp.num_cols];
```

Per developer comment at Line 2509.
🤖 Prompt for AI Agents
In `@cpp/src/dual_simplex/cuts.cpp` around lines 2505 - 2524, The code adds p new
slack columns but forgets to resize edge_norms, causing out-of-bounds access
when later referencing edge_norms[j]; update the vector edge_norms (and any
parallel per-column vectors if applicable) to size lp.num_cols + p before the
loop that sets new column data. Specifically, locate the block that resizes
lp.lower/upper/objective and new_A_col (the code manipulating lp.num_cols, p,
new_A_col, and new_slacks) and add a resize call for edge_norms
(edge_norms.resize(lp.num_cols + p)) so the new slack column indices are valid.
```cpp
const i_t col_start = lp.A.col_start[j];
const i_t col_end   = lp.A.col_start[j + 1];
const i_t col_len   = col_end - col_start;
if (col_len != 1) {
  printf("Remove cuts: Slack %d has %d nzs in column\n", j, col_len);
  assert(col_len == 1);
}
```
Replace assert with error return and add debug guard.
Line 2673 uses assert(col_len == 1) which is stripped in release builds. Per the developer's comment at Line 2667, this should be guarded for debug-only or converted to an error return.
Additionally, per the developer's comment at Line 2655, edge_norms should be handled when removing cuts to maintain consistency.
Per developer comments at Lines 2655 and 2667.
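A sketch combining both requests; the -1 return value is illustrative, and the edge-norm bookkeeping is noted as a comment since the exact container handling depends on the surrounding code:

```cpp
if (col_len != 1) {
  printf("Remove cuts: Slack %d has %d nzs in column\n", j, col_len);
  assert(col_len == 1);  // still fires in debug builds
  return -1;             // release builds report an error instead of continuing
}
// When a cut's slack column j is removed, drop the matching edge_norms
// entry as well so the per-column state stays consistent.
```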
🤖 Prompt for AI Agents
In `@cpp/src/dual_simplex/cuts.cpp` around lines 2668 - 2674, Replace the
runtime-only assert in the column-length check with a proper error path and a
debug-only guard: instead of assert(col_len == 1), if col_len != 1 return an
error code (or throw) after logging the message (using the existing
printf/process logger) and wrap a debug-only assertion (e.g., `#ifndef NDEBUG` /
`assert(col_len == 1)` / `#endif`) so release builds don't silently skip handling;
additionally, when you remove a cut for slack index j ensure you update/clear
the associated edge_norms entry (reference the edge_norms container and the
col_start/col_end logic using lp.A.col_start[j] and col_len) so internal state
remains consistent.
This PR adds cuts to the MIP solver. This includes the following:
Summary by CodeRabbit
New Features
Improvements
Tests