LP Python Examples#
The following example showcases how to use the CuOptServiceSelfHostClient to solve a simple LP problem in normal mode and in batch mode (where multiple problems are solved in a single request).
The OpenAPI specification for the server is available in the open-api spec. The example data is structured according to that specification; refer to LPData under “POST /cuopt/request” in the schema section. LP and MILP share the same spec.
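The constraint matrix in the examples below is supplied in CSR (compressed sparse row) form. As an illustration only (the helper below is not part of cuopt_sh_client), the offsets/indices/values arrays for a small dense matrix can be derived like this:

```python
# Illustrative helper (not part of cuopt_sh_client): convert a dense
# constraint matrix to the CSR fields expected by "csr_constraint_matrix".
def dense_to_csr(matrix):
    offsets, indices, values = [0], [], []
    for row in matrix:
        for col, v in enumerate(row):
            if v != 0.0:
                indices.append(col)
                values.append(v)
        # Each offset marks where the next row's entries begin
        offsets.append(len(indices))
    return {"offsets": offsets, "indices": indices, "values": values}


# The 2x2 constraint matrix used throughout this page
csr = dense_to_csr([[3.0, 4.0], [2.7, 10.1]])
print(csr)
# {'offsets': [0, 2, 4], 'indices': [0, 1, 0, 1], 'values': [3.0, 4.0, 2.7, 10.1]}
```

The resulting dictionary matches the `csr_constraint_matrix` field used in the example data below.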
If you want to run the server locally, run the following command in a terminal or tmux session so that you can test the examples in another terminal.
export ip="localhost"
export port=5000
python -m cuopt_server.cuopt_service --ip $ip --port $port
Generic Example with Normal Mode and Batch Mode#
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Basic LP Server Example (Normal and Batch Mode)

This example demonstrates how to use the cuOpt Self-Hosted Service Client
to solve LP problems via the cuOpt server in both normal and batch modes.

Requirements:
    - cuOpt server running (default: localhost:5000)
    - cuopt_sh_client package installed

Problem:
    Minimize: -0.2*x1 + 0.1*x2
    Subject to:
        3.0*x1 + 4.0*x2 <= 5.4
        2.7*x1 + 10.1*x2 <= 4.9
        x1, x2 >= 0

The data is structured according to the OpenAPI specification (LPData):
    - csr_constraint_matrix: Constraint matrix in CSR format
    - constraint_bounds: Upper and lower bounds for constraints
    - objective_data: Objective coefficients and settings
    - variable_bounds: Variable bounds
    - solver_config: Solver settings

Expected Response:
    Normal mode: Single solution
    Batch mode: Array of solutions (one per problem)
"""

from cuopt_sh_client import CuOptServiceSelfHostClient
import json
import time


def repoll(cuopt_service_client, solution, repoll_tries):
    """
    Repoll the server for the solution if it is still processing.

    If the solver is still busy solving, the job is assigned a request id
    and the response is sent back in the format {"reqId": <REQUEST-ID>}.
    """
    if "reqId" in solution and "response" not in solution:
        req_id = solution["reqId"]
        for i in range(repoll_tries):
            solution = cuopt_service_client.repoll(
                req_id, response_type="dict"
            )
            if "reqId" in solution and "response" in solution:
                break

            # Sleep for a second before polling again
            time.sleep(1)

    return solution


def main():
    """Run the basic LP example in normal and batch modes."""
    # Example data for the LP problem.
    # The data is structured as per the OpenAPI specification for the server.
    data = {
        "csr_constraint_matrix": {
            "offsets": [0, 2, 4],
            "indices": [0, 1, 0, 1],
            "values": [3.0, 4.0, 2.7, 10.1],
        },
        "constraint_bounds": {
            "upper_bounds": [5.4, 4.9],
            "lower_bounds": ["ninf", "ninf"],
        },
        "objective_data": {
            "coefficients": [-0.2, 0.1],
            "scalability_factor": 1.0,
            "offset": 0.0,
        },
        "variable_bounds": {
            "upper_bounds": ["inf", "inf"],
            "lower_bounds": [0.0, 0.0],
        },
        "maximize": False,
        "solver_config": {"tolerances": {"optimality": 0.0001}},
    }

    # If cuOpt is not running on localhost:5000, edit the ip and port parameters
    cuopt_service_client = CuOptServiceSelfHostClient(
        ip="localhost", port=5000, polling_timeout=25, timeout_exception=False
    )

    # Number of repoll requests to attempt before giving up
    repoll_tries = 500

    # Logging callback
    def log_callback(log):
        for i in log:
            print("server-log: ", i)

    print("=== Solving in Normal Mode ===")
    solution = cuopt_service_client.get_LP_solve(
        data, response_type="dict", logging_callback=log_callback
    )

    solution = repoll(cuopt_service_client, solution, repoll_tries)

    print("---------- Normal mode ---------------")
    print(json.dumps(solution, indent=4))

    print("\n=== Solving in Batch Mode ===")
    # For batch mode, send a list of mps/dict/DataModel inputs
    solution = cuopt_service_client.get_LP_solve(
        [data, data], response_type="dict", logging_callback=log_callback
    )
    solution = repoll(cuopt_service_client, solution, repoll_tries)

    print("---------- Batch mode -----------------")
    print(json.dumps(solution, indent=4))


if __name__ == "__main__":
    main()
The response would be as follows:
Normal mode response:
{
    "response": {
        "solver_response": {
            "status": "Optimal",
            "solution": {
                "problem_category": "LP",
                "primal_solution": [
                    1.8,
                    0.0
                ],
                "dual_solution": [
                    -0.06666666666666668,
                    0.0
                ],
                "primal_objective": -0.36000000000000004,
                "dual_objective": 6.92188481708744e-310,
                "solver_time": 0.006462812423706055,
                "vars": {},
                "lp_statistics": {
                    "primal_residual": 6.92114652678267e-310,
                    "dual_residual": 6.9218848170975e-310,
                    "gap": 6.92114652686054e-310,
                    "nb_iterations": 1
                },
                "reduced_cost": [
                    0.0,
                    0.0031070813207920247
                ],
                "milp_statistics": {}
            }
        },
        "total_solve_time": 0.013341188430786133
    },
    "reqId": "c7f2e5a1-d210-4e2e-9308-4257d0a86c4a"
}
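With response_type="dict", the fields of a normal-mode response can be read with plain dictionary access. A minimal sketch, using a trimmed-down copy of the sample response above (values copied from that output):

```python
# Trimmed-down copy of the normal-mode response shown above
response = {
    "response": {
        "solver_response": {
            "status": "Optimal",
            "solution": {
                "primal_solution": [1.8, 0.0],
                "primal_objective": -0.36000000000000004,
            },
        },
        "total_solve_time": 0.013341188430786133,
    },
    "reqId": "c7f2e5a1-d210-4e2e-9308-4257d0a86c4a",
}

# Check the status before reading the solution fields
solver_response = response["response"]["solver_response"]
if solver_response["status"] == "Optimal":
    sol = solver_response["solution"]
    print("objective:", sol["primal_objective"])
    print("x1, x2:", sol["primal_solution"])
```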
Batch mode response:
{
    "response": {
        "solver_response": [
            {
                "status": "Optimal",
                "solution": {
                    "problem_category": "LP",
                    "primal_solution": [
                        1.8,
                        0.0
                    ],
                    "dual_solution": [
                        -0.06666666666666668,
                        0.0
                    ],
                    "primal_objective": -0.36000000000000004,
                    "dual_objective": 6.92188481708744e-310,
                    "solver_time": 0.005717039108276367,
                    "vars": {},
                    "lp_statistics": {
                        "primal_residual": 6.92114652678267e-310,
                        "dual_residual": 6.9218848170975e-310,
                        "gap": 6.92114652686054e-310,
                        "nb_iterations": 1
                    },
                    "reduced_cost": [
                        0.0,
                        0.0031070813207920247
                    ],
                    "milp_statistics": {}
                }
            },
            {
                "status": "Optimal",
                "solution": {
                    "problem_category": "LP",
                    "primal_solution": [
                        1.8,
                        0.0
                    ],
                    "dual_solution": [
                        -0.06666666666666668,
                        0.0
                    ],
                    "primal_objective": -0.36000000000000004,
                    "dual_objective": 6.92188481708744e-310,
                    "solver_time": 0.007481813430786133,
                    "vars": {},
                    "lp_statistics": {
                        "primal_residual": 6.921146112128e-310,
                        "dual_residual": 6.9218848170975e-310,
                        "gap": 6.92114611220587e-310,
                        "nb_iterations": 1
                    },
                    "reduced_cost": [
                        0.0,
                        0.0031070813207920247
                    ],
                    "milp_statistics": {}
                }
            }
        ],
        "total_solve_time": 0.013
    },
    "reqId": "69dc8f36-16c3-4e28-8fb9-3977eb92b480"
}
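In batch mode, "solver_response" is a list with one entry per submitted problem, so the results can be iterated directly. A minimal sketch using a trimmed-down copy of the batch-mode response above:

```python
# Trimmed-down copy of the batch-mode response shown above:
# "solver_response" is a list with one entry per submitted problem.
response = {
    "response": {
        "solver_response": [
            {"status": "Optimal",
             "solution": {"primal_objective": -0.36000000000000004}},
            {"status": "Optimal",
             "solution": {"primal_objective": -0.36000000000000004}},
        ],
    },
}

# Results come back in the same order as the submitted problems
for i, result in enumerate(response["response"]["solver_response"]):
    print(f"problem {i}: {result['status']}, "
          f"objective = {result['solution']['primal_objective']}")
```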
Note
Warm start is applicable only to LP, not to MILP.
Warm Start#
A previously computed solution can be saved on the server and used as a warm start for a new request by referencing its reqId, as follows:
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
"""
LP Warmstart Server Example

This example demonstrates how to use warmstart functionality with the cuOpt server.
Warmstart allows reusing solution context from a previous solve to speed up
solving of similar problems.

Note:
    Warmstart is only applicable to LP, not to MILP.

Requirements:
    - cuOpt server running (default: localhost:5000)
    - cuopt_sh_client package installed

Problem 1 & 2:
    Minimize: -0.2*x1 + 0.1*x2
    Subject to:
        3.0*x1 + 4.0*x2 <= 5.4
        2.7*x1 + 10.1*x2 <= 4.9
        x1, x2 >= 0

The second solve reuses the solution context from the first solve.
"""

from cuopt_sh_client import CuOptServiceSelfHostClient
import json


def main():
    """Run the warmstart LP example."""
    data = {
        "csr_constraint_matrix": {
            "offsets": [0, 2, 4],
            "indices": [0, 1, 0, 1],
            "values": [3.0, 4.0, 2.7, 10.1],
        },
        "constraint_bounds": {
            "upper_bounds": [5.4, 4.9],
            "lower_bounds": ["ninf", "ninf"],
        },
        "objective_data": {
            "coefficients": [-0.2, 0.1],
            "scalability_factor": 1.0,
            "offset": 0.0,
        },
        "variable_bounds": {
            "upper_bounds": ["inf", "inf"],
            "lower_bounds": [0.0, 0.0],
        },
        "maximize": False,
        "solver_config": {"tolerances": {"optimality": 0.0001}},
    }

    # If cuOpt is not running on localhost:5000, edit the ip and port parameters
    cuopt_service_client = CuOptServiceSelfHostClient(
        ip="localhost", port=5000, timeout_exception=False
    )

    print("=== Solving Problem 1 ===")
    # Set delete_solution to False so the solution can be reused in the next request
    initial_solution = cuopt_service_client.get_LP_solve(
        data, delete_solution=False, response_type="dict"
    )

    print(f"Problem 1 reqId: {initial_solution['reqId']}")
    print(
        f"Objective: {initial_solution['response']['solver_response']['solution']['primal_objective']}"
    )

    print("\n=== Solving Problem 2 with Warmstart ===")
    # Use the previous solution saved on the server as a warmstart for this
    # request. That solution is referenced by the previous request id.
    solution = cuopt_service_client.get_LP_solve(
        data, warmstart_id=initial_solution["reqId"], response_type="dict"
    )

    print(json.dumps(solution, indent=4))

    # Delete the saved solution if it is no longer needed, to save space
    print("\n=== Cleaning Up ===")
    cuopt_service_client.delete(initial_solution["reqId"])
    print("Saved solution deleted")


if __name__ == "__main__":
    main()
The response would be as follows:
{
    "response": {
        "solver_response": {
            "status": "Optimal",
            "solution": {
                "problem_category": "LP",
                "primal_solution": [
                    1.8,
                    0.0
                ],
                "dual_solution": [
                    -0.06666666666666668,
                    0.0
                ],
                "primal_objective": -0.36000000000000004,
                "dual_objective": 6.92188481708744e-310,
                "solver_time": 0.006613016128540039,
                "vars": {},
                "lp_statistics": {
                    "primal_residual": 6.921146112128e-310,
                    "dual_residual": 6.9218848170975e-310,
                    "gap": 6.92114611220587e-310,
                    "nb_iterations": 1
                },
                "reduced_cost": [
                    0.0,
                    0.0031070813207920247
                ],
                "milp_statistics": {}
            }
        },
        "total_solve_time": 0.013310909271240234
    },
    "reqId": "6d1e278f-5505-4bcc-8a33-2f7f7d6f8a30"
}
Using an MPS File Directly#
An example of using .mps files as input is shown below:
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
"""
LP MPS File Server Example

This example demonstrates how to solve LP problems from MPS files using the
cuOpt server. MPS (Mathematical Programming System) is a standard file format
for representing optimization problems.

Requirements:
    - cuOpt server running (default: localhost:5000)
    - cuopt_sh_client package installed

Problem (in MPS format):
    Minimize: -0.2*VAR1 + 0.1*VAR2
    Subject to:
        3*VAR1 + 4*VAR2 <= 5.4
        2.7*VAR1 + 10.1*VAR2 <= 4.9
        VAR1, VAR2 >= 0

Expected Response:
    {
        "response": {
            "solver_response": {
                "status": "Optimal",
                "solution": {
                    "primal_objective": -0.36,
                    "vars": {
                        "VAR1": 1.8,
                        "VAR2": 0.0
                    }
                }
            }
        }
    }
"""

from cuopt_sh_client import (
    CuOptServiceSelfHostClient,
    ThinClientSolverSettings,
)
import json
import os


def main():
    """Run the MPS file LP example."""
    data = "sample.mps"

    # MPS file content
    mps_data = """NAME good-1
ROWS
 N COST
 L ROW1
 L ROW2
COLUMNS
 VAR1 COST -0.2
 VAR1 ROW1 3 ROW2 2.7
 VAR2 COST 0.1
 VAR2 ROW1 4 ROW2 10.1
RHS
 RHS1 ROW1 5.4 ROW2 4.9
ENDATA
"""

    # Write the MPS file
    with open(data, "w") as file:
        file.write(mps_data)

    print(f"Created MPS file: {data}")

    # If cuOpt is not running on localhost:5000, edit the `ip` and `port` parameters
    cuopt_service_client = CuOptServiceSelfHostClient(
        ip="localhost", port=5000, timeout_exception=False
    )

    # Configure solver settings
    ss = ThinClientSolverSettings()
    ss.set_parameter("time_limit", 5)
    ss.set_optimality_tolerance(0.00001)

    print("\n=== Solving LP from MPS File ===")
    solution = cuopt_service_client.get_LP_solve(
        data, solver_config=ss, response_type="dict"
    )

    print(json.dumps(solution, indent=4))

    # Delete the MPS file after solving
    if os.path.exists(data):
        os.remove(data)
        print(f"Deleted MPS file: {data}")


if __name__ == "__main__":
    main()
The response is:
{
    "response": {
        "solver_response": {
            "status": "Optimal",
            "solution": {
                "problem_category": "LP",
                "primal_solution": [
                    1.8,
                    0.0
                ],
                "dual_solution": [
                    -0.06666666666666668,
                    0.0
                ],
                "primal_objective": -0.36000000000000004,
                "dual_objective": 6.92188481708744e-310,
                "solver_time": 0.008397102355957031,
                "vars": {
                    "VAR1": 1.8,
                    "VAR2": 0.0
                },
                "lp_statistics": {
                    "primal_residual": 6.921146112128e-310,
                    "dual_residual": 6.9218848170975e-310,
                    "gap": 6.92114611220587e-310,
                    "nb_iterations": 1
                },
                "reduced_cost": [
                    0.0,
                    0.0031070813207920247
                ],
                "milp_statistics": {}
            }
        },
        "total_solve_time": 0.014980316162109375
    },
    "reqId": "3f36bad7-6135-4ffd-915b-858c449c7cbb"
}
Generate a DataModel from the MPS Parser#
Use a DataModel generated from an MPS file as input; this yields a solution object in the response. For more details, please refer to LP/MILP parameters.
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
"""
LP DataModel from MPS Parser Example

This example demonstrates how to:
- Parse an MPS file using cuopt_mps_parser
- Create a DataModel from the parsed MPS
- Solve using the DataModel via the server
- Extract detailed solution information

Requirements:
    - cuOpt server running (default: localhost:5000)
    - cuopt_sh_client package installed
    - cuopt_mps_parser package installed

Problem (in MPS format):
    Minimize: -0.2*VAR1 + 0.1*VAR2
    Subject to:
        3*VAR1 + 4*VAR2 <= 5.4
        2.7*VAR1 + 10.1*VAR2 <= 4.9
        VAR1, VAR2 >= 0

Expected Output:
    Termination Reason: 1 (Optimal)
    Objective Value: -0.36
    Variables Values: {'VAR1': 1.8, 'VAR2': 0.0}
"""

from cuopt_sh_client import (
    CuOptServiceSelfHostClient,
    ThinClientSolverSettings,
    PDLPSolverMode,
)
import cuopt_mps_parser
import time


def main():
    """Run the MPS DataModel example."""
    # -- Parse the MPS file --

    data = "sample.mps"

    mps_data = """NAME good-1
ROWS
 N COST
 L ROW1
 L ROW2
COLUMNS
 VAR1 COST -0.2
 VAR1 ROW1 3 ROW2 2.7
 VAR2 COST 0.1
 VAR2 ROW1 4 ROW2 10.1
RHS
 RHS1 ROW1 5.4 ROW2 4.9
ENDATA
"""

    with open(data, "w") as file:
        file.write(mps_data)

    print(f"Created MPS file: {data}")

    # Parse the MPS file and measure the time spent
    print("\n=== Parsing MPS File ===")
    parse_start = time.time()
    data_model = cuopt_mps_parser.ParseMps(data)
    parse_time = time.time() - parse_start
    print(f"Parse time: {parse_time:.3f} seconds")

    # -- Build the client object --

    # If cuOpt is not running on localhost:5000, edit the `ip` and `port` parameters
    cuopt_service_client = CuOptServiceSelfHostClient(
        ip="localhost", port=5000, timeout_exception=False
    )

    # -- Set the solver settings --

    ss = ThinClientSolverSettings()

    # Set the solver mode to Fast1 (Stable1 could also be used)
    ss.set_parameter("pdlp_solver_mode", PDLPSolverMode.Fast1)

    # Set the general tolerance to 1e-4 (the default value)
    ss.set_optimality_tolerance(1e-4)

    # Optional: Set iteration and time limits
    # ss.set_iteration_limit(1000)
    # ss.set_time_limit(10)
    ss.set_parameter("time_limit", 5)

    # -- Call solve --

    print("\n=== Solving with Server ===")
    network_time = time.time()
    solution = cuopt_service_client.get_LP_solve(data_model, ss)
    network_time = time.time() - network_time

    # -- Retrieve the solution object and print the details --

    solution_status = solution["response"]["solver_response"]["status"]
    solution_obj = solution["response"]["solver_response"]["solution"]

    print("\n=== Results ===")
    # Check the termination reason
    print(f"Termination Reason: {solution_status}")

    # Check the objective value found
    print(f"Objective Value: {solution_obj.get_primal_objective()}")

    # Check the MPS parse time
    print(f"MPS Parse time: {parse_time:.3f} sec")

    # Check the network time (client call time minus solve time)
    network_time = network_time - solution_obj.get_solve_time()
    print(f"Network time: {network_time:.3f} sec")

    # Check the solver time
    solve_time = solution_obj.get_solve_time()
    print(f"Engine Solve time: {solve_time:.3f} sec")

    # Check the total end-to-end time (MPS parsing + network + solve time)
    end_to_end_time = parse_time + network_time + solve_time
    print(f"Total end to end time: {end_to_end_time:.3f} sec")

    # Print the decision variable values found
    print(f"Variables Values: {solution_obj.get_vars()}")


if __name__ == "__main__":
    main()
The response would be as follows:
Termination Reason: (1 is Optimal)
1
Objective Value:
-0.36000000000000004
Mps Parse time: 0.000 sec
Network time: 1.062 sec
Engine Solve time: 0.004 sec
Total end to end time: 1.066 sec
Variables Values:
{'VAR1': 1.8, 'VAR2': 0.0}
An example with DataModel is available in the Examples Notebooks Repository.
The data argument to get_LP_solve may be a dictionary in the format shown in the LP OpenAPI spec. More details on the response can be found under the response schemas for “GET /cuopt/request” and “GET /cuopt/solution” in the API spec.
Aborting a Running Job in Thin Client#
Please refer to the Aborting a Running Job in Thin Client in the MILP Example for more details.
LP CLI Examples#
Generic Example#
The following examples showcase how to use the cuopt_sh CLI to solve a simple LP problem.
echo '{
"csr_constraint_matrix": {
"offsets": [0, 2, 4],
"indices": [0, 1, 0, 1],
"values": [3.0, 4.0, 2.7, 10.1]
},
"constraint_bounds": {
"upper_bounds": [5.4, 4.9],
"lower_bounds": ["ninf", "ninf"]
},
"objective_data": {
    "coefficients": [-0.2, 0.1],
"scalability_factor": 1.0,
"offset": 0.0
},
"variable_bounds": {
"upper_bounds": ["inf", "inf"],
"lower_bounds": [0.0, 0.0]
},
  "maximize": false,
"solver_config": {
"tolerances": {
"optimality": 0.0001
}
}
}' > data.json
Invoke the CLI.
# Please update these values if the server is running on a different IP address or port
export ip="localhost"
export port=5000
cuopt_sh data.json -t LP -i $ip -p $port -sl
Response is as follows:
{
    "response": {
        "solver_response": {
            "status": "Optimal",
            "solution": {
                "problem_category": "LP",
                "primal_solution": [1.8, 0.0],
                "dual_solution": [-0.06666666666666668, 0.0],
                "primal_objective": -0.36000000000000004,
                "dual_objective": 6.92188481708744e-310,
                "solver_time": 0.007324934005737305,
                "vars": {},
                "lp_statistics": {
                    "primal_residual": 6.921146112128e-310,
                    "dual_residual": 6.9218848170975e-310,
                    "gap": 6.92114611220587e-310,
                    "nb_iterations": 1
                },
                "reduced_cost": [0.0, 0.0031070813207920247],
                "milp_statistics": {}
            }
        },
        "total_solve_time": 0.014164209365844727
    },
    "reqId": "4665e513-341e-483b-85eb-bced04ba598c"
}
Warm Start in CLI#
To use a previous solution as the initial (warm start) solution for a new request, you must first save that solution, which can be done with the -k option. Then pass the previous reqId in the next request as follows:
Note
Warm start is applicable only to LP, not to MILP.
# Please update these values if the server is running on a different IP address or port
export ip="localhost"
export port=5000
reqId=$(cuopt_sh -t LP data.json -i $ip -p $port -k | sed "s/'/\"/g" | sed 's/False/false/g' | jq -r '.reqId')
cuopt_sh data.json -t LP -i $ip -p $port -wid $reqId
To update solver settings through the CLI, use the -ss option as follows:
# Please update these values if the server is running on a different IP address or port
export ip="localhost"
export port=5000
cuopt_sh data.json -t LP -i $ip -p $port -ss '{"tolerances": {"optimality": 0.0001}, "time_limit": 5}'
In batch mode, you can send several MPS files at once and retrieve all the results together. In the CLI, batch mode works only with MPS files:
Note
Batch mode is not available for MILP problems.
A sample MPS file (sample.mps):
NAME good-1
ROWS
 N COST
 L ROW1
 L ROW2
COLUMNS
 VAR1 COST -0.2
 VAR1 ROW1 3 ROW2 2.7
 VAR2 COST 0.1
 VAR2 ROW1 4 ROW2 10.1
RHS
 RHS1 ROW1 5.4 ROW2 4.9
ENDATA
Run the example:
# Please update these values if the server is running on a different IP address or port
export ip="localhost"
export port=5000
cuopt_sh sample.mps sample.mps sample.mps -t LP -i $ip -p $port -ss '{"tolerances": {"optimality": 0.0001}, "time_limit": 5}'
Aborting a Running Job In CLI#
Please refer to the Aborting a Running Job In CLI in the MILP Example for more details.
Note
Please provide solver settings when using .mps files.