cuOpt Thin Client API Example#

Routing Example#

import os
from cuopt_thin_client import CuOptServiceClient
import json
import time

data = {"cost_matrix_data": {"data": {"0": [[0,1],[1,0]]}},
        "task_data": {"task_locations": [0,1]},
        "fleet_data": {"vehicle_locations": [[0,0],[0,0]]}}

# Load the credential "NVIDIA Identity Federation API Key" from the environment or some other way

sak = os.environ["NVIDIA_IDENTITY_FEDERATION_API_KEY"]

cuopt_service_client = CuOptServiceClient(
    sak=sak,
    function_id="<FUNCTION_ID_OBTAINED_FROM_NGC>",
    timeout_exception=False
)

def repoll(solution, repoll_tries):
    # If the solver is still busy, the job is assigned a request id and the response
    # comes back in the format {"reqId": <REQUEST-ID>}.
    # The solver must be re-polled for the result using this <REQUEST-ID>.

    if "reqId" in solution and "response" not in solution:
        req_id = solution["reqId"]
        for i in range(repoll_tries):
            solution = cuopt_service_client.repoll(req_id, response_type="dict")
            if "reqId" in solution and "response" in solution:
                break

            # Sleep for a second before re-polling
            time.sleep(1)

    return solution

# Number of repoll requests to be carried out for a successful response
repoll_tries = 500

solution = cuopt_service_client.get_optimized_routes(data)
solution = repoll(solution, repoll_tries)
print(json.dumps(solution, indent=4))

The response would be as follows:

{
    "response": {
        "solver_response": {
            "status": 0,
            "num_vehicles": 1,
            "solution_cost": 2.0,
            "objective_values": {
                "cost": 2.0
            },
            "vehicle_data": {
                "0": {
                    "task_id": [
                        "Depot",
                        "0",
                        "1",
                        "Depot"
                    ],
                    "arrival_stamp": [
                        0.0,
                        0.0,
                        0.0,
                        0.0
                    ],
                    "type": [
                        "Depot",
                        "Delivery",
                        "Delivery",
                        "Depot"
                    ],
                    "route": [
                        0,
                        0,
                        1,
                        0
                    ]
                }
            },
            "dropped_tasks": {
                "task_id": [],
                "task_index": []
            }
        }
    },
    "reqId": "ebd378a3-c02a-47f3-b0a1-adec81be7cdd"
}

The function_id can be found in NGC -> Cloud Functions -> Shared Functions. For more details, please check the Quick Start Guide.

The data argument to get_optimized_routes may be a dictionary in the format shown in the Get Routes Open-API spec. More details about the response can be found under the responses schema in the Open-API spec, or equivalently in ReDoc. The data argument may also be the path of a file containing such a dictionary, written either as JSON or with the Python msgpack module (pickle is deprecated). A JSON file may optionally be compressed with zlib.
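As an illustration of the file-based input, the following minimal sketch continues the routing example above: it writes the same data dictionary to a zlib-compressed JSON file (the routing_problem.json.zlib file name is arbitrary) and passes the path instead of the dictionary.

import json
import zlib

# Write the routing problem as zlib-compressed JSON (file name is arbitrary)
problem_file = "routing_problem.json.zlib"
with open(problem_file, "wb") as f:
    f.write(zlib.compress(json.dumps(data).encode("utf-8")))

# Pass the file path in place of the dictionary
solution = cuopt_service_client.get_optimized_routes(problem_file)
solution = repoll(solution, repoll_tries)
print(json.dumps(solution, indent=4))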

LP Example#

Note

Linear Programming (LP) and Mixed Integer Linear Programming (MILP) are Early Access features and are currently open to only select customers.

Note

LP: the solver returns as soon as optimality is reached and does not wait for the time limit to expire. MILP: because the solver is heuristic-based, it keeps searching for better solutions until the time limit expires.

import os
from cuopt_thin_client import CuOptServiceClient
import json
import time

data = {
    "csr_constraint_matrix": {
        "offsets": [0, 2, 4],
        "indices": [0, 1, 0, 1],
        "values": [3.0, 4.0, 2.7, 10.1]
    },
    "constraint_bounds": {
        "upper_bounds": [5.4, 4.9],
        "lower_bounds": ["ninf", "ninf"]
    },
    "objective_data": {
        "coefficients": [0.2, 0.1],
        "scalability_factor": 1.0,
        "offset": 0.0
    },
    "variable_bounds": {
        "upper_bounds": ["inf", "inf"],
        "lower_bounds": [0.0, 0.0]
    },
    "maximize": False,
    "solver_config": {
        "tolerances": {
            "optimality": 0.0001
        }
    }
}

# Load the credential "NVIDIA Identity Federation API Key" from the environment or some other way

sak = os.environ["NVIDIA_IDENTITY_FEDERATION_API_KEY"]

cuopt_service_client = CuOptServiceClient(
    sak=sak,
    function_id="<FUNCTION_ID_OBTAINED_FROM_NGC>",
    timeout_exception=False
)

# Number of repoll requests to be carried out for a successful response
repoll_tries = 500

def repoll(solution, repoll_tries):
    # If the solver is still busy, the job is assigned a request id and the response
    # comes back in the format {"reqId": <REQUEST-ID>}.
    # The solver must be re-polled for the result using this <REQUEST-ID>.

    if "reqId" in solution and "response" not in solution:
        req_id = solution["reqId"]
        for i in range(repoll_tries):
            solution = cuopt_service_client.repoll(req_id, response_type="dict")
            if "reqId" in solution and "response" in solution:
                break

            # Sleep for a second before re-polling
            time.sleep(1)

    return solution

solution = cuopt_service_client.get_LP_solve(data, response_type="dict")

solution = repoll(solution, repoll_tries)

print("---------- Normal mode ---------------  \n", json.dumps(solution, indent=4))

# For batch mode, send a list of mps/dict/DataModel
solution = cuopt_service_client.get_LP_solve([data, data], response_type="dict")
solution = repoll(solution, repoll_tries)

print("---------- Batch mode -----------------  \n", json.dumps(solution, indent=4))

The response would be as follows:

Normal mode response:

{
    "response": {
        "solver_response": {
            "status": 1,
            "solution": {
                "primal_solution": [
                    0.0,
                    0.0
                ],
                "dual_solution": [
                    0.0,
                    0.0
                ],
                "primal_objective": 0.0,
                "dual_objective": 0.0,
                "solver_time": 38.0,
                "vars": {},
                "lp_statistics": {
                    "primal_residual": 0.0,
                    "dual_residual": 0.0,
                    "gap": 0.0,
                    "reduced_cost": [
                        0.2,
                        0.1
                    ]
                },
                "milp_statistics": {}
            }
        }
    },
    "reqId": "39c52105-736d-4383-a101-707390937141"
}

Batch mode response:

{
    "response": {
        "solver_response": [
            {
                "status": 1,
                "solution": {
                    "primal_solution": [
                        0.0,
                        0.0
                    ],
                    "dual_solution": [
                        0.0,
                        0.0
                    ],
                    "primal_objective": 0.0,
                    "dual_objective": 0.0,
                    "solver_time": 5.0,
                    "vars": {},
                    "lp_statistics": {
                        "primal_residual": 0.0,
                        "dual_residual": 0.0,
                        "gap": 0.0,
                        "reduced_cost": [
                            0.2,
                            0.1
                        ]
                    }
                }
            },
            {
                "status": 1,
                "solution": {
                    "primal_solution": [
                        0.0,
                        0.0
                    ],
                    "dual_solution": [
                        0.0,
                        0.0
                    ],
                    "primal_objective": 0.0,
                    "dual_objective": 0.0,
                    "solver_time": 3.0,
                    "vars": {},
                    "lp_statistics": {
                        "primal_residual": 0.0,
                        "dual_residual": 0.0,
                        "gap": 0.0,
                        "reduced_cost": [
                            0.2,
                            0.1
                        ]
                    },
                    "milp_statistics": {}
                }
            }
        ],
        "total_solve_time": 9.0
    },
    "reqId": "f04a6936-830e-4235-b535-68ad51736ac0"
}

An example that uses a .mps file as input is shown below:

import os
from cuopt_thin_client import CuOptServiceClient
from solver_settings import SolverSettings
import json
import time

data = "sample.mps"
mps_data = """* optimize
*  cost = 0.2 * VAR1 + 0.1 * VAR2
* subject to
*  3 * VAR1 + 4 * VAR2 <= 5.4
*  2.7 * VAR1 + 10.1 * VAR2 <= 4.9
NAME   good-1
ROWS
 N  COST
 L  ROW1
 L  ROW2
COLUMNS
    VAR1      COST      0.2
    VAR1      ROW1      3              ROW2      2.7
    VAR2      COST      0.1
    VAR2      ROW1      4              ROW2      10.1
RHS
    RHS1      ROW1      5.4            ROW2      4.9
ENDATA
"""

with open(data, "w") as file:
    file.write(mps_data)

# Load the credential "NVIDIA Identity Federation API Key" from the environment or directly pass it to the function

sak = os.environ["NVIDIA_IDENTITY_FEDERATION_API_KEY"]

cuopt_service_client = CuOptServiceClient(
    sak=sak,
    function_id="<FUNCTION_ID_OBTAINED_FROM_NGC>",
    timeout_exception=False
)

# Number of repoll requests to be carried out for a successful response
repoll_tries = 500

def repoll(solution, repoll_tries):
    # If the solver is still busy, the job is assigned a request id and the response
    # comes back in the format {"reqId": <REQUEST-ID>}.
    # The solver must be re-polled for the result using this <REQUEST-ID>.

    if "reqId" in solution and "response" not in solution:
        req_id = solution["reqId"]
        for i in range(repoll_tries):
            solution = cuopt_service_client.repoll(req_id, response_type="dict")
            if "reqId" in solution and "response" in solution:
                break

            # Sleep for a second before re-polling
            time.sleep(1)

    return solution

ss = SolverSettings()

ss.set_time_limit(5)
ss.set_optimality_tolerance(0.00001)
solution = cuopt_service_client.get_LP_solve(data, solver_config=ss, response_type="dict")

solution = repoll(solution, repoll_tries)

print(json.dumps(solution, indent=4))

The response is:

{
    "response": {
        "solver_response": {
            "status": 1,
            "solution": {
                "primal_solution": [
                    0.0,
                    0.0
                ],
                "dual_solution": [
                    0.0,
                    0.0
                ],
                "primal_objective": 0.0,
                "dual_objective": 0.0,
                "solver_time": 42.0,
                "vars": {
                    "VAR1": 0.0,
                    "VAR2": 0.0
                },
                "lp_statistics": {
                    "primal_residual": 0.0,
                    "dual_residual": 0.0,
                    "gap": 0.0,
                    "reduced_cost": [
                        0.2,
                        0.1
                    ]
                },
                "milp_statistics": {}
            }
        }
    },
    "reqId": "ea113107-659b-4122-975a-ab9120ae15bf"
}

The data argument to get_LP_solve may be a dictionary in the format shown in the LP Open-API spec. More details about the response can be found under the responses schema in the Open-API spec, or equivalently in ReDoc.
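As a minimal sketch of reading the dictionary response (assuming the normal-mode response shape shown above and continuing from the .mps example):

# Pull a few fields out of the normal-mode dict response
# (in batch mode, "solver_response" is a list with one entry per problem)
resp = solution["response"]["solver_response"]

print("status:", resp["status"])
lp_solution = resp["solution"]
print("primal solution:", lp_solution["primal_solution"])
print("dual solution:", lp_solution["dual_solution"])
print("primal objective:", lp_solution["primal_objective"])
print("solver time:", lp_solution["solver_time"])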

An example with DataModel is available in the LP example notebook.

MILP Example#

Note

Linear Programming (LP) and Mixed Integer Linear Programming (MILP) are Early Access features and are currently open to only select customers.

Note

LP: the solver returns as soon as optimality is reached and does not wait for the time limit to expire. MILP: because the solver is heuristic-based, it keeps searching for better solutions until the time limit expires.

The major difference between this example and the prior LP example is that some of the variables are integers, so the variable types must be specified.

import os
from cuopt_thin_client import CuOptServiceClient
import json
import time

data = {
    "csr_constraint_matrix": {
        "offsets": [0, 2],
        "indices": [0, 1],
        "values": [1.0, 1.0]
    },
    "constraint_bounds": {
        "upper_bounds": [5000.0],
        "lower_bounds": [0.0]
    },
    "objective_data": {
        "coefficients": [1.2, 1.7],
        "scalability_factor": 1.0,
        "offset": 0.0
    },
    "variable_bounds": {
        "upper_bounds": [3000.0, 5000.0],
        "lower_bounds": [0.0, 0.0]
    },
    "maximize": True,
    "variable_names": ["x", "y"],
    "variable_types": ["I", "I"],
    "solver_config": {
        "time_limit": 30
    }
}

# Load the credential "NVIDIA Identity Federation API Key" from the environment or some other way

sak = os.environ["NVIDIA_IDENTITY_FEDERATION_API_KEY"]

cuopt_service_client = CuOptServiceClient(
    sak=sak,
    function_id="<FUNCTION_ID_OBTAINED_FROM_NGC>",
    timeout_exception=False
)

# Number of repoll requests to be carried out for a successful response
repoll_tries = 500

def repoll(solution, repoll_tries):
    # If the solver is still busy, the job is assigned a request id and the response
    # comes back in the format {"reqId": <REQUEST-ID>}.
    # The solver must be re-polled for the result using this <REQUEST-ID>.

    if "reqId" in solution and "response" not in solution:
        req_id = solution["reqId"]
        for i in range(repoll_tries):
            solution = cuopt_service_client.repoll(req_id, response_type="dict")
            if "reqId" in solution and "response" in solution:
                break

            # Sleep for a second before re-polling
            time.sleep(1)

    return solution

solution = cuopt_service_client.get_LP_solve(data, response_type="dict")

solution = repoll(solution, repoll_tries)

print(json.dumps(solution, indent=4))

The response would be as follows:

{
    "response": {
        "solver_response": {
            "status": 2,
            "solution": {
                "problem_category": 1,
                "primal_solution": [
                    0.0,
                    5000.0
                ],
                "dual_solution": null,
                "primal_objective": 8500.0,
                "dual_objective": null,
                "solver_time": 30.003654432,
                "vars": {
                    "x": 0.0,
                    "y": 5000.0
                },
                "lp_statistics": {
                    "reduced_cost": null
                },
                "milp_statistics": {
                    "mip_gap": -6.93265581705675e-310,
                    "solution_bound": "-Infinity",
                    "presolve_time": 0.004038293,
                    "max_constraint_violation": 0.0,
                    "max_int_violation": 0.0,
                    "max_variable_bound_violation": 0.0
                }
            }
        }
    },
    "reqId": "b8b0ee19-672c-4120-b8f6-600eceb70d89"
}

An example with DataModel is available in the MILP example notebook. More details about the response can be found under the responses schema in the Open-API spec, or equivalently in ReDoc.

The data argument to get_LP_solve may be a dictionary in the format shown in the LP Open-API spec.
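As a minimal sketch of reading the MILP dictionary response (assuming the response shape shown above and continuing from the MILP example):

# Read the variable values and MILP statistics from the dict response
# (field names follow the MILP response shown above)
resp = solution["response"]["solver_response"]

print("status:", resp["status"])
milp_solution = resp["solution"]
print("variable values:", milp_solution["vars"])          # e.g. {"x": 0.0, "y": 5000.0}
print("objective:", milp_solution["primal_objective"])
print("mip gap:", milp_solution["milp_statistics"]["mip_gap"])
print("max constraint violation:",
      milp_solution["milp_statistics"]["max_constraint_violation"])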

cuOpt Thin Client CLI Example#

Put your NVIDIA Identity Federation API Key in a credentials.json file as shown below:

{
    "CUOPT_CLIENT_SAK" : "PASTE_YOUR_NVIDIA_IDENTITY_FEDERATION_API_KEY"
}

Create a data.json file containing this sample data:

Routing Example#

echo '{"cost_matrix_data": {"data": {"0": [[0, 1], [1, 0]]}},
 "task_data": {"task_locations": [0, 1]},
 "fleet_data": {"vehicle_locations": [[0, 0], [0, 0]]}}' > data.json

Invoke the CLI:

cuopt_cli data.json -f <FUNCTION_ID> -s credentials.json

LP Example#

Note

Linear Programming (LP) and Mixed Integer Linear Programming (MILP) are Early Access features and are currently open to only select customers.

echo '{
   "csr_constraint_matrix": {
       "offsets": [0, 2, 4],
       "indices": [0, 1, 0, 1],
       "values": [3.0, 4.0, 2.7, 10.1]
   },
   "constraint_bounds": {
       "upper_bounds": [5.4, 4.9],
       "lower_bounds": ["ninf", "ninf"]
   },
   "objective_data": {
       "coefficients": [0.2, 0.1],
       "scalability_factor": 1.0,
       "offset": 0.0
   },
   "variable_bounds": {
       "upper_bounds": ["inf", "inf"],
       "lower_bounds": [0.0, 0.0]
   },
   "maximize": "False",
   "solver_config": {
       "tolerances": {
           "optimality": 0.0001
       }
   }
}' > data.json

Invoke the CLI:

cuopt_cli data.json -f <FUNCTION_ID> -t LP -s credentials.json

In batch mode, you can send several MPS files at once and retrieve their results together. For the CLI, batch mode works only with MPS files.

If you want to set the solver configuration on the fly, you can pass it with the -ss option as follows:

cuopt_cli data.json -f <FUNCTION_ID> -t LP -s credentials.json -ss '{"tolerances": {"optimality": 0.0001}, "time_limit": 5}'

The solver settings can also be passed as a path to a file, as sketched below.
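For example, a small sketch of the file form (the settings.json file name is arbitrary):

echo '{"tolerances": {"optimality": 0.0001}, "time_limit": 5}' > settings.json

cuopt_cli data.json -f <FUNCTION_ID> -t LP -s credentials.json -ss settings.json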

Note

Batch mode is not available for MILP problems.

 echo "* optimize
*  cost = 0.2 * VAR1 + 0.1 * VAR2
* subject to
*  3 * VAR1 + 4 * VAR2 <= 5.4
*  2.7 * VAR1 + 10.1 * VAR2 <= 4.9
NAME   good-1
ROWS
 N  COST
 L  ROW1
 L  ROW2
COLUMNS
   VAR1      COST      0.2
   VAR1      ROW1      3              ROW2      2.7
   VAR2      COST      0.1
   VAR2      ROW1      4              ROW2      10.1
RHS
   RHS1      ROW1      5.4            ROW2      4.9
ENDATA" > sample.mps

cuopt_cli sample.mps sample.mps sample.mps -f <FUNCTION_ID> -t LP -s credentials.json -ss '{"tolerances": {"optimality": 0.0001}, "time_limit": 5}'

Note

Please provide solver settings when using .mps files.

As mentioned above, the function_id can be found in NGC -> Cloud Functions -> Shared Functions. For more details, please check the Quick Start Guide.

MILP Example#

Note

Linear Programming (LP) and Mixed Integer Linear Programming (MILP) are Early Access features and are currently open to only select customers.

echo '{
   "csr_constraint_matrix": {
       "offsets": [0, 2, 4],
       "indices": [0, 1, 0, 1],
       "values": [3.0, 4.0, 2.7, 10.1]
   },
   "constraint_bounds": {
       "upper_bounds": [5.4, 4.9],
       "lower_bounds": ["ninf", "ninf"]
   },
   "objective_data": {
       "coefficients": [0.2, 0.1],
       "scalability_factor": 1.0,
       "offset": 0.0
   },
   "variable_bounds": {
       "upper_bounds": ["inf", "inf"],
       "lower_bounds": [0.0, 0.0]
   },
   "variable_names": ["x", "y"],
   "variable_types": ["I", "I"],
   "maximize": "False",
   "solver_config": {
       "time_limit": 30,
   }
}' > data.json

Invoke the CLI:

cuopt_cli data.json -f <FUNCTION_ID> -t LP -s credentials.json

Note

Batch mode is not supported for MILP.

Solver settings can also be passed through the CLI with the -ss option as follows:

cuopt_cli data.json -f <FUNCTION_ID> -t LP -s credentials.json -ss '{"time_limit": 5}'

Alternatively, you may set the NVIDIA Identity Federation API Key as CUOPT_CLIENT_SAK in your environment and omit the -s argument to cuopt_cli.
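For example (a small sketch; replace the placeholder with your actual key):

export CUOPT_CLIENT_SAK="PASTE_YOUR_NVIDIA_IDENTITY_FEDERATION_API_KEY"

cuopt_cli data.json -f <FUNCTION_ID> -t LP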