Tags: constraints, scheduling, nonlinear-optimization

# Formulating an optimization problem with nonlinear constraints

I am trying to solve a dynamic machine scheduling problem using what I think qualifies as nonlinear programming constraints.

I have academic knowledge of linear programming techniques and appreciate the opportunity to try my hand at nonlinear programming.

However, before anything else, I am struggling to write down a well-formulated problem. Here is the situation:

• I have a list of tasks of various natures that are ordered by "degree of emergency"
• Several machines, each specializing in certain kinds of tasks, work on those tasks based on the "degree of emergency" and their specialty
• Tasks are not preemptive (once started, a task runs to completion without interruption)
• Completion time is statistically known, based on the nature of the task and the machine assigned to it
• All machines should be used, with minimal downtime
• Workload is balanced: machines work on roughly the same number of tasks and for roughly the same total amount of time

• $M$: set of machines, indexed by $m$
• $S$: number of specialties
• $s_m\in\left[0;1\right]^S$: vector of specialties for machine $m$
• $T$: set of tasks, indexed by $t$
• $C_t\in\left[0;1\right]^S$: vector of characteristics for task $t$
• $d_{mt}\in\mathbb{R}$: time to complete task $t$ on machine $m$
• $p_{t}\in\mathbb{R}$: penalty coefficient for task $t$ being started past its acceptable time
• $X_{mt}\in\{0,1\}$: 1 if task $t$ is assigned to machine $m$, 0 otherwise

Objective function: find an optimal assignment $X_{mt}$ such that tasks are started on time (i.e., limit the total penalty):

$\min\sum_{t\in T}p_t$

constraints:

• work duration is balanced across the machines ($\epsilon$ being a parameter):

$\mathrm{variance}\left(\sum_{t\in T}d_{mt}\right)\leqslant\epsilon,\quad\forall m\in M$

• the number of work items is balanced across the machines ($\epsilon$ being a parameter):

$\mathrm{variance}\left(\sum_{t\in T}X_{mt}\right)\leqslant\epsilon,\quad\forall m\in M$

• each machine selects tasks it specializes in (dot product of $m$'s specialties and $t$'s characteristics):

$\max\sum_{s=1}^{S}s_{m,s}\,C_{t,s}$

• minimize makespan:

$\min\sum_{m\in M}\sum_{t\in T}d_{mt}$

• each task is assigned to exactly one machine:

$\sum_{m\in M}X_{mt}=1,\quad\forall t\in T$
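To make the notation concrete, the specialty match in the dot-product constraint can be computed directly. A small sketch (the vectors below are made-up illustrations, not data from the problem):

```python
# Illustrative vectors for S = 3 specialties (made-up values).
s_m = [0.9, 0.1, 0.0]  # machine m's affinity for each specialty
C_t = [1.0, 0.0, 0.0]  # task t's characteristics

# Dot product s_m . C_t: higher means machine m suits task t better.
score = sum(s * c for s, c in zip(s_m, C_t))
print(score)
```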

I am also looking for an algorithm to find some $\epsilon$-optimal solution, depending on how I relax the workload-balancing parameter.

But before I can try to look for an algorithm, I need help to properly formulate the problem.

I've seen examples where a variance constraint is formulated as minimizing the difference between upper and lower bounds, but I don't know how to apply that here, since I have multiple variance constraints.

If this is too much, how would you translate this problem into a Reinforcement Learning one?

# Solution

• Let's look at one of the variance constraints:

$\mathrm{variance}\left(\sum_{t\in T}d_{mt}\right)\leqslant\epsilon,\quad\forall m\in M$

This is wrong IMHO. There should be no "for all m": the variance is taken over the machines, so it is a single scalar quantity, not one constraint per machine.

I rewrite this as:

```
v[m] = sum(t, d[m,t])  for all m
variance(v) <= eps     (this is a single, scalar constraint)

<==>

v[m] = sum(t, d[m,t])  for all m
minv <= v[m]           for all m
maxv >= v[m]           for all m
maxv - minv <= eps     (a different eps, now limiting the bandwidth)
```
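The bandwidth reformulation also gives a quantitative handle on the variance: for values confined to an interval of width `maxv - minv`, the population variance is at most `((maxv - minv) / 2) ** 2` (Popoviciu's inequality), so limiting the bandwidth also limits the variance. A quick check in Python, with made-up per-machine workloads:

```python
from statistics import pvariance

# Made-up per-machine workloads v[m] = sum(t, d[m,t]); not data from the question.
v = [12.0, 13.5, 11.8, 12.9]

minv, maxv = min(v), max(v)
bandwidth = maxv - minv  # the quantity bounded by "maxv - minv <= eps"

# Popoviciu's inequality: the population variance of values in
# [minv, maxv] is at most (bandwidth / 2) ** 2.
assert pvariance(v) <= (bandwidth / 2) ** 2
print(bandwidth, pvariance(v))
```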
• Similarly for the other variance constraint.
• I don't see a good reason to add extra terms to the objective.
• This formulation is completely linear.
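To see the linearized balance constraints in action, here is a brute-force sketch on a tiny made-up instance (2 machines, 3 tasks; all numbers are invented). It enumerates every assignment, which enforces $\sum_{m}X_{mt}=1$ by construction, discards assignments that violate either bandwidth constraint, and keeps the best one; total work time stands in for the objective, since no penalty data is given. A MILP solver would replace the enumeration on real instances:

```python
from itertools import product

# Tiny illustrative instance: d[m][t] = time for machine m on task t.
d = [[4.0, 6.0, 3.0],
     [5.0, 4.0, 4.0]]
M, T = len(d), len(d[0])
eps_time = 3.0   # allowed bandwidth of per-machine work durations
eps_count = 1    # allowed bandwidth of per-machine task counts

best = None
# a[t] = machine chosen for task t; enumerating assignments this way
# enforces "each task on exactly one machine" by construction.
for a in product(range(M), repeat=T):
    v = [sum(d[m][t] for t in range(T) if a[t] == m) for m in range(M)]
    n = [sum(1 for t in range(T) if a[t] == m) for m in range(M)]
    if max(v) - min(v) > eps_time or max(n) - min(n) > eps_count:
        continue  # violates a balance (bandwidth) constraint
    total = sum(v)  # stand-in objective: total work time
    if best is None or total < best[0]:
        best = (total, a)

print(best)
```

On this instance the only assignments that satisfy both bandwidth constraints put two tasks on one machine and one on the other, and the cheapest of those is kept.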