There is much to admire in human nature, but little is more significant than our tendency to keep improving. That tendency has enabled the world to clock some huge milestones, with technology emerging as one of the biggest. The reason we hold technology in such high regard comes down largely to its skill set, which has guided us toward a reality nobody could have imagined otherwise. Look a little closer, though, and it becomes clear that much of this progress was driven by how we applied those skills in the real world. That application gave technology a presence across the entire spectrum and, in doing so, set off a full-blown tech revolution. The revolution, in turn, scaled up the human experience through some genuinely unique avenues, and yet technology continues to deliver. This has only become more evident in recent times, and if one new discovery achieves its intended impact, it should push the trend further still.
A research team at the Massachusetts Institute of Technology has developed a new solution designed to resolve longstanding optimization issues in global package routing, power grid operations, and other problems in a similar ballpark. Thus far, organizations across the world have relied on mixed-integer linear programming (MILP) software to address this gap. The software uses a process called branching to break the problem into smaller pieces, then applies a technique called cutting to tighten those pieces so they can be searched faster. The cutting process uses a set of rules that tighten the search space without removing any feasible solutions; these rules are generated by a few dozen algorithms known as separators, each created for a different kind of MILP problem. The catch is that this is a drawn-out process: solving a large problem can take so long that many organizations are forced to stop the software midway through and accept a solution that isn't ideal for their needs. So, how did the MIT researchers tackle this? Their answer began to reveal itself when they recognized that identifying the ideal combination of separator algorithms to use is, in itself, a problem, one with an exponential number of possible solutions.
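To see why that counts as a hard problem in its own right, consider the size of the configuration space. The snippet below is a purely illustrative sketch: each separator can, in principle, be switched on or off independently, and the choice of 17 separators is an assumption made only because it lines up with the "more than 130,000 combinations" figure mentioned later in this article.

```python
# Illustrative only: real MILP solvers ship a few dozen separator
# algorithms, and each one can be enabled or disabled independently,
# so the number of possible configurations is 2**k. The k = 17 case
# is an assumption chosen to match the "more than 130,000
# combinations" figure cited below (2**17 = 131,072).
for k in (5, 10, 17, 30):
    print(f"{k:>2} separators -> {2**k:>13,} possible on/off combinations")
```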
“Separator management is a core part of every solver, but this is an underappreciated aspect of the problem space. One of the contributions of this work is identifying the problem of separator management as a machine learning task to begin with,” said Cathy Wu, senior author of the study, the Gilbert W. Winslow Career Development Assistant Professor in Civil and Environmental Engineering (CEE), and a member of the Laboratory for Information and Decision Systems (LIDS) and the Institute for Data, Systems, and Society (IDSS).
Keeping that discovery in mind, Wu and her team conceived a filtering mechanism that reduces the separator search space from more than 130,000 potential combinations to around 20 options. The filter builds on the principle of diminishing marginal returns: most of the benefit comes from a small set of algorithms, and adding more brings little extra improvement. With that principle in place, the team used a machine learning model to pick the best combination of algorithms from among the 20 or so remaining options. The model is trained on a dataset specific to the user's own optimization problem, which lets it choose the algorithms that best suit the user's particular task. So when a user repeatedly applies the technology to the same problem, or a similar type of problem, they can expect increasingly better solutions over time.
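For intuition, here is a minimal Python sketch of that two-stage idea. Everything in it is assumed for illustration: the separator names, the score() function standing in for measured solver speedup, and the toy per-instance model. Greedy forward selection is used here as one natural way to exploit diminishing marginal returns, not necessarily the researchers' exact filtering procedure.

```python
import math
import random

# Stand-in data: hypothetical separators and per-separator "usefulness".
SEPARATORS = [f"sep_{i}" for i in range(17)]  # 2**17 = 131,072 subsets
random.seed(0)
_gain = {s: random.uniform(0.1, 1.0) for s in SEPARATORS}

def score(config):
    """Hypothetical stand-in for measured solver speedup under a subset of
    separators. It is concave in the combined usefulness, so each extra
    separator helps less than the last (diminishing marginal returns)."""
    return math.sqrt(sum(_gain[s] for s in config))

def filter_configs(threshold=0.05):
    """Stage 1: prune the exponential space. Greedily add the separator
    with the largest marginal gain, stopping once that gain falls below
    a threshold; each prefix becomes a shortlisted configuration."""
    chosen, shortlist = [], []
    candidates = list(SEPARATORS)
    while candidates:
        best = max(candidates, key=lambda s: score(chosen + [s]) - score(chosen))
        if score(chosen + [best]) - score(chosen) < threshold:
            break
        chosen.append(best)
        candidates.remove(best)
        shortlist.append(list(chosen))
    return shortlist

def pick_config(instance, shortlist, model):
    """Stage 2: a model trained on the user's own problem distribution
    scores each shortlisted configuration for this instance."""
    return max(shortlist, key=lambda cfg: model(instance, cfg))

shortlist = filter_configs()
print(f"pruned {2 ** len(SEPARATORS):,} subsets down to {len(shortlist)}")

# Toy stand-in for a trained model; a real one would be learned from data.
toy_model = lambda instance, cfg: score(cfg) - 0.02 * abs(len(cfg) - instance["size"])
print("chosen configuration:", pick_config({"size": 4}, shortlist, toy_model))
```

The design point this sketch tries to capture is that the first stage is cheap and instance-agnostic, while the second stage adapts to the user's own workload, which is where the learning pays off.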
“Sometimes, in a field like optimization, it is very common for folks to think of solutions as either purely machine learning or purely classical. I am a firm believer that we want to get the best of both worlds, and this is a really strong instantiation of that hybrid approach,” said Wu.
The researchers have already run some initial tests on their technology, and going by the available details, their technique sped up MILP solvers by 30 to 70 percent. Making the achievement even more significant, the speedup came without any drop in accuracy.
As for possible use cases, the technology can be put to work wherever MILP solvers are employed, whether by ride-hailing services, electric grid operators, vaccination distributors, or any other entity facing a resource-allocation problem.
“These problems are called NP-hard, which means it is very unlikely there is an efficient algorithm to solve them. When the problem is big enough, we can only hope to achieve some suboptimal performance,” said Wu.
For the immediate future, the researchers plan on applying their latest brainchild to more complex MILP problems, particularly areas where gathering labeled data to train the model could be challenging. Another possible objective is to train the model on a smaller dataset and then tweak it to tackle a much larger optimization problem. Finally, the team hopes to interpret the learned model to better understand the effectiveness of different separator algorithms.