Computing and Information Systems - Theses


Search Results

Now showing 1 - 9 of 9
  • Item
    Efficient interval reasoning for floating-point arithmetic
    Nazecic-Andrlon, Mak ( 2023-06)
Reasoning about floating-point numbers is notoriously difficult, since rounding arithmetic operations destroys or weakens their algebraic properties (e.g. associativity). As such, available interval propagation methods often exhibit slow convergence even on simple problems. In this work, we develop new theoretical results allowing us to efficiently compute optimal---in the sense that any proper subinterval omits at least one solution---bounding intervals for the arguments of floating-point sums, products, and quotients. This allows us to directly solve equations of the form f(x, y) = z, where the variables are bounded by intervals and f is one of rounded addition, subtraction, multiplication, or division, and thus solve the convergence problem.
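As a concrete illustration of the kind of interval reasoning involved (a simplified *safe* bound, not the optimal bounds derived in the thesis), the sketch below over-approximates the set of x satisfying fl(x + y) = z under round-to-nearest by widening each endpoint outward with `math.nextafter`:

```python
import math

def x_bounds_for_rounded_sum(y_lo, y_hi, z_lo, z_hi):
    """Safe (not necessarily optimal) interval for x such that
    fl(x + y) = z is satisfiable, given y in [y_lo, y_hi] and
    z in [z_lo, z_hi].

    Under round-to-nearest, fl(x + y) = z means the exact sum x + y
    lies within half an ulp of z. We over-approximate that rounding
    interval by one ulp on each side, subtract the y interval, and
    widen each endpoint once more to absorb rounding in the
    subtraction itself.
    """
    inf = math.inf
    s_lo = math.nextafter(z_lo, -inf)        # exact sum cannot lie below this
    s_hi = math.nextafter(z_hi, inf)         # ... or above this
    x_lo = math.nextafter(s_lo - y_hi, -inf) # outward-rounded subtraction
    x_hi = math.nextafter(s_hi - y_lo, inf)
    return x_lo, x_hi
```

This naive version is merely sound rather than optimal; the gap between the two is exactly where slowly converging propagators lose time, and closing it is what the thesis's results achieve.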
  • Item
    Hybrid optimization of vehicle routing problems
    Lam, Edward ( 2017)
Vehicle routing problems are combinatorial optimization problems that aspire to design vehicle routes that minimize some measure of cost, such as the total distance traveled or the time at which the last vehicle returns to a depot, while adhering to various restrictions. Vehicle routing problems are of profound interest in both academia and industry because they provide opportunities to study graph structures and algorithms, and because they underpin practical applications in a multitude of industries, most notably transportation and logistics. This dissertation presents two applications relevant to industry and develops a fully hybrid method for solving a classical vehicle routing problem. The first application combines vehicle routing with crew scheduling. In industry, vehicle routing and crew scheduling are usually performed in stages due to the large search space of integrated models. The first stage finds minimal-cost vehicle routes with little consideration of crew constraints and objectives. The second stage schedules crews on the routes from the first stage. Disregarding crew constraints in the first stage can lead to suboptimality or even infeasibility of the overall problem in the second stage. To quantify the suboptimality of staged optimization models, two formulations of the integrated problem are developed. The first is an ordinary mixed integer programming model, and the second is a constraint programming model containing a linear relaxation global constraint that performs cost-based filtering. The two integrated models are solved using a branch-and-bound search and a highly specialized large neighborhood search. The large neighborhood search exploits the substructures linking the vehicle routing and crew scheduling elements of the problem, and when executed on the constraint programming model, is shown to perform significantly better than the other approaches.
The second application introduces a number of scheduling constraints to the Vehicle Routing Problem with Pickup and Delivery and Time Windows. The scheduling constraints arise from a lack of loading bays or equipment that unloads and loads incoming vehicles. These constraints limit the number of vehicles present or in service at any particular site by requiring the arrival of vehicles to be scheduled around the availabilities of a scarce resource. A mixed integer programming model, a constraint programming model and a sequential model are implemented for the problem but are shown to be inferior to a branch-and-price-and-check model, which hybridizes column generation and constraint programming with nogood learning. This thesis concludes with a hybrid method, named Branch-and-Check with Explanations, that unifies linear programming, constraint programming and Boolean satisfiability. The method begins with a linear programming model that omits several critical constraints. The solver uses the linear programming model to find objective bounds and candidate solutions, which are checked by a constraint programming model for feasibility of the omitted constraints. A Boolean satisfiability model performs conflict analysis on infeasible candidate solutions to derive nogood cuts, which are placed into the linear programming model and the constraint programming model. The method is implemented in a proof-of-concept solver for the Vehicle Routing Problem with Time Windows and is shown to be competitive against a branch-and-cut model while avoiding the intricacies involved in developing the cutting planes and separation algorithms required in branch-and-cut.
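The Branch-and-Check loop can be caricatured in a few lines (a toy sketch, not the thesis's solver: a brute-force enumeration stands in for the linear programming master, a feasibility callback for the constraint programming check, and a set of forbidden candidates for the SAT-derived nogood cuts):

```python
from itertools import product

def branch_and_check(costs, check_feasible):
    """Schematic branch-and-check: the 'master' proposes least-cost
    candidate solutions while ignoring some constraints; the 'checker'
    validates the omitted constraints; each rejected candidate becomes
    a nogood the master must avoid from then on."""
    n = len(costs)
    nogoods = set()
    while True:
        best = None
        for cand in product([0, 1], repeat=n):   # stand-in for the LP master
            if cand in nogoods:
                continue
            cost = sum(c for c, x in zip(costs, cand) if x)
            if best is None or cost < best[0]:
                best = (cost, cand)
        if best is None:
            return None                          # all candidates cut off: infeasible
        cost, cand = best
        if check_feasible(cand):                 # stand-in for the CP check
            return cost, cand
        nogoods.add(cand)                        # stand-in for a SAT nogood cut
```

A real implementation derives *general* nogood cuts by conflict analysis rather than forbidding one candidate at a time, which is precisely what makes the method competitive with branch-and-cut.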
  • Item
    Discrete optimization over graph problems
    De Uña, Diego ( 2018)
Discrete optimization problems are ubiquitous both in industry and theoretical computer science. They appear in a wide range of fields, such as manufacturing, planning, packing, design and scheduling. It is often the case that the problem at hand is in the form of a graph, or can be represented as such. It is therefore natural to have an interest in tackling this specific kind of problem. In this thesis we mainly use Constraint Programming as a technology to solve discrete optimization problems over graphs. We identify substructures that occur repeatedly in different problems that are usually tackled using Constraint Programming. We then study these combinatorial substructures to develop advanced algorithms within Constraint Programming solvers that attempt to drastically reduce the time spent by solvers on the problem. These are called propagators, and this thesis explores a series of them that deal with graph structures. All these algorithms have been implemented, and different sets of experiments show the value of each one of them. We also investigate the use of graphical structures not as a tool to represent the model, but rather as a tool within a Constraint Programming solver to increase its performance when tackling highly constrained problems. We do so through the use of Multi-valued Decision Diagrams (MDDs) and Deterministic Decomposable Negation Normal Form Formulae (d-DNNF) for which propagators already exist. Our work on this focuses on improving the model provided by a user of the solver, by preprocessing parts of it in the hope of decreasing the total runtime. Additionally, we identified a graph problem in the field of computational sustainability for which Constraint Programming solvers were not adequate. In order to solve this problem, we developed a completely new Local Search algorithm that was specifically designed for it. The common thread between all components of this thesis is the combination of graphs and discrete optimization.
We have looked at how to improve Constraint Programming technology to solve discrete problems containing graphs, how to use graphs within Constraint Programming to solve any discrete problem, and lastly, how to tackle discrete graph problems when Constraint Programming is not an adequate tool.
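To illustrate what a graph propagator does (a generic reachability-based pruning rule for a connected-subgraph substructure; illustrative only, not an algorithm from the thesis): if a solution must be a connected subgraph containing some mandatory nodes, then any node unreachable from a mandatory node through still-possible nodes can be pruned.

```python
from collections import deque

def prune_unreachable(adj, mandatory, possible):
    """Propagation rule for a connected-subgraph structure: prune every
    node of `possible` that cannot be reached from a mandatory node via
    nodes still in `possible`. Sound because a connected solution
    containing the mandatory nodes can only use reachable nodes.
    Assumes `mandatory` is nonempty and a subset of `possible`."""
    start = next(iter(mandatory))
    seen = {start}
    queue = deque([start])
    while queue:                      # plain BFS restricted to possible nodes
        u = queue.popleft()
        for v in adj[u]:
            if v in possible and v not in seen:
                seen.add(v)
                queue.append(v)
    return possible & seen
```

Inside a solver, such a rule runs at every node of the search tree, so the engineering effort in this line of work goes into making it incremental rather than recomputing the BFS from scratch.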
  • Item
    Scheduling and rostering with learning constraint solvers
    Downing, Nicholas Ronald ( 2016)
In this research we investigate using Constraint Programming (CP) with Lazy Clause Generation (LCG), that is, constraint solvers with nogood learning, to tackle a number of well-known scheduling, rostering, and planning problems. We extend a number of CP constraints to be usable in an LCG solver, and we investigate alternative search strategies that use unsatisfiable cores, i.e. the reasons for failure returned by the LCG solver, to guide the search for optimal solutions. We give comprehensive analysis and experiments. These experiments show that while adding more and more sophisticated constraint propagators to LCG delivers diminishing returns, unsatisfiable-core optimization, which leverages the infrastructure provided by LCG, can deliver significant benefits which are unavailable in CP without LCG. Overall, we demonstrate that LCG is a highly competitive technology for solving realistic industrial scheduling and rostering problems to optimality, although the problems it can tackle are smaller than those addressed by competing algorithms which do not prove optimality.
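One flavour of unsatisfiable-core reasoning can be shown with a tiny assumption-based oracle (brute force and purely illustrative; a real LCG solver extracts cores from conflict analysis, not enumeration). The sketch extracts pairwise-disjoint cores to compute a lower bound on how many soft constraints any solution must violate:

```python
from itertools import product

def brute_sat(n, clauses, assumptions):
    """Toy SAT oracle over n variables. Literals are DIMACS-style ints
    (k for variable k, -k for its negation). Returns True iff some
    assignment satisfies every clause and every assumption literal."""
    for bits in product([False, True], repeat=n):
        def val(lit):
            b = bits[abs(lit) - 1]
            return b if lit > 0 else not b
        if all(val(a) for a in assumptions) and \
           all(any(val(l) for l in c) for c in clauses):
            return True
    return False

def minimize_core(n, clauses, core):
    """Deletion-based core minimization: drop each literal in turn and
    keep the deletion if the remainder is still unsatisfiable."""
    core = list(core)
    i = 0
    while i < len(core):
        trial = core[:i] + core[i + 1:]
        if not brute_sat(n, clauses, trial):
            core = trial
        else:
            i += 1
    return core

def disjoint_core_lower_bound(n, hard, softs):
    """Each disjoint core forces at least one violated soft constraint,
    so the number of cores found is a lower bound on the optimum.
    Assumes the hard clauses alone are satisfiable."""
    assumptions = list(softs)
    lb = 0
    while not brute_sat(n, hard, assumptions):
        core = minimize_core(n, hard, assumptions)
        lb += 1
        assumptions = [a for a in assumptions if a not in core]
    return lb
```

Core-guided optimization as used with LCG goes further, reformulating the objective around the cores instead of merely counting them, but the bound-from-cores idea is the common starting point.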
  • Item
    Natural optimisation modelling for software developers
    Francis, Kathryn Glenn ( 2016)
Constraint solving technology has been successfully applied to large industrial combinatorial optimisation problems with high impact, but is not widely applied to smaller problems. Unfortunately, including even basic optimisation or constraint satisfaction functionality in an application currently requires the use of an entirely separate paradigm with which most software developers are not familiar. The aim of this thesis is to demonstrate the potential for an interface to a constraint solver which is much more approachable for general software developers, in order to promote wider use of the technology. Instead of defining a conventional constraint model directly, programmers use native code (in the application programming language) to define how to combine individual decisions to construct a candidate solution, and how to evaluate the result. Constraint satisfaction or optimisation is then seamlessly integrated into the wider application through automatic conversion between this definition and a conventional model solved by an external solver. We call this a native programming language interface. This thesis presents a prototype implementation of a native Java interface to a finite domain constraint solver, before exploring several ideas for improving the automated conversion from procedural code into an equivalent constraint model. This conversion process has already been studied in the context of software engineering applications, and the improvements discussed here should be transferable back to that domain. The following new techniques are presented.
1. A novel query-based approach to handling destructive assignments, designed to improve the translation particularly when aliasing between variables is allowed.
2. An alternative technique (loop untangling) for creating copies of loop bodies in such a way that the uncertainty within the loop body is reduced, at the expense of a greater number of copies and an unknown execution order.
3. A new global constraint (reaching definitions) generalising the query-based translation technique to eliminate the assumption of a known order of execution for assignments (for use with loop untangling), and to achieve stronger deduction in some cases.
To support these new techniques, two further contributions are included in the thesis.
1. A study into the circuit constraint, exploring how this constraint can be extended for the subcircuit/subpath case as required to constrain the execution path when using loop untangling, and how both versions can be implemented in a lazy clause generation constraint solver.
2. As part of the implementation of the reaching definitions constraint, a discussion of how optional variables (solver variables which are allowed to take no value) can be implemented in a lazy clause generation solver.
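The destructive-assignment problem these techniques address can be seen in miniature below: a sketch (illustrative only; straight-line code, no aliasing, no loops, unlike the thesis's full treatment) that rewrites each destructive write as a fresh single-assignment version, after which each read refers to the latest reaching write and ordinary constraints can be posted over the versions.

```python
import re

def to_ssa(stmts):
    """Rename each assignment target to a fresh versioned name and
    rewrite reads in right-hand sides to the latest version, e.g.
    x = 1; x = x + 2  becomes  x_1 = 1; x_2 = x_1 + 2.
    `stmts` is a list of (target, expression-string) pairs."""
    version = {}

    def rename_reads(expr):
        # Replace every known identifier with its current version.
        return re.sub(
            r"[A-Za-z_]\w*",
            lambda m: f"{m.group(0)}_{version[m.group(0)]}"
                      if m.group(0) in version else m.group(0),
            expr)

    out = []
    for target, expr in stmts:
        expr = rename_reads(expr)            # reads see the old version
        version[target] = version.get(target, 0) + 1
        out.append((f"{target}_{version[target]}", expr))
    return out
```

Aliasing breaks this simple scheme, because a write through one name may invalidate the cached version of another; that is exactly the gap the query-based approach is designed to close.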
  • Item
    Answer set programming: founded bounds and model counting
    Aziz, Rehan Abdul ( 2015)
    Answer Set Programming (ASP) is a powerful modelling formalism that is very efficient for solving combinatorial problems. This work extends the state-of-the-art in theory and practice of ASP. It has two parts. The first part looks at the intersection of ASP and Constraint Programming and proposes theory and algorithms for implementing Bound Founded ASP, which is a generalization of both ASP and CP. The second part discusses model counting in the context of ASP.
  • Item
    Improving the efficiency and capabilities of document structuring
Marshall, Robert ( 2007)
Natural language generation (NLG), the problem of creating human-readable documents by computer, is one of the major fields of research in computational linguistics. The task of creating a document is extremely common in many fields of activity. Accordingly, there are many potential applications for NLG - almost any document creation task could potentially be automated by an NLG system. Advanced forms of NLG could also be used to generate a document in multiple languages, or as an output interface for other programs, which might ordinarily produce a less-manageable collection of data. They may also be able to create documents tailored to the needs of individual users. This thesis deals with document structure, a recent theory which describes those aspects of a document’s layout which affect its meaning. As well as its theoretical interest, it is a useful intermediate representation in the process of NLG. There is a well-defined process for generating a document structure using constraint programming. We show how this process can be made considerably more efficient. This in turn allows us to extend the document structuring task to allow for summarisation and finer control of the document layout. This thesis is organised as follows. Firstly, we review the necessary background material in both natural language processing and constraint programming.
  • Item
    Improving scheduling by learning
Schutt, Andreas ( 2011)
Scheduling problems appear in many industrial settings with different facets and requirements of a solution. A solution is a schedule of a set of activities subject to constraints such as precedence relations and resource restrictions on the maximum number of concurrent activities. This dissertation presents new deductive techniques for precedence and cumulative resource constraints in constraint programming to improve solution processes of combinatorial scheduling problems, in particular, and combinatorial problems, in general. Due to their intractability, many schedulers either solve a simplified problem or are tailored to a specific problem, but are inflexible with respect to changes in the problem statement. Constraint Programming (CP) offers a powerful framework for combinatorial problems which also meets the demand for flexibility with respect to changes in the problem statement, due to a strict separation of problem modelling, search algorithms, and highly specialised deduction techniques. Moreover, recent advanced CP solvers such as lazy clause generation (LCG) additionally include sophisticated conflict learning technologies. Their efficiency depends, amongst other things, on reusable explanations formulated by deductive algorithms. Unit two variable per inequality (UTVPI) constraints form one of the largest classes of integer constraints that is solvable in polynomial time. These constraints can model precedence relations between activities. A new theoretical result about reasoning on UTVPI systems is shown; based on it, new incremental deductive algorithms are created for manipulating UTVPI constraints, including satisfiability, implication, and explanation. These algorithms are asymptotically faster on sparse UTVPI systems than current algorithms. Cumulative constraints enforce resource restrictions in scheduling problems.
We show how, by adding explanation to the cumulative constraint in an LCG solver, we can solve resource-constrained project scheduling problems far faster than other comparable methods. Furthermore, a complete LCG-based approach including cumulative constraints is developed for an industrial carpet cutting problem. This approach outperforms the current incomplete method on all industrial instances provided.
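The core check inside a cumulative propagator can be sketched as time-table reasoning (a standard textbook form of the check, not the thesis's implementation): each task with earliest start est and latest start lst must be running throughout its compulsory part [lst, est + dur), and if the summed compulsory demands ever exceed the resource capacity, the constraint reports overload. With explanation, that conflict can name exactly the tasks whose compulsory parts overlap.

```python
def timetable_overload(tasks, capacity):
    """Detect resource overload from compulsory parts.
    Each task is (est, lst, dur, demand); its compulsory part is
    [lst, est + dur), nonempty only when lst < est + dur.
    Returns True iff the compulsory-part profile exceeds capacity."""
    events = []
    for est, lst, dur, dem in tasks:
        if lst < est + dur:              # task has a compulsory part
            events.append((lst, dem))    # demand starts
            events.append((est + dur, -dem))  # demand ends
    load = 0
    # Sorting by (time, delta) processes releases before starts at the
    # same instant, so touching compulsory parts do not count as overlap.
    for t, delta in sorted(events):
        load += delta
        if load > capacity:
            return True
    return False
```

In an LCG solver the interesting part is not this sweep but the explanation attached to the failure, since a tight, reusable explanation is what lets the learned nogood prune elsewhere in the search tree.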
  • Item
    Improving combinatorial optimization
    Chu, Geoffrey G. ( 2011)
Combinatorial Optimization is an important area of computer science that has many theoretical and practical applications. In this thesis, we present important contributions to several different areas of combinatorial optimization, including nogood learning, symmetry breaking, dominance, relaxations and parallelization. We develop a new nogood learning technique based on constraint projection that allows us to exploit subproblem dominances that arise when two different search paths lead to subproblems which are identical on the remaining unlabeled variables. On appropriate problems, this nogood learning technique provides orders of magnitude speedup compared to a base solver which does not learn nogoods. We present a new symmetry breaking technique called SBDS-1UIP, which is an extension of Symmetry Breaking During Search (SBDS). SBDS-1UIP uses symmetric versions of the 1UIP nogoods derived by Lazy Clause Generation solvers to prune symmetric parts of the search space. We show that SBDS-1UIP can exploit at least as many symmetries as SBDS, and that it is strictly more powerful on some problems, allowing us to exploit types of symmetries that no previous general symmetry breaking technique is capable of exploiting. We present two new general methods for exploiting almost symmetries (symmetries which are broken by a small number of constraints). The first is to treat almost symmetries as conditional symmetries and exploit them via conditional symmetry breaking constraints. The second is to modify SBDS-1UIP to handle almost symmetries. Both techniques are capable of producing exponential speedups on appropriate problems. We examine three reasonably well-known problems: the Minimization of Open Stacks Problem, the Talent Scheduling Problem (CSPLib prob039), and the Maximum Density Still Life Problem (CSPLib prob032).
By applying various powerful techniques such as nogood learning, dynamic programming, dominance and relaxations, we are able to solve these problems many orders of magnitude faster than the previous state of the art. We identify cache contention as a serious bottleneck that severely limits the amount of speedup achievable when parallelizing SAT and LCG solvers on multi-core systems. We alter the data structures used in the state-of-the-art SAT solver MiniSat to be more cache aware, leading to a sequential SAT solver that is some 80% faster on average and a parallel SAT solver that is some 140% faster on average. We examine an important issue in parallel search that has not been properly addressed in the literature. Many work stealing schemes used for load balancing completely ignore the branching heuristic, and instead try to maximize the granularity of the work stolen. This can result in many of the processors searching in unfruitful parts of the search tree. We analyze this problem and develop a new work stealing scheme called confidence-based work stealing, which partitions the work based on how strong the branching heuristic is at each node. The new parallel algorithm produced near-linear or super-linear speedup on all the problems we tested.
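The subproblem-dominance idea behind the projection-based nogood learning described above can be previewed with a deliberately simple stand-in: a memoized knapsack search. Two search paths that label the first items differently but leave the same residual state reach an identical subproblem on the unlabeled items, so the second path is answered from the cache instead of being re-searched.

```python
from functools import lru_cache

def best_value(weights, values, capacity):
    """Depth-first search over item inclusion with caching on
    (next item index, remaining capacity): the projection of the
    search state onto what matters for the unlabeled items.
    Equal states found via different paths dominate each other."""
    @lru_cache(maxsize=None)
    def go(i, cap):
        if i == len(weights):
            return 0
        skip = go(i + 1, cap)                       # exclude item i
        if weights[i] <= cap:                       # include item i
            return max(skip, values[i] + go(i + 1, cap - weights[i]))
        return skip
    return go(0, capacity)
```

The thesis's technique generalizes this well beyond dynamic-programmable problems by deriving, through constraint projection, a nogood that captures *why* the cached subproblem failed, so it also prunes merely dominated (not just identical) states.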