Design issues



3. Selected design issues

AEA 2021/2022

Based on

Algorithm Engineering: Bridging the Gap Between Algorithm Theory and Practice - ch. 3




1. Simplicity
   - Fibonacci heaps
   - General-Purpose Modelers
   - Randomized algorithms



- The design phase comes after the modeling phase
- A construction plan for the algorithm to be developed
- If there are several alternatives and a theoretical analysis does not reveal a clear winner, the design decision should be based on an experimental evaluation
- The task of algorithm design is to bridge the gap from the abstract algorithmic ideas to the implementation by anticipating questions that arise during the implementation and providing answers to them
- Recognize the limitations of the models used
  - ex: be aware of the imprecision arising from the finite representation of real numbers


Design issues

- Simplicity: wide-ranging effects on applicability; randomization, general-purpose modelers
- Scalability: how to deal with rapidly growing data sets, large input sizes, and huge networks
- Time-space trade-offs: how easily one measure of quality can be improved by moderately sacrificing the other
- Robustness: make algorithms able to deal with and recover from unexpected abnormal situations
- Reusability: using an available building block saves implementation time and inherits its correctness







- A new algorithm that achieves the same result as a known one, but in a simpler way, is often important progress
- "Concise to write down and to grasp"
- Much of the perceived simplicity of an algorithm lies in its presentation
- Advantages of simplicity for implementation:
  - quicker to implement: fewer bugs, reduced effort for testing
  - maintainability: easier to understand and debug, easily adaptable if the specification changes
  - employment in resource-constrained environments, such as embedded systems (pure hardware implementations)







Lack of simplicity may make an implementation infeasible; algorithms initially dismissed as too complicated sometimes still find uses.

Fibonacci heaps: a data structure efficiently supporting decrease-key, an important operation in many graph algorithms.

decrease-key(v, k): given a pointer to an element v in the priority queue, lower its key (priority) to k.

Why is this hard?

- Lowering a node's priority might break the heap property
- Correcting the imbalance O(log n) layers deep in a tree might take O(log n) time per operation


Binomial tree: properties

In a binomial tree of degree k:

- the root has k children
- the tree has 2^k elements
- the tree has height k
- the children of the root are binomial trees of degree k−1, k−2, ..., 1, 0


Binomial Heap

A collection of binomial trees, each one a heap, and at most one tree of each degree.

- the binary representation of n determines which degree trees are present
- the largest tree has degree ⌊log₂ n⌋ and there are ≤ ⌊log₂ n⌋ + 1 trees
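The correspondence with the binary representation can be checked directly; a small sketch (the function name is mine, for illustration):

```python
def binomial_degrees(n):
    # Degrees of the trees in a binomial heap holding n elements:
    # exactly the set bits of n's binary representation.
    return [k for k in range(n.bit_length()) if n >> k & 1]

# 13 = 0b1101 -> one tree each of degree 0, 2 and 3
print(binomial_degrees(13))  # [0, 2, 3]
```

Since a degree-k tree has 2^k elements, the degrees always sum back to n, and the number of trees is the popcount of n, at most ⌊log₂ n⌋ + 1.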


Binomial Heap

- push: add to the list of roots, merge trees of equal degree
- pop-min: extract the smallest root item, promote its children, merge trees of equal degree
- decrease-key: while violating the min-heap property, exchange the value with its parent's value

merge: O(log n); push: O(1) amortized


Fibonacci heaps

- Similar to binomial heaps, but with a less rigid structure
- Binomial heap: eagerly consolidate trees after each insert
- Fibonacci heap: lazily defer consolidation until the next delete-min
- Set of heap-ordered trees
- Maintain a pointer to the minimum element
- Set of marked nodes


Dijkstra’s algorithm

Can be implemented with a priority queue using:

- O(n) total enqueues,
- O(n) total extract-mins and
- O(m) total decrease-keys.

Dijkstra's algorithm runtime: O(n·T_enq + n·T_ext + m·T_dec)

for all u ∈ V do
    dist(u) ← ∞; prev(u) ← nil;
dist(s) ← 0;
H ← heap over V (use dist-values as keys);
while H is not empty do
    u ← deletemin(H);
    for all (u,v) ∈ E do
        if dist(v) > dist(u) + length(u,v) then
            dist(v) ← dist(u) + length(u,v);
            prev(v) ← u; decreasekey(H, v, dist(v));
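The pseudocode above can be sketched in Python. The standard-library heapq has no decrease-key, so this version simulates it by pushing a fresh entry and skipping stale ones on extraction (a common workaround, not the Fibonacci-heap variant discussed next):

```python
import heapq

def dijkstra(graph, s):
    # graph: dict mapping vertex u -> list of (v, length) pairs
    dist = {u: float("inf") for u in graph}
    dist[s] = 0
    heap = [(0, s)]  # binary heap keyed by tentative distance
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale entry: a shorter path to u was already settled
        for v, length in graph[u]:
            if dist[v] > dist[u] + length:
                dist[v] = dist[u] + length
                # no decrease-key available: push a fresh entry instead
                heapq.heappush(heap, (dist[v], v))
    return dist

g = {"s": [("a", 2), ("b", 5)], "a": [("b", 1)], "b": []}
print(dijkstra(g, "s"))  # {'s': 0, 'a': 2, 'b': 3}
```

With lazy deletion, each edge may add a heap entry, so the heap holds up to m entries and the bound is the binary-heap O(m log n), not the Fibonacci-heap O(m + n log n).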




Fibonacci heap

In a binary heap: enqueue, extract-min, decrease-key take O(log n) time each → O(m log n).

Fibonacci heaps:

- enqueue, meld, find-min: O(1)
- extract-min: O(log n), amortized
- decrease-key: O(1), amortized

Cost of Dijkstra's algorithm: O(n + n log n + m) = O(m + n log n) (theoretically optimal for a comparison-based priority queue).






How to Achieve Simplicity

- Top-Down Design: a system is decomposed into several parts which can be independently designed and further subdivided
  - example: algorithms that work in phases (compilers are usually divided into a lexing, a parsing and a translation phase)
  - in software engineering: modularity
- Use standard algorithm design schemes, i.e. divide & conquer, dynamic programming, greedy, branch & bound, to simplify algorithms
- General-Purpose Modelers: cast a problem in terms of a general problem model like LPs, ILPs, CSPs, SAT
  - often the quickest way to obtain a solution


General-Purpose Modelers - example

GRAPH-BIPARTIZATION: find a minimum set of vertices to delete from a graph to make it bipartite.

min Σᵢ cᵢ

s.t. ∀{v,w} ∈ E: (s_v ≠ s_w) ∨ c_v ∨ c_w

c_i ∈ {0,1} (deletion set), s_i ∈ {0,1} (side), ∀i ∈ {1, ..., n}


Graph bipartization

For a graph G = (V,E), the following are equivalent:

1. G is bipartite: V can be partitioned into sets V1 and V2 (sides) s.t. there is no {v,w} ∈ E with both v,w ∈ V1 or both v,w ∈ V2.
2. V can be colored with two colors s.t. ∀{v,w} ∈ E the vertices v and w have different colors (the color classes correspond to the sides).
3. G does not contain odd cycles (cycles of odd length).


Graph bipartization

Input: G = (V,E), a nonnegative integer k

Task: Find X ⊆ V with |X| ≤ k s.t. each odd cycle in G contains at least one vertex from X (X: odd cycle cover).

The removal of all vertices in X from G results in a bipartite graph.


Graph-Bipartization/Maximum Bipartite Subgraph/Odd Cycle Transversal

s.t. ∀{v,w} ∈ E: s_v + s_w + (c_v + c_w) ≥ 1    (1)

     ∀{v,w} ∈ E: s_v + s_w − (c_v + c_w) ≤ 1    (2)

c_v = 1: v is part of the odd cycle cover; s_v models the side of the bipartite graph (that remains when deleting the vertices of the odd cycle cover)

(1) for an edge, either one endpoint has side 1, or the other has side 1, or one of them is in the cover;

(2) forbids that both endpoints have side 1 while neither is in the cover.
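On tiny instances the model can be checked by brute force rather than an ILP solver; a sketch of the disjunctive formulation (function name mine, exponential time, for illustration only):

```python
from itertools import product

def min_odd_cycle_cover(n, edges):
    # Brute-force the model: for every deletion set c, test whether
    # some side assignment s satisfies (s_v != s_w) or c_v or c_w
    # for every edge, i.e. G minus the deleted vertices is bipartite.
    best = None
    for c in product((0, 1), repeat=n):
        for s in product((0, 1), repeat=n):
            if all(s[v] != s[w] or c[v] or c[w] for v, w in edges):
                size = sum(c)
                best = size if best is None else min(best, size)
                break  # one feasible s suffices for this deletion set
    return best

# a triangle is an odd cycle: one deletion makes it bipartite
print(min_odd_cycle_cover(3, [(0, 1), (1, 2), (0, 2)]))  # 1
```

A real general-purpose modeler would hand the 0/1 variables and the linearized constraints (1) and (2) to an ILP solver instead of enumerating.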






How to Achieve Simplicity?

- Trade Off Guaranteed Performance
- Sometimes, bad worst-case performance comes from corner-case inputs; solution: employ randomness to avoid excessive resource usage on corner-case inputs.

Example: quicksort - choose a random pivot

Randomized algorithms use random values during execution: at different runs, the sequences of processing and even the results may be different.


Las Vegas algorithm

Las Vegas algorithm: a randomized algorithm that gives correct results, but carries a small probability of using more resources than expected

- running time is a random variable
- hard to analyze
- especially useful if the algorithm is often called within an application


Randomized QuickSort

- QuickSort complexity: the number of operations performed depends on the type of partitioning:
  - unbalanced (worst case) - repeatedly divides the subsequence into parts of very unequal size: O(n²)
  - balanced (most favorable case): O(n log n)
  - average case: O(n log n)
- Pick a pivot element uniformly at random from the array: the random selection of the pivot reduces the number of cases where the partitioning is unbalanced.

Randomized QuickSort sorts a given array of length n in O(n log n) expected time.

Monte Carlo algorithms

Monte Carlo algorithms carry some small chance that a solution will not be correct or optimal; they have a fixed, deterministic running time.

A Las Vegas algorithm can be converted into a Monte Carlo algorithm via early termination.


(Markov inequality) Let X be a random variable with non-negative domain. The probability that X is at least t is no greater than the expectation of X divided by t:

Pr[X ≥ t] ≤ E[X]/t,  or equivalently  Pr[X ≥ t·E[X]] ≤ 1/t


Transform Las Vegas to a Monte Carlo algorithm

Consider a Las Vegas algorithm with expected running time at most f(n).

Idea: Stop the algorithm after αf(n) time.

- Probability Pr[T > αf(n)] ≤ 1/α

⟹ Monte Carlo algorithm with running time αf(n) and error rate 1/α.
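A toy illustration of the truncation idea (all names are mine): random probing for a target would be a Las Vegas loop if unbounded; capping it at a fixed budget turns it into a Monte Carlo algorithm with one-sided error.

```python
import random

def las_vegas_find(target, items, budget):
    # Las Vegas core: probe random positions until the target appears.
    # The cap `budget` makes it Monte Carlo: a "False" answer may be
    # wrong, with probability at most (1 - 1/len(items)) ** budget
    # when the target is present.
    for _ in range(budget):
        if random.choice(items) == target:
            return True
    return False  # possibly a false negative
```

Choosing budget = α · (expected probes) bounds the error by 1/α, exactly the Markov argument above.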


Monte Carlo algorithm example: Verifying matrix multiplication

Verify A·B = C for given matrices A, B, C.

- compute A·B and compare to C → naive: Θ(n³), best known Θ(n^2.37)
- Faster randomized algorithm with small error probability:
  1. Choose r = (r₁, ..., r_n) ∈ {0,1}ⁿ at random.
  2. If A(Br) = Cr output "TRUE", else output "FALSE"

Running time: O(n²)

The algorithm uses a sampling and testing strategy.

If AB = C then A(Br) = Cr (i.e. Pr[ABr = Cr] = 1).


Verifying matrix multiplication

When will the algorithm make an error?

When AB ≠ C, and the r we choose satisfies ABr = Cr. For AB ≠ C, Pr[ABr = Cr] ≤ 1/2.

- the algorithm is wrong with probability ≤ 1/2
- it is correct at least 50% of the time: the probability that A(Br) ≠ Cr is at least 1/2


Verifying matrix multiplication


Claim: If AB ≠ C then Pr[ABr = Cr] ≤ 1/2.

Let D = AB − C ≠ 0; w.l.o.g., D_{1,1} ≠ 0.

ABr = Cr ⟺ (AB − C)r = 0 ⟺ Dr = 0

Assume Dr = 0. Then

Σ_{j=1}^{n} D_{1,j} r_j = 0 ⟺ r_1 = −(Σ_{j=2}^{n} D_{1,j} r_j) / D_{1,1}

- Fix all r_j but r_1
- If r_2, r_3, ..., r_n are fixed, there is at most one value of r_1 s.t. Dr = 0 ⟹ Pr[Dr = 0] ≤ 1/2


Verifying matrix multiplication

Probability amplification

Idea: Repeat the test k times with independent random choices for r.

- If some test reports "FALSE", we know for sure AB ≠ C
- If AB ≠ C, the probability that all tests report "TRUE" is at most 2^−k
- running time: O(k·n²)


Monte-Carlo algorithm example: MIN-CUT problem

Given a graph G, find a min-cut in G, i.e. a minimum-size set of edges whose removal breaks G into two or more components.

The fastest known deterministic algorithm takes O(nm log(n²/m)) time.


Monte-Carlo algorithm example: MIN-CUT problem


1. pick an edge (x,y) uniformly at random

2. contract the edge, keeping multi-edges, but removing self-loops

3. if there are more than 2 nodes, go back to 1; else, output the edges remaining as the cut.




How often does the procedure have to be repeated to meet a desired error probability?

The algorithm outputs a minimum cut with probability at least

2/(n(n−1)) = 1/C(n,2) ≥ 1/n².

Boost the success probability: repeat the whole process M times, independently, and return the best min-cut candidate. The failure probability is at most (1 − 1/n²)^M ≤ e^(−M/n²). If we repeat M = n² ln n times, the failure probability is at most 1/n.

Running time: M · min(O(m log m), O(n²)) ≤ O(mn² log² n)


The 2-SAT problem

Given a set of boolean clauses in CNF, each containing two literals, find a satisfying assignment if one exists.

(x1 + x4)(x4 + x3)(x2 + x3) ...

- Start with any tentative assignment
- If there is an unsatisfied clause, pick one of its two literals at random, and flip it.
- If no solution is found in 2n² steps, declare "none exists".

Monte Carlo: If a solution exists, it will be found with probability > 1/2. If not, the algorithm always declares "none exists".
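The random walk above can be sketched directly (encoding is mine: a literal is a nonzero integer, i for x_i and −i for its negation):

```python
import random

def random_walk_2sat(clauses, n, max_steps=None):
    # clauses: list of 2-tuples of literals over variables 1..n.
    if max_steps is None:
        max_steps = 2 * n * n  # the 2n^2 step bound from the slide
    assign = {i: False for i in range(1, n + 1)}  # any tentative start

    def satisfied(lit):
        return assign[abs(lit)] == (lit > 0)

    for _ in range(max_steps):
        unsat = [c for c in clauses
                 if not satisfied(c[0]) and not satisfied(c[1])]
        if not unsat:
            return assign  # satisfying assignment found
        lit = random.choice(random.choice(unsat))  # random literal, flip it
        assign[abs(lit)] = not assign[abs(lit)]
    return None  # declare "none exists"; wrong with probability < 1/2
```

Each flip moves the assignment one step closer to or further from a fixed satisfying assignment with probability at least 1/2 each, which is why 2n² steps suffice in expectation.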


Effects on Analysis

- Intuitively, a simpler algorithm should be easier to analyze for performance measures such as worst-case runtime, memory use, or solution quality.

Example: VERTEX COVER: find a subset of vertices s.t. every edge has at least one endpoint in the subset; a simple algorithm has an approximation factor of 2, whereas the currently best approximation achieves a factor of 2 − Θ(1/√(log n)) (requiring advanced concepts to analyze).


Vertex Cover

C ← ∅;
while E ≠ ∅ do
    pick any {u,v} ∈ E;
    C ← C ∪ {u,v};
    delete all edges incident with either u or v;

This vertex cover has size ≤ 2 × minimum size (optimal solution); a 2-approximation algorithm.
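The loop above translates almost line for line (function name mine):

```python
def vertex_cover_2approx(edges):
    # Pick an uncovered edge, take BOTH endpoints, repeat.
    # Any optimal cover must contain at least one endpoint of each
    # picked edge, and the picked edges share no endpoints, so
    # |cover| <= 2 * |optimal cover|.
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge not yet covered
            cover.update((u, v))
    return cover

# path 0-1-2-3: returns {0, 1, 2, 3}; an optimal cover is {1, 2}
print(sorted(vertex_cover_2approx([(0, 1), (1, 2), (2, 3)])))
```

The picked edges form a matching, which is the entire approximation argument: a single simple invariant, in contrast with the 2 − Θ(1/√(log n)) algorithm.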


Effects on Analysis

- Sometimes simplicity and analyzability seem to be mutually exclusive properties

SHORTEST COMMON SUPERSTRING: given a set S = {S1, ..., Sn} of strings, find the shortest string that contains each element of S as a contiguous substring

- a greedy algorithm repeatedly merges the two strings with the largest overlap, until one string remains

One can find an example where the resulting superstring is twice as long as an optimal one, but no worse example is known; only an upper bound of 3.5 has been proven so far. The "best" algorithm provides a factor-2.5 approximation - it has never been implemented.



