Transcript and Presenter's Notes

Title: DESIGN OF ALGORITHMS


1
DESIGN OF ALGORITHMS
  • RESUME OF COURSE
  • CSC 321

2
Problems we try to resolve
  • Solving complex problems quickly
  • Finding efficient ways to solve problems in
    minimal time
  • Using least memory possible for large problems

3
How to do that?
  • Separate the structure of the solution from its
    implementation.
  • Use Object Based Programming (OBP) or Object
    Oriented Programming (OOP)
  • Determine the complexity of the algorithm and how
    to reduce it.
  • Find data structures that make algorithms more
    efficient.

4
Complexity
  • One way to measure the speed of an algorithm is
    to obtain the execution run time. This tells us
    only something about the current data.
  • Another way is to see if the time becomes
    horrendous as we go from processing 100 items to
    a million items.

5
Different possibilities
  • 10^6 items are processed 1000 times slower than
    10^3 items → linear.
  • 10^6 items are processed 1000^2 times slower than
    10^3 items → quadratic.
  • 10^6 items are processed only about (log 1000 = 3) times
    slower than 10^3 items → logarithmic.
  • 10^6 items are processed astronomically (on the order of
    e^1000 times) slower than 10^3 items → exponential.

6
Examples
  • for (int i = 0; i < k; i++)                          → linear
  • for (int i = 0; i < m; i++)
      for (int j = 0; j < k; j++)                        → quadratic
  • for (int i = 1; i < m; i *= 2)                       → repeated doubling
  • Repeated halving or doubling: how many times
    should N be halved to give 1?
    2^k = N, so k = log2 N                               → logarithmic

7
Searching
  • Sequential search is O(N).
  • Binary search uses repeated halving and is O(log N).

8
public static int binarySearch( Comparable [ ] a, Comparable x )
        throws ItemNotFound {
    int low = 0;
    int high = a.length - 1;
    int mid;
    while( low <= high ) {
        mid = ( low + high ) / 2;
        if( a[ mid ].compares( x ) < 0 )
            low = mid + 1;
        else if( a[ mid ].compares( x ) > 0 )
            high = mid - 1;
        else
            return mid;
    }
    throw new ItemNotFound( "Binary search fails" );
}
9
Interpolation search: finding a better guess than
the middle
If each access is expensive (e.g. a disk access)
and the data is uniformly distributed, interpolation
search is faster than binary search. mid is
replaced by next:
next = low + (x - a[low]) * (high - low - 1) / (a[high] - a[low])
Worst-case complexity is O(N). Average case is
O(log log N).
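To make the formula concrete, here is a minimal sketch of interpolation search on a sorted int array; it uses plain ints and returns -1 on failure rather than the course's Comparable type and ItemNotFound exception, and the clamping of next is an extra safety check not shown on the slide.

// Illustrative sketch: interpolation search on a sorted int[].
public static int interpolationSearch( int [ ] a, int x ) {
    int low = 0, high = a.length - 1;
    while( low <= high && x >= a[ low ] && x <= a[ high ] ) {
        if( a[ high ] == a[ low ] )                       // avoid division by zero
            return ( a[ low ] == x ) ? low : -1;
        // guess a position proportional to where x lies between a[low] and a[high]
        int next = low + (int) ( (long) ( x - a[ low ] ) * ( high - low - 1 )
                                 / ( a[ high ] - a[ low ] ) );
        if( next < low )  next = low;                     // keep the guess inside the range
        if( next > high ) next = high;
        if( a[ next ] < x )       low  = next + 1;
        else if( a[ next ] > x )  high = next - 1;
        else                      return next;
    }
    return -1;                                            // not found
}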
10
Recursion
public static int recursive( int n ) {
    if( n == 1 )
        return 1;                          // base case
    else
        return n * recursive( n - 1 );     // recursive step (computes n!)
}
11
The recursion stack

[Figure: the run-time stack for recursive(5). A frame is pushed for
n = 5, 4, 3, 2, 1; the n = 1 call returns 1, and as each frame is popped
off the top the results 2, 6, 24 and finally 120 are passed back down.]
12
Binary search as a recursive routine
public static int binSearch( Comparable [ ] a, Comparable x )
        throws ItemNotFound {
    return binSearch( a, x, 0, a.length - 1 );
}

private static int binSearch( Comparable [ ] a, Comparable x,
                              int low, int high ) throws ItemNotFound {
    if( low > high )
        throw new ItemNotFound( "Binary search failed" );
    int mid = ( low + high ) / 2;
    if( a[ mid ].compares( x ) < 0 )
        return binSearch( a, x, mid + 1, high );
    else if( a[ mid ].compares( x ) > 0 )
        return binSearch( a, x, low, mid - 1 );
    else
        return mid;
}
13
Data Structures
  • Stacks
  • Queues
  • Linked lists
  • General trees
  • Binary search trees
  • Hash tables
  • Priority queues (heaps)

14
Examples
  • Matching parentheses - stacks (see the sketch below)
  • Printer queues - queues
  • Database manipulations - linked lists
  • Operating system structures - trees
  • String searches, smallest & largest item
    searches - trees
  • Compiler symbol table searches, dictionary
    searches - hash tables
  • Bank queues - heaps (priority queues)
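For the first example above, here is a small self-contained sketch of parenthesis matching using java.util.ArrayDeque as the stack (a library class rather than the course's StackAr; the class and method names are illustrative).

import java.util.ArrayDeque;
import java.util.Deque;

public final class ParenChecker {
    // Returns true if every (, [, { is closed by the matching ), ], } in the right order.
    public static boolean isBalanced( String s ) {
        Deque<Character> stack = new ArrayDeque<>( );
        for( char c : s.toCharArray( ) ) {
            if( c == '(' || c == '[' || c == '{' )
                stack.push( c );                       // remember the opener
            else if( c == ')' || c == ']' || c == '}' ) {
                if( stack.isEmpty( ) )
                    return false;                      // closer with no opener
                char open = stack.pop( );
                if( ( c == ')' && open != '(' ) ||
                    ( c == ']' && open != '[' ) ||
                    ( c == '}' && open != '{' ) )
                    return false;                      // mismatched pair
            }
        }
        return stack.isEmpty( );                       // no opener left unclosed
    }

    public static void main( String [ ] args ) {
        System.out.println( isBalanced( "a[i] = (b + c) * {d}" ) );  // true
        System.out.println( isBalanced( "(]" ) );                    // false
    }
}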

15
Stacks
[Figure: pushing A then B onto an empty stack moves the top-of-stack
index tos from -1 (empty) to 0 to 1; popping moves it back down.]
Can be implemented as a dynamic array.
16
public void push( Object x ) {
    if( topOfStack + 1 == theArray.length )
        doubleArray( );
    theArray[ ++topOfStack ] = x;
}

public void pop( ) throws Underflow {
    if( isEmpty( ) )
        throw new Underflow( "Stack pop" );
    topOfStack--;
}

public Object topAndPop( ) throws Underflow {
    if( isEmpty( ) )
        throw new Underflow( "Stack topAndPop" );
    return theArray[ topOfStack-- ];
}
17
Linked lists
class ListNode {
    Object element;
    ListNode next;
}

// insert x after current:
current.next = new ListNode( x, current.next );
// remove the node after current:
current.next = current.next.next;
18
General Trees
  • A rooted tree has a root node and other nodes
    connected in pairs by edges.
  • Each node, except for the root, is connected by a
    unique edge to a parent node, p.
  • Each node, except for the leaf nodes, has one or
    more children, c.
  • The length of a path from one node to another is
    the number of edges traversed.
  • The children of each node will be implemented as
    a linked list.
  • Each node references its leftmost child and its
    right sibling.
  • The last child (a leaf) has no child.
  • The rightmost sibling has no sibling.

19
First child / Next sibling representation
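A minimal sketch of a node in this representation (the class and field names are illustrative, not the course's code): each node holds only a reference to its leftmost child and to its right sibling, so visiting the children is a walk along the sibling chain.

class GeneralTreeNode {
    Object element;
    GeneralTreeNode firstChild;    // leftmost child, or null for a leaf
    GeneralTreeNode nextSibling;   // right sibling, or null for the rightmost child

    GeneralTreeNode( Object element ) {
        this.element = element;
    }

    // Visit this node, then each subtree in turn (a preorder walk of the general tree).
    void print( String indent ) {
        System.out.println( indent + element );
        for( GeneralTreeNode c = firstChild; c != null; c = c.nextSibling )
            c.print( indent + "  " );
    }
}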
20
Operations on trees
  • Find a particular object
  • Retrieve an object
  • Remove an object
  • Insert an object
  • Go to the root, or the first child, or the first
    sibling
  • Check if a node is valid, if the tree is empty,
    or empty it out

21
Binary trees
  • A binary tree is a tree in which each node has at
    most 2 children.
  • The children are named left and right.
  • The recursive definition is that it is either
    empty, or it has a root, a left sub-tree, and a
    right sub-tree.
  • Important examples are expression trees in
    language syntax and Huffman trees for compression
    algorithms.
  • We can traverse a tree in PreOrder, PostOrder,
    InOrder or Level Order.

22
Examples
[Figure: expression tree for (a + b) * (c - d).]
[Figure: Huffman coding tree for data compression; in the example, b is
encoded as 100, c as 101, and a as 0.]
23
final class BinaryNode {
    BinaryNode( ) { this( null ); }                   // a null node
    BinaryNode( Object theElement ) {                 // a leaf
        this( theElement, null, null );
    }
    BinaryNode( Object theElement, BinaryNode rt, BinaryNode lt ) {
        element = theElement;
        left = lt;
        right = rt;
    }
    Object element;
    BinaryNode left;
    BinaryNode right;
}

public class BinaryTree {
    public BinaryTree( ) { root = null; }
    public BinaryTree( Object rootItem ) { root = new BinaryNode( rootItem ); }
    private BinaryNode root;
}

Tree node code
24
Tree generation, size & height
BinaryNode duplicate( ) {
    BinaryNode root = new BinaryNode( element );
    if( left != null )
        root.left = left.duplicate( );
    if( right != null )
        root.right = right.duplicate( );
    return root;
}

static int size( BinaryNode t ) {
    if( t == null )
        return 0;
    else
        return 1 + size( t.left ) + size( t.right );
}

static int height( BinaryNode t ) {
    if( t == null )
        return -1;
    else
        return 1 + Math.max( height( t.left ), height( t.right ) );
}
25
Traversal Routes
Root - left subtree - right subtree: PreOrder
Left subtree - right subtree - root: PostOrder
Left subtree - root - right subtree: InOrder
26

void printPreOrder( ) {
    System.out.println( element );
    if( left != null )
        left.printPreOrder( );
    if( right != null )
        right.printPreOrder( );
}

void printPostOrder( ) {
    if( left != null )
        left.printPostOrder( );
    if( right != null )
        right.printPostOrder( );
    System.out.println( element );
}

void printInOrder( ) {
    if( left != null )
        left.printInOrder( );
    System.out.println( element );
    if( right != null )
        right.printInOrder( );
}
27
Implementation of traversal methods
  • A stack will be maintained to store the current
    state, with the current node on top of the stack.
  • The root will be pushed onto the stack, then the
    left sub-tree, then the right sub-tree.
  • As we finish processing each node on top of the
    stack, it is popped from the stack.
  • A counter will be maintained for each stacked node.
  • The counter contains 1 if we are about to
    process the node's left sub-tree.
  • The counter contains 2 if we are about to
    process the node's right sub-tree.
  • The counter contains 3 if we are about to
    process the node itself.
  • We may be pushing and popping a null (non-existent)
    sub-tree.

28
public class PostOrder extends TreeIterator {
    public PostOrder( BinaryTree theTree ) {
        super( theTree );
        s = new StackAr( );
        s.push( new StNode( t.root ) );
    }

    public void first( ) {
        s.makeEmpty( );
        if( t.root != null )
            s.push( new StNode( t.root ) );
        try {
            advance( );
        } catch( ItemNotFound e ) { }        // cannot happen
    }

    protected Stack s;

    protected static class StNode {
        BinaryNode node;
        int timesPopped;
        StNode( BinaryNode n ) { node = n; timesPopped = 0; }
    }
    // the advance( ) method is shown on the next slide
}
29
public void advance( ) throws ItemNotFound {
    if( s.isEmpty( ) ) {
        if( current == null )
            throw new ItemNotFound( "No item found" );
        current = null;
        return;
    }

    StNode cnode;
    for( ; ; ) {
        try {
            cnode = (StNode) s.topAndPop( );
        } catch( Underflow e ) { return; }           // cannot happen

        if( ++cnode.timesPopped == 3 ) {             // third visit: process the node itself
            current = cnode.node;
            return;
        }
        s.push( cnode );
        if( cnode.timesPopped == 1 ) {               // first visit: go left
            if( cnode.node.left != null )
                s.push( new StNode( cnode.node.left ) );
        } else {                                     // second visit: go right
            if( cnode.node.right != null )
                s.push( new StNode( cnode.node.right ) );
        }
    }
}
30
public class InOrder extends PostOrder {
    public InOrder( BinaryTree theTree ) {
        super( theTree );
    }

    public void advance( ) throws ItemNotFound {
        if( s.isEmpty( ) ) {
            if( current == null )
                throw new ItemNotFound( "No item found" );
            current = null;
            return;
        }

        StNode cnode;
        for( ; ; ) {
            try {
                cnode = (StNode) s.topAndPop( );
            } catch( Underflow e ) { return; }       // cannot happen

            if( ++cnode.timesPopped == 2 ) {         // second visit: process, then go right
                current = cnode.node;
                if( cnode.node.right != null )
                    s.push( new StNode( cnode.node.right ) );
                return;
            }
            s.push( cnode );                         // first visit: go left
            if( cnode.node.left != null )
                s.push( new StNode( cnode.node.left ) );
        }
    }
}
31
public class PreOrder extends TreeIterator {
    public PreOrder( BinaryTree theTree ) {
        super( theTree );
        s = new StackAr( );
        s.push( t.root );
    }

    public void first( ) {
        s.makeEmpty( );
        if( t.root != null )
            s.push( t.root );
        try {
            advance( );
        } catch( ItemNotFound e ) { }                // cannot happen
    }

    private Stack s;

    public void advance( ) throws ItemNotFound {
        if( s.isEmpty( ) ) {
            if( current == null )
                throw new ItemNotFound( "No item found" );
            current = null;
            return;
        }
        try {
            current = (BinaryNode) s.topAndPop( );
        } catch( Underflow e ) { return; }           // cannot happen

        if( current.right != null )                  // push right first so left is visited first
            s.push( current.right );
        if( current.left != null )
            s.push( current.left );
    }
}
32
Breadth-first search or level-order traversal
We can traverse the tree level by level. This is
used mostly in breadth-first searches in AI
(Artificial Intelligence). It is implemented with a
queue.
33
public class LevelOrder extends TreeIterator {
    public LevelOrder( BinaryTree theTree ) {
        super( theTree );
        q = new QueueAr( );
        q.enqueue( t.root );
    }

    public void first( ) {
        q.makeEmpty( );
        if( t.root != null )
            q.enqueue( t.root );
        try {
            advance( );
        } catch( ItemNotFound e ) { }                // cannot happen
    }

    private Queue q;

    public void advance( ) throws ItemNotFound {
        if( q.isEmpty( ) ) {
            if( current == null )
                throw new ItemNotFound( "No item" );
            current = null;
            return;
        }
        try {
            current = (BinaryNode) q.dequeue( );
        } catch( Underflow e ) { return; }           // cannot happen

        if( current.left != null )
            q.enqueue( current.left );
        if( current.right != null )
            q.enqueue( current.right );
    }
}
34
Binary search trees
  • A binary search tree gives O(log N) complexity
    instead of the O(N) complexity of an ordinary tree.
  • For a client DB we might want to look up the max.
    sale in a month.
  • For a supplier DB we might want to look up the min.
    cost item.
  • We want a tree that permits finding the middle
    element quickly, then restricting the search to half
    the tree, then looking for the middle again in what's
    left of the tree, and so on.

35
Structure
We want smaller values in the left sub-tree and larger
values in the right sub-tree.
To insert an element, go down the tree: left if smaller,
right if bigger. Removing an element may involve
restructuring a whole sub-tree.
36
Algorithm
If the root of a sub-tree has 2 children, replace the
removed node with the smallest item in its right sub-tree.

class BinaryNode {
    BinaryNode( Comparable e ) {
        this( e, null, null );
    }
    BinaryNode( Comparable e, BinaryNode lt, BinaryNode rt ) {
        element = e;
        left = lt;
        right = rt;
    }
    Comparable element;
    BinaryNode left;
    BinaryNode right;
    int size = 1;
}
37
protected BinaryNode find( Comparable x, BinaryNode t ) throws ItemNotFound {
    while( t != null ) {
        if( x.compares( t.element ) < 0 )
            t = t.left;
        else if( x.compares( t.element ) > 0 )
            t = t.right;
        else
            return t;
    }
    throw new ItemNotFound( "None found" );
}

protected BinaryNode findMin( BinaryNode t ) throws ItemNotFound {
    if( t == null )
        throw new ItemNotFound( "None found" );
    while( t.left != null )
        t = t.left;
    return t;
}

protected BinaryNode insert( Comparable x, BinaryNode t ) throws DuplicateItem {
    if( t == null )
        t = new BinaryNode( x, null, null );
    else if( x.compares( t.element ) < 0 )
        t.left = insert( x, t.left );
    else if( x.compares( t.element ) > 0 )
        t.right = insert( x, t.right );
    else
        throw new DuplicateItem( "Duplicate item found" );
    return t;
}
38
protected BinaryNode remove( Comparable x, BinaryNode t ) throws ItemNotFound {
    if( t == null )
        throw new ItemNotFound( "None found" );
    if( x.compares( t.element ) < 0 )
        t.left = remove( x, t.left );
    else if( x.compares( t.element ) > 0 )
        t.right = remove( x, t.right );
    else if( t.left != null && t.right != null ) {    // two children
        t.element = findMin( t.right ).element;
        t.right = removeMin( t.right );
    } else
        t = ( t.left != null ) ? t.left : t.right;    // at most one child
    return t;
}
39
public class RankedBinarySearchTree extends BinarySearchTree {
    public Comparable findKth( int k ) throws ItemNotFound {
        return findKth( k, root ).element;
    }

    protected BinaryNode findKth( int k, BinaryNode t ) throws ItemNotFound {
        if( t == null )
            throw new ItemNotFound( "Not found" );
        int leftSize = ( t.left != null ) ? t.left.size : 0;
        if( k <= leftSize )
            return findKth( k, t.left );
        else if( k == leftSize + 1 )
            return t;
        else
            return findKth( k - leftSize - 1, t.right );
    }
}
40
Complexity of Binary Search Tree operations & AVL
Trees
  • The cost of each operation is proportional to the
    number of node accesses, so each node's cost is
    (1 + its depth).
  • The depth depends on whether each node has 2
    children (a totally balanced tree) or each node has
    1 child (a totally unbalanced tree).
  • The depth of a balanced tree is O(log N); that of a
    totally unbalanced tree is N - 1.
  • For any node in an AVL tree, the heights of the
    left and right sub-trees can differ by at most 1.

41
Rebalancing an AVL tree
If b replaces a, and a replaces g (in the figure), the
relative values remain the same.
For the left sub-tree of the left child or the
right sub-tree of the right child, a single
rotation of the tree will not change the relative
order of the nodes.

static BinaryNode LtLtChild( BinaryNode a ) {    // single rotation with the left child
    BinaryNode b = a.left;
    a.left = b.right;     // b's right sub-tree becomes a's left sub-tree
    b.right = a;          // a becomes b's right child
    return b;             // b is the new root of this sub-tree
}
42
Double rotation for a left-right or right-left child
We rotate nodes such that the right child follows
the right rotation (because it is greater), but
the left child remains behind (because it is
smaller).

// Double rotation: insertion occurred in the right sub-tree of the left child.
static BinaryNode doubleLtRt( BinaryNode a ) {
    a.left = RtRtChild( a.left );   // first rotate the left child with its right child
    return LtLtChild( a );          // then rotate a with its (new) left child
}
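The double rotation above calls RtRtChild, which is not reproduced in this transcript; presumably it is the mirror image of LtLtChild on the previous slide, i.e. a single rotation with the right child, sketched here.

// Presumed mirror of LtLtChild: single rotation with the right child.
static BinaryNode RtRtChild( BinaryNode a ) {
    BinaryNode b = a.right;
    a.right = b.left;      // b's left sub-tree becomes a's right sub-tree
    b.left = a;            // a becomes b's left child
    return b;              // b is the new root of this sub-tree
}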
43
SORTING
  • Insertion sort (complexity Ω(N) best case, O(N^2)
    worst case)
  • Shell sort (complexity between O(N^(3/2)) and O(N^2),
    depending on the gap sequence)
  • Merge sort (complexity O(N log N))
  • Quick sort (complexity O(N log N) on average, but its
    running time is faster in practice than Merge sort's)

44
Insertion Sort
public static void insertionSort( Comparable [ ] a ) {
    for( int p = 1; p < a.length; p++ ) {
        Comparable temp = a[ p ];
        int j = p;
        for( ; j > 0 && temp.lessThan( a[ j - 1 ] ); j-- )
            a[ j ] = a[ j - 1 ];
        a[ j ] = temp;
    }
}

For an unsorted array the complexity is O(N^2). For
an already sorted array the complexity is O(N).
45
Shell sort
public static void shellSort( Comparable [ ] a ) {
    for( int gap = a.length / 2; gap > 0;
             gap = ( gap == 2 ) ? 1 : (int) ( gap / 2.2 ) ) {
        for( int i = gap; i < a.length; i++ ) {
            Comparable temp = a[ i ];
            int j = i;
            for( ; j >= gap && temp.lessThan( a[ j - gap ] ); j -= gap )
                a[ j ] = a[ j - gap ];
            a[ j ] = temp;
        }
    }
}
46
Merge sort
private static void mergeSort( Comparable [ ] a, Comparable [ ] tmpArray,
                               int left, int right ) {
    if( left < right ) {
        int center = ( left + right ) / 2;
        mergeSort( a, tmpArray, left, center );
        mergeSort( a, tmpArray, center + 1, right );
        merge( a, tmpArray, left, center + 1, right );
    }
}

public static void mergeSort( Comparable [ ] a ) {
    Comparable [ ] tmpArray = new Comparable[ a.length ];
    mergeSort( a, tmpArray, 0, a.length - 1 );
}
47
private static void merge( Comparable [ ] a, Comparable [ ] tmpArray,
                           int leftPos, int rightPos, int rightEnd ) {
    int leftEnd = rightPos - 1;
    int tmpPos = leftPos;
    int numElements = rightEnd - leftPos + 1;

    while( leftPos <= leftEnd && rightPos <= rightEnd )
        if( a[ leftPos ].lessThan( a[ rightPos ] ) )
            tmpArray[ tmpPos++ ] = a[ leftPos++ ];
        else
            tmpArray[ tmpPos++ ] = a[ rightPos++ ];

    while( leftPos <= leftEnd )                  // copy rest of first half
        tmpArray[ tmpPos++ ] = a[ leftPos++ ];

    while( rightPos <= rightEnd )                // copy rest of second half
        tmpArray[ tmpPos++ ] = a[ rightPos++ ];

    for( int i = 0; i < numElements; i++, rightEnd-- )   // copy tmpArray back
        a[ rightEnd ] = tmpArray[ rightEnd ];
}
48
Quick sort
  • We approximate the median by taking the median of
    3 numbers in the array, namely the first, the
    middle and the last elements.
  • We put all numbers bigger than the pivot to the
    right, and all numbers smaller than the pivot to
    the left.
  • Pointer i starts at low, goes left to right, and
    finds large numbers.
  • Pointer j starts at high, goes right to left, and
    finds small numbers.

49
Algorithm
  • To find the median we must sort 3 elements, so we
    get a head start by placing them in order in the
    array; we then place the pivot at the next-to-last
    position.
  • We can optimize further by stopping Quicksort when
    the subsets fall below a certain cutoff, usually
    10 elements, and using Insertion sort instead.

50
private static void quicksort( Comparable [ ] a, int low, int high, int cutoff ) {
    if( low + cutoff > high )
        insertionSort( a, low, high );
    else {
        // Median-of-three: sort a[low], a[middle], a[high]
        int middle = ( low + high ) / 2;
        if( a[ middle ].lessThan( a[ low ] ) )  swap( a, low, middle );
        if( a[ high ].lessThan( a[ low ] ) )    swap( a, low, high );
        if( a[ high ].lessThan( a[ middle ] ) ) swap( a, middle, high );

        // Place pivot at position high - 1
        swap( a, middle, high - 1 );
        Comparable pivot = a[ high - 1 ];

        // Partition
        int i, j;
        for( i = low, j = high - 1; ; ) {
            while( a[ ++i ].lessThan( pivot ) )
                ;
            while( pivot.lessThan( a[ --j ] ) )
                ;
            if( i < j )
                swap( a, i, j );
            else
                break;
        }

        // Restore pivot
        swap( a, i, high - 1 );

        quicksort( a, low, i - 1, cutoff );
        quicksort( a, i + 1, high, cutoff );
    }
}

public static void quicksort( Comparable [ ] a ) {
    quicksort( a, 0, a.length - 1, 10 );   // cutoff of about 10 elements, as noted above
}
51
Hash Tables
  • Hash tables offer O(1) (constant) average complexity.
  • The objects need not implement the Comparable
    interface.
  • To find a word in a dictionary we need a key for
    each word.
  • The key could be the ASCII representation of the
    letters, but that takes 8 bits per character.
  • So we need a way to map large keys (words) onto
    small keys, i.e. table indexes (see the sketch below).
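As a sketch of how a word can be folded into a small table index, here is a typical polynomial string hash (the method name and the multiplier 37 are illustrative choices, not necessarily the ones used in the course).

// Illustrative polynomial hash: treats the word as digits base 37 and folds it into [0, tableSize).
public static int hash( String key, int tableSize ) {
    int hashVal = 0;
    for( int i = 0; i < key.length( ); i++ )
        hashVal = 37 * hashVal + key.charAt( i );   // may overflow; fixed up below
    hashVal %= tableSize;
    if( hashVal < 0 )                               // Java's % can be negative after overflow
        hashVal += tableSize;
    return hashVal;
}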

52
Algorithm
  • Collisions can be treated by linear probing: when
    a collision occurs, we just look at the next
    available space, wrapping around if necessary.
  • To find an item we go to its key; if we do not
    find it there, we go to the next position, and so on.
  • To remove an item we mark it deleted but leave it
    in place; otherwise we introduce blank spaces which
    prevent us from finding other items.
  • With linear probing we may cluster many elements
    in one area and have to traverse far to get a free
    space.
  • We can instead use quadratic probing: if X hashes
    to H (mod N) and space H is occupied, we place X in
    space H + 1^2; if that is full we try H + 2^2, and
    so on (see the sketch below).
  • With quadratic probing we can always place an item
    if N is prime and the table is at most half full.
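A minimal sketch of quadratic-probing insertion into a plain String array (null marks an empty slot); the names are illustrative, not the course's table class. The probe sequence H, H + 1^2, H + 2^2, ... is generated incrementally, since consecutive squares differ by 2k - 1.

// Illustrative quadratic-probing insert into a String[] table (null = empty slot).
// Assumes the table size is prime and the table is kept at most half full,
// so a free slot is always found.
public static void insert( String [ ] table, String x ) {
    int i = hash( x, table.length );               // home position H (hash as sketched earlier)
    int k = 1;
    while( table[ i ] != null && !table[ i ].equals( x ) ) {
        i = ( i + 2 * k - 1 ) % table.length;      // step from H + (k-1)^2 to H + k^2, wrapping around
        k++;
    }
    table[ i ] = x;                                // empty slot (or the key is already there)
}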

53
Graphs
  • A graph G = (V, E) has a set of vertices
    (nodes) and a set of edges (arcs).
  • A directed graph (digraph) has ordered edge
    pairs.
  • An edge is a pair (v, w) where v, w ∈ V. It may
    have a cost or weight (v, w, c) associated with
    it.
  • v is adjacent to w if (v, w) ∈ E.
  • |V| is the number of nodes, |E| is the number of
    arcs, and |S| denotes the size of a set S.

54
Paths
  • A path is a sequence of vertices (v1, v2, ..., vN)
    such that (vi, vi+1) ∈ E for 1 ≤ i < N.
  • The length of a path is the number of edges, N - 1.
  • The weighted path length is the sum of the costs
    of the edges on the path:
    the sum over i = 1 to N-1 of c(vi, vi+1).
  • A cycle is a path such that v1 = vN.
  • A DAG (directed acyclic graph) is a directed
    graph with no cycle.
  • A dense graph is one with most edges present:
    |E| = Θ(|V|^2).
  • A sparse graph is one with few edges present:
    |E| = Θ(|V|).

55
Data structures of weighted graphs
Adjacency matrix for dense graphs: entries originally
set to INFINITY; created in O(|V|^2) time.
Adjacency list for sparse graphs: created in
O(|V| + |E|) time.
56
Data structures for shortest path
Graph table for shortest path.
57
Code for a weighted graph

class Edge {
    public int dest;
    public int cost;
    public Edge( int d, int c ) { dest = d; cost = c; }
}

class Vertex {
    String name;
    List adj;          // adjacency list of Edge objects
    int dist;
    int prev;
    int scratch;
    Vertex( String nme ) { name = nme; adj = new LinkedList( ); }
}

public Graph( ) {
    numVertices = 0;
    table = new Vertex[ INIT_TABLE_SIZE ];
    vertexMap = new QuadraticProbingTable( );
}
58
private void addInternalEdge( int source, int dest, int cost ) {
    ListItr p = new LinkedListItr( table[ source ].adj );
    try {
        p.insert( new Edge( dest, cost ) );
    } catch( ItemNotFound e ) { }                // cannot happen
}

public void addEdge( String source, String dest, int cost ) {
    addInternalEdge( addNode( source ), addNode( dest ), cost );
}

private void clearData( ) {
    for( int i = 0; i < numVertices; i++ ) {
        table[ i ].dist = INFINITY;
        table[ i ].prev = NULL_VERTEX;
        table[ i ].scratch = 0;
    }
}
59
Code for printing the shortest path

private void printPathRec( int destNode ) {
    if( table[ destNode ].prev != NULL_VERTEX )
        printPathRec( table[ destNode ].prev );
    System.out.print( " to node " );
    System.out.print( table[ destNode ].name );
}

private void printPath( int destNode ) {
    if( table[ destNode ].dist == INFINITY )
        System.out.println( table[ destNode ].name + " is unreachable" );
    else {
        printPathRec( destNode );
        System.out.println( " (cost is " + table[ destNode ].dist + ")" );
    }
    System.out.println( );
}
60
Unweighted shortest path algorithm
  • The unweighted shortest-path problem counts the
    number of edges traversed.
  • We start with a node and find all nodes 0 edges
    away, then 1 edge away, then 2, and so on.
  • Since each node is found only once, the algorithm
    is linear in the size of the graph, O(|V| + |E|).

61
Code for unweighted shortest path
private void unweighted( int startNode ) {
    int v, w;
    Queue q = new QueueAr( );
    clearData( );
    table[ startNode ].dist = 0;
    q.enqueue( new Integer( startNode ) );

    try {
        while( !q.isEmpty( ) ) {
            v = ( (Integer) q.dequeue( ) ).intValue( );   // front of the queue

            ListItr p = new LinkedListItr( table[ v ].adj );
            for( ; p.isInList( ); p.advance( ) ) {
                w = ( (Edge) p.retrieve( ) ).dest;
                if( table[ w ].dist == INFINITY ) {       // w not reached yet
                    table[ w ].dist = table[ v ].dist + 1;
                    table[ w ].prev = v;
                    q.enqueue( new Integer( w ) );
                }
            }
        }
    } catch( Underflow e ) { }                            // cannot happen
}
62
Weighted shortest path - Dijkstra's algorithm
  • Find all vertices v such that dv > ds + c(s,v) and
    set dv = ds + c(s,v) (a sketch of Dijkstra's
    algorithm with a priority queue follows this list).
  • The value obtained for a vertex may be
    re-evaluated later.
  • Dijkstra's algorithm does not work if we have
    negative costs.
  • If we have negative costs we can get into
    infinite loops as we find cheaper and cheaper
    paths to reach a node.
  • When a vertex is enqueued, we increment
    scratch. When it is dequeued we increment it
    again.
  • With no negative cycle a vertex can be dequeued at
    most |V| times, so the algorithm is at most
    O(|E|·|V|). If a vertex is dequeued more than |V|
    times we have a negative cycle.
  • scratch is odd if the vertex is on the queue,
    and scratch/2 is the number of times it has left
    the queue.
  • If a queued element has its distance changed, we
    add 2 to scratch, as if it had been dequeued and
    re-enqueued.
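The next slide shows only the negative-weight variant; for comparison, here is a compact sketch of Dijkstra's algorithm itself using java.util.PriorityQueue. The int[][][] adjacency representation and all names here are my own, not the course's Graph class, and non-negative edge costs of moderate size are assumed.

import java.util.Arrays;
import java.util.PriorityQueue;

public final class DijkstraSketch {
    // adj[v] is an array of {w, cost} pairs; returns dist[] of shortest distances from start.
    public static int[] dijkstra( int[][][] adj, int start ) {
        final int INF = Integer.MAX_VALUE / 2;
        int[] dist = new int[ adj.length ];
        Arrays.fill( dist, INF );
        dist[ start ] = 0;

        // entries are {vertex, distance-when-enqueued}, ordered by distance
        PriorityQueue<int[]> pq =
            new PriorityQueue<>( ( a, b ) -> Integer.compare( a[1], b[1] ) );
        pq.add( new int[] { start, 0 } );

        while( !pq.isEmpty( ) ) {
            int[] top = pq.poll( );
            int v = top[0];
            if( top[1] > dist[ v ] )            // stale entry: a shorter path was already found
                continue;
            for( int[] e : adj[ v ] ) {         // e = {w, cost of edge (v, w)}
                int w = e[0], c = e[1];
                if( dist[ v ] + c < dist[ w ] ) {
                    dist[ w ] = dist[ v ] + c;  // relax: dw = dv + c(v,w)
                    pq.add( new int[] { w, dist[ w ] } );
                }
            }
        }
        return dist;
    }
}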

63
private boolean negativeWeighted( int startNode ) {
    int v, w;
    Queue q = new QueueAr( );
    int cvw;

    clearData( );
    table[ startNode ].dist = 0;
    q.enqueue( new Integer( startNode ) );
    table[ startNode ].scratch++;

    try {
        while( !q.isEmpty( ) ) {
            v = ( (Integer) q.dequeue( ) ).intValue( );
            if( table[ v ].scratch++ > 2 * numVertices )
                return false;                             // a negative cycle was detected

            ListItr p = new LinkedListItr( table[ v ].adj );
            for( ; p.isInList( ); p.advance( ) ) {
                w   = ( (Edge) p.retrieve( ) ).dest;
                cvw = ( (Edge) p.retrieve( ) ).cost;
                if( table[ w ].dist > table[ v ].dist + cvw ) {
                    table[ w ].dist = table[ v ].dist + cvw;
                    table[ w ].prev = v;
                    if( table[ w ].scratch++ % 2 == 0 )   // enqueue only if not already queued
                        q.enqueue( new Integer( w ) );
                    else
                        table[ w ].scratch++;             // in effect, adds 2
                }
            }
        }
    } catch( Underflow e ) { }                            // cannot happen
    return true;
}
64
  • Weighted shortest path for acyclic graphs
  • The algorithm is much faster if the graph has no
    cycles. Then we can order the vertices so that each
    node comes after the nodes that point to it. This
    ordering is produced by Topological Sorting, and it
    is applied to critical-path analysis problems.
  • Such problems ask for the earliest possible
    completion of a project, which activities can be
    delayed, and how long an activity can be delayed
    without lengthening the project.
  • The indegree of a vertex v is the number of edges
    (u, v). Compute the indegrees of all vertices.
  • The algorithm repeatedly finds a vertex v with no
    incoming edge, prints the vertex, and removes it and
    its edges from consideration by reducing the
    indegrees of all vertices adjacent to v (a sketch
    follows this list).
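A compact sketch of the indegree algorithm just described, on a simple List<List<Integer>> adjacency representation of my own (not the course's Graph class): vertices of indegree 0 are queued, output, and their successors' indegrees reduced.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public final class TopoSortSketch {
    // adj.get(u) lists the vertices v with an edge (u, v). Returns a topological order,
    // or an empty list if the graph has a cycle.
    public static List<Integer> topoSort( List<List<Integer>> adj ) {
        int n = adj.size( );
        int[] indegree = new int[ n ];
        for( List<Integer> edges : adj )
            for( int v : edges )
                indegree[ v ]++;                       // count incoming edges

        Queue<Integer> ready = new ArrayDeque<>( );
        for( int v = 0; v < n; v++ )
            if( indegree[ v ] == 0 )
                ready.add( v );                        // no incoming edge: can go first

        List<Integer> order = new ArrayList<>( );
        while( !ready.isEmpty( ) ) {
            int u = ready.remove( );
            order.add( u );
            for( int v : adj.get( u ) )
                if( --indegree[ v ] == 0 )             // "remove" u's outgoing edges
                    ready.add( v );
        }
        return order.size( ) == n ? order : new ArrayList<>( );   // cycle if not all output
    }
}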

65
Weighted shortest path for acyclic graphs
If the graph has no cycle, we can order the vertices
so that each node comes after the nodes that point to
it. This is done by Topological Sorting.
  • If there is a path from u to v, then v comes
    after u in the ordering: u ---> v.
  • If there are cycles we cannot order the graph
    topologically, because v can follow u and u can
    follow v (for example in a cycle u -> v -> w -> u).
  • A topological order is not necessarily unique.

66
Critical-Path Analysis
  • This is applied to problems where we decide the
    earliest possible completion of a project.
  • What activity can be delayed?
  • How long can an activity be delayed without
    lengthening the project?
  • Different optimal paths for one problem.

67
Example of a Critical Path
Car assembly line
[Figure: in case of equipment malfunction, this assembly can be
replaced, as shown in the diagram.]
68
Minimum Spanning Tree in an undirected graph
  • Applications: build a railroad of minimum cost
    connecting all towns in the county; design a bus
    or subway route of minimum cost linking all major
    crossings; design the minimum-cost telecommunication
    network linking all computers; and so on.
  • Generate different trees from the vertices of an
    undirected graph. Select edges of least weight and
    add them, in order, to the trees, provided they
    form no cycles.
  • The different trees in the forest are combined
    to form the Minimum Spanning Tree.

69
Algorithm for MST
  • Sort the edges, or arrange them in a priority queue.
  • Keep each connected component of the forest as a
    disjoint set. If 2 vertices are already in the same
    set, we cannot add the edge, since it would create
    a cycle.
  • If they are not in the same set, the new edge is
    accepted and a union is formed with the already
    connected components (see the sketch below).
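A sketch of this method (Kruskal's algorithm) with a deliberately simple union-find kept as a parent array; the int[][] edge-list representation and all names are illustrative, not the course's code.

import java.util.Arrays;

public final class KruskalSketch {
    // edges are {u, v, weight}; returns the total weight of a minimum spanning tree
    // of an undirected, connected graph with vertices 0..n-1.
    public static long mstWeight( int n, int[][] edges ) {
        Arrays.sort( edges, ( a, b ) -> Integer.compare( a[2], b[2] ) );  // cheapest edges first
        int[] parent = new int[ n ];
        for( int i = 0; i < n; i++ )
            parent[ i ] = i;                          // each vertex starts in its own set

        long total = 0;
        int accepted = 0;
        for( int[] e : edges ) {
            int ru = find( parent, e[0] ), rv = find( parent, e[1] );
            if( ru != rv ) {                          // different components: no cycle is formed
                parent[ ru ] = rv;                    // union the two trees of the forest
                total += e[2];
                if( ++accepted == n - 1 )             // a spanning tree has n-1 edges
                    break;
            }
        }
        return total;
    }

    private static int find( int[] parent, int x ) {  // set representative, with path compression
        while( parent[ x ] != x ) {
            parent[ x ] = parent[ parent[ x ] ];
            x = parent[ x ];
        }
        return x;
    }
}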

70
Random numbers for Uniform Distributions
  • First number equally likely to be any in
    distribution
  • ith number equally likely to be any in
    distribution
  • Expected average of all generated numbers is
    average of distribution
  • The sum of 2 consecutive numbers is equally
    likely to be even or odd.
  • Some numbers will be duplicated.

71
Linear Congruential Generator - LCG
  • Generate a sequence Xi satisfying X(i+1) = A·Xi (mod M).
  • If 1 ≤ X0 < M and M is prime, no term is ever 0.
  • The numbers repeat after a certain number of steps,
    called the period. The best possible period is M - 1,
    which gives a full-period LCG.
  • For M prime, certain choices of A give a
    full-period LCG.
  • The best proven choices are M = 2^31 - 1 and
    A = 48,271 (see the sketch below).
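A minimal sketch of the generator with the constants quoted above (M = 2^31 - 1, A = 48,271); the update is done in long arithmetic to avoid overflow, and the class and method names are my own.

public final class Lehmer {
    private static final long A = 48271;
    private static final long M = 2147483647;     // 2^31 - 1, a prime

    private long state;                            // current Xi, always in 1..M-1

    public Lehmer( long seed ) {
        state = Math.floorMod( seed, M );
        if( state == 0 )
            state = 1;                             // X0 must be in 1..M-1 so 0 never appears
    }

    public long next( ) {                          // X(i+1) = A * Xi mod M
        state = ( A * state ) % M;                 // fits in a long: A * (M-1) < 2^63
        return state;
    }

    public double nextDouble( ) {                  // uniform in (0, 1)
        return (double) next( ) / M;
    }
}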

72
Poisson distribution, negative exponential
distribution & random permutations
The Poisson distribution models rare events, such
as winning the lottery. The probability of k
occurrences, when the mean number of occurrences
is a, is given by a^k · e^(-a) / k!. The negative
exponential distribution is used to model the
time between occurrences of random events. To
simulate a card game we need random permutations
of a fixed set of numbers.
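For the card-game remark, a random permutation is usually produced with the Fisher-Yates shuffle; a small sketch using java.util.Random follows (the class and method names are illustrative).

import java.util.Random;

public final class Shuffle {
    // Rearranges a[] into a uniformly random permutation (Fisher-Yates).
    public static void shuffle( int [ ] a, Random r ) {
        for( int i = a.length - 1; i > 0; i-- ) {
            int j = r.nextInt( i + 1 );   // uniform in 0..i
            int tmp = a[ i ];             // swap a[i] and a[j]
            a[ i ] = a[ j ];
            a[ j ] = tmp;
        }
    }
}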
73
Binary Heap
  • A priority queue allows access to the minimum (or
    maximum) item.
  • The Binary Heap is a priority queue that allows
    insertion of new items and deletion of the
    minimum item in logarithmic time.
  • The Binary Heap is implemented with an array.

74
  • The correspondence between an array and a
    balanced tree works if
  • the tree is a complete binary tree:
  • there cannot be gaps in the nodes, including the
    leaves.
  • The worst-case logarithmic complexity works because
  • for N nodes and height H we must have
  • 2^H ≤ N ≤ 1 + 2 + 4 + ... + 2^H = 2^(H+1) - 1.
  • Therefore the height is at most log2 N.
  • If we start the root node at position 1 rather
    than 0, then every node in position i has its left
    child in position 2i and its right child in
    position 2i + 1.
  • Conversely, a node in position i has its parent in
    position i/2 (integer division); see the helpers
    below.
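The index arithmetic in the last two bullets, written out as tiny helpers (illustrative only; the heap code on the following slides simply writes 2 * i and i / 2 inline).

// With the root stored at array index 1 (index 0 reserved for the sentinel):
static int leftChild( int i )  { return 2 * i; }
static int rightChild( int i ) { return 2 * i + 1; }
static int parent( int i )     { return i / 2; }   // integer division, i.e. floor(i/2)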

75
public BinaryHeap( Comparable negInf ) {
    currentSize = 0;
    orderOK = true;
    getArray( DEFAULT_CAPACITY );
    array[ 0 ] = negInf;                           // sentinel: smaller than any item
}

public Comparable findMin( ) throws Underflow {
    if( isEmpty( ) )
        throw new Underflow( "Empty binary heap" );
    if( !orderOK )
        fixHeap( );
    return array[ 1 ];
}

private void getArray( int newMaxSize ) {
    array = new Comparable[ newMaxSize + 1 ];
}
76
Insert, deleteMin & toss
  • If inserting the new item at the next free position
    (the hole) would not respect the parent-child order,
    we slide the parent down into the hole and move the
    hole up.
  • This technique is called percolating up.
  • This is done in logarithmic time.
  • If many insertions need to be made, it is cheaper
    to toss each item at the end and then fixHeap.
  • deleteMin creates the reverse problem from insert:
    it creates a hole at the root and destroys the
    completeness of the tree, so we have to percolate
    down the tree, moving the hole toward the last leaf
    of the tree.

77
private void checkSize( ) {
    if( currentSize == array.length - 1 ) {
        Comparable [ ] oldArray = array;
        getArray( currentSize * 2 );
        for( int i = 0; i < oldArray.length; i++ )
            array[ i ] = oldArray[ i ];
    }
}

public void toss( Comparable x ) {
    checkSize( );
    array[ ++currentSize ] = x;
    if( x.lessThan( array[ currentSize / 2 ] ) )    // heap order may now be violated
        orderOK = false;
}

public Comparable deleteMin( ) throws Underflow {
    Comparable minItem = findMin( );
    array[ 1 ] = array[ currentSize-- ];
    percolateDown( 1 );
    return minItem;
}
78
public void insert( Comparable x ) {
    if( !orderOK ) {
        toss( x );
        return;
    }
    checkSize( );
    // percolate up; array[0] holds negInf, so the loop stops at the root
    int hole = ++currentSize;
    for( ; x.lessThan( array[ hole / 2 ] ); hole /= 2 )
        array[ hole ] = array[ hole / 2 ];
    array[ hole ] = x;
}
79
private void percolateDown( int hole ) {
    int child;
    Comparable temp = array[ hole ];

    for( ; hole * 2 <= currentSize; hole = child ) {
        child = hole * 2;
        if( child != currentSize && array[ child + 1 ].lessThan( array[ child ] ) )
            child++;                                 // pick the smaller child
        if( array[ child ].lessThan( temp ) )
            array[ hole ] = array[ child ];
        else
            break;
    }
    array[ hole ] = temp;
}

private void fixHeap( ) {
    for( int i = currentSize / 2; i > 0; i-- )
        percolateDown( i );
    orderOK = true;
}
80
Intractable Problems
If a problem has an O(n^k) time algorithm (where k
is a constant), then we class it as having
polynomial time complexity and as being
efficiently solvable. If there is no known
polynomial-time algorithm, then the problem is
classed as intractable. The dividing line is not
always obvious. Consider two apparently similar
problems. Euler's problem (often characterized
as the Bridges of Königsberg, a popular 18th-century
puzzle) asks whether there is a path through a
graph which traverses each edge only
once. Hamilton's problem asks whether there is a
path through a graph which visits each vertex
exactly once.
81
  • Euler showed that an Eulerian path exists iff
  • it is possible to go from any vertex to any other
    by following the edges (the graph must be
    connected), and
  • every vertex has an even number of edges
    connected to it, with at most two exceptions
    (which constitute the starting and ending
    points).
  • However, there is no known efficient algorithm for
    determining whether a Hamiltonian path exists.
    But if a path is found, it can be verified to be
    a solution in polynomial time.
  • Euler's problem lies in the class P: problems
    solvable in polynomial time. Hamilton's problem
    is believed to lie in class NP (Non-deterministic
    Polynomial). Note that I wrote "believed" in the
    previous sentence: no one has yet succeeded in
    proving that efficient (i.e. polynomial-time)
    algorithms don't exist!
  • The Traveling Salesman problem is to find the
    cheapest tour through multiple points. This
    problem can also be shown to be in NP; the
    Hamiltonian circuit problem is reducible to it.
    One heuristic is to find the minimum spanning
    tree and traverse it twice, so we can find, in
    polynomial time, a tour which is at most twice
    as long as the optimum tour.

82
An algorithm due to Christofides can be shown to
produce a tour which is no more than 50% longer
than the optimal tour. It starts with the MST
and singles out all cities which are linked to an
odd number of other cities; these are then linked in
pairs. Another strategy is to divide the "map"
into many small regions and to generate the
optimum tour by exhaustive search within those
small regions. A greedy algorithm, such as
nearest neighbor, can then be used to link the
regions. This algorithm will produce tours as
little as 5% longer than the optimum tour in
acceptable time.