COMP 382: Reasoning about Algorithms
Homework 5
Total Points: 100 (Each problem is worth 20 points)
---
Problem 1: Greedy Trucking [20 points]
Consider a shipping company that sends a number of trucks each day between Houston and Austin. Trucks have a fixed limit W on the maximum amount of weight they are allowed to carry. Boxes arrive at the Houston station one by one, and each box has a weight wᵢ. The trucking station is quite small, so only one truck can be at the station at a given time. Company policy requires that the boxes are shipped in the order in which they are received.
Consider the greedy algorithm that packs the boxes in the order that they arrive; whenever the next box does not fit, the algorithm sends the truck on its way.
Prove that, for a given set of boxes with specified weights, the greedy algorithm minimizes the number of trucks that are needed.
---
Problem 2: Smallest Set of Intervals Containing Points [20 points]
Describe a greedy algorithm that, given a set {x₁, x₂, ..., xₙ} of points on the real line, determines the smallest set of unit-length closed intervals that contains all of the given points.
Prove that your algorithm is correct.
---
Problem 3: All Spanning Trees [20 points]
Kruskal's algorithm can return different spanning trees for the same input graph G, depending on how it breaks ties in edge costs when the edges are sorted.
Prove that for each minimum spanning tree T of G, there is a way to sort the edges of G in Kruskal's algorithm so that the algorithm returns T.
---
Problem 4: Oscillating Subsequence [20 points]
Call a sequence X[1...n] of numbers *oscillating* if X[i] < X[i + 1] for all even i, and X[i] > X[i + 1] for all odd i.
Describe a greedy algorithm to compute the length of the longest oscillating subsequence of an arbitrary array A of integers.
Note: A subsequence of an array A is a sequence ⟨a₁, ..., aₖ⟩ of elements of A that appear in the same order as in A. A subsequence need not be a contiguous subarray. For example, ⟨1, 7, 3⟩ is a subsequence of the array [5, 1, 9, 7, 23, 30, 3].
---
Problem 5: Coin Changing [20 points]
Consider the problem of making change for n cents using the smallest number of coins. Assume that each coin's value is an integer.
Part (a)
Describe a greedy algorithm to make change when available coins are in denominations that are powers of c: the denominations are c⁰, c¹, c², ..., cᵏ for some integers c > 1 and k ≥ 1.
Show that the greedy algorithm always yields an optimal solution.
Part (b)
Give a set of coin denominations for which the greedy algorithm does not yield an optimal solution. Your set should include a penny so that there is a solution for every value of n.
# COMP382 Homework 5
## Student Responses
---
Problem 1: Greedy Trucking [20 points]
Algorithm: Pack boxes in the order they arrive. Whenever the next box would cause the current truck's load to exceed W, send the current truck and start a new one with that box.
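The greedy rule above can be sketched in a few lines of Python (an illustrative sketch, not part of the required proof; it assumes every individual box fits on a truck, i.e. wᵢ ≤ W):

```python
def greedy_trucks(weights, W):
    """Pack boxes in arrival order; dispatch a truck when the next box won't fit.

    weights: box weights in arrival order (each assumed <= W).
    Returns the number of trucks used.
    """
    trucks = 0
    load = W + 1        # sentinel: forces a new truck for the first box
    for w in weights:
        if load + w > W:  # next box does not fit: send the current truck
            trucks += 1
            load = 0
        load += w
    return trucks
```

For example, with W = 6 and weights [3, 3, 3, 3], the greedy sends two trucks carrying two boxes each.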
Proof of Optimality:
Let the boxes be numbered 1 through n in arrival order. Let the greedy solution use g trucks with groups G_1, G_2, ..., G_g (contiguous, maximal: adding one more box would exceed W). Let OPT be any valid solution using k trucks with groups O_1, O_2, ..., O_k.
Define last(G_j) = index of the last box on greedy truck j, and last(O_j) = index of the last box on OPT truck j.
Claim: last(G_j) ≥ last(O_j) for all j ≤ min(g, k).
Proof by induction on j:
*Base case (j = 1):* The greedy loads boxes 1, 2, ..., last(G_1) where last(G_1) is the maximum index m such that w(1) + ... + w(m) ≤ W (greedy is maximal). Since O_1 is a valid group, w(1) + ... + w(last(O_1)) ≤ W. By maximality of the greedy's first truck, last(G_1) ≥ last(O_1). ✓
*Inductive step:* Assume last(G_j) ≥ last(O_j). Greedy's (j+1)-th truck starts at box s_G = last(G_j) + 1; OPT's (j+1)-th truck starts at box s_O = last(O_j) + 1. Since last(G_j) ≥ last(O_j), we have s_G ≥ s_O.
OPT's (j+1)-th truck carries boxes s_O, s_O+1, ..., last(O_{j+1}) with total weight ≤ W. The boxes s_G, s_G+1, ..., last(O_{j+1}) form a contiguous sub-range of OPT's (j+1)-th load (since s_G ≥ s_O), so their total weight is also ≤ W. Therefore, greedy's (j+1)-th truck can carry at least boxes s_G through last(O_{j+1}). Since the greedy is maximal, last(G_{j+1}) ≥ last(O_{j+1}). ✓
Conclusion: By induction, last(G_k) ≥ last(O_k) = n (all n boxes are loaded by OPT's k-th truck). Since last(G_k) ≤ n, we conclude last(G_k) = n, meaning the greedy finishes all n boxes by its k-th truck. Therefore g ≤ k.
Since this holds for any valid solution using k trucks, the greedy minimizes the number of trucks needed. □
---
Problem 2: Smallest Set of Intervals Containing Points [20 points]
Algorithm:
1. Sort the points: x_{(1)} ≤ x_{(2)} ≤ ... ≤ x_{(n)}.
2. Initialize an empty set of intervals S.
3. While there are uncovered points:
   a. Let p be the leftmost uncovered point.
   b. Add interval [p, p+1] to S.
   c. Mark all given points in [p, p+1] as covered.
4. Return S.
Running time: O(n log n) due to sorting; the main loop takes O(n) total.
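A minimal Python sketch of this greedy (illustrative only; it implements steps 1–4 above in a single left-to-right pass over the sorted points):

```python
def unit_intervals(points):
    """Greedy cover: anchor a unit-length closed interval at the leftmost
    uncovered point, repeat. Returns the chosen intervals as (left, right) pairs."""
    intervals = []
    right = float("-inf")      # right endpoint of the last placed interval
    for p in sorted(points):
        if p > right:          # p is uncovered: anchor a new interval at p
            intervals.append((p, p + 1))
            right = p + 1
    return intervals
```

Note that the comparison `p > right` uses a strict inequality because the intervals are closed: a point exactly at the right endpoint is already covered.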
Proof of Correctness:
Let GREEDY = {G_1, G_2, ..., G_g} be the greedy intervals (each G_j = [p_j, p_j+1] in the order placed, so p_1 < p_2 < ...). Let OPT = {I_1, I_2, ..., I_k} be any optimal solution sorted by left endpoint, with I_j = [a_j, a_j+1].
Invariant: For each j, (i) right(G_j) ≥ right(I_j), and (ii) every given point covered by I_1,...,I_j is also covered by G_1,...,G_j.
We prove this by induction on j.
*Base case (j = 1):* Since I_1 covers x_{(1)} (the leftmost point), we have a_1 ≤ x_{(1)}, so right(I_1) = a_1 + 1 ≤ x_{(1)} + 1 = right(G_1). This gives (i). For (ii): every given point in I_1 = [a_1, a_1+1] satisfies x_{(1)} ≤ point ≤ a_1+1 ≤ x_{(1)}+1 (since all given points are ≥ x_{(1)}), so the point lies in G_1 = [x_{(1)}, x_{(1)}+1]. ✓
*Inductive step:* Assume the invariant holds for j. By part (ii), every given point covered by OPT's first j intervals is also covered by greedy's first j intervals. Therefore, the leftmost given point uncovered by I_1,...,I_j satisfies q ≤ p_{j+1} (the leftmost uncovered by G_1,...,G_j), because anything uncovered by greedy is also uncovered by OPT.
OPT's (j+1)-th interval I_{j+1} = [a_{j+1}, a_{j+1}+1] covers q, so a_{j+1} ≤ q ≤ p_{j+1}. Therefore:
- right(G_{j+1}) = p_{j+1} + 1 ≥ q + 1 ≥ a_{j+1} + 1 = right(I_{j+1}), establishing (i). ✓
- For (ii): given points in I_{j+1} that are not in I_1,...,I_j come from [q, a_{j+1}+1]. Points in [q, p_{j+1}-1] are covered by G_1,...,G_j (since p_{j+1} is the leftmost uncovered by greedy). Points in [p_{j+1}, a_{j+1}+1] ⊆ [p_{j+1}, p_{j+1}+1] are covered by G_{j+1}. ✓
Conclusion: By induction, after GREEDY places g intervals and OPT uses k intervals, all given points are covered by OPT (after k steps). Since given points covered by I_1,...,I_k ⊆ given points covered by G_1,...,G_k, all n points are covered by greedy's first k intervals. Therefore g ≤ k, and greedy is optimal. □
---
Problem 3: All Spanning Trees [20 points]
Claim: For any MST T of G, sorting the edges so that tree edges (in T) come before non-tree edges of the same weight causes Kruskal's algorithm to return T.
Proof:
Edge ordering: Sort edges by (weight, indicator) where indicator = 0 for edges in T and 1 for edges not in T. This ensures tree edges of weight w precede non-tree edges of weight w.
We show Kruskal's adds exactly the edges of T under this ordering.
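The ordering can be made concrete with a short Python sketch (a sketch for intuition, not part of the proof; vertex names, the union-find with path halving, and the edge representation are implementation choices):

```python
def kruskal_with_tiebreak(n, edges, tree_edges):
    """Kruskal's on vertices 0..n-1, sorting by (weight, not-in-T indicator)
    so that edges of the target spanning tree T win all weight ties.

    edges: list of (u, v, w) triples; tree_edges: set of edges of T, each
    stored as a pair (min(u,v), max(u,v)). Returns the set of selected edges.
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def in_T(u, v):
        return (min(u, v), max(u, v)) in tree_edges

    chosen = set()
    # indicator 0 (False) for T-edges, 1 (True) otherwise, exactly as in the proof
    for u, v, w in sorted(edges, key=lambda e: (e[2], not in_T(e[0], e[1]))):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            chosen.add((min(u, v), max(u, v)))
    return chosen
```

On a 4-cycle with all edge weights equal, any three of the four edges form an MST, and passing that tree as `tree_edges` makes Kruskal's return exactly it.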
---
Part 1: Every tree edge e ∈ T is added.
Suppose for contradiction that when tree edge e = (u,v) ∈ T is processed, u and v are already connected in the Kruskal forest. Then there is a path Q in the Kruskal forest from u to v. Every edge on Q was processed before e, so every edge on Q has weight ≤ w(e). Moreover, non-tree edges of weight w(e) come *after* e in our ordering, so Q contains no non-tree edges of weight w(e). Therefore every edge on Q has either:
- weight < w(e), or
- weight = w(e) and is a tree edge in T.
Can Q use only tree edges in T? If so, Q is a path in T from u to v using edges other than e. But T is a spanning tree, so the *unique* path from u to v in T is the single edge e. Since e has not been processed yet, it cannot be on Q. Contradiction: Q cannot exist using only T-edges (other than e).
Therefore Q contains at least one non-tree edge f ∉ T with w(f) < w(e). (It cannot have weight = w(e) since those come after e in the ordering.)
Consider the cut (S, V \ S) defined by removing e from T: T \ {e} has two components with u ∈ S and v ∈ V \ S. Edge e is the *unique* T-edge crossing this cut (since T is a tree). Path Q from u to v must cross this cut, so Q contains at least one edge crossing (S, V \ S). Let f be such an edge with f ∉ T and w(f) < w(e).
Now consider T' = T \ {e} ∪ {f}. T' is a spanning tree (f reconnects the two components of T \ {e}), and weight(T') = weight(T) − w(e) + w(f) < weight(T). This contradicts T being a minimum spanning tree.
Therefore u and v are in different components when e is processed, and Kruskal's adds e. ✓
---
Part 2: Every non-tree edge f ∉ T is rejected.
When non-tree edge f = (u,v) ∉ T is processed, we show u and v are already connected.
Since T is an MST, for the unique path P_T(u,v) in T from u to v, every edge e' on P_T(u,v) satisfies w(e') ≤ w(f). (If some e' had w(e') > w(f), we could replace e' with f to get a lighter spanning tree, contradicting optimality of T.)
All edges on P_T(u,v) are tree edges with weight ≤ w(f). In our ordering, tree edges with weight ≤ w(f) all precede f (tree edges of weight w(f) come before non-tree edges of weight w(f), and f is a non-tree edge). By Part 1, all these tree edges have been added by Kruskal's. Therefore P_T(u,v) is in the Kruskal forest, and u, v are connected when f is processed. Kruskal's rejects f. ✓
Conclusion: With this ordering, Kruskal's adds exactly the edges of T, returning T as the spanning tree. Since T was an arbitrary MST, every MST can be produced by some ordering of edges in Kruskal's algorithm. □
---
Problem 4: Oscillating Subsequence [20 points]
Recall: a sequence is *oscillating* if X[i] > X[i+1] for all odd i, and X[i] < X[i+1] for all even i. The pattern is: big, small, big, small, ... (the sequence starts high and alternates).
Algorithm:
LongestOscillating(A[1..n]):
    if n = 0: return 0
    length ← 1
    last ← A[1]
    need ← "down"  // at odd position 1, the next move must be a descent
    for j ← 2 to n do:
        if need = "down":
            if A[j] < last:
                length ← length + 1  // extend: found the needed descent
                last ← A[j]
                need ← "up"
            elif A[j] > last:
                last ← A[j]  // greedy improvement: larger starting value helps future descent
        else:  // need = "up"
            if A[j] > last:
                length ← length + 1  // extend: found the needed ascent
                last ← A[j]
                need ← "down"
            elif A[j] < last:
                last ← A[j]  // greedy improvement: smaller valley helps future ascent
    return length

Intuition: We greedily include elements at each direction change. When we cannot yet extend the subsequence (the current element goes the "wrong" way), we update the last element to be more extreme, making future extensions easier. Specifically: when needing a descent, we prefer a larger last element (easier to find something below it); when needing an ascent, we prefer a smaller last element.
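A direct Python transcription of the pseudocode (a sketch for experimentation; it mirrors the state variables length, last, and need exactly):

```python
def longest_oscillating(A):
    """Single-pass greedy: track the subsequence length, its last value,
    and the direction needed next ('down' at odd length, 'up' at even)."""
    if not A:
        return 0
    length, last, need = 1, A[0], "down"
    for x in A[1:]:
        if need == "down":
            if x < last:       # needed descent found: extend
                length, last, need = length + 1, x, "up"
            elif x > last:     # larger peak makes a future descent easier
                last = x
        else:
            if x > last:       # needed ascent found: extend
                length, last, need = length + 1, x, "down"
            elif x < last:     # smaller valley makes a future ascent easier
                last = x
    return length
```

On the worked example below, A = [5, 1, 9, 7, 23, 30, 3], this returns 6.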
Proof of Correctness (Greedy Stays Ahead):
We prove by induction on j that after processing A[1..j], the following invariant holds:
> (I) The greedy achieves the maximum possible oscillating subsequence length L for A[1..j].
>
> (II) If need = "down": among all oscillating subsequences of A[1..j] of length L with "down" needed next, the greedy's last value is maximum.
> If need = "up": among all such subsequences, the greedy's last value is minimum.
*Base case (j = 1):* L = 1, last = A[1], need = "down". The only length-1 oscillating subsequence of A[1..1] is A[1] itself. The invariant holds trivially. ✓
*Inductive step:* Assume the invariant holds after processing A[1..j]. We process A[j+1] with current state (L, last, need). Consider any competitor subsequence C of A[1..j+1] of maximum possible length.
Case 1: need = "down" and A[j+1] < last. The greedy extends to (L+1, A[j+1], "up"). Any length-(L+1) oscillating subsequence of A[1..j+1] with "up" needed next must end with some element that was reached by a descent; this final element is ≤ last (by invariant (II) for need="down"). The greedy ends at A[j+1], so we need to show it is minimal. Among all length-(L+1) subsequences ending with need="up" in A[1..j+1], those that extended at step j+1 descend to some value ≤ last. Any subsequence that extended before j+1 already achieved length L+1 in A[1..j] — but that's impossible since OPT(j) = L (greedy had maximum length at j). Thus all length-(L+1) subsequences extend at j+1 and end at A[j+1]. The greedy's last value is A[j+1], satisfying (II). ✓
Case 2: need = "down" and A[j+1] > last. The greedy updates last ← A[j+1] without changing length. No oscillating subsequence of A[1..j+1] exceeds length L: any such extension would require a descent at step j+1 (A[j+1] < last), which doesn't hold, or using A[j+1] as an ascent from a "up"-needed state, but an OPT with "up"-state and length L would need last ≤ previous OPT "up" minimum — since OPT(j) = L, this would require an entirely different subsequence also of length L with need="up", reaching some value. In any case, OPT(j+1) remains L. The greedy's new last = A[j+1] ≥ any previous value, preserving (II) for need="down". ✓
Case 3: need = "up" and A[j+1] > last. Symmetric to Case 1 with "up" and "down" swapped. ✓
Case 4: need = "up" and A[j+1] < last. Symmetric to Case 2. ✓
Case 5: A[j+1] = last. No change; invariant preserved trivially. ✓
By induction, the invariant holds for all j. At j = n, the greedy's length L equals the length of the longest oscillating subsequence of A[1..n]. □
Example: A = [5, 1, 9, 7, 23, 30, 3].
- j=1: last=5, need=down, length=1.
- j=2: 1 < 5 (need down): extend. last=1, need=up, length=2.
- j=3: 9 > 1 (need up): extend. last=9, need=down, length=3.
- j=4: 7 < 9 (need down): extend. last=7, need=up, length=4.
- j=5: 23 > 7 (need up): extend. last=23, need=down, length=5.
- j=6: need down, but 30 > 23: update last=30. length=5.
- j=7: 3 < 30 (need down): extend. last=3, need=up, length=6.
- Result: 6. Subsequence: [5, 1, 9, 7, 30, 3].
---
Problem 5: Coin Changing [20 points]
Part (a): Powers of c
Algorithm (Greedy-Change): At each step, use as many coins of the largest denomination that does not exceed the remaining amount. Formally:
Greedy-Change(n, c, k):
    for i ← k downto 0:
        use floor(n / c^i) coins of denomination c^i
        n ← n mod c^i
    return the collection of coins used

Claim: This greedy algorithm always yields an optimal solution.
Proof:
Step 1 (Reduction to canonical form): We first show that in any optimal solution, the number of coins of denomination c^i used is at most c − 1, for each i < k.
Suppose some solution uses m ≥ c coins of denomination c^i (i < k). Then c coins of denomination c^i can be replaced by 1 coin of denomination c^{i+1}, reducing the coin count by c − 1 ≥ 1. This gives a solution with fewer coins, contradicting optimality. Therefore any optimal solution uses at most c − 1 coins of each denomination c^i for i < k.
Step 2 (Uniqueness of representation): Consider the representation of n in base c:
n = n_0 · c^0 + n_1 · c^1 + ... + n_k · c^k
where each digit 0 ≤ n_i ≤ c − 1 for i < k (and n_k ≥ 0). This base-c representation is unique. The greedy algorithm produces exactly this representation: at each step i, it takes floor((n mod c^{i+1}) / c^i) = n_i coins of denomination c^i.
Step 3 (Optimality): Let (m_0, m_1, ..., m_k) be any optimal solution with Σ m_i · c^i = n. By Step 1, m_i ≤ c − 1 for all i < k. By uniqueness of base-c representation, the coefficients satisfying Σ m_i · c^i = n with 0 ≤ m_i ≤ c − 1 for i < k are exactly m_i = n_i for all i. Therefore every optimal solution uses exactly n_i coins of denomination c^i, the same as the greedy. The greedy is optimal. □
Note: The total number of greedy coins is Σ n_i (sum of base-c digits of n), which is minimized.
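The greedy's equivalence to base-c digit extraction is easy to check in code (a minimal sketch; `counts[i]` is the number of c^i coins, i.e. the i-th base-c digit of n when n < c^{k+1}):

```python
def greedy_change(n, c, k):
    """Greedy change for denominations c^0, c^1, ..., c^k.

    Returns counts where counts[i] = number of coins of value c^i used;
    for n < c^(k+1) these are exactly the base-c digits of n.
    """
    counts = []
    for i in range(k, -1, -1):        # largest denomination first
        counts.append(n // c**i)
        n %= c**i
    return counts[::-1]
```

For instance, 13 with c = 2, k = 3 gives counts [1, 0, 1, 1] (binary 1101), for a total of 3 coins, the sum of the base-2 digits.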
---
Part (b): Counterexample
Denominations: {1, 3, 4} with n = 6.
- Greedy: Uses the largest denomination ≤ remaining amount.
  - 4 ≤ 6: use one 4-cent coin. Remaining: 2.
  - 3 > 2: skip. 1 ≤ 2: use one 1-cent coin. Remaining: 1.
  - 1 ≤ 1: use one 1-cent coin. Remaining: 0.
  - Total: 3 coins (4 + 1 + 1).
- Optimal: Use two 3-cent coins: 3 + 3 = 6.
- Total: 2 coins.
The greedy uses 3 coins while the optimal uses only 2. Therefore the greedy does not yield an optimal solution for denominations {1, 3, 4}.
(Note: The set {1, 3, 4} includes a penny (1-cent coin), so there is a valid solution for every value of n ≥ 0.)
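The counterexample can be verified mechanically by comparing the greedy count against an exact dynamic-programming minimum (an illustrative check, not part of the required answer):

```python
def greedy_coins(n, denoms):
    """Greedy: repeatedly take the largest denomination that fits."""
    count = 0
    for d in sorted(denoms, reverse=True):
        count += n // d
        n %= d
    return count

def min_coins(n, denoms):
    """Exact minimum coin count via dynamic programming over amounts 0..n."""
    INF = float("inf")
    best = [0] + [INF] * n
    for amount in range(1, n + 1):
        for d in denoms:
            if d <= amount and best[amount - d] + 1 < best[amount]:
                best[amount] = best[amount - d] + 1
    return best[n]
```

With denominations {1, 3, 4} and n = 6, `greedy_coins` returns 3 while `min_coins` returns 2, confirming the gap.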
COMP 382 Homework 5 Grading
Problem 1: Greedy Trucking [20 points]
Score: 20/20 points
Assessment:
Approach: The student uses a formal "stays ahead" argument with a clearly defined invariant: last(G_j) ≥ last(O_j) — the index of the last box on greedy's j-th truck is at least as large as the corresponding index in any other valid solution.
Strengths:
- Precisely defines notation (last(G_j), last(O_j)) and the claim before proving it
- Base case correctly appeals to the maximality of greedy's first truck
- Inductive step is rigorous: since s_G ≥ s_O, the boxes {s_G, ..., last(O_{j+1})} are a contiguous sub-range of OPT's (j+1)-th truck load, so their total weight is ≤ W, and by greedy maximality, last(G_{j+1}) ≥ last(O_{j+1})
- The conclusion is airtight: last(G_k) ≥ last(O_k) = n implies greedy finishes all n boxes by its k-th truck, so g ≤ k
Errors/Gaps: None.
Detailed Feedback: This is an exemplary proof — more rigorous and clearly structured than the reference solution. The student correctly identifies that the key to the argument is showing the sub-range property in the inductive step, and the conclusion neatly ties the invariant to the optimality claim.
Rubric Breakdown:
- Correct algorithm description: 2/2
- Stays-ahead invariant clearly stated: 4/4
- Base case: 3/3
- Inductive step: 7/7
- Conclusion (optimality): 4/4
---
Problem 2: Smallest Set of Intervals Containing Points [20 points]
Score: 20/20 points
Assessment:
Approach: The student describes the correct greedy algorithm (sort points, anchor interval at leftmost uncovered point, repeat) and proves correctness via a two-part "stays ahead" invariant comparing greedy intervals to any optimal solution's intervals.
Strengths:
- Algorithm clearly specified with running time analysis (O(n log n))
- The two-part invariant is well-designed: (i) right(G_j) ≥ right(I_j) and (ii) coverage dominance
- Base case carefully shows both parts of the invariant, correctly arguing that a_1 ≤ x_{(1)} implies right(I_1) ≤ right(G_1)
- Inductive step handles the coverage argument carefully, distinguishing points in [q, p_{j+1}-1] (already covered by greedy) from points in [p_{j+1}, a_{j+1}+1] (covered by the new greedy interval)
- Conclusion correctly derives g ≤ k
Errors/Gaps: None.
Detailed Feedback: This proof is significantly more rigorous than the reference solution. The two-part invariant explicitly handles coverage dominance, which the reference solution only sketches informally. The careful treatment of which points fall in which intervals during the inductive step demonstrates strong understanding.
Rubric Breakdown:
- Correct algorithm: 5/5
- Proof structure (invariant + induction): 5/5
- Base case: 3/3
- Inductive step: 5/5
- Conclusion: 2/2
---
Problem 3: All Spanning Trees [20 points]
Score: 20/20 points
Assessment:
Approach: The student uses a different approach from the reference (direct proof via edge ordering rather than weight perturbation). The student defines a tie-breaking rule — tree edges of the same weight precede non-tree edges — and then proves this causes Kruskal's to select exactly the edges of T.
Strengths:
- The edge ordering scheme is clean and clearly defined: sort by (weight, indicator) where indicator = 0 for T-edges, 1 for non-T-edges
- Part 1 (every T-edge is added) uses a well-constructed proof by contradiction:
  - If Kruskal's rejects tree edge e, there exists a path Q connecting its endpoints
  - Q cannot consist entirely of T-edges (unique path property of trees)
  - Non-tree edges on Q must have weight strictly less than w(e) (same-weight non-tree edges come after e in the ordering)
  - The cut argument shows we could swap e for a lighter crossing edge, contradicting T being an MST
- Part 2 (every non-tree edge is rejected) uses the cycle property: all T-edges on the unique path in T from u to v have weight ≤ w(f), and by the ordering, they all precede f. By Part 1, they've all been added, so u and v are connected.
Errors/Gaps: None.
Detailed Feedback: This is an outstanding proof that takes a different approach from the reference solution (which uses weight perturbation) but is equally correct and arguably more elegant. The decomposition into Part 1 and Part 2 makes the argument modular and easy to follow. The cut argument in Part 1 and the cycle property in Part 2 are both correctly applied. This demonstrates deep understanding of MST properties.
Rubric Breakdown:
- Valid edge ordering: 4/4
- Part 1 (T-edges added): 8/8
- Part 2 (non-T-edges rejected): 5/5
- Conclusion: 3/3
---
Problem 4: Oscillating Subsequence [20 points]
Score: 19/20 points
Assessment:
Approach: The student describes a single-pass greedy algorithm that tracks the current subsequence length, the last element, and the needed direction. When an element extends the subsequence, it's included; when an element goes the "wrong" way but is more extreme, the last value is updated as a greedy improvement. The proof uses a "stays ahead" invariant with induction.
Strengths:
- Algorithm is clearly specified in pseudocode, runs in O(n) time
- Correct handling of all cases: extend when possible, update last when beneficial
- Well-chosen invariant: (I) greedy achieves max length, (II) greedy's last value is extremal (max when needing down, min when needing up)
- Cases 1 and 3 (extension cases) are rigorously proved
- Case 5 (equality) is correctly noted as trivial
- The worked example effectively demonstrates the algorithm
Errors/Gaps:
- Case 2 (need="down", A[j+1] > last): The explanation that OPT(j+1) = L is somewhat convoluted and hand-wavy. The student writes "any such extension would require a descent at step j+1 (A[j+1] < last), which doesn't hold, or using A[j+1] as an ascent from a 'up'-needed state, but an OPT with 'up'-state and length L would need last ≤ previous OPT 'up' minimum..." — this muddies the argument. A cleaner justification would note that all length-L subsequences share the same "need" direction (determined by parity of L), so the only way to extend is via descent, which is impossible since A[j+1] > last ≥ any other length-L subsequence's last value.
Detailed Feedback: The algorithm is correct and equivalent to the reference solution's "runs" approach — both count local extrema. The proof structure is valid with a proper invariant and induction, and the extension cases (1, 3) are rigorously handled. The non-extension cases (2, 4) contain the right core argument but the explanation is somewhat unclear. The first sentence of Case 2 captures the essential point, but the subsequent elaboration is confusing rather than clarifying. Minor deduction for this gap in rigor.
Rubric Breakdown:
- Correct algorithm: 6/6
- Proof invariant and structure: 4/4
- Extension cases (1, 3): 5/5
- Non-extension cases (2, 4): 3/4 — correct idea but hand-wavy explanation
- Example: 1/1
---
Problem 5: Coin Changing [20 points]
Score: 20/20 points
Assessment:
Approach:
- Part (a): Greedy algorithm with a three-step proof: (1) exchange argument to bound digits, (2) uniqueness of base-c representation, (3) greedy produces the unique canonical form
- Part (b): Counterexample with {1, 3, 4} and n = 6
Strengths:
- Part (a): The exchange argument is clean — if any digit xᵢ ≥ c, replace c copies of c^i with one c^{i+1}, reducing coin count by c − 1 ≥ 1
- The connection to base-c representation is correctly made: the constraint xᵢ < c for i < k, combined with uniqueness of base-c representation, gives exactly one valid coin assignment, which matches the greedy
- The greedy formula is explicitly stated: xᵢ = ⌊(n mod c^{i+1}) / c^i⌋ = i-th base-c digit
- Part (b): Correct counterexample with clear comparison (greedy: 3 coins, optimal: 2 coins)
- Includes the note about penny ensuring solutions for all n
Errors/Gaps: None.
Detailed Feedback: This is a complete and rigorous solution that matches the reference approach. The three-step proof structure makes the argument particularly clear. The counterexample is the same as the reference and is correctly verified.
Rubric Breakdown:
- Part (a) algorithm: 3/3
- Part (a) exchange argument (xᵢ < c): 5/5
- Part (a) base-c uniqueness connection: 4/4
- Part (a) greedy = canonical form: 3/3
- Part (b) valid counterexample: 5/5
---
Overall Summary
| Problem | Score | Notes |
|---|---|---|
| 1 - Greedy Trucking | 20/20 | Exemplary stays-ahead proof, more rigorous than reference |
| 2 - Smallest Set of Intervals | 20/20 | Excellent two-part invariant, thorough coverage argument |
| 3 - All Spanning Trees | 20/20 | Elegant alternative to reference (direct proof vs. weight perturbation) |
| 4 - Oscillating Subsequence | 19/20 | Correct algorithm and proof; minor gap in rigor for non-extension cases |
| 5 - Coin Changing | 20/20 | Complete and rigorous; correct counterexample |
| Total | 99/100 | |
General Comments:
This is an exceptional submission. All five problems demonstrate strong understanding of greedy algorithm design and proof techniques. The proofs consistently use formal "stays ahead" arguments with carefully stated invariants, going beyond the rigor of the reference solutions in Problems 1–3. The only minor weakness is a slightly convoluted explanation in one proof case of Problem 4, where the core argument is correct but the elaboration is unclear. The student shows mastery of exchange arguments, cut properties of MSTs, and base-representation reasoning. Overall, this is near-perfect work.
Total: 99/100
diff --git a/writeup.md b/writeup.md
index 76d2b84..ce7f1f9 100644
--- a/writeup.md
+++ b/writeup.md
@@ -2,8 +2,233 @@
## Student Responses
-<!-- Write your solutions below. Add sections for each problem as needed. -->
-<!-- Use proper mathematical notation where applicable. -->
+---
-[Your solutions here]
+## Problem 1: Greedy Trucking [20 points]
+**Algorithm:** Pack boxes in the order they arrive. Whenever the next box would cause the current truck's load to exceed W, send the current truck and start a new one with that box.
+
+**Proof of Optimality:**
+
+Let the boxes be numbered 1 through n in arrival order. Let the greedy solution use g trucks with groups G_1, G_2, ..., G_g (contiguous, maximal: adding one more box would exceed W). Let OPT be any valid solution using k trucks with groups O_1, O_2, ..., O_k.
+
+Define last(G_j) = index of the last box on greedy truck j, and last(O_j) = index of the last box on OPT truck j.
+
+**Claim:** last(G_j) ≥ last(O_j) for all j ≤ min(g, k).
+
+**Proof by induction on j:**
+
+*Base case (j = 1):* The greedy loads boxes 1, 2, ..., last(G_1) where last(G_1) is the maximum index m such that w(1) + ... + w(m) ≤ W (greedy is maximal). Since O_1 is a valid group, w(1) + ... + w(last(O_1)) ≤ W. By maximality of the greedy's first truck, last(G_1) ≥ last(O_1). ✓
+
+*Inductive step:* Assume last(G_j) ≥ last(O_j). Greedy's (j+1)-th truck starts at box s_G = last(G_j) + 1; OPT's (j+1)-th truck starts at box s_O = last(O_j) + 1. Since last(G_j) ≥ last(O_j), we have s_G ≥ s_O.
+
+OPT's (j+1)-th truck carries boxes s_O, s_O+1, ..., last(O_{j+1}) with total weight ≤ W. The boxes s_G, s_G+1, ..., last(O_{j+1}) form a contiguous sub-range of OPT's (j+1)-th load (since s_G ≥ s_O), so their total weight is also ≤ W. Therefore, greedy's (j+1)-th truck can carry at least boxes s_G through last(O_{j+1}). Since the greedy is maximal, last(G_{j+1}) ≥ last(O_{j+1}). ✓
+
+**Conclusion:** By induction, last(G_k) ≥ last(O_k) = n (all n boxes are loaded by OPT's k-th truck). Since last(G_k) ≤ n, we conclude last(G_k) = n, meaning the greedy finishes all n boxes by its k-th truck. Therefore g ≤ k.
+
+Since this holds for any valid solution using k trucks, the greedy minimizes the number of trucks needed. □
+
+---
+
+## Problem 2: Smallest Set of Intervals Containing Points [20 points]
+
+**Algorithm:**
+
+1. Sort the points: x_{(1)} ≤ x_{(2)} ≤ ... ≤ x_{(n)}.
+2. Initialize an empty set of intervals S.
+3. While there are uncovered points:
+ a. Let p be the leftmost uncovered point.
+ b. Add interval [p, p+1] to S.
+ c. Mark all given points in [p, p+1] as covered.
+4. Return S.
+
+**Running time:** O(n log n) due to sorting; the main loop takes O(n) total.
+
+**Proof of Correctness:**
+
+Let GREEDY = {G_1, G_2, ..., G_g} be the greedy intervals (each G_j = [p_j, p_j+1] in the order placed, so p_1 < p_2 < ...). Let OPT = {I_1, I_2, ..., I_k} be any optimal solution sorted by left endpoint, with I_j = [a_j, a_j+1].
+
+**Invariant:** For each j, (i) right(G_j) ≥ right(I_j), and (ii) every given point covered by I_1,...,I_j is also covered by G_1,...,G_j.
+
+We prove this by induction on j.
+
+*Base case (j = 1):* Since I_1 covers x_{(1)} (the leftmost point), we have a_1 ≤ x_{(1)}, so right(I_1) = a_1 + 1 ≤ x_{(1)} + 1 = right(G_1). This gives (i). For (ii): every given point in I_1 = [a_1, a_1+1] satisfies x_{(1)} ≤ point ≤ a_1+1 ≤ x_{(1)}+1 (since all given points are ≥ x_{(1)}), so the point lies in G_1 = [x_{(1)}, x_{(1)}+1]. ✓
+
+*Inductive step:* Assume the invariant holds for j. By part (ii), every given point covered by OPT's first j intervals is also covered by greedy's first j intervals. Therefore, the leftmost given point uncovered by I_1,...,I_j satisfies q ≤ p_{j+1} (the leftmost uncovered by G_1,...,G_j), because anything uncovered by greedy is also uncovered by OPT.
+
+OPT's (j+1)-th interval I_{j+1} = [a_{j+1}, a_{j+1}+1] covers q, so a_{j+1} ≤ q ≤ p_{j+1}. Therefore:
+
+- right(G_{j+1}) = p_{j+1} + 1 ≥ q + 1 ≥ a_{j+1} + 1 = right(I_{j+1}), establishing (i). ✓
+
+- For (ii): given points in I_{j+1} that are not in I_1,...,I_j come from [q, a_{j+1}+1]. Points in [q, p_{j+1}-1] are covered by G_1,...,G_j (since p_{j+1} is the leftmost uncovered by greedy). Points in [p_{j+1}, a_{j+1}+1] ⊆ [p_{j+1}, p_{j+1}+1] are covered by G_{j+1}. ✓
+
+**Conclusion:** By induction, after GREEDY places g intervals and OPT uses k intervals, all given points are covered by OPT (after k steps). Since given points covered by I_1,...,I_k ⊆ given points covered by G_1,...,G_k, all n points are covered by greedy's first k intervals. Therefore g ≤ k, and greedy is optimal. □
+
---

## Problem 3: All Spanning Trees [20 points]

**Claim:** For any MST T of G, sorting the edges so that tree edges (in T) come before non-tree edges of the same weight causes Kruskal's algorithm to return T.

**Proof:**

**Edge ordering:** Sort edges by the pair (weight, indicator), where indicator = 0 for edges in T and 1 for edges not in T. This ensures tree edges of weight w precede non-tree edges of weight w.

We show that Kruskal's algorithm adds exactly the edges of T under this ordering.

---

**Part 1: Every tree edge e ∈ T is added.**

Suppose for contradiction that when tree edge e = (u,v) ∈ T is processed, u and v are already connected in the Kruskal forest. Then there is a path Q in the Kruskal forest from u to v. Every edge on Q was processed before e, so every edge on Q has weight ≤ w(e). Moreover, non-tree edges of weight w(e) come *after* e in our ordering, so Q contains no non-tree edges of weight w(e). Therefore every edge on Q has either:
- weight < w(e), or
- weight = w(e) and is a tree edge in T.

Can Q use only tree edges of T? If so, Q would be a path in T from u to v avoiding e (e has not been processed yet, so it is not on Q). But T is a spanning tree, so the *unique* path from u to v in T is the single edge e, a contradiction. So Q cannot consist solely of T-edges.

Therefore Q contains at least one non-tree edge, and by the above every non-tree edge on Q has weight strictly less than w(e).

Consider the cut (S, V \ S) defined by removing e from T: T \ {e} has two components, with u ∈ S and v ∈ V \ S. Edge e is the *unique* T-edge crossing this cut (since T is a tree). The path Q from u to v must cross the cut, and the crossing edge cannot be a T-edge (the only T-edge crossing the cut is e, which is not on Q). So some non-tree edge f on Q crosses (S, V \ S), with w(f) < w(e).

Now consider T' = T \ {e} ∪ {f}. T' is a spanning tree (f reconnects the two components of T \ {e}), and weight(T') = weight(T) − w(e) + w(f) < weight(T). This contradicts T being a minimum spanning tree.

Therefore u and v are in different components when e is processed, and Kruskal's algorithm adds e. ✓

---

**Part 2: Every non-tree edge f ∉ T is rejected.**

When non-tree edge f = (u,v) ∉ T is processed, we show u and v are already connected.

Since T is an MST, every edge e' on the unique path P_T(u,v) in T from u to v satisfies w(e') ≤ w(f). (If some e' had w(e') > w(f), replacing e' with f would give a lighter spanning tree, contradicting the optimality of T.)

All edges on P_T(u,v) are tree edges with weight ≤ w(f). In our ordering, they all precede f (tree edges of weight w(f) come before non-tree edges of weight w(f), and f is a non-tree edge). By Part 1, all of these tree edges have already been added by Kruskal's algorithm when f is processed. Therefore P_T(u,v) lies in the Kruskal forest, u and v are connected, and Kruskal's algorithm rejects f. ✓

**Conclusion:** With this ordering, Kruskal's algorithm adds exactly the edges of T, returning T as the spanning tree. Since T was an arbitrary MST, every MST can be produced by some ordering of the edges in Kruskal's algorithm. □
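The ordering used in the proof is easy to realize in code. Below is a minimal sketch (helper names are ours; edges are assumed given as (weight, u, v) triples and the target MST as a set of frozenset vertex pairs) of Kruskal's algorithm with the (weight, indicator) sort key:

```python
def kruskal_targeting(n, edges, target_tree):
    """Run Kruskal's algorithm on vertices 0..n-1, breaking weight ties
    in favor of target_tree's edges, so that target_tree is returned.

    edges: list of (w, u, v); target_tree: set of frozenset({u, v})."""
    parent = list(range(n))

    def find(x):
        # union-find representative lookup with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def key(edge):
        w, u, v = edge
        # indicator 0 for tree edges, 1 for non-tree edges of equal weight
        return (w, 0 if frozenset((u, v)) in target_tree else 1)

    chosen = set()
    for w, u, v in sorted(edges, key=key):
        ru, rv = find(u), find(v)
        if ru != rv:               # endpoints in different components: add
            parent[ru] = rv
            chosen.add(frozenset((u, v)))
    return chosen
```

On a triangle with three equal-weight edges, for instance, each of the three spanning trees is returned when it is the targeted one.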

---

## Problem 4: Oscillating Subsequence [20 points]

Recall: a sequence is *oscillating* if X[i] > X[i+1] for all odd i, and X[i] < X[i+1] for all even i. The pattern is: big, small, big, small, ... (the sequence starts high and alternates).

**Algorithm:**

```
LongestOscillating(A[1..n]):
  if n = 0: return 0

  length ← 1
  last ← A[1]
  need ← "down"   // at odd position 1, the next move must be a descent

  for j ← 2 to n do:
    if need = "down":
      if A[j] < last:
        length ← length + 1   // extend: found the needed descent
        last ← A[j]
        need ← "up"
      elif A[j] > last:
        last ← A[j]           // greedy improvement: larger starting value helps future descent
    else:                     // need = "up"
      if A[j] > last:
        length ← length + 1   // extend: found the needed ascent
        last ← A[j]
        need ← "down"
      elif A[j] < last:
        last ← A[j]           // greedy improvement: smaller valley helps future ascent

  return length
```

**Intuition:** We greedily include elements at each direction change. When we cannot yet extend the subsequence (the current element goes the "wrong" way), we update the last element to be more extreme, making future extensions easier. Specifically: when needing a descent, we prefer a larger last element (easier to find something below it); when needing an ascent, we prefer a smaller last element.
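The pseudocode translates line-for-line into a short runnable Python function (a sketch; the function name is ours):

```python
def longest_oscillating(a):
    """Greedy length of the longest oscillating subsequence of a.

    Mirrors the pseudocode: extend on the needed move, otherwise
    make `last` more extreme so future extensions get easier."""
    if not a:
        return 0
    length, last, need_down = 1, a[0], True  # position 1 is odd: descend next
    for x in a[1:]:
        if need_down:
            if x < last:              # found the needed descent: extend
                length, last, need_down = length + 1, x, False
            elif x > last:            # larger peak helps a future descent
                last = x
        else:
            if x > last:              # found the needed ascent: extend
                length, last, need_down = length + 1, x, True
            elif x < last:            # smaller valley helps a future ascent
                last = x
    return length
```

The scan runs in O(n) time and O(1) extra space; on the worked example below, `longest_oscillating([5, 1, 9, 7, 23, 30, 3])` returns 6.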

**Proof of Correctness (Greedy Stays Ahead):**

We prove by induction on j that after processing A[1..j], the following invariant holds:

> (I) The greedy achieves the maximum possible oscillating subsequence length L for A[1..j].
>
> (II) If need = "down": among all oscillating subsequences of A[1..j] of length L with "down" needed next, the greedy's last value is maximum.
> If need = "up": among all such subsequences, the greedy's last value is minimum.

*Base case (j = 1):* L = 1, last = A[1], need = "down". The only length-1 oscillating subsequence of A[1..1] is A[1] itself. The invariant holds trivially. ✓

*Inductive step:* Assume the invariant holds after processing A[1..j], with greedy state (L, last, need). One observation does much of the work: the direction required after an oscillating subsequence depends only on the parity of its length (a length-L oscillating subsequence needs "down" next iff L is odd), so every maximum-length oscillating subsequence of A[1..j] has the same need as the greedy. We now process A[j+1], in five cases.

**Case 1: need = "down" and A[j+1] < last.**
The greedy extends to (L+1, A[j+1], "up"). For (I): the optimum grows by at most one per element, so OPT(j+1) ≤ OPT(j) + 1 = L + 1, which the greedy attains. For (II): consider any length-(L+1) oscillating subsequence of A[1..j+1] with "up" needed next. If it did not use A[j+1], it would be a length-(L+1) oscillating subsequence of A[1..j], contradicting OPT(j) = L. So every such subsequence ends at A[j+1], and the greedy's last value A[j+1] is trivially the minimum. ✓

**Case 2: need = "down" and A[j+1] > last.**
The greedy updates last ← A[j+1] without changing the length. For (I): by the parity observation, every length-L oscillating subsequence of A[1..j] needs "down" next, and by (II) its last value is at most last < A[j+1]; so A[j+1] cannot supply the needed descent for any of them, and OPT(j+1) = L. For (II): a length-L oscillating subsequence of A[1..j+1] needing "down" next either avoids A[j+1], in which case its last value is at most last by (II) at step j, or ends at A[j+1]; either way its last value is at most A[j+1], which is exactly the greedy's new last value. (The greedy's state is realized by taking its previous length-L subsequence and replacing its final element with A[j+1].) ✓

**Case 3: need = "up" and A[j+1] > last.** Symmetric to Case 1 with "up" and "down" swapped. ✓

**Case 4: need = "up" and A[j+1] < last.** Symmetric to Case 2. ✓

**Case 5: A[j+1] = last.** The greedy makes no change, and A[j+1] can neither extend nor improve any subsequence; the invariant is preserved trivially. ✓

By induction, the invariant holds for all j. At j = n, the greedy's length L equals the length of the longest oscillating subsequence of A[1..n]. □

**Example:** A = [5, 1, 9, 7, 23, 30, 3].
- j=1: last=5, need=down, length=1.
- j=2: 1 < 5 (need down): extend. last=1, need=up, length=2.
- j=3: 9 > 1 (need up): extend. last=9, need=down, length=3.
- j=4: 7 < 9 (need down): extend. last=7, need=up, length=4.
- j=5: 23 > 7 (need up): extend. last=23, need=down, length=5.
- j=6: 30 > 23 (need down, cannot extend): update last=30. length=5.
- j=7: 3 < 30 (need down): extend. last=3, need=up, length=6.
- Result: 6. Subsequence: [5, 1, 9, 7, 30, 3].

---

## Problem 5: Coin Changing [20 points]

### Part (a): Powers of c

**Algorithm (Greedy-Change):** At each step, use as many coins of the largest denomination that does not exceed the remaining amount. Formally:

```
Greedy-Change(n, c, k):
  for i ← k downto 0:
    use floor(n / c^i) coins of denomination c^i
    n ← n mod c^i
  return the collection of coins used
```
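A runnable version of the pseudocode (a sketch; the function name and list-of-counts return format are our choices):

```python
def greedy_change(n, c, k):
    """Greedy change-making with denominations 1, c, c^2, ..., c^k.

    Returns counts, where counts[i] is the number of coins of
    denomination c**i used; for i < k these are the base-c digits of n."""
    counts = [0] * (k + 1)
    for i in range(k, -1, -1):
        counts[i], n = divmod(n, c ** i)  # take as many c^i coins as fit
    return counts
```

For example, `greedy_change(25, 3, 3)` returns `[1, 2, 2, 0]`, i.e. 25 = 1·1 + 2·3 + 2·9, matching the base-3 digits of 25.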

**Claim:** This greedy algorithm always yields an optimal solution.

**Proof:**

**Step 1 (Reduction to canonical form):** We first show that in any optimal solution, the number of coins of denomination c^i used is at most c − 1, for each i < k.

Suppose some solution uses m ≥ c coins of denomination c^i (i < k). Then c coins of denomination c^i can be replaced by 1 coin of denomination c^{i+1}, reducing the coin count by c − 1 ≥ 1. This gives a solution with fewer coins, contradicting optimality. Therefore any optimal solution uses at most c − 1 coins of each denomination c^i for i < k.

**Step 2 (Uniqueness of representation):** Consider the representation of n in base c:

n = n_0 · c^0 + n_1 · c^1 + ... + n_k · c^k

where each digit satisfies 0 ≤ n_i ≤ c − 1 for i < k (and n_k ≥ 0). This representation is unique. The greedy algorithm produces exactly it: at step i, the remaining amount is n mod c^{i+1}, so the greedy takes floor((n mod c^{i+1}) / c^i) = n_i coins of denomination c^i.

**Step 3 (Optimality):** Let (m_0, m_1, ..., m_k) be any optimal solution with Σ m_i · c^i = n. By Step 1, m_i ≤ c − 1 for all i < k. By uniqueness of the base-c representation, the only coefficients satisfying Σ m_i · c^i = n with 0 ≤ m_i ≤ c − 1 for i < k are m_i = n_i for all i. Therefore every optimal solution uses exactly n_i coins of denomination c^i, the same as the greedy. The greedy is optimal. □

**Note:** The total number of greedy coins is Σ n_i (the sum of the base-c digits of n), which is therefore the minimum possible.

---

### Part (b): Counterexample

**Denominations: {1, 3, 4} with n = 6.**

- **Greedy:** Uses the largest denomination ≤ remaining amount.
  - 4 ≤ 6: use one 4-cent coin. Remaining: 2.
  - 3 > 2: skip. 1 ≤ 2: use one 1-cent coin. Remaining: 1.
  - 1 ≤ 1: use one 1-cent coin. Remaining: 0.
  - Total: **3 coins** (4 + 1 + 1).

- **Optimal:** Use two 3-cent coins: 3 + 3 = 6.
  - Total: **2 coins**.

The greedy uses 3 coins while the optimal uses only 2. Therefore the greedy does not yield an optimal solution for denominations {1, 3, 4}.

(Note: The set {1, 3, 4} includes a penny (1-cent coin), so a solution exists for every value of n ≥ 0.)
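The counterexample can be checked mechanically. In the sketch below (helper names are ours), the dynamic program serves only to certify the optimum, not as part of the answer:

```python
def greedy_coins(n, denoms):
    """Greedy: repeatedly take the largest denomination that still fits."""
    used = 0
    for d in sorted(denoms, reverse=True):
        used += n // d   # take as many coins of denomination d as fit
        n %= d
    return used

def optimal_coins(n, denoms):
    """Minimum number of coins summing to n, by dynamic programming."""
    INF = float("inf")
    best = [0] + [INF] * n
    for amount in range(1, n + 1):
        for d in denoms:
            if d <= amount and best[amount - d] + 1 < best[amount]:
                best[amount] = best[amount - d] + 1
    return best[n]
```

For denominations {1, 3, 4} and n = 6, `greedy_coins` reports 3 while `optimal_coins` reports 2, confirming the gap.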