Agent Work: Homework5
Claude Opus 4.6 · COMP 382: Reasoning about Algorithms
COMP 382: Reasoning about Algorithms
Homework 5
Total Points: 100 (Each problem is worth 20 points)
---
Problem 1: Greedy Trucking [20 points]
Consider a shipping company that sends a number of trucks each day between Houston and Austin. Trucks have a fixed capacity W, the maximum total weight they are allowed to carry. Boxes arrive at the Houston station one by one, and each box has a weight wᵢ. The trucking station is quite small, so only one truck can be at the station at a given time. Company policy requires that the boxes are shipped in the order in which they are received.
Consider the greedy algorithm that packs the boxes in the order that they arrive; whenever the next box does not fit, the algorithm sends the truck on its way.
Prove that, for a given set of boxes with specified weights, the greedy algorithm minimizes the number of trucks that are needed.
---
Problem 2: Smallest Set of Intervals Containing Points [20 points]
Describe a greedy algorithm that, given a set {x₁, x₂, ..., xₙ} of points on the real line, determines the smallest set of unit-length closed intervals that contains all of the given points.
Prove that your algorithm is correct.
---
Problem 3: All Spanning Trees [20 points]
Kruskal's algorithm can return different spanning trees for the same input graph G, depending on how it breaks ties in edge costs when the edges are sorted.
Prove that for each minimum spanning tree T of G, there is a way to sort the edges of G in Kruskal's algorithm so that the algorithm returns T.
---
Problem 4: Oscillating Subsequence [20 points]
Call a sequence X[1...n] of numbers *oscillating* if X[i] < X[i + 1] for all even i, and X[i] > X[i + 1] for all odd i.
Describe a greedy algorithm to compute the length of the longest oscillating subsequence of an arbitrary array A of integers.
Note: A subsequence of an array A is a sequence ⟨a₁, ..., aₖ⟩ of elements of A that appear in the same order as in A. A subsequence need not be a contiguous subarray. For example, ⟨1, 7, 3⟩ is a subsequence of the array [5, 1, 9, 7, 23, 30, 3].
---
Problem 5: Coin Changing [20 points]
Consider the problem of making change for n cents using the smallest number of coins. Assume that each coin's value is an integer.
Part (a)
Describe a greedy algorithm to make change when available coins are in denominations that are powers of c: the denominations are c⁰, c¹, c², ..., cᵏ for some integers c > 1 and k ≥ 1.
Show that the greedy algorithm always yields an optimal solution.
Part (b)
Give a set of coin denominations for which the greedy algorithm does not yield an optimal solution. Your set should include a penny so that there is a solution for every value of n.
# COMP382 Homework 5

## Student Responses
---
Problem 1: Greedy Trucking
Claim: The greedy algorithm (pack boxes in arrival order; send the truck when the next box does not fit) minimizes the number of trucks.
Proof. Let boxes arrive with weights w₁, w₂, ..., wₙ, and let W be the truck capacity. Because company policy requires shipping boxes in order, every valid solution partitions the sequence of boxes into consecutive groups (trucks), where each group's total weight is at most W.
Let the greedy solution partition the boxes into groups ending at positions g₁ < g₂ < ... < gₖ = n (so the first truck carries boxes 1 through g₁, the second carries g₁+1 through g₂, etc.). Let any other feasible solution (in particular, an optimal solution OPT) partition the boxes into groups ending at positions o₁ < o₂ < ... < oₘ = n.
Lemma: For every i such that both gᵢ and oᵢ are defined, gᵢ ≥ oᵢ.
*Proof of Lemma by induction on i:*
Base case (i = 1): The greedy algorithm loads boxes 1, 2, ..., g₁ onto the first truck, where g₁ is the largest index such that w₁ + w₂ + ... + w_{g₁} ≤ W. Any feasible solution's first truck carries boxes 1, ..., o₁ with total weight ≤ W, so o₁ ≤ g₁ since g₁ is the maximum such index.
Inductive step: Assume gᵢ ≥ oᵢ. Consider the (i+1)-th truck in each solution.
- The greedy's (i+1)-th truck starts at box gᵢ + 1 and loads the maximum number of consecutive boxes that fit, ending at g_{i+1}.
- The optimal's (i+1)-th truck starts at box oᵢ + 1 and carries boxes oᵢ + 1, ..., o_{i+1}.
Since gᵢ ≥ oᵢ, the set {gᵢ + 1, ..., o_{i+1}} is a subset of {oᵢ + 1, ..., o_{i+1}}. Thus the total weight of boxes gᵢ + 1, ..., o_{i+1} is at most the total weight of boxes oᵢ + 1, ..., o_{i+1}, which is ≤ W (since they fit on one truck in OPT). Therefore boxes gᵢ + 1, ..., o_{i+1} all fit on one truck. Since the greedy loads the maximum number of boxes, g_{i+1} ≥ o_{i+1}. This completes the inductive step.
Main argument. The greedy finishes all n boxes using k trucks, so gₖ = n. Suppose for contradiction that k > m. Then by the Lemma, gₘ ≥ oₘ = n. This means the greedy has shipped all n boxes using only m trucks, contradicting that the greedy uses k > m trucks. Therefore k ≤ m.
Since OPT is any feasible solution, the greedy uses the minimum number of trucks. ∎
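The greedy rule in the proof can be sketched in Python as a sanity check (a hypothetical helper, not part of the proof; it assumes every individual box satisfies wᵢ ≤ W):

```python
def greedy_trucks(weights, W):
    """Pack boxes in arrival order; send the current truck whenever the
    next box does not fit. Returns the number of trucks used.
    Assumes each individual box weight is at most W."""
    trucks = 0
    load = 0
    for w in weights:
        if load + w > W:   # next box does not fit: dispatch the truck
            trucks += 1
            load = 0
        load += w
    # count the final, possibly partially loaded truck (if any boxes arrived)
    return trucks + (1 if weights else 0)
```

For example, with W = 6 and weights [3, 3, 3, 3] the greedy dispatches after the second box, using two trucks.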
---
Problem 2: Smallest Set of Intervals Containing Points
Algorithm:
Input: A set of points {x₁, x₂, ..., xₙ} on the real line.
1. Sort the points in increasing order: x₍₁₎ ≤ x₍₂₎ ≤ ... ≤ x₍ₙ₎.
2. Initialize the set of intervals S = ∅ and set i = 1.
3. While i ≤ n:
   - Place the interval I = [x₍ᵢ₎, x₍ᵢ₎ + 1]. Add I to S.
   - Advance i past all points covered by I: set i to the smallest index j > i such that x₍ⱼ₎ > x₍ᵢ₎ + 1. (If no such j exists, stop.)
4. Return S.
The algorithm greedily places each interval's left endpoint at the leftmost uncovered point, which extends coverage as far right as possible.
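The steps above translate to a short Python sketch (hypothetical helper name, for illustration):

```python
def cover_points(points):
    """Greedy unit-interval cover: each interval's left endpoint sits at
    the leftmost uncovered point. Returns a list of (left, right) pairs."""
    pts = sorted(points)
    intervals = []
    i, n = 0, len(pts)
    while i < n:
        left = pts[i]
        intervals.append((left, left + 1.0))
        while i < n and pts[i] <= left + 1.0:   # advance past covered points
            i += 1
    return intervals
```

For instance, the points {0.1, 0.9, 1.5, 3.0} need three unit intervals: one anchored at 0.1 (covering 0.9 as well), one at 1.5, and one at 3.0.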
Proof of Correctness. We prove the greedy solution G = {I₁, I₂, ..., Iₖ} has minimum cardinality.
Let O = {J₁, J₂, ..., Jₘ} be any optimal solution with m intervals. We show k ≤ m.
Let the greedy intervals be I₁ = [x_{a₁}, x_{a₁} + 1], I₂ = [x_{a₂}, x_{a₂} + 1], ..., where x_{a₁} is the leftmost point, and x_{a_j} is the leftmost point not covered by I₁, ..., I_{j-1}.
We prove by induction that for each j, the right endpoint of Iⱼ is at least as large as the right endpoint of the j-th interval in any optimal solution (when optimal intervals are sorted by left endpoint).
Base case (j = 1): The interval J₁ in OPT that covers the leftmost point x_{a₁} must satisfy l₁ ≤ x_{a₁}, where J₁ = [l₁, l₁ + 1]. Thus J₁'s right endpoint is l₁ + 1 ≤ x_{a₁} + 1, which is I₁'s right endpoint. Since x_{a₁} is the leftmost point, there are no points to the left of x_{a₁}, so I₁ covers every point that J₁ covers (among our point set), and possibly more (since its right endpoint extends at least as far).
Inductive step: Assume I₁, ..., Iⱼ together cover all points covered by J₁, ..., Jⱼ. Then x_{a_{j+1}} (the leftmost uncovered point after the first j greedy intervals) is also not covered by J₁, ..., Jⱼ. Therefore OPT needs a distinct interval J_{j+1} to cover x_{a_{j+1}}. Since J_{j+1} must contain x_{a_{j+1}}, its left endpoint is ≤ x_{a_{j+1}}, so its right endpoint is ≤ x_{a_{j+1}} + 1 = I_{j+1}'s right endpoint. Hence I_{j+1} covers all points that J_{j+1} covers, maintaining the inductive invariant.
Since each greedy interval Iⱼ maps to a distinct interval Jⱼ in OPT, we get k ≤ m. As OPT is optimal, k = m, so the greedy solution is optimal. ∎
---
Problem 3: All Spanning Trees from Kruskal's
Claim: For every minimum spanning tree T of a graph G, there exists an ordering of G's edges (consistent with non-decreasing weight) such that Kruskal's algorithm returns T.
Proof. Given an MST T, define the following edge ordering: sort edges by non-decreasing weight, and among edges of the same weight, place edges in T before edges not in T (breaking remaining ties arbitrarily).
We show that Kruskal's algorithm with this ordering accepts exactly the edges of T.
Partition the edges of G into weight classes W₁ < W₂ < ... < Wₚ. We prove by induction on j that after processing all edges in weight classes W₁, ..., Wⱼ, Kruskal's forest F consists of exactly the edges of T with weight ≤ Wⱼ.
Base case (j = 0): Before processing any edges, F = ∅, which equals the set of T-edges with weight less than W₁. ✓
Inductive step: Assume that after processing weight classes W₁, ..., W_{j-1}, the forest F consists exactly of the T-edges with weight < Wⱼ. Now we process weight class Wⱼ.
*T-edges are accepted:* Within weight class Wⱼ, edges in T come first in our ordering. Adding these edges one by one to F always produces a subset of T's edge set (since F ⊆ T by the inductive hypothesis). Since T is a tree (hence acyclic), every subset of T's edges is acyclic. So Kruskal accepts every T-edge in weight class Wⱼ.
*Non-T edges are rejected:* After accepting all T-edges of weight Wⱼ, the forest is F' = {all edges of T with weight ≤ Wⱼ}. Consider a non-T edge f with w(f) = Wⱼ. Since T is a spanning tree and f ∉ T, adding f to T creates a unique cycle C. Every edge of C other than f belongs to T. We claim every edge e ∈ C \ {f} has w(e) ≤ Wⱼ.
Suppose for contradiction that some edge e ∈ C ∩ T has w(e) > Wⱼ = w(f). Then T' = T − {e} ∪ {f} is a spanning tree (removing e from cycle C disconnects T into two components, and f reconnects them). Its total weight is w(T) − w(e) + w(f) < w(T), contradicting the minimality of T.
Therefore all edges of C \ {f} have weight ≤ Wⱼ, so they are all in F'. This means f creates a cycle with the edges in F', and Kruskal rejects f.
By induction, Kruskal accepts exactly the edges of T, so it returns T. ∎
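The ordering used in the proof is easy to realize in code by sorting on the pair (weight, not-in-T); Python's stable sort then places T-edges first within each weight class. A minimal sketch (hypothetical helper names and edge representation):

```python
def kruskal_with_tiebreak(n, edges, tree_edges):
    """Kruskal's algorithm with the proof's ordering: edges sorted by weight,
    and within each weight class the edges of the target MST T come first.
    edges: list of (u, v, w); tree_edges: set of frozenset({u, v}) for T.
    Returns the accepted edges as a set of frozensets."""
    parent = list(range(n))

    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    order = sorted(edges,
                   key=lambda e: (e[2], frozenset(e[:2]) not in tree_edges))
    accepted = set()
    for u, v, w in order:
        ru, rv = find(u), find(v)
        if ru != rv:                   # no cycle with the forest: accept
            parent[ru] = rv
            accepted.add(frozenset((u, v)))
    return accepted
```

On a 4-cycle with all edge weights equal, any three of the four edges form an MST, and passing each such T as `tree_edges` makes the run return exactly T.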
---
Problem 4: Oscillating Subsequence
Recall: a sequence X[1..n] is *oscillating* if X[i] > X[i+1] for all odd i, and X[i] < X[i+1] for all even i. The pattern is: high, low, high, low, ... (the first step is a decrease, the second is an increase, and so on alternating).
Greedy Algorithm:
```
LONGEST-OSCILLATING(A[1..n]):
    if n = 0: return 0
    length = 1
    last = A[1]
    state = "need_decrease"  // position 1 is odd, so X[1] > X[2]

    for i = 2 to n:
        if state = "need_decrease" and A[i] < last:
            length = length + 1
            last = A[i]
            state = "need_increase"
        else if state = "need_increase" and A[i] > last:
            length = length + 1
            last = A[i]
            state = "need_decrease"
        else:
            // Greedily update last to a more extreme value
            if state = "need_decrease":
                last = max(last, A[i])  // higher peak → easier descent
            else:
                last = min(last, A[i])  // lower valley → easier ascent

    return length
```

Description: We scan A from left to right, maintaining the last value selected and the direction of the next required step. At each position:

- If the current element satisfies the required direction (decrease or increase) relative to `last`, we extend the subsequence and flip the required direction.
- Otherwise, we greedily update `last`: if we need a decrease next, we replace `last` with a higher value (to make a future decrease more likely); if we need an increase, we replace `last` with a lower value (to make a future increase more likely).
Correctness Argument:
The algorithm is correct because of two key observations:
1. Extending is always optimal. When we can extend the oscillating subsequence, we should—delaying can only cause us to miss an opportunity, never create a better one. Adding an element now does not prevent future extensions.
2. Updating `last` is safe and optimal. When we cannot extend but encounter an element that is more extreme in the desired direction (a higher peak or a lower valley), replacing last with this element is safe: the new value appears later in A than the previous last, and it still satisfies the oscillating property with respect to all preceding chosen elements. Having a more extreme value strictly increases (or does not decrease) the chance of extending the subsequence later.
More formally, let L(i) be the length of the longest oscillating subsequence of A[1..i] that the algorithm produces after processing position i. By the above reasoning, L(i) is optimal at each step, and the greedy choices maintain an optimal "state" (the best possible endpoint value) for future extensions.
The algorithm runs in O(n) time and O(1) extra space.
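For concreteness, the pseudocode translates line for line into Python (a sketch mirroring LONGEST-OSCILLATING; the function name is illustrative):

```python
def longest_oscillating(a):
    """Greedy scan: length of the longest oscillating subsequence of a,
    where the first step must be a decrease and steps then alternate."""
    if not a:
        return 0
    length, last = 1, a[0]
    need_decrease = True              # position 1 is odd, so X[1] > X[2]
    for x in a[1:]:
        if need_decrease and x < last:
            length, last, need_decrease = length + 1, x, False
        elif not need_decrease and x > last:
            length, last, need_decrease = length + 1, x, True
        else:
            # keep the more extreme endpoint: higher peak or lower valley
            last = max(last, x) if need_decrease else min(last, x)
    return length
```

On the example array [5, 1, 9, 7, 23, 30, 3] from the problem statement, the greedy finds the length-6 oscillating subsequence ⟨5, 1, 9, 7, 30, 3⟩; note how it replaces the peak 23 with 30 before descending to 3.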
---
Problem 5: Coin Changing
Part (a)
Available denominations: c⁰ = 1, c¹, c², ..., cᵏ, where c > 1 and k ≥ 1 are integers.
Greedy Algorithm:
```
COIN-CHANGE(n, c, k):
    coins_used = 0
    remaining = n
    for i = k down to 0:
        coins_used = coins_used + ⌊remaining / cⁱ⌋
        remaining = remaining mod cⁱ
    return coins_used
```

At each step, use as many coins as possible of the largest remaining denomination.
Proof of Optimality.
Observation: The greedy algorithm produces the base-c representation of n. Specifically, if n = aₖcᵏ + a_{k-1}c^{k-1} + ... + a₁c + a₀, where 0 ≤ aᵢ ≤ c − 1 for all i < k and aₖ ≥ 0, then the greedy uses exactly aᵢ coins of denomination cⁱ. This is the standard base-c digit decomposition.
Lemma: In any optimal solution, the number of coins used of denomination cⁱ (for i = 0, 1, ..., k−1) is at most c − 1.
*Proof of Lemma:* Suppose an optimal solution uses c or more coins of denomination cⁱ for some i < k. Then we can replace c coins of denomination cⁱ with a single coin of denomination cⁱ⁺¹ (since c · cⁱ = cⁱ⁺¹), producing the same total value with c − 1 fewer coins. This contradicts optimality. ∎
Main Proof: Since any optimal solution uses at most c − 1 coins of each denomination below cᵏ, it corresponds to a representation n = a'_k · cᵏ + a'_{k-1} · c^{k-1} + ... + a'₀ · c⁰, where 0 ≤ a'ᵢ ≤ c − 1 for i < k. But the base-c representation of n with this constraint is unique: the digits a₀, a₁, ..., a_{k-1} are uniquely determined, and consequently aₖ is also uniquely determined. Therefore the optimal solution must use exactly the same coin counts as the greedy algorithm.
Since the greedy solution is the unique representation using at most c − 1 coins of each denomination below cᵏ, and every optimal solution must also have this property, the greedy algorithm is optimal. ∎
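COIN-CHANGE is just base-c digit extraction; a minimal Python sketch (hypothetical helper name):

```python
def greedy_change(n, c, k):
    """Greedy change-making with denominations c^0, ..., c^k: take as many
    coins of the largest remaining denomination as possible. The result is
    the digit sum of n written in base c, except that the c^k coefficient
    may exceed c - 1 (there is no larger denomination to roll into)."""
    coins, remaining = 0, n
    for i in range(k, -1, -1):
        d = c ** i
        coins += remaining // d
        remaining %= d
    return coins
```

For example, with c = 2 and k = 3 (coins 1, 2, 4, 8), making change for 10 = 8 + 2 uses two coins, matching the binary digit sum of 10.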
Part (b)
Counterexample denominations: {1, 3, 4}.
Consider making change for n = 6 cents:
- Greedy: Choose the largest denomination ≤ 6, which is 4. Remaining = 2. Then use two 1-cent coins. Total: 3 coins (4 + 1 + 1).
- Optimal: Use two 3-cent coins. Total: 2 coins (3 + 3).
The greedy algorithm yields 3 coins, but the optimal solution uses only 2 coins. Therefore, the greedy algorithm does not yield an optimal solution for the denomination set {1, 3, 4}. ∎
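The counterexample can be checked mechanically by comparing the greedy count against an exact dynamic program (hypothetical helpers; one possible sketch, assuming the denomination set contains 1 so every amount is reachable):

```python
def greedy_coins(n, denoms):
    """Greedy: repeatedly take the largest denomination that still fits."""
    coins = 0
    for d in sorted(denoms, reverse=True):
        coins += n // d
        n %= d
    return coins

def min_coins(n, denoms):
    """Exact minimum number of coins, by dynamic programming over amounts."""
    best = [0] + [float('inf')] * n
    for v in range(1, n + 1):
        best[v] = min(best[v - d] + 1 for d in denoms if d <= v)
    return best[n]
```

With denominations {1, 3, 4} and n = 6, `greedy_coins` returns 3 while `min_coins` returns 2, confirming the gap.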
---
Problem 1: Greedy Trucking
Score: 20/20 points
Assessment:
Approach: Classic "stays ahead" proof via induction on the number of trucks, formalized with a clean Lemma.
Strengths:
- Excellent formal structure: clearly defines the greedy partition (g₁, ..., gₖ) and any feasible partition (o₁, ..., oₘ), then states and proves a precise Lemma (gᵢ ≥ oᵢ for all i)
- Base case is crisp: greedy loads the maximum number of boxes, so g₁ ≥ o₁
- Inductive step correctly argues that since gᵢ ≥ oᵢ, the boxes {gᵢ+1, ..., o_{i+1}} are a subset of {oᵢ+1, ..., o_{i+1}}, hence fit on one truck, and greedy loads maximally so g_{i+1} ≥ o_{i+1}
- The final contradiction argument (if k > m then gₘ ≥ oₘ = n, contradicting k > m) is clean and complete
Errors/Gaps:
- None
Detailed Feedback: This is an exemplary proof that goes beyond the reference solution in rigor and clarity. The separation of the Lemma from the main argument is a nice structural choice, and every step is fully justified. The subset argument in the inductive step is particularly well-articulated.
Rubric Breakdown:
- Correct proof technique (stays ahead): 5/5
- Base case: 5/5
- Inductive step: 5/5
- Correct conclusion (optimality follows from lemma): 5/5
---
Problem 2: Smallest Set of Intervals Containing Points
Score: 20/20 points
Assessment:
Approach: Greedy algorithm placing each interval's left endpoint at the leftmost uncovered point, with a "stays ahead" optimality proof.
Strengths:
- Algorithm is clearly specified with precise pseudocode-style description
- Correctly identifies the greedy choice: left endpoint at leftmost uncovered point
- Proof uses a clean inductive "stays ahead" argument: greedy interval Iⱼ's right endpoint ≥ OPT's j-th interval's right endpoint
- Base case and inductive step are both rigorous
- Final conclusion (k ≤ m via injection from greedy intervals to OPT intervals) is correct
Errors/Gaps:
- None
Detailed Feedback: The algorithm matches the reference solution. The proof is actually more detailed than the reference — while the reference gives a somewhat informal argument ("at each step there is a clearly optimal choice"), the student provides a full inductive proof with explicit base case and inductive step. The key insight that J_{j+1} must contain the leftmost uncovered point x_{a_{j+1}}, so its right endpoint ≤ x_{a_{j+1}} + 1 = I_{j+1}'s right endpoint, is correctly established.
Rubric Breakdown:
- Correct algorithm description: 6/6
- Proof structure (induction/exchange): 5/5
- Base case: 4/4
- Inductive step: 5/5
---
Problem 3: All Spanning Trees from Kruskal's
Score: 20/20 points
Assessment:
Approach: Orders edges by weight with T-edges prioritized within each weight class, then proves by induction on weight classes that Kruskal accepts exactly T's edges.
Strengths:
- Clever and clean ordering: within each weight class, T-edges before non-T edges
- Induction on weight classes is well-structured
- T-edges accepted: correctly argues any subset of T's edges is acyclic (since T is a tree)
- Non-T edges rejected: uses the fundamental cycle property — adding non-T edge f to T creates cycle C, then argues all other edges in C have weight ≤ w(f) via MST optimality (otherwise swapping would yield a lighter spanning tree)
- The contradiction argument for why cycle edges have weight ≤ Wⱼ is rigorous and correct
Errors/Gaps:
- None
Detailed Feedback: This proof takes a different approach from the reference solution (which perturbs edge weights to make T the unique MST) but is equally valid and arguably more elegant. The reference uses a perturbation argument; the student uses a direct inductive argument on weight classes. The critical step — showing non-T edges are rejected because they create cycles with edges already in the forest — is handled with a clean contradiction argument using MST optimality. This is a complete, correct proof.
Rubric Breakdown:
- Valid edge ordering construction: 5/5
- T-edges accepted (acyclicity argument): 5/5
- Non-T edges rejected (cycle + MST optimality): 7/7
- Overall correctness and completeness: 3/3
---
Problem 4: Oscillating Subsequence
Score: 19/20 points
Assessment:
Approach: Linear scan maintaining the last selected value and required direction, greedily extending or updating the endpoint to a more extreme value.
Strengths:
- Algorithm is clearly specified with clean pseudocode
- Correctly identifies the two key actions: extend when possible, otherwise update `last` to a more extreme value
- The greedy update strategy (higher peak for future descent, lower valley for future ascent) is correct and well-motivated
- O(n) time and O(1) space complexity correctly stated
- The algorithm is essentially equivalent to counting local extrema (the reference approach), presented from a different but valid perspective
Errors/Gaps:
- The correctness argument, while capturing the right intuitions, is somewhat informal. The student mentions "More formally, let L(i)..." but does not carry out a full formal induction. A rigorous proof would explicitly show by induction that the greedy maintains an optimal state at each step.
Detailed Feedback:
The algorithm is correct and effectively equivalent to the reference solution's approach of identifying "runs" and taking their endpoints. The greedy update step (replacing last with a more extreme value when we can't extend) correctly captures transitions between runs. The two key observations — that extending is always optimal and that updating to a more extreme value is safe — are the right insights. The proof loses a small amount of credit for not being fully formalized; the student gestures toward a formal induction ("let L(i) be the length...") but doesn't complete it. The reference solution's proof is also informal, but the student could have strengthened their argument with an explicit exchange argument or inductive proof.
Rubric Breakdown:
- Correct greedy algorithm: 10/10
- Correctness argument (key insights present but not fully formal): 7/8
- Time complexity: 2/2
---
Problem 5: Coin Changing
Score: 20/20 points
Assessment:
Part (a):
Approach: Shows greedy produces the base-c representation, proves any optimal solution must also be the base-c representation via an exchange argument.
Strengths:
- Algorithm is clearly stated with pseudocode
- Key observation: greedy = base-c representation of n
- Lemma (xᵢ < c for i < k in any optimal solution) is correctly proved by exchange: c coins of cⁱ can be replaced by 1 coin of cⁱ⁺¹, saving c−1 coins
- Main proof correctly argues uniqueness of base-c representation forces optimal = greedy
- Clean, well-organized proof structure
Part (b):
Approach: Counterexample with denominations {1, 3, 4} and n = 6.
Strengths:
- Correct counterexample matching the reference solution exactly
- Clearly shows greedy gives 3 coins (4+1+1) vs optimal 2 coins (3+3)
Errors/Gaps:
- None
Detailed Feedback: This is a thorough and correct solution to both parts. The proof of optimality for Part (a) closely follows the reference solution's structure and is fully rigorous. The key insight — that the constraint xᵢ < c for i < k, combined with uniqueness of base-c representation, pins down the solution — is clearly articulated. Part (b) provides the same clean counterexample as the reference.
Rubric Breakdown:
- Part (a) algorithm: 3/3
- Part (a) exchange argument (Lemma): 5/5
- Part (a) uniqueness/optimality conclusion: 5/5
- Part (b) valid counterexample: 5/5
- Part (b) demonstration greedy fails: 2/2
---
Overall Summary
| Problem | Score | Notes |
|---|---|---|
| 1 - Greedy Trucking | 20/20 | Exemplary "stays ahead" proof with clean Lemma structure |
| 2 - Smallest Set of Intervals | 20/20 | Complete algorithm and rigorous inductive proof |
| 3 - All Spanning Trees | 20/20 | Elegant direct proof via weight classes (different from but equivalent to reference) |
| 4 - Oscillating Subsequence | 19/20 | Correct algorithm, slightly informal correctness argument |
| 5 - Coin Changing | 20/20 | Rigorous base-c argument for (a), correct counterexample for (b) |
| Total | 99/100 | |
General Comments:
This is an outstanding submission demonstrating deep understanding of greedy algorithm design and proof techniques. All five problems show strong mathematical maturity. The proofs are generally more detailed and rigorous than the reference solutions, with clear structure (Lemma + Main Proof), explicit base cases, and well-articulated inductive steps. Problem 3 is particularly impressive — the student's direct inductive proof via weight classes with the cycle/MST-optimality argument is arguably cleaner than the reference's perturbation approach. The only minor weakness is the slightly informal correctness argument in Problem 4, where the key intuitions are present but a full formal induction is not carried out. Overall, excellent work.
Total: 99/100