CHEMA Hub Operations · Sort Intelligence Branch
Mathematical Abstractions of the UPS Chelmsford Sort Training System
Rafael Almeida, Employee 6068314 · LabelTrainingCertification v1.0 · Last updated May 2026
Living document — read from top before resuming

Project State & Key Data Structures

The system is a single-page browser app (index.html + five JS modules) that trains UPS package sorters on routing decisions. The core data structures underlying all mathematical analysis are:

| Symbol | Code name | Description |
|---|---|---|
| $Z$ | TRUTH_RAW / _tmap | Set of ~41,000 ZIP codes with routing |
| $B$ | BELT_NAMES, indices 0–15 | 16 belt destinations |
| $f$ | _tmap : Map&lt;zip, entry&gt; | The routing function |
| $\mathcal{F}$ | BELT_FAMILIES | Confusion neighbourhood graph |
| $\mathcal{E}$ | sort_events store | Sequence of sorter answer events |
| $\mathcal{O}$ | overlays store | Finite set of routing corrections |
| $w$ | _cfg.stateWeights | State weight vector |
| $M$ | computed in missortMatrix() | The missort count matrix |

Belt index reference (canonical, corrected 2026-05-13):

0 Top Black (PD-01)
1 Top Yellow (PD-02)
2 Top White (PD-03)
3 Middle Yellow (PD-04)
4 Bottom Red (PD-05)
5 Middle White (PD-06)
6 Top Green (PD-07)
7 Middle Red (PD-08)
8 Top Red (PD-09)
9 Orange (PD-10)
10 Bottom Yellow (PD-11)
11 Top Blue (PD-12)
12 Middle Blue (PD-13)*
13 Middle Green (AF1)
14 Bottom Green (AF2)
15 Middle Black (Secondary)

*PD-13/Middle Blue (index 12): peak season overflow only. CCHIL belts: {3, 4, 8}. Air belts: {13, 14}.

00

The OJS-to-LTC Certification Chain

0.1 What OJS Certifies

The LTC system does not operate in isolation. It is the knowledge layer in a two-layer employee certification architecture; the method layer beneath it is the On-Job Supervision system (GEMS #1760). Four role-specific OJS certification sheets exist:

| Sheet | Role | Total methods | Nature of certification |
|---|---|---|---|
| unloader-ojs | Metro Unloader | 127 | Equipment setup, package selection, unload sequence, flow management |
| sorter-ojs | Sort aisle sorter | 111 | Presort, label reading, body position, package routing to color-coded belts |
| pickoff-ojs | Pick-off employee | 87 | Chute routing, belt flow management, loader coordination |
| loader-ojs | Loader (Load SMART Scanning) | 168 | Scanner setup, ULD bay-scan, package scanning, wall building, smalls handling |

Each method must be explained (E) with a benefit or consequence in the trainee's own words, and demonstrated (D) at production rate. Passing threshold: 95% or higher. Recertification: every six months.

0.2 How OJS and LTC Differ

The two certifications address orthogonal dimensions of competency. OJS certifies how to execute the role: correct body position, hand-to-surface handling, scanner setup, ULD bay-scan sequence, "scan one, load one" cadence. LTC certifies where: SLIC-to-belt routing for sorters, ZIP exception routing for pick-offs, scan attribution discipline for loaders.

An employee with full OJS but incomplete LTC routing knowledge executes the correct physical method while making routing errors — producing missorts (sorters) or misloads (pick-offs). An employee with complete LTC routing knowledge but degraded OJS method compliance may know the correct destination but execute the scanning sequence in ways that corrupt iGate attribution (e.g., skipping Method #20 — the bay-door ULD scan — or using a coworker's ID in violation of Method #7).

0.3 Implication for the Sorter Channel Model

Section 5 models the sorter as a discrete memoryless channel with transition matrix $\mathbf{M}$. Departures from the identity $\mathbf{I}_{16}$ arise from two distinct sources the channel model does not currently separate:

  1. Knowledge gaps (LTC layer): the sorter does not know the correct belt for a given ZIP/SLIC. This is what LTC quiz accuracy measures.
  2. Method gaps (OJS layer): the sorter knows the correct belt but reads the label incorrectly (label facing wrong, fatigue-degraded pattern recognition). These produce errors even at full routing knowledge.
[OPEN — §13.11]

Partition the confusion matrix $\mathbf{M}$ into a knowledge-gap component $\mathbf{M}_K$ and a method-gap component $\mathbf{M}_E$ such that $\mathbf{M} = \mathbf{M}_K \cdot \mathbf{M}_E$. LTC directly reduces $\mathbf{M}_K \to \mathbf{I}$; OJS recertification reduces $\mathbf{M}_E \to \mathbf{I}$. Both are needed for $\mathbf{M} \to \mathbf{I}$.

01

The Routing Function

1.1 Basic Structure

Let $Z \subset \{0, \ldots, 99999\}$ be the set of ZIP codes with defined routes, $|Z| \approx 41{,}000$. Let $B = \{0, 1, \ldots, 15\}$ be the belt index set. The routing function is:

$$f : Z \to B$$

This is a total function on its domain (every loaded ZIP maps to exactly one belt), but a partial function over all possible 5-digit strings $\{00000, \ldots, 99999\}$. The triple $(Z, B, f)$ constitutes a finite relational structure — specifically a functional binary relation (each $z \in Z$ has exactly one image under $f$).

1.2 The CCHIL Multivalued Extension

For ZIPs in the 13 CCHIL states (MS, IA, WI, MN, SD, ND, NE, LA, OK, CO, WY, AZ, NM), the app accepts any of the three CCHIL belts as correct. The routing becomes a multifunction:

$$\tilde{f} : Z \to \mathcal{P}(B), \quad \tilde{f}(z) = \begin{cases} \{f(z)\} & z \notin Z_{\text{CCHIL}} \\ \{3, 4, 8\} & z \in Z_{\text{CCHIL}} \end{cases}$$

The sorter's answer $a \in B$ is marked correct iff $a \in \tilde{f}(z)$. This is equivalent to defining an acceptance relation $\mathcal{A} \subseteq Z \times B$ where $(z, b) \in \mathcal{A}$ iff $b \in \tilde{f}(z)$.
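As a sketch (names hypothetical, not the app's actual API), the acceptance check $a \in \tilde{f}(z)$ reduces to a set membership test:

```javascript
// Sketch of the acceptance check a ∈ f̃(z). Names are hypothetical;
// `entry` stands in for a _tmap record of shape { belt, cchil }.
const CCHIL_BELTS = new Set([3, 4, 8]);

function acceptedBelts(entry) {
  // CCHIL ZIPs accept any of the three CCHIL belts; all others accept exactly one.
  return entry.cchil ? CCHIL_BELTS : new Set([entry.belt]);
}

function isCorrect(entry, answer) {
  return acceptedBelts(entry).has(answer);
}
```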

1.3 Properties

$f$ is not injective in general: many ZIPs in different states share a belt destination. $f$ is not surjective over $B$ from every state. The preimage partition $\{f^{-1}(b) : b \in B\}$ is a partition of $Z$ into 16 classes — this partition is the fundamental structure the sorter must learn.

[OPEN — §13.1]

Characterise the preimage sizes $|f^{-1}(b)|$ for each belt $b$. Belts with large preimages are easier to learn by frequency; belts with small preimages are easily missed and dominate the Missort Matrix.

02

The ZIP Code Space as an Ultrametric

2.1 Prefix Metric

ZIP codes viewed as 5-character strings admit a natural prefix metric. Define the longest common prefix length $\ell(z_1, z_2) = \max\{k : z_1[1..k] = z_2[1..k]\}$ and the metric:

$$d(z_1, z_2) = 10^{5 - \ell(z_1, z_2)}$$

(with $d(z,z)=0$). This satisfies the ultrametric inequality:

$$d(z_1, z_3) \leq \max(d(z_1, z_2),\; d(z_2, z_3))$$

which is strictly stronger than the triangle inequality. The set $(Z_{\text{all}}, d)$ is therefore an ultrametric space. Ultrametric spaces are "tree-like" — the metric corresponds to depth of divergence in the ZIP prefix trie, which matches physical geography: nearby ZIP codes share longer prefixes and route to the same or related belts.
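A minimal sketch of the prefix metric, with illustrative helper names:

```javascript
// Prefix metric d(z1, z2) = 10^(5 - lcp) on 5-character ZIP strings.
function lcp(z1, z2) {
  let k = 0;
  while (k < 5 && z1[k] === z2[k]) k++;
  return k;
}

function zipDist(z1, z2) {
  return z1 === z2 ? 0 : 10 ** (5 - lcp(z1, z2));
}
```

For example, `zipDist("01840", "01841")` is 10 (divergence only at the last digit) while `zipDist("01840", "01999")` is 1000, and the ultrametric inequality can be spot-checked numerically on any triple.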

2.2 Local Constancy of the Routing Function

Empirically, $f$ is locally constant at the 3-digit prefix level:

$$d(z_1, z_2) \leq 100 \implies f(z_1) = f(z_2) \quad \text{(almost always)}$$

Formally, $f$ approximately factors through the quotient map $\pi : Z \to Z/{\sim_3}$ where $z_1 \sim_3 z_2 \iff z_1[1..3] = z_2[1..3]$, giving $f \approx \bar{f} \circ \pi$ where $\bar{f} : Z/{\sim_3} \to B$ is the prefix routing function. The ~800 active 3-digit prefixes cover all 41,000 ZIP entries.

[OPEN — §13.2]

Quantify the exceptions to local constancy — ZIPs within the same 3-digit block routing to different belts. These are the cases where the sort chart has sub-prefix distinctions and are prime candidates for sorter confusion.
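A first pass at the exception scan could look like the following, assuming a hypothetical `tmap` of shape `Map<zip, belt>`:

```javascript
// Scan for 3-digit prefix blocks whose ZIPs route to more than one belt,
// i.e. the exceptions to local constancy. `tmap` is a hypothetical Map<zip, belt>.
function prefixExceptions(tmap) {
  const byPrefix = new Map(); // prefix -> Set of belts seen under it
  for (const [zip, belt] of tmap) {
    const p = zip.slice(0, 3);
    if (!byPrefix.has(p)) byPrefix.set(p, new Set());
    byPrefix.get(p).add(belt);
  }
  return [...byPrefix].filter(([, belts]) => belts.size > 1).map(([p]) => p);
}
```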

03

The Belt Confusion Graph

3.1 Definition

Define the confusion graph $G = (B, E)$ as a directed graph on the 16 belt nodes, where $(b_i, b_j) \in E \iff b_j \in \texttt{BELT\_FAMILIES}[b_i]$. The graph encodes domain knowledge about which belts are physically or visually similar.

Top Red (8) confuses with Top Blue (11) — sister belts in the Z3 area (PD-09/PD-12): same physical quadrant, different colour. Middle Red (7) and Middle Blue (12) form the PD-08/PD-13 sister pair — high confusion because PD-13 appears only during peak season, making it unfamiliar even to experienced sorters returning after non-peak rotation.

3.2 Role in the Quiz Engine

When generating a wrong-answer distractor set, the quiz selects answers from $\mathcal{F}(b^*) = \{b : (b^*, b) \in E\}$ — the out-neighbourhood of the correct belt. This ensures distractors are semantically hard, not random. From a learning theory perspective, this is related to hard negative mining: training on confusable pairs accelerates discriminative learning compared to random negatives.
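The distractor selection step can be sketched as follows; the `BELT_FAMILIES` shape and contents here are illustrative sample data, not the app's actual table:

```javascript
// Hard-negative distractor sampling from the out-neighbourhood F(b*).
// BELT_FAMILIES shape and contents are illustrative sample data.
const BELT_FAMILIES = { 8: [11, 7, 4], 7: [12, 8, 4] };

function distractors(correct, n) {
  const pool = (BELT_FAMILIES[correct] || []).filter((b) => b !== correct);
  // Fisher-Yates shuffle of a copy, then take the first n confusable belts.
  const a = [...pool];
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a.slice(0, n);
}
```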

3.3 Graph-Theoretic Properties

BELT_FAMILIES is not symmetric in general, so $G$ is a directed graph. Define confusion classes as weakly connected components. From the belt naming, the natural colour families are: Black {0, 15}, Blue {11, 12}, Red {8, 7, 4}, Green {6, 13, 14}, Yellow {1, 3, 10}, White {2, 5}, Air {13, 14}, Orange {9} (singleton).

[OPEN — §13.3]

Compute the chromatic number of the undirected version of $G$. A $k$-colouring assigns each belt a "colour class" such that confusable belts get different colours — the minimum number of visually distinguishable categories needed to make belt identification unambiguous.

04

Weighted Question Selection as a Mixture Distribution

4.1 The Four-Pool Model

The quiz generates questions from four pools — Ground ($P_g$), CCHIL ($P_c$), Exception labels ($P_e$), and Air sort ($P_a$) — where the average per-state contribution $\bar{n}$ normalises all special pools:

$$\bar{n} = \frac{\sum_{s \notin \text{CCHIL}} w_s \cdot |G_s|}{|\{s : s \notin \text{CCHIL}\}|}$$

The question type is selected by sampling uniformly from the total pool, giving a categorical mixture distribution over question types:

$$\Pr[\text{type} = t] = \frac{P_t}{P_g + P_c + P_e + P_a}$$

4.2 The Pre-v0.14 Bug and Its Fix

Before v0.14, air and exception bucket sizes were computed as $|\text{candidates}| \times w$ (e.g., $7 \times 20 = 140$) while the ground pool was ~33,000 — giving an air share of $140/33{,}140 \approx 0.4\%$, negligible at any weight. The fix: replace the raw count with $w \times \bar{n}$, making all pools commensurate. With $w_a = 3$ and $\bar{n} \approx 868$: $P_a = 3 \times 868 = 2604$, giving $\Pr[\text{air}] \approx 7.4\%$. The fix is equivalent to converting heterogeneous measures to a common denominator before mixing — the mixture distribution becomes well-calibrated with respect to the weight sliders.
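A worked sketch of the commensurate-pool computation (function and field names hypothetical):

```javascript
// Commensurate pool sizing after the v0.14 fix: special pools are sized
// w × n̄ rather than raw candidate counts. Names are hypothetical.
function poolShares({ ground, nBar, wCchil, wExc, wAir }) {
  const P = {
    ground,
    cchil: wCchil * nBar,
    exception: wExc * nBar,
    air: wAir * nBar,
  };
  const total = P.ground + P.cchil + P.exception + P.air;
  const share = {};
  for (const k of Object.keys(P)) share[k] = P[k] / total;
  return { P, share };
}
```

With `wAir = 3` and `nBar = 868`, the air pool is 2604, matching the worked numbers above.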

4.3 The CCHIL Isolation Fix (v0.12)

Before v0.12, CCHIL states were included in the ground pool weighted by $w_s \times |G_s|$, causing their aggregate contribution to dominate. The fix isolates CCHIL into its own pool with a single weight $w_c$, imposing independence of CCHIL frequency from regional weight adjustments — making the CCHIL component orthogonal to regional adjustments in the mixture.

05

The Missort Matrix as a Discrete Communication Channel

5.1 Channel Model

A sorter can be modelled as a discrete memoryless channel with input and output alphabet $B$. The channel transition matrix is:

$$\mathbf{M} \in \mathbb{R}^{|B| \times |B|}, \quad M_{ij} = \frac{\text{count}(f(z) = b_i,\; \text{answer} = b_j)}{\text{count}(f(z) = b_i)}$$

$\mathbf{M}$ is a right stochastic matrix: each row sums to 1. A perfect sorter has $\mathbf{M} = \mathbf{I}_{16}$. A random guesser has $M_{ij} = 1/16$ for all $i, j$. The diagonal $M_{ii}$ is per-belt accuracy; off-diagonal $M_{ij}$ are missort rates.

5.2 Information-Theoretic Measures

Per-belt Shannon entropy of the sorter's output distribution (the $i$-th row of $\mathbf{M}$):

$$H_i = -\sum_{j} M_{ij} \log_2 M_{ij} \in [0, \log_2 16] \text{ bits}$$

The mutual information between correct belt $X$ and chosen belt $Y$:

$$I(X; Y) = H(Y) - H(Y|X) = H(X) - H(X|Y)$$

$I(X;Y)$ measures how much the sorter's answer reveals about the correct belt. For a perfect sorter, $I(X;Y) = H(X)$ (full information). For a random guesser, $I(X;Y) = 0$. Channel capacity $C = \max_{p(X)} I(X;Y)$ bits per question.
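The per-row entropy $H_i$ can be computed directly from a row of $\mathbf{M}$; a sketch with a hypothetical helper name:

```javascript
// Per-belt output entropy H_i in bits from row i of the missort matrix M.
// Rows are assumed row-stochastic; the 0·log2(0) term is treated as 0.
function rowEntropy(row) {
  return row.reduce((h, p) => (p > 0 ? h - p * Math.log2(p) : h), 0);
}
```

A deterministic row gives 0 bits; the uniform $1/16$ row gives the maximum of 4 bits.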

[OPEN — §13.5]

Add an entropy column to the belt analysis tab showing $H_i$ per belt. High-entropy belts are a priority for coaching regardless of absolute accuracy, because entropy captures systematic confusion (wrong internal model), whereas low accuracy on a low-entropy belt may just mean the sorter doesn't know it — a simpler training problem.

06

Sorter Performance as a Statistical Process

6.1 Bernoulli Trials Model

Each quiz question for sorter $k$ is modelled as a Bernoulli trial with success probability $\theta_k \in [0,1]$. The MLE of $\theta_k$ is the sample accuracy: $\hat{\theta}_k = \text{correct}/n$.

6.2 Confidence Intervals

The Wald interval behaves poorly for small $n$. The Wilson score interval is preferred:

$$\left[\frac{\hat{\theta} + \frac{z^2}{2n} \mp z\sqrt{\frac{\hat{\theta}(1-\hat{\theta})}{n} + \frac{z^2}{4n^2}}}{1 + \frac{z^2}{n}}\right]$$

with $z = 1.96$ for 95% confidence. Unlike the Wald interval, the Wilson interval maintains good coverage for all $n$, including $\hat{\theta}$ near 0 or 1. A sorter with 5/5 correct ($\hat{\theta} = 100\%$) has a Wilson interval of $[0.57, 1.00]$ — the data are entirely consistent with a true skill of 57%.
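A sketch of the Wilson computation (helper name hypothetical):

```javascript
// Wilson score interval for c successes in n trials (helper name hypothetical).
function wilson(c, n, z = 1.96) {
  const p = c / n;
  const z2 = z * z;
  const denom = 1 + z2 / n;
  const centre = p + z2 / (2 * n);
  const half = z * Math.sqrt((p * (1 - p)) / n + z2 / (4 * n * n));
  return [(centre - half) / denom, (centre + half) / denom];
}
```

For 5/5 correct, the lower endpoint evaluates to about 0.57.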

[OPEN — §13.4]

Add Wilson lower-bound sorting as an alternative ranking mode — rank sorters by the lower endpoint of their Wilson interval rather than $\hat{\theta}$. This corrects for the optimism bias that makes new sorters with 3/3 appear above experienced sorters with 120/150.

6.3 Trend Detection: Half-Split Test

The proper statistical test for non-stationarity is a two-sample proportions test:

$$z = \frac{\hat{\theta}_{\text{late}} - \hat{\theta}_{\text{early}}}{\sqrt{\hat{\theta}(1-\hat{\theta})(1/n_1 + 1/n_2)}}$$

under $H_0: \theta_{\text{early}} = \theta_{\text{late}}$, where $\hat{\theta}$ is the pooled accuracy.
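The half-split statistic can be sketched as follows (names hypothetical):

```javascript
// Pooled two-proportions z statistic for the half-split trend test.
// Positive z means the late half improved relative to the early half.
function twoPropZ(cEarly, nEarly, cLate, nLate) {
  const pEarly = cEarly / nEarly;
  const pLate = cLate / nLate;
  const pooled = (cEarly + cLate) / (nEarly + nLate); // accuracy under H0
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nEarly + 1 / nLate));
  return (pLate - pEarly) / se;
}
```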

[OPEN — §13.6]

Implement the formal two-proportions $z$-test and display a $p$-value alongside the trend arrow. A threshold like $p < 0.05$ gives statistical confidence that the apparent trend is real rather than noise.

6.4 Session-Level Trend: A CUSUM Perspective

The sparkline chart can be interpreted as a CUSUM (cumulative sum control chart): if we define $S_n = \sum_{t=1}^{n}(X_t - \theta_0)$ for some target accuracy $\theta_0$, then $S_n > h$ signals that the process has shifted above $\theta_0$. CUSUM is standard in industrial quality control — exactly the setting in which this system operates.
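A minimal one-sided CUSUM alarm over correctness indicators; a sketch in which the names and the threshold convention are illustrative:

```javascript
// One-sided CUSUM S_n = Σ (X_t - θ0) over correctness indicators X_t ∈ {0, 1}.
// Returns the first index at which S_n exceeds threshold h, or -1 if never.
function cusumAlarm(xs, theta0, h) {
  let s = 0;
  for (let n = 0; n < xs.length; n++) {
    s += xs[n] - theta0;
    if (s > h) return n;
  }
  return -1;
}
```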

07

The Analytics Reducers as Monoid Homomorphisms

7.1 Algebraic Setup

Let $\mathcal{E}^*$ be the set of all finite sequences of sort events, with concatenation $(\cdot)$ as the binary operation and the empty sequence $\varepsilon$ as the identity. Then $(\mathcal{E}^*, \cdot, \varepsilon)$ is the free monoid over the event type $\mathcal{E}$.

Each analytics function that maps events to additive summary statistics is a monoid homomorphism: $(\mathcal{E}^*, \cdot, \varepsilon) \to (\text{Stats}, \oplus, 0)$. For example:

$$\texttt{overviewStats}(e_1 \cdot e_2) = \texttt{overviewStats}(e_1) \oplus \texttt{overviewStats}(e_2)$$

7.2 Why This Matters

The monoid structure implies the computation can be parallelised via MapReduce: partition the event log into chunks, reduce each chunk independently, then merge. This is the mathematical reason the analytics module is architected as pure reducers — the "no side effects in computation functions" design note is a structural requirement for the monoid homomorphism property to hold. Practically, incremental computation becomes possible: store partial aggregates and update only the new events.
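The homomorphism property (reducing a concatenation equals merging the two partial reductions) can be illustrated with a toy overviewStats-style reducer; the field names are illustrative, not the analytics module's actual shape:

```javascript
// Toy reducer illustrating reduce(e1 ++ e2) === merge(reduce(e1), reduce(e2)).
const zero = () => ({ total: 0, correct: 0 });

function reduceEvents(events) {
  return events.reduce(
    (acc, e) => ({ total: acc.total + 1, correct: acc.correct + (e.c ? 1 : 0) }),
    zero()
  );
}

function merge(a, b) {
  // ⊕ on Stats: componentwise addition, with zero() as the identity.
  return { total: a.total + b.total, correct: a.correct + b.correct };
}
```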

7.3 Functions That Are Not Monoid Homomorphisms

worstBelt, worstSorter — these involve sorting and filtering, which are non-linear operations on the merged stats. missortMatrix — the matrix itself is a homomorphic aggregate, but rendering top confusion pairs involves a sort, making the render step non-homomorphic. The pattern is: (1) compute the aggregate homomorphically, (2) apply a non-linear query to the aggregate — analogous to SQL's GROUP BY (homomorphic) vs ORDER BY / HAVING (not).

08

The Overlay System as a Pointed Function Update

8.1 Algebraic Formulation

An overlay is a finite partial function $o : D_o \to B$ where $D_o \subset Z$. The overlay application operator $\triangleleft$ produces:

$$f \triangleleft o = \lambda z.\; \begin{cases} o(z) & \text{if } z \in D_o \\ f(z) & \text{otherwise} \end{cases}$$

This is the pointwise override — $o$ takes precedence on its domain, and $f$ applies everywhere else.

8.2 Composability and Audit Trail

Multiple overlays $o_1, o_2, \ldots, o_k$ applied sequentially: $f' = f \triangleleft o_1 \triangleleft o_2 \triangleleft \cdots \triangleleft o_k$. The operation $\triangleleft$ is associative but not commutative in general (later overlays override earlier ones on shared domains). The set of all finite partial functions $D \to B$ with $\triangleleft$ forms a right-regular band — an idempotent semigroup where $a \triangleleft a = a$ and $a \triangleleft b \triangleleft a = b \triangleleft a$, since the final application of $a$ wins wherever $a$ is defined. The append-only log (overlays.jsonl) preserves the exact sequence of overrides — the history is auditable.
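A Map-backed sketch of $\triangleleft$ (names hypothetical); the assertions below exercise its non-commutativity on shared domains:

```javascript
// Map-backed sketch of f ◁ o: the right operand wins wherever it is
// defined, so composition is associative but not commutative.
function applyOverlay(f, o) {
  const out = new Map(f); // copy f, then override with o's entries
  for (const [zip, belt] of o) out.set(zip, belt);
  return out;
}
```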

09

The Knowledge Map as a Linear Functional

9.1 Event Space and Filters

A filter $\varphi : \mathcal{E} \to \{0,1\}$ selects a subset of events. The sorter accuracy under filter $\varphi$ for sorter $k$ is:

$$a_k(\varphi) = \frac{\sum_{e : e.k = k} \varphi(e) \cdot e.c}{\sum_{e : e.k = k} \varphi(e)}$$

This is a rational linear functional on the indicator function space — a ratio of two linear functionals (numerator and denominator are both linear in $\varphi$).

9.2 Filter Combinations as Set Operations

The combined filter $\varphi_{S, \text{CCHIL}, \text{air}}$ is the logical OR (union) of event subsets. The filter algebra is the Boolean algebra $\mathcal{P}(\mathcal{E})$ under union, intersection, and complement — the Knowledge Map explores a slice of this algebra indexed by the state/category selectors.
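The filter functional and its boolean combinators can be sketched as follows (names and event fields are illustrative):

```javascript
// Filtered accuracy a_k(φ): a ratio of two sums, each linear in the filter φ.
function accuracyUnder(events, sorter, phi) {
  let num = 0, den = 0;
  for (const e of events) {
    if (e.k === sorter && phi(e)) {
      den += 1;            // denominator: events passing the filter
      num += e.c ? 1 : 0;  // numerator: correct events passing the filter
    }
  }
  return den > 0 ? num / den : null;
}

const or = (p, q) => (e) => p(e) || q(e);   // union of event subsets
const and = (p, q) => (e) => p(e) && q(e);  // intersection (cf. §13.7)
```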

[OPEN — §13.7]

Implement intersection filters: "sorters who are strong in CCHIL and Air" — selects events matching both criteria, or evaluates sorter accuracy separately per category and ranks by the minimum (weakest category). Identifies sorters with balanced, broad knowledge versus specialists.

10

The Sort Chart as a Finite Automaton

The truth table was derived from a Sort Charts Master spreadsheet mapping ZIP → SLIC → Belt. This pipeline defines a two-step composed function:

$$Z \xrightarrow{g} \text{SLIC} \xrightarrow{h} B, \quad f = h \circ g$$

Viewing the routing lookup as a decision process, the 3-digit ZIP prefix determines the SLIC, and the SLIC determines the belt. This is a deterministic finite transducer. The "truth edits" system (truth_edits.jsonl) applies state-transition overrides — rewriting individual arcs in the transducer without rebuilding the whole automaton.

11

The Probabilistic Skill Model (Bayesian Extension)

11.1 Beta-Bernoulli Model

A natural Bayesian model treats each sorter's true accuracy $\theta_k$ as a random variable with a Beta prior:

$$\theta_k \sim \text{Beta}(\alpha, \beta) \quad \Rightarrow \quad \theta_k | c, n \sim \text{Beta}(\alpha + c,\; \beta + n - c)$$

With a uniform prior $(\alpha = \beta = 1)$, the posterior mean is the Laplace-smoothed accuracy:

$$\mathbb{E}[\theta_k | c, n] = \frac{c + 1}{n + 2}$$

This shrinks extreme estimates towards 0.5, appropriate for small $n$.
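The Laplace-smoothed posterior mean as a one-line helper (name hypothetical):

```javascript
// Posterior mean under a Beta(alpha, beta) prior; the Beta(1, 1) default
// gives the Laplace-smoothed accuracy (c + 1) / (n + 2).
function posteriorMean(c, n, alpha = 1, beta = 1) {
  return (c + alpha) / (n + alpha + beta);
}
```

For example, `posteriorMean(5, 5)` is 6/7 ≈ 0.857, shrunk towards 0.5 from the MLE of 1.0.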

11.2 Hierarchical Model for Belt-Specific Skill

A sorter's skill is not uniform across belts. A hierarchical Bayesian model would parameterise global sorter skill $\theta_k \sim \text{Beta}(\alpha_0, \beta_0)$ and per-belt skill $\theta_{k,b} \sim \text{Beta}(\alpha(\theta_k), \beta(\theta_k))$. The belt-specific posterior enables more precise coaching: a sorter globally proficient but with 60% accuracy on Top Blue (PD-12) likely has sister-belt confusion with Top Red (PD-09) — targeted intervention, not general retraining.

[OPEN — §13.9]

Formalise the Beta-Bernoulli model per belt per sorter and evaluate whether it changes coaching recommendations in practice. Each sorter has a latent skill vector $(\theta_{k,s})_{s \in \text{States}}$ indexed by state — a proper ranking should account for uncertainty using Wilson CI as a frequentist approximation.

12

Summary Table of Mathematical Structures

| Component | Mathematical object | Key property |
|---|---|---|
| ZIP-to-belt mapping | Partial function $f : Z \to B$ | Locally constant at 3-digit prefix |
| CCHIL routing | Multifunction $\tilde{f} : Z \to \mathcal{P}(B)$ | Acceptance relation, set-valued |
| ZIP code space | Ultrametric space $(Z, d)$ | $d(x,z) \leq \max(d(x,y), d(y,z))$ |
| Belt set with confusion | Directed graph $G = (B, E)$ | Encodes hard-negative structure |
| Question selection | Categorical mixture distribution | 4-pool weighted sampling |
| Sorter channel | Right stochastic matrix $\mathbf{M}$ | Perfect sorter $\Rightarrow \mathbf{M} = \mathbf{I}$ |
| Accuracy | Binomial proportion $\hat{\theta}$ | Wilson CI for uncertainty |
| Trend | Two-proportions test | Formal non-stationarity check |
| Overlay system | Pointed function update $f \triangleleft o$ | Right-regular band |
| Analytics reducers | Monoid homomorphisms $\mathcal{E}^* \to \text{Stats}$ | Enables MapReduce |
| Knowledge map filter | Boolean algebra $\mathcal{P}(\mathcal{E})$ | OR-composition of state indicators |
| Sorter skill | Beta-Bernoulli posterior | Bayesian shrinkage for small $n$ |
| Pipeline composition | Finite transducer $f = h \circ g$ | Two-step ZIP → SLIC → Belt |
13

Open Problems & Future Sessions

13.1 Preimage size analysis
Characterise $|f^{-1}(b)|$ across belts. Which belts are underrepresented in training data purely by geography?
13.2 Local constancy exceptions
Find ZIPs where $f$ is not locally constant at the 3-digit prefix level. These are training hard cases.
13.3 Chromatic number of the confusion graph
What is the minimum number of distinct visual cues needed to make all belts unambiguous?
13.4 Wilson CI ranking
Add lower-bound sorting to the Sorters tab as an alternative ranking mode. Corrects optimism bias for new sorters with small sample sizes.
13.5 Entropy column in Belt Analysis
Add Shannon entropy $H_i$ per belt to quantify systematic confusion versus low-frequency misses.
13.6 Formal trend test
Implement the two-proportions $z$-test for the half-split trend comparison, with a displayed $p$-value.
13.7 Intersection filters in Knowledge Map
AND-combination: rank sorters by their weakest category, or by the product of category accuracies. Distinguishes balanced from specialist sorters.
13.8 CUSUM monitoring
Implement a CUSUM chart for per-sorter accuracy over time, replacing or augmenting the sparkline.
13.9 Bayesian hierarchical model
Formalise the Beta-Bernoulli model per belt per sorter and evaluate whether it changes coaching recommendations in practice.
13.10 Air sort internal routing (pending feature)
When air sort slide sub-destinations are added, the routing becomes $f_{\text{air}} : (\text{rule} \times Z) \to B_{\text{air}}$ — another partial function on a product domain.
13.11 OJS–LTC confusion matrix decomposition
Partition $\mathbf{M}$ into knowledge-gap and method-gap components $\mathbf{M}_K$ and $\mathbf{M}_E$. Requires cross-referencing quiz accuracy with missort category data.
13.12 OJS method compliance score as a covariate
Combine LTC quiz accuracy $\hat{\theta}_k$ with OJS certification status (passed within 6 months / lapsed) as a categorical covariate in the sorter performance model. A lapsed OJS combined with declining $\hat{\theta}_k$ may signal a higher-risk employee profile.
13.13 Dual-role certification tracking
Employees cross-trained in multiple roles (sorter + pick-off, or pick-off + loader) have separate OJS sheets and knowledge domains. An LTC multi-role extension would test the secondary routing domain and flag conflicts between primary and secondary role knowledge.

End of Session 2. Next session: pick up any [OPEN] item, or extend the framework with new mathematical connections discovered during implementation. Read Session Log before resuming.