Rank Orderings @cite{luce-1959} #
@cite{luce-1959}: the probability of observing a complete rank ordering under the Luce choice rule. The key insight is that ranking probability decomposes into a product of successive top-choices from shrinking alternative sets — a direct consequence of IIA.
Main results #
- `rankProb`: probability of a ranking (as a `List`) under the Luce model, defined as the product of successive `pChoice` values from shrinking tails.
- `rankProb_eq_score_prod`: express ranking probability in terms of score ratios `v(aᵢ) / ∑ⱼ≥ᵢ v(aⱼ)` (Theorem 9).
- `rankProb_sum_eq_one`: ranking probabilities over all permutations sum to 1.
- `rankProb_marginal_first`: marginalizing rankings over the first choice recovers `pChoice`.
The tail suffix of a list starting at position i (0-indexed).
Used to represent the shrinking alternative set at each step of ranking.
Equations
- Core.tailSuffix ranking i = (List.drop i ranking).toFinset
Instances For
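As a minimal illustration (a plain-list sketch, not the formalization itself: `Core.tailSuffix` additionally converts the suffix to a `Finset` via `List.toFinset`):

```lean
-- Sketch: the tail suffix of a ranking starting at position i (0-indexed).
-- Core.tailSuffix further applies List.toFinset to the result.
def tailSuffix (ranking : List Nat) (i : Nat) : List Nat :=
  ranking.drop i

#eval tailSuffix [3, 1, 2] 1  -- [1, 2]
#eval tailSuffix [3, 1, 2] 0  -- [3, 1, 2]
```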
Probability of a single step in the ranking: choosing `ranking[i]` from
the remaining alternatives `{ranking[i], ranking[i+1], ...}`.
Equations
- Core.rankStepProb ra s ranking i = match ranking[i]? with | none => 1 | some a => ra.pChoice s (Core.tailSuffix ranking i) a
Instances For
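A minimal numeric sketch with `Float` scores (the names `v` and `rankStepProb` here are illustrative, not the formal `Core` definitions):

```lean
-- Sketch: probability of choosing ranking[i] from the tail
-- {ranking[i], ranking[i+1], ...}; an out-of-range index gives a
-- neutral factor of 1, matching the `none` branch above.
def rankStepProb (v : Nat → Float) (ranking : List Nat) (i : Nat) : Float :=
  match ranking[i]? with
  | none => 1.0
  | some a => v a / ((ranking.drop i).foldl (fun acc x => acc + v x) 0.0)

def v (n : Nat) : Float := Float.ofNat (3 - n)  -- v 0 = 3, v 1 = 2, v 2 = 1

#eval rankStepProb v [0, 1, 2] 1  -- v 1 / (v 1 + v 2) = 2/3
#eval rankStepProb v [0, 1, 2] 5  -- out of range: neutral factor 1
```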
Ranking probability (@cite{luce-1959}, Theorem 9):
The probability of observing the complete rank ordering a₁ > a₂ > ... > aₙ
is the product of successive top-choices from shrinking sets:
P(a₁ > a₂ > ... > aₙ) = P(a₁ | {a₁,...,aₙ}) · P(a₂ | {a₂,...,aₙ}) · ... · P(aₙ₋₁ | {aₙ₋₁, aₙ})
Under the Luce model with ratio scale v, this becomes:
P(a₁ >... > aₙ) = ∏ᵢ v(aᵢ) / ∑ⱼ≥ᵢ v(aⱼ)
Equations
- Core.rankProb ra s ranking = List.foldl (fun (acc : ℝ) (i : ℕ) => acc * Core.rankStepProb ra s ranking i) 1 (List.range ranking.length)
Instances For
Recursive characterization of ranking probability: the first-choice probability times the ranking probability of the remaining alternatives.
Equations
- Core.rankProbRec ra s [] = 1
- Core.rankProbRec ra s (a :: rest) = ra.pChoice s (a :: rest).toFinset a * Core.rankProbRec ra s rest
Instances For
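The explicit fold and the recursive characterization can be sketched side by side with `Float` scores (illustrative names, not the formal `Core` definitions):

```lean
-- pChoice under the Luce rule: v a divided by the pool's total score.
def pChoice (v : Nat → Float) (pool : List Nat) (a : Nat) : Float :=
  v a / pool.foldl (fun acc x => acc + v x) 0.0

-- Explicit form: fold the step probabilities over all positions.
def rankProb (v : Nat → Float) (ranking : List Nat) : Float :=
  (List.range ranking.length).foldl
    (fun acc i => acc * pChoice v (ranking.drop i) ranking[i]!) 1.0

-- Recursive form: first-choice probability times the rest.
def rankProbRec (v : Nat → Float) : List Nat → Float
  | [] => 1.0
  | a :: rest => pChoice v (a :: rest) a * rankProbRec v rest

def v (n : Nat) : Float := Float.ofNat (3 - n)  -- v 0 = 3, v 1 = 2, v 2 = 1

#eval rankProb v [0, 1, 2]     -- (3/6) · (2/3) · (1/1) = 1/3
#eval rankProbRec v [0, 1, 2]  -- same value, by the agreement lemma
```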
rankProbRec agrees with the explicit rankProb definition.
Proof by list induction. The key steps use:
- `List.range_succ_eq_map` to decompose `range (n+1) = 0 :: map succ (range n)`
- `List.foldl_map` to shift indices through the map
- `foldl_mul_comm_init` to factor out the first-choice probability
- Definitional equalities: `rankStepProb (a::rest) 0 = pChoice` and `rankStepProb (a::rest) (i+1) = rankStepProb rest i`
Ranking probability is non-negative: each factor is a pChoice value,
hence non-negative.
The score-ratio factor at position i: v(aᵢ) / ∑ⱼ≥ᵢ v(aⱼ).
This is the i-th factor in the score-product form of ranking probability.
Equations
- One or more equations did not get rendered due to their size.
Instances For
The score-product form of ranking probability:
∏ᵢ v(aᵢ) / ∑ⱼ≥ᵢ v(aⱼ).
Equations
- Core.rankProbScoreProd ra s ranking = List.foldl (fun (acc : ℝ) (i : ℕ) => acc * Core.scoreRatio ra s ranking i) 1 (List.range ranking.length)
Instances For
Theorem 9 (score form): ranking probability equals the product of score ratios.
Each pChoice factor equals the corresponding score ratio by definition of
the Luce choice rule, so the two products are term-by-term equal.
All permutations of a finset, as lists.
Equations
Instances For
Every ranking in allRankings T is a permutation of T.
Uses List.mem_permutations, List.perm_ext_iff_of_nodup, and
Multiset.mem_toList from mathlib to connect the List-level
permutation API with Finset membership.
Ranking probabilities sum to 1 (@cite{luce-1959}, Theorem 9 completeness):
Over all n! permutations of the alternative set, ranking probabilities
form a proper distribution.
The proof proceeds by induction on |T|:
- Base (`T = ∅`): `allRankings ∅ = {[]}`, `rankProb [] = 1`.
- Step: decompose `allRankings T` by first element, factor out `pChoice` (which sums to 1 by `pChoice_sum_eq_one`), and apply the inductive hypothesis to each `(n-1)`-element ranking.
Requires strictly positive scores (Luce's ratio scale assumption).
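As a hedged two-element check (scores chosen for illustration): with v(a) = 3 and v(b) = 2, the two rankings have probabilities (3/5)·1 and (2/5)·1.

```lean
-- P(a > b) + P(b > a) = 3/5 + 2/5 = 1
#eval (3 / 5 : Float) + (2 / 5 : Float)  -- 1.000000
```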
Rankings starting with a given element a.
Equations
- Core.rankingsStartingWith T a = {r ∈ Core.allRankings T | r.head? = some a}
Instances For
Marginal first-choice (@cite{luce-1959}, Theorem 9 corollary):
Summing the ranking probability over all rankings that start with a
recovers the choice probability pChoice(a, T).
This is because P(a first) = P(a | T) · ∑_σ P(σ | T \ {a}) = P(a | T) · 1,
where the inner sum equals 1 by rankProb_sum_eq_one on the remaining
alternatives.
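A hedged numeric check with illustrative scores v(a) = 3, v(b) = 2, v(c) = 1:

```lean
-- P(a first in {a,b,c}) = P(a > b > c) + P(a > c > b)
--   = (3/6)·(2/3) + (3/6)·(1/3) = 3/6 = pChoice(a, {a,b,c})
#eval (3 / 6 : Float) * (2 / 3) + (3 / 6) * (1 / 3)  -- (3/6)·(2/3 + 1/3) = 1/2
```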
The rank of element a in a ranking (1-indexed, so rank 1 = best).
Returns 0 if a is not in the ranking.
Equations
- Core.rankOf ranking a = if a ∈ ranking then List.findIdx (fun (x : A) => x == a) ranking + 1 else 0
Instances For
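A minimal self-contained sketch of the same definition over `Nat` (using `List.contains` in place of the decidable-membership test):

```lean
-- Sketch: 1-indexed rank of a in the ranking; 0 if a is absent.
def rankOf (ranking : List Nat) (a : Nat) : Nat :=
  if ranking.contains a then ranking.findIdx (· == a) + 1 else 0

#eval rankOf [3, 1, 2] 1  -- 2
#eval rankOf [3, 1, 2] 5  -- 0 (not in the ranking)
```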
Expected rank of alternative a under the ranking distribution.
E[rank(a)] = ∑_σ P(σ) · rank(a, σ)
The monotonicity theorem expectedRank_lt_of_score_gt shows that
alternatives with higher v(a) have lower (better) expected rank.
Equations
- Core.expectedRank ra s T a = ∑ r ∈ Core.allRankings T, Core.rankProb ra s r * ↑(Core.rankOf r a)
Instances For
Expected rank monotonicity: higher score implies lower expected rank.
If v(a₁) > v(a₂) then E[rank(a₁)] < E[rank(a₂)]: the alternative
with higher ratio-scale value is expected to be ranked higher (closer to 1).
This is a natural property of the Plackett–Luce model (@cite{luce-1959}, @cite{plackett-1975}) but does not appear as a formal theorem in either source. @cite{luce-1959} proves ranking probability decomposition (Theorem 9) and @cite{marden-1995} covers estimation, but neither states the expected rank monotonicity result explicitly.
The proof uses conditional expectation decomposition:
E[rank(a, T)] = 1 + ∑_{b≠a} pChoice(b,T) · E[rank(a, T\{b})]
and combines within-set induction with cross-set monotonicity
(a higher-scored element gets better expected rank against the same field).
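A hedged two-element check of the monotonicity claim, with illustrative scores v(a₁) = 3 > v(a₂) = 2:

```lean
-- E[rank(a₁)] = 1·P(a₁ > a₂) + 2·P(a₂ > a₁) = 1·(3/5) + 2·(2/5) = 7/5
-- E[rank(a₂)] = 2·(3/5) + 1·(2/5) = 8/5, so E[rank(a₁)] < E[rank(a₂)].
#eval 1 * (3 / 5 : Float) + 2 * (2 / 5)  -- 7/5 = 1.4
#eval 2 * (3 / 5 : Float) + 1 * (2 / 5)  -- 8/5 = 1.6
```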
Equal scores imply equal expected ranks: if score(a₁) = score(a₂),
then E[rank(a₁, T)] = E[rank(a₂, T)].
The proof uses the conditional expectation decomposition and antisymmetry:
decompose both expected ranks by first element, show the common terms are
equal by induction, and show the cross terms are equal by applying
expectedRank_cross_le_aux in both directions (since v(a₁) ≥ v(a₂) and
v(a₂) ≥ v(a₁) both hold).