Building interval intuition: fromPathP and toPathP


I am working through exercise 8 of the HoTTEST Summer School and am trying to build a good intuition for the proof of toPathP, which takes a path and lifts it to a higher-order path (a "path-over", in lecture 7's terminology). Here, I want to define a pair of functions, parameterized by a line of types A : I → Type ℓ, that convert between higher-order paths (PathPs) and simple paths involving a substitution (using transport). The types are as follows:

fromPathP
  : {A : I → Type ℓ} {x : A i0} {y : A i1}
  → PathP A x y
  → transport (λ i → A i) x ≡ y

and

toPathP
  : {A : I → Type ℓ} {x : A i0} {y : A i1}
  → transport (λ i → A i) x ≡ y
  → PathP A x y

The from- direction is a straightforward application of interval types. Given a path-over p : PathP A x y, and a point along the interval i, we can construct a witness of fromPathP with transp:

fromPathP
  : {A : I → Type ℓ} {x : A i0} {y : A i1}
  → PathP A x y
  → transport (λ i → A i) x ≡ y
fromPathP {A = A} p i
  = transp (λ j → A (j ∨ i)) i (p i)

Intuitively, transp states that we can move along an interval i, satisfying the necessary bounds at j = i0 and j = i1 (the cubical additions to the goal pane), so long as (λ j → A (j ∨ i)) is constant when i = i1 and the result is y (an implicit requirement that the goal pane does not surface, which can be frustrating to encounter when the intervals otherwise line up).
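As a small sanity check of this side condition (a sketch, assuming Cubical.Foundations.Prelude is open; constant-transp is my own name, not from the exercise set): when transp's second argument is i1, the line must be constant, and the whole transport computes away definitionally, so refl closes the goal.

```agda
-- Sketch: when φ = i1, `transp` requires a constant line and then
-- reduces to the identity, so this equality holds by `refl`.
constant-transp : {ℓ : Level} {A : Type ℓ} (x : A) → transp (λ _ → A) i1 x ≡ x
constant-transp x = refl
```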

Inspecting the hole in transp (λ j → ?) i (p i), we find that the proof requires:

j = i0 ⊢ A i
j = i1 ⊢ A i1

Transport moves along i and binds the position along this path to j: i and p i are necessary to pin a "base point", a bit like how we always project paths out of a diagonalization in the J rule, but inverted so that we always end up at a final value. Since our base point is parameterized by the i in the outer scope, we must find the maximal position, the largest point along i and j, namely j ∨ i. The hint informs us to "transport the point p i to the fibre A i1, such that the transport computes away at i = i1." This looks like the path:

 x ----(p i)----> y
A i0    A i      A i1

This corroborates the desired boundary constraints and the "constant endpoint" requirement: at j = i0 we may sit anywhere along i, but at the end of the path (j = i1) we always land in A i1.
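To corroborate this further, we can compute the endpoints of fromPathP by hand (a reduction sketch following the definitions above, not Agda output):

```agda
-- At i = i0: the line is λ j → A j with φ = i0, so nothing computes
-- away and we are left with an honest transport of x along A:
--   fromPathP p i0 = transp (λ j → A j) i0 (p i0)
--                  = transport (λ i → A i) x
-- At i = i1: the line λ j → A i1 is constant and φ = i1, so the
-- transp computes away entirely, leaving the endpoint of p:
--   fromPathP p i1 = transp (λ j → A i1) i1 (p i1) = p i1 = y
```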

This picture gets a little more confusing in the other direction, when we construct a dependent path out of the simple one, but it makes sense in hindsight if you re-orient how you traverse your intervals.

toPathP
  : {A : I → Type ℓ} {x : A i0} {y : A i1}
  → transport (λ i → A i) x ≡ y
  → PathP A x y
toPathP {A = A} {x = x} {y = y} p i =
  hcomp
    (λ {j (i = i0) → ?;
        j (i = i1) → ? })
   focus-here
 where
  focus-here : A i
  focus-here = ?

In the opposite direction, we use hcomp to perform box filling, with the box in question constructing the path between x and y:

       x  - - - -> y
       ^           ^
       ¦           ¦
  refl ¦           ¦ p j
       ¦           ¦
       ¦           ¦
       x --------> transport A x

In this case, we provide hcomp with a partial function describing what happens at the endpoints of the path: at i = i0 we are at x, and at i = i1 we move along the simple path p j ("at each point i, the vertical fiber of the diagram should lie in A i"). We can fill our first two holes with λ {j (i = i0) → x; j (i = i1) → p j }.

The story is a little more intricate if you examine focus-here. To construct this path we are told that (1) the key is to parameterize this path by x, making it dependent on x, and (2) constructing transport A x requires writing a transp that computes away at i = i0. (1) is important to keep in mind because we are lifting the path into a dependent one, so the "constant" requirement means that we treat x as a base point: fill = transp (λ j → ?) ? x.

There are two ways to proceed. The first, naive way is to fill the remaining holes as we did before, yielding fill = transp (λ j → A (j ∧ i)) i x. We might need to play a little type tetris to realize that we travel backwards and find the minimum point in the square, but Agda will gracefully guide us to the type-safe solution. Unfortunately, giving this proof to Agda yields:

/HoTTEST-Summer-School/Agda/Cubical/Exercises8.lagda.md:100,18-38
i₁ != i0 of type I
when checking that the expression transp (λ j → ?) i x has type A i

But normalizing and inferring our goal suggests everything is fine:

Goal: Type (_ℓ_62 j)
———— Boundary (wanted) —————————————————————————————————————
j = i0 ⊢ A i0
j = i1 ⊢ A i

Have: Type ℓ
———— Boundary (actual) —————————————————————————————————————
j = i0 ⊢ A i0
j = i1 ⊢ A i

———— Bound variables ———————————————————————————————————————
p : transp (λ i₁ → A i₁) i0 x ≡ y
...
ℓ : Level   (not in scope)
———— Constraints ———————————————————————————————————————————

Somehow i₁, bound in p from the top-level transport (λ i → A i), is not unifying correctly with the witness. The key to resolving this error is to reconsider (2): we want to construct a transp that computes away at i = i0, not at i = i1, since we are transporting in the opposite direction of the square, with the endpoint / base point / constant function on the left (at i0) rather than on the right (at i1). We must flip the direction of the path, yielding the correct solution:

fill = transp (λ j → A (j ∧ i)) (~ i) x
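Putting the faces from earlier together with this corrected base, the assembled proof reads (essentially the definition found in the cubical library, modulo variable names):

```agda
toPathP
  : {A : I → Type ℓ} {x : A i0} {y : A i1}
  → transport (λ i → A i) x ≡ y
  → PathP A x y
toPathP {A = A} {x = x} {y = y} p i =
  hcomp
    (λ {j (i = i0) → x;
        j (i = i1) → p j })
    (transp (λ j → A (j ∧ i)) (~ i) x)
```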

We might be able to break this proof down in a modular fashion using equality reasoning (as Astra does in her office hours for this problem set) and, indeed, this might be the best way to go about the problem. However, a halfway point between this "aggressive refinement" and "long-drawn-out equality reasoning" is to leave the hole as wide as possible and inspect the points along the interval. So, given:

fill = {! transp (λ j → A (j ∧ i)) i x !}

You can manually inspect where the intervals land when the i from the enclosing scope is instantiated. Inspecting the naive solution at i = i0, we have:

fill = {! transp (λ j → A (j ∧ i0)) i0 x !}
-- Goal: A i
-- ———— Boundary (wanted) —————————————————————————————————————
-- i = i0 ⊢ x
-- i = i1 ⊢ transp (λ i₁ → A i₁) i0 x
-- Have: A i0
-- ———— Boundary (actual) —————————————————————————————————————
-- i = i0 ⊢ transp (λ j → A i0) i0 x
-- i = i1 ⊢ transp (λ j → A i0) i0 x

And when i = i1:

fill = {! transp (λ j → A (j ∧ i1)) i1 x !}
-- Goal: A i
-- ———— Boundary (wanted) —————————————————————————————————————
-- i = i0 ⊢ x
-- i = i1 ⊢ transp (λ i₁ → A i₁) i0 x
-- Have: A i1
-- ———— Boundary (actual) —————————————————————————————————————
-- i = i0 ⊢ x
-- i = i1 ⊢ x

This isn't the most foolproof method, but it uncovers a little more about what the boundary expects: we want the result to be x at i = i0, and we want the i1 case to be transp (λ i₁ → A i₁) i0 x. Applying ~ i to our transp, we see the correct value at i0, and the i1 case corrects itself:

fill = {! transp (λ j → A (j ∧ i1)) (~ i1) x !}
-- Goal: A i
-- ———— Boundary (wanted) —————————————————————————————————————
-- i = i0 ⊢ x
-- i = i1 ⊢ transp (λ i₁ → A i₁) i0 x
-- Have: A i1
-- ———— Boundary (actual) —————————————————————————————————————
-- i = i0 ⊢ transp (λ j → A j) i0 x
-- i = i1 ⊢ transp (λ j → A j) i0 x
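Instantiating the corrected fill at both endpoints of i confirms the boundary (another hand reduction sketch, not Agda output):

```agda
-- At i = i0: ~ i0 = i1 and the line λ j → A (j ∧ i0) is constantly
-- A i0, so the transp computes away, leaving x itself.
-- At i = i1: ~ i1 = i0 and the line λ j → A (j ∧ i1) is λ j → A j,
-- giving transp (λ j → A j) i0 x, i.e. transport (λ i → A i) x.
```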

Another way we could have foreseen running into trouble is to place a hole in our goal:

fill = {! transp (λ j → ?) i x !}
-- Goal: A i
-- ———— Boundary (wanted) —————————————————————————————————————
-- i = i0 ⊢ x
-- i = i1 ⊢ transp (λ i₁ → A i₁) i0 x
-- Have: ?3 (j = i1)
-- ———— Boundary (actual) —————————————————————————————————————
-- i = i0 ⊢ transp (λ j → ?3 (p = p) (i = i0) (j = j)) i0 x
-- i = i1 ⊢ x

…and it is worth noting that the "Boundary (actual)" information was unavailable to folks in the HoTTEST Summer School during 2022; Amelia implemented this feature for the goal pane in pr#6529 in 2023.

I think there may be more ergonomic ways to deal with boundaries in Agda, but they might require a sizable amount of legwork. In this presentation on redtt, another cubical proof assistant, based on cartesian cubical type theory, Jon Sterling explains how redtt tracks boundaries in a more refined manner than Agda, which is forced to reconstruct boundary information from the current proof state and loses information in the process.

What lessons did we learn here?

  1. As documented, there is an implicit constraint in cubical about the fixed "base point", an additional intuition that Agda will not help you develop.
  2. When constructing dependent paths, it's important to think about where the endpoints of the path are: will they all contract to a single point (as in fromPathP), or will they end in a space parameterized by an interval (as in toPathP)? When possible, draw squares everywhere and think very hard about which direction along the path you are moving in.
  3. Even if your intervals look correct, this implicit constraint will crop up, and errors will only provide you with hints once you normalize the goal pane.

    1. For this reason, it is almost always best to lazily refine goals involving intervals and inspect boundary conditions using the strategies above.
    2. Another equally valid tactic is cracking open cubical's equality reasoning module.

Hat tip to Julius Marozas for pointing out {! transp (λ j → ?) i x !}.