# Codes and Protocols for Distilling T, controlled-S, and Toffoli Gates

###### Abstract

We present several different codes and protocols
to distill $T$, controlled-$S$, and Toffoli (or $CCZ$) gates.
One construction is based on codes that generalize the triorthogonal codes of Ref. Bravyi and Haah, 2012,
allowing any of these gates to be induced at the logical level by transversal $T$ gates.
We present a randomized construction of generalized triorthogonal codes
obtaining an asymptotic distillation efficiency $\gamma \to 1$.
We also present a Reed-Muller based construction of these codes
which obtains a worse $\gamma$ but performs well at small sizes.
Additionally, we present protocols based on checking the stabilizers of magic states
at the logical level by transversal gates applied to suitable codes;
these protocols generalize the protocols of Ref. Haah *et al.*
Several examples, including a Reed-Muller code for $T$-to-Toffoli distillation,
punctured Reed-Muller codes for $T$-gate distillation,
and some of the check based protocols,
require a lower ratio of input gates to output gates
than other known protocols at the given order of error correction for the given code size.
In particular, we find a $T$-gate to Toffoli gate code with large distance, as well
as triorthogonal codes with very low prefactors in front of the leading order error terms in those codes.

## I Introduction

Magic state distillation Knill (2004a, b); Bravyi and Kitaev (2005)
is a standard proposed approach to implementing a universal quantum computer.
This approach
begins by implementing the Clifford group
to high accuracy
using either stabilizer
codes Gottesman (1996); Calderbank *et al.* (1997)
or using Majorana fermions Karzig *et al.* (2017).
Then, to obtain universality, some non-Clifford operation is necessary,
such as the $\pi/8$-rotation ($T$-gate) or the Toffoli gate (or $CCZ$ gate, which is equivalent to Toffoli up to conjugation by Cliffords).
These non-Clifford operations are implemented
using a resource, called a magic state, which is injected into a circuit that uses only Clifford operations.

Since these magic states can produce non-Clifford operations, they cannot themselves be produced by Clifford operations.
Instead, in distillation, the Clifford operations are used to distill a small number of high accuracy magic states
from a larger number of low quality magic states.
There are many proposed distillation protocols for the magic states for $T$ gates Knill (2004a); Bravyi and Kitaev (2005); Meier *et al.* (2013); Bravyi and Haah (2012); Jones (2013a); Haah *et al.*, as well as some
proposed protocols Eastin (2013); Jones (2013b) to distill magic states for Toffoli gates from $T$-gates.

In such distillation architectures, the resources (space, number of Clifford operations, and number of noisy non-Clifford operations) required to distill magic states far exceed the resources required to implement most quantum algorithms using these magic states. Hence, improvements in distillation efficiency can greatly impact the total resource cost.

This paper presents a variety of loosely related ideas in distillation.
One common theme is exploring various protocols to distill magic states
for Toffoli and controlled-$S$ gates, as well as $T$-gates.
We present several approaches to this.
We use a generalization of triorthogonal codes Bravyi and Haah (2012) to allow this distillation.
In section III, we give a randomized construction of such codes
which achieves distillation efficiency $\gamma \to 1$, with $\gamma$ defined as in Ref. Bravyi and Haah (2012);
this approach is of some theoretical interest
because not only is the distance of the code fairly large
(of order square root of the number of qubits)
but also the least weight stabilizer has comparable weight.
In section IV, we give another approach based on Reed-Muller codes.
In addition to theoretical asymptotic results here,
we also find a particularly striking code
which distills $T$-gates into $CCZ$ magic states
while obtaining an eighth order reduction in error.
We also present approaches to distilling Toffoli states
which are not based on a single triorthogonal (or generalized triorthogonal) code
but rather on implementing a protocol using a sequence of checks,
similar to Ref. Haah *et al.*
As in Ref. Haah *et al.*, we use inner codes to measure various stabilizers of the magic state.
We present two different methods of doing this,
one based on hyperbolic inner codes in section V
and one based on normal inner codes in section VI
(hyperbolic and normal codes were called even and odd inner codes, respectively,
in an early version of Ref. Haah *et al.*).

In addition to these results for distilling Toffoli states, we present other results useful specifically for distilling $T$-gates. In particular, in section IV.5 we study punctured Reed-Muller codes and find some protocols with a better ratio of input $T$-gates to output $T$-gates than any other known protocol for certain orders of error reduction. Another result, in section II.4, is a method of reducing the space required for any protocol based on triorthogonal codes at the cost of increased depth.

We use the matrices $T = \mathrm{diag}(1, e^{i\pi/4})$, $S = T^2 = \mathrm{diag}(1, i)$, and $Z = T^4 = \mathrm{diag}(1, -1)$.

## II Triorthogonal Matrices: Definitions and Generalizations

### II.1 Definitions

We consider codes with $n$ bits, so that code words are vectors in $\mathbb F_2^n$. Given a vector $v \in \mathbb F_2^n$, let $|v|$ denote the Hamming weight, i.e., the number of nonzero entries of $v$. Given a vector $v$, let $v_j$ denote the $j$-th entry of $v$. Given two vectors $u, v$, let $u \wedge v$ denote the entry wise product of $u$ and $v$, i.e., $(u \wedge v)_j = u_j v_j$. Let $u \cdot v$ denote the inner product, so that $u \cdot v = \sum_j u_j v_j$, where the sum is taken modulo $2$.

For us, a code will always refer to a linear subspace of $\mathbb F_2^n$. Given two codes $\mathcal C, \mathcal C'$, let $\mathcal C \wedge \mathcal C'$ denote the subspace spanned by vectors $u \wedge v$ for $u \in \mathcal C$ and $v \in \mathcal C'$. Given a code $\mathcal C$, let $\mathcal C^\perp$ denote the dual code, i.e., for any vector $v$, we have $v \in \mathcal C^\perp$ if and only if $u \cdot v = 0$ for all $u \in \mathcal C$. Given two codes $\mathcal C, \mathcal C'$, let $\mathcal C + \mathcal C'$ denote the span of $\mathcal C$ and $\mathcal C'$.

Following Bravyi and Haah Bravyi and Haah (2012), a binary matrix $G$ of size $m$-by-$n$, with rows $G_a$, is called triorthogonal if

$$|G_a \wedge G_b| \equiv 0 \pmod 2 \tag{II.1}$$

for all pairs of rows $a \neq b$, and

$$|G_a \wedge G_b \wedge G_c| \equiv 0 \pmod 2 \tag{II.2}$$

for all triples of rows $a < b < c$.

Further, we will always assume that the first $k$ rows of $G$ have odd weight, i.e., $|G_a| \equiv 1 \pmod 2$ for $a \le k$, and the remaining $m - k$ rows have even weight, i.e., $|G_a| \equiv 0 \pmod 2$ for $a > k$. (A slightly different notation was used in Ref. Bravyi and Haah, 2012.) Let

$$G = \begin{pmatrix} G_1 \\ G_0 \end{pmatrix}, \tag{II.3}$$

where $G_1$ consists of the first $k$ (odd weight) rows and $G_0$ consists of the remaining (even weight) rows. Let $\mathcal G_0$ denote the span of the even weight rows of $G$. Let $\mathcal G_1$ denote the span of the odd weight rows of $G$. Let $\mathcal G$ denote the span of all the rows of $G$.

The distance of a triorthogonal matrix is defined to be the minimum weight of a nonzero vector $v$ such that $v \in \mathcal G_0^\perp$ but $v \notin \mathcal G^\perp$. The distance of a subspace is defined to be the minimum weight of a nonzero vector in that subspace. Clearly, the distance of the matrix is at least the distance of $\mathcal G_0^\perp$.
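Triorthogonality is cheap to verify numerically. The sketch below (our own illustration in plain Python; the helper names are not from the text) checks the parity conditions (II.1), (II.2) and the weight parities on a concrete example: the 5-by-15 matrix whose columns are the nonzero points of $\mathbb F_2^4$, with an all-ones row plus the four coordinate functions, which underlies the well-known 15-to-1 distillation protocol.

```python
import itertools

def wedge_wt(*rows):
    # weight of the entrywise (Hadamard) product of binary rows
    return sum(all(bits) for bits in zip(*rows))

def is_triorthogonal(G, k):
    # first k rows must have odd weight, the rest even weight
    if any(sum(row) % 2 != (1 if a < k else 0) for a, row in enumerate(G)):
        return False
    # Eq. (II.1): even overlap for every pair of distinct rows
    if any(wedge_wt(G[a], G[b]) % 2
           for a, b in itertools.combinations(range(len(G)), 2)):
        return False
    # Eq. (II.2): even overlap for every triple of distinct rows
    return not any(wedge_wt(G[a], G[b], G[c]) % 2
                   for a, b, c in itertools.combinations(range(len(G)), 3))

# Columns = nonzero points of F_2^4; row 0 is all ones (odd weight),
# rows 1..4 evaluate the coordinate functions x_1..x_4.
points = [p for p in itertools.product([0, 1], repeat=4) if any(p)]
G = [[1] * 15] + [[p[i] for p in points] for i in range(4)]
assert is_triorthogonal(G, k=1)
```

Here $k = 1$: the single odd weight row supports the one logical qubit of the 15-qubit code.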

### II.2 Triorthogonal Spaces and Punctured Triorthogonal Matrices

Let us define a “triorthogonal subspace” to be a subspace $\mathcal S \subseteq \mathbb F_2^n$ such that for any $u, v, w \in \mathcal S$ (not necessarily distinct), we have $|u \wedge v \wedge w| \equiv 0 \pmod 2$. Given a triorthogonal matrix $G$, the vector space $\mathcal G_0$ is a triorthogonal space. Thus, any matrix whose rows span a triorthogonal space is a triorthogonal matrix with $k = 0$. However, if $k > 0$, then the span of the rows of $G$ is not a triorthogonal space.

In this regard, we note the following. Let $G$ be an arbitrary triorthogonal matrix of the form

$$G = \begin{pmatrix} G_1 \\ G_0 \end{pmatrix}, \tag{II.4}$$

where $G_1$ is $k$-by-$n$ (and contains the odd weight rows of $G$) and $G_0$ is $(m-k)$-by-$n$ (and contains the even weight rows of $G$). Consider the matrix

$$\hat G = \begin{pmatrix} I & G_1 \\ 0 & G_0 \end{pmatrix}, \tag{II.5}$$

where $I$ denotes a $k$-by-$k$ identity matrix and $0$ denotes the zero matrix of size $(m-k)$-by-$k$. This matrix is a triorthogonal matrix with all rows having even weight, and its row span defines a triorthogonal space $\hat{\mathcal S}$. Thus, from a triorthogonal matrix, we can construct a triorthogonal space by adding $k$ additional coordinates to the vectors and padding the matrix as above.

We now show a converse direction, based on the idea of puncturing a code. Given any subspace $\hat{\mathcal S}$ of dimension $m$, there exists a matrix $\hat G$ whose rows form a basis of $\hat{\mathcal S}$ (after possibly permuting the coordinates of the space) such that

$$\hat G = \begin{pmatrix} I & A \end{pmatrix}$$

for some matrix $A$, where $I$ is an $m$-by-$m$ identity matrix. Such a matrix in the reduced row echelon form is unique once an ordering of coordinates is fixed, and can be computed by Gauss elimination from any spanning set for $\hat{\mathcal S}$. Choose any $k$ such that $k \le m$, and delete the $k$ coordinates corresponding to the first $k$ columns. Let $G_1$ be the first $k$ rows of the resulting matrix and let $G_0$ be the remaining $m - k$ rows. Deleting a coordinate on which only the $a$-th row is supported flips the weight parity of row $a$ and leaves all pairwise and triple-wise overlaps between distinct rows unchanged; hence, if $\hat{\mathcal S}$ is a triorthogonal space, the matrix

$$G = \begin{pmatrix} G_1 \\ G_0 \end{pmatrix}$$

is a triorthogonal matrix. We say that this matrix is obtained by “puncturing” the previous code on the given coordinates. By the uniqueness of the reduced row echelon form, the matrices $G_1$ and $G_0$ are determined by $\hat{\mathcal S}$, $k$, and the ordering of the coordinates.
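The reduced row echelon form and the coordinate deletion can be sketched as follows (our own plain-Python illustration; function names are ours). The example only demonstrates the mechanics of Gauss elimination and puncturing; obtaining a triorthogonal matrix additionally requires starting from a triorthogonal space.

```python
def rref_gf2(rows):
    # reduced row echelon form over F_2; returns (nonzero rows, pivot columns)
    R = [list(r) for r in rows]
    pivots, r = [], 0
    for c in range(len(R[0])):
        piv = next((i for i in range(r, len(R)) if R[i][c]), None)
        if piv is None:
            continue
        R[r], R[piv] = R[piv], R[r]
        for i in range(len(R)):
            if i != r and R[i][c]:
                R[i] = [x ^ y for x, y in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R[:r], pivots

def puncture(rows, coords):
    # delete the given coordinates from every row
    keep = [j for j in range(len(rows[0])) if j not in set(coords)]
    return [[row[j] for j in keep] for row in rows]

# the third row is the sum of the first two, so the rank is 2
basis, piv = rref_gf2([[1, 1, 0, 1], [0, 1, 1, 1], [1, 0, 1, 0]])
assert basis == [[1, 0, 1, 0], [0, 1, 1, 1]] and piv == [0, 1]
assert puncture(basis, piv) == [[1, 0], [1, 1]]
```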

This idea of padding is related to the following protocol for distillation Fowler *et al.* (2012).
We consider $k = 1$ for the moment, but a generalization to a larger $k$ is straightforward.
Observe that on a Bell pair $|00\rangle + |11\rangle$ (we ignore global normalization factors),
the action of $T$ on the first qubit is the same as on the second: $(T \otimes I)(|00\rangle + |11\rangle) = (I \otimes T)(|00\rangle + |11\rangle)$.
Once we have $|00\rangle + e^{i\pi/4}|11\rangle$, suppose we measure out the second qubit onto $|+\rangle$.
The state on the first qubit is then the magic state $|0\rangle + e^{i\pi/4}|1\rangle$.
If we instead measure the second qubit in the $|-\rangle$ state, we can apply a Pauli correction
to bring the first qubit to the desired magic state.
If the second qubit of this Bell pair is a logical qubit of a code,
where the logical $T$ can be fault-tolerantly implemented,
then the above observation enables fault-tolerant creation of the magic state.

The protocol is thus as follows. Consider a triorthogonal code defined by some matrix $G$; for brevity, we also refer to this code as $G$ below. Let $\hat{\mathcal S}$ be the space obtained by padding $G$ as above. (i) Create a Bell pair where the second qubit is embedded in the code $G$. The Bell pair is the eigenstate of $X \otimes \bar X$ and $Z \otimes \bar Z$, which is simply the state stabilized by $X(v)$ for any $v$ in the triorthogonal space $\hat{\mathcal S}$, and by $Z(u)$ for any $u$ in $\hat{\mathcal S}^\perp$. Thus, this step can be implemented by a circuit consisting of control-NOTs. This circuit can be thought of as the preparation circuit of the superposition of all classical code words of $\hat{\mathcal S}$. (ii) Apply the transversal gate $T^{\otimes n}$ on the $n$ qubits of $G$, followed by possible Clifford corrections; these Clifford corrections are either phase gates or control-$Z$s Bravyi and Haah (2012). (iii) Project the logical qubit of the code onto a $|+\rangle$ or $|-\rangle$ state. This step can be done simply by measuring individual qubits of the code in the $X$ basis without inverse-encoding, and classical post-processing. The reason is that the $X$ operator on individual qubits commutes with the logical $\bar X$ of the code, and hence after the measurement, the state of the qubits that comprised the code is some eigenstate of the logical $\bar X$ operator. The eigenvalue of this logical $\bar X$ can be inferred by taking the parity of the measurement outcomes, and if necessary we apply a Pauli correction to the magic state on the other side of the initial Bell pair. The eigenvalues of the $X$-type stabilizers of the code can also be checked similarly, and we post-select on these being in the $+1$ state.

This protocol is particularly simple to describe in the case that the matrix $G$ is obtained by puncturing some triorthogonal subspace $\hat{\mathcal S}$ on some set of coordinates. Then the protocol is: prepare the superposition of all classical code words of $\hat{\mathcal S}$, then apply a transversal $T$ gate on all unpunctured coordinates (followed possibly by a Clifford correction), then measure all unpunctured coordinates in the $X$ basis, and finally, if classical post-processing shows that all stabilizers are in the $+1$ state, the punctured coordinates are in the desired magic state (up to a Pauli correction which is determined by the classical post-processing).

This protocol is different from preparing encoded $|+\rangle$ states, applying logical $T$, and inverse-encoding, in that the Clifford depth is smaller. The only Clifford cost is in the initial preparation of the pre-puncture stabilizer state, and the Clifford correction after $T^{\otimes n}$. The Clifford correction after $T^{\otimes n}$ is absent if the pre-puncture code is triply even.

### II.3 Generalized Triorthogonal Matrices: T-to-CCZ Distillation

Let us now generalize the definition of triorthogonal matrices. This definition has some similarity to the “synthillation” protocols of Ref. Campbell and Howard, 2017. Our definition is a special case in that we consider only codes that distill $T$-gates, controlled-$S$ gates, and $CCZ$ gates, rather than arbitrary diagonal gates at the third level of the Clifford hierarchy. On the other hand, we will present codes of arbitrary distance, rather than just distance two.

###### Definition 1.

An $m$-by-$n$ binary matrix $G$ is generalized triorthogonal if it can be written, up to permutations of rows, as

$$G = \begin{pmatrix} G_T \\ G_{CS} \\ G_{CCZ} \\ G_0 \end{pmatrix}, \tag{II.6}$$

where $G_T$ has $k_T$ rows, $G_{CS}$ has $k_{CS}$ pairs of rows, and $G_{CCZ}$ has $k_{CCZ}$ triples of rows such that, writing $g_a$ for the rows of $G$,

$$\begin{aligned} |g_a| &\equiv 1 \pmod 2 \ \text{ iff } g_a \text{ is a row of } G_T,\\ |g_a \wedge g_b| &\equiv 1 \pmod 2 \ \text{ iff } \{g_a, g_b\} \text{ is one of the } k_{CS} \text{ pairs of } G_{CS} \quad (a \neq b),\\ |g_a \wedge g_b \wedge g_c| &\equiv 1 \pmod 2 \ \text{ iff } \{g_a, g_b, g_c\} \text{ is one of the } k_{CCZ} \text{ triples of } G_{CCZ}. \end{aligned} \tag{II.7}$$

Such a generalized triorthogonal matrix can be used to distill $T$-gates into $T$-gates, controlled-$S$ gates, and CCZ gates, where the CCZ gate is a controlled-controlled-$Z$ gate which is conjugate to the Toffoli gate by Clifford operations. Define a quantum code on $n$ qubits. Take $X$-type stabilizers of the quantum code which correspond to rows of $G_0$ (i.e., for each row of $G_0$, there is a generator of the stabilizer group which is a product of Pauli $X$ on all qubits for which there is a $1$ entry in that row of $G_0$). For each row of $G_T$, $G_{CS}$, and $G_{CCZ}$ there is one logical qubit, with logical $X$-type operators corresponding to the row. The corresponding $Z$-type logical operators can be determined by the requirement that they commute with the $X$-type stabilizers and by the commutation relations for logical $X$ and $Z$ operators. Finally, the $Z$-type stabilizers of the code are the maximal set of $Z$-type operators that commute with all logical operators and $X$-type stabilizers. It is easy to show, by generalizing the arguments of Ref. Bravyi and Haah, 2012, that applying a $T$-gate to every qubit will apply $T$-gates to the logical qubits corresponding to rows of $G_T$, will apply controlled-$S$ gates to each pair of logical qubits corresponding to a pair of rows of $G_{CS}$, and will apply CCZ gates to each triple of logical qubits corresponding to a triple of rows of $G_{CCZ}$, up to an overall Clifford operation on the logical qubits. Input errors are detected up to an order given by the distance of the code, where the distance of a generalized triorthogonal matrix is defined to be the minimum weight of a nonzero vector $v$ such that $v \in \mathcal G_0^\perp$ and such that $v \notin (\mathcal G_T + \mathcal G_{CS} + \mathcal G_{CCZ} + \mathcal G_0)^\perp$, with $\mathcal G_T, \mathcal G_{CS}, \mathcal G_{CCZ}, \mathcal G_0$ being the row spans of $G_T, G_{CS}, G_{CCZ}, G_0$ respectively.
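The mechanism behind this claim is the weight identity $\left|\sum_a x_a g_a\right| \equiv \sum_a |g_a| x_a - 2\sum_{a<b} |g_a \wedge g_b|\, x_a x_b + 4\sum_{a<b<c} |g_a \wedge g_b \wedge g_c|\, x_a x_b x_c \pmod 8$ (the sum inside $|\cdot|$ is over $\mathbb F_2$): odd single weights contribute $T$ phases, odd pair overlaps contribute controlled-$S$ phases, and odd triple overlaps contribute CCZ phases. A quick numerical check of this identity (our own sketch in plain Python; the quadruple-overlap terms of the inclusion-exclusion carry a factor $(-2)^3 = -8$ and vanish mod 8):

```python
import itertools, random

def wedge_weight(rows):
    # weight of the entrywise product of a tuple of binary rows
    return sum(all(bits) for bits in zip(*rows))

rng = random.Random(2)
for _ in range(100):
    G = [[rng.randint(0, 1) for _ in range(16)] for _ in range(5)]
    x = [rng.randint(0, 1) for _ in range(5)]
    sel = [g for g, xa in zip(G, x) if xa]
    # weight of the F_2 sum of the selected rows, mod 8
    acc = [0] * 16
    for g in sel:
        acc = [a ^ b for a, b in zip(acc, g)]
    lhs = sum(acc) % 8
    # inclusion-exclusion truncated at triple overlaps
    rhs = sum((-2) ** (s - 1) * wedge_weight(c)
              for s in (1, 2, 3)
              for c in itertools.combinations(sel, s))
    assert lhs == rhs % 8
```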

### II.4 Space-Time Tradeoff for Triorthogonal Codes

We now briefly discuss a way of reducing the space required in any protocol based on a triorthogonal code, at the cost of increasing circuit depth. Consider a code on $n$ qubits with a total of $k$ logical qubits ($k = k_T + 2k_{CS} + 3k_{CCZ}$ in the generalized case), a total of $s_X$ $X$-type stabilizer generators, and $s_Z$ $Z$-type stabilizer generators. The number $s_X$ is equal to the number of rows of $G_0$. The usual protocol to prepare magic states is to first initialize the logical qubits in the $|+\rangle$ state, encode, then apply transversal $T$, measure stabilizers, and, if no error is found, finally decode, yielding the desired magic states. It is possible to implement this protocol using only $k + s_X$ total qubits as follows.

The idea is to always work on the unencoded state, but we instead spread potential errors so that we can detect them. Recall that encoding is done by preparing a total of $s_Z$ ancilla qubits in the $|0\rangle$ state (call these the $|0\rangle$ ancilla qubits), a total of $s_X$ ancilla qubits in the $|+\rangle$ state (call these the $|+\rangle$ ancilla qubits), and applying a Clifford. Call this Clifford $U$. Then, an equivalent protocol is: prepare a total of $s_Z$ ancilla qubits in the $|0\rangle$ state, a total of $s_X$ ancilla qubits in the $|+\rangle$ state, and apply $U^\dagger T^{\otimes n} U$, then measure whether all the $|+\rangle$ ancilla qubits are still in the $|+\rangle$ state. (There is no need to check the $|0\rangle$ ancilla qubits since our error model has only $Z$ errors after twirling.)

The operator $U^\dagger T^{\otimes n} U$ is equal to $\prod_{j=1}^n U^\dagger T_j U \propto \prod_{j=1}^n e^{-i\pi P_j/8}$, where $P_j = U^\dagger Z_j U$, which is a product of Pauli operators. Let $P_j = Q_j \otimes R_j$, where $Q_j$ is a product of Pauli operators on some set of logical qubits (which are not embedded in a code space!) and $|+\rangle$ ancilla qubits, and $R_j$ is a product of Pauli operators on some set of $|0\rangle$ ancilla qubits that acts trivially on the $|0\rangle$ state. Since the $|0\rangle$ ancilla qubits remain in the $|0\rangle$ state throughout the protocol, an equivalent protocol involving only $k + s_X$ total qubits is: prepare a total of $s_X$ ancilla qubits in the $|+\rangle$ state, and apply $\prod_j e^{-i\pi Q_j/8}$, then measure whether all the ancilla qubits are still in the $|+\rangle$ state. Note that although the product over $j$ ranges from $1$ to $n$, there are only $k + s_X$ physical qubits.

Each factor $e^{-i\pi Q_j/8}$ can be applied by a sequence consisting of a Clifford, a $T$ gate, and the inverse of the Clifford. If a subset of the $Q_j$ consists of (multiplicatively) independent operators, then we can apply these operators simultaneously by finding a Clifford that conjugates each of the operators to distinct single-qubit Pauli operators. In the best situation, we can obtain a protocol using $k + s_X$ total qubits, that requires several rounds of Cliffords and parallel $T$-gates. While the $T$-depth of the circuit is larger than in the original protocol, the total circuit depth may or may not increase: if the Cliffords are implemented by elementary CNOT gates, then the circuit depth depends upon the depth required to implement the various encoding and decoding operations. Other tradeoffs are possible by varying the number of ancillas that are kept: keeping all ancillas is the original protocol with minimal depth and maximal space, while reducing the number will reduce space at the cost of increased depth.

A $Z$ error on a $T$ gate will propagate due to the Cliffords. Specifically, a Clifford that maps a single-qubit $Z$ to $Q_j$ will map such an error to a $Q_j$ error, but the error will not further be affected by the other factors $e^{-i\pi Q_{j'}/8}$ since the $Q_{j'}$ all commute (being conjugates of the commuting $Z_{j'}$). The accumulated error will flip some $|+\rangle$ ancilla qubits as well as the logical qubits that would be flipped in the usual protocol. The association from the errors in $T$ gates to the logical and ancilla qubits is identical to the usual protocol. Hence, in the present space-time tradeoff, the output error probability and the success probability are identical to the usual protocol, whenever the error model is such that only $T$ gates suffer from errors.

For example, for the protocols below based on Reed-Muller codes, as well as for the protocols based on punctured Reed-Muller codes, the number of physical qubits required is much smaller than $n$, leading in both cases to a large reduction in the space required.

## III Randomized Construction of Triorthogonal and Generalized Triorthogonal Matrices

We now give a randomized algorithm that either returns a triorthogonal or generalized triorthogonal matrix with the desired parameters, or returns failure. For notational simplicity, we begin with the case of $k_{CS} = k_{CCZ} = 0$, i.e., a triorthogonal matrix. We then explain at the end how to construct generalized triorthogonal matrices by a straightforward generalization of this algorithm.

### III.1 Randomized Construction of Triorthogonal Matrices

The matrix $G$ is constructed as follows. We construct the rows of the matrix iteratively, choosing each row uniformly at random subject to constraints given by previous rows. More precisely, when choosing the $i$-th row of the matrix, we choose the row uniformly at random subject to (i) the constraint (II.1) for $b = i$ and for all $a < i$, (ii) the constraint (II.2) for $c = i$ and for all $a < b < i$, and (iii) the constraint that the row has either even or odd weight depending on whether it is one of the first $k$ rows of $G$ or not. If it is not possible to satisfy all these constraints, then we terminate the construction and declare failure. Otherwise, we continue the algorithm. If we are able to satisfy the constraints for all $m$ rows of $G$, we return the resulting matrix; in this case, we say that the algorithm “succeeds.”

Note that all of these constraints that enter into choosing the $i$-th row are linear constraints on the entries of the row. Eq. (II.1) gives $i - 1$ constraints while Eq. (II.2) gives $\binom{i-1}{2}$ constraints (the constraints need not be independent). We can express these constraints as follows: let $g_a$ denote the $a$-th row vector of $G$. Then, let $M^{(i)}$ be an $N_i$-by-$n$ matrix, with $N_i = i - 1 + \binom{i-1}{2} + 1$, with the first $i - 1$ rows of $M^{(i)}$ being equal to the first $i - 1$ rows of $G$. The next $\binom{i-1}{2}$ rows of $M^{(i)}$ are the vectors $g_a \wedge g_b$ for $a < b < i$. The last row of $M^{(i)}$ is the all 1s vector $\vec 1$. The constraints on $g_i$ can then be written as

$$M^{(i)} g_i^T = (0, \ldots, 0, 1)^T \tag{III.1}$$

for $i \le k$ and

$$M^{(i)} g_i^T = (0, \ldots, 0)^T \tag{III.2}$$

for $i > k$. If $\vec 1$ is in the span of the first $N_i - 1$ rows of $M^{(i)}$, then the constraints (III.1) have no solution; otherwise, the constraints have a solution. Let $\mathcal M_i$ denote the row span of $M^{(i)}$, and let $\mathcal M = \mathcal M_{k+1}$ denote this span once all $k$ odd weight rows have been chosen; then, for $i > k$, the constraint (III.2) is equivalent to requiring that $g_i \in \mathcal M_i^\perp$.
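For small sizes the row-by-row construction is easy to play with; the sketch below (our own illustration, plain Python) uses rejection sampling in place of directly solving the linear systems (III.1)-(III.2), which is adequate only when the number of constraints is small. The parameters are arbitrary.

```python
import itertools, random

def wedge_wt(*rows):
    # weight of the entrywise product of the given rows
    return sum(all(bits) for bits in zip(*rows))

def random_triorthogonal(m, k, n, seed=0, tries=20000):
    # Row-by-row construction; rejection sampling stands in for solving
    # the linear constraints exactly.
    rng = random.Random(seed)
    rows = []
    for i in range(m):
        parity = 1 if i < k else 0  # first k rows have odd weight
        for _ in range(tries):
            v = [rng.randint(0, 1) for _ in range(n)]
            if sum(v) % 2 != parity:
                continue
            if any(wedge_wt(v, g) % 2 for g in rows):
                continue  # pair condition (II.1)
            if any(wedge_wt(v, ga, gb) % 2
                   for ga, gb in itertools.combinations(rows, 2)):
                continue  # triple condition (II.2)
            rows.append(v)
            break
        else:
            return None  # declare failure, as in the text
    return rows

G = random_triorthogonal(m=4, k=1, n=32)
assert G is not None and sum(G[0]) % 2 == 1
```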

We now analyze the probability that the algorithm succeeds, returning a matrix $G$. We also analyze the distance of $G$. Our goal is to show a lower bound on the probability that the distance is at least $d$, for some $d$. The analysis of the distance is based on the first moment method: We estimate the probability that a given vector $v$ is in $\mathcal G_0^\perp \setminus \mathcal G^\perp$. We then sum this probability over all choices of $v$ such that $|v| < d$ and bound the result.

Let $v$ be a given vector with $v \neq 0$ and $v \notin \mathcal M_i$. Let us first compute the probability that an even weight row $g_i$ obeys $g_i \cdot v = 0$, conditioned on the algorithm succeeding. Since $v \notin \mathcal M_i$, the condition $g_i \cdot v = 0$ is linearly independent of the constraints defining $g_i$. Hence,

$$\Pr[\,g_i \cdot v = 0\,] = \frac{1}{2}, \tag{III.3}$$

since the condition $g_i \cdot v = 0$ is independent of the constraint $M^{(i)} g_i^T = 0$. Note that success of the algorithm depends only on the choices of the odd weight rows, and the even weight rows are chosen after the odd weight rows, so that the choice of $g_i$ does not affect success. So,

$$\Pr[\,v \perp \mathcal G_0 \text{ and the algorithm succeeds}\,] \le 2^{-(m-k)}\, \Pr[\text{success}]. \tag{III.4}$$

Now consider the probability that the algorithm succeeds and $v \in \mathcal M$. As a warm-up, we consider the probability that the algorithm succeeds and that some vector with small Hamming weight is in $\mathcal G_1$. We will use big-O notation from here on, considering the asymptotics of large $n$. Let $H(x) = -x \log_2 x - (1-x)\log_2(1-x)$ be the binary entropy function.

###### Lemma 1.

Consider any fixed nonzero $v$. Then, the probability that the algorithm succeeds and that $v$ is in $\mathcal G_1$ is bounded as:

$$\Pr[\,\text{success and } v \in \mathcal G_1\,] \le 2^k \cdot 2^{-\left(n - k - \binom{k}{2} - 1\right)}. \tag{III.5}$$

Further, let $\delta > 0$ be a constant, and let $k$ be such that $k \le c \sqrt n$ for a constant $c > 0$ with $H(\delta) < 1 - c^2/2$. Then, we have

$$\Pr[\,\text{success and } \exists v \in \mathcal G_1,\ v \neq 0,\ |v| \le \delta n \text{ or } |v| \ge n - \delta n\,] = o(1). \tag{III.6}$$

(The above equation means that there is some function $f(n) = o(1)$ which bounds the probability that there exists a nonzero $v \in \mathcal G_1$ with $|v| \le \delta n$ or $|v| \ge n - \delta n$.)

###### Proof.

Suppose $v$ is in $\mathcal G_1$. Then, $v = \sum_{a \le k} c_a g_a$ for some coefficients $c_a \in \{0, 1\}$. We consider each of the possible nonzero choices of the vector $c$ and bound the probability that, for the given choice of $c$, $v = \sum_a c_a g_a$ for $g_a$ chosen by the algorithm. For a given choice of nonzero $c$, let $i$ be the largest $a$ such that $c_a \neq 0$. The vector $g_i$ is chosen randomly subject to at most $k + \binom{k}{2} + 1$ constraints. Hence, for given $v$ and given $g_a$, $a < i$, the probability that $g_i = v + \sum_{a < i} c_a g_a$ is bounded by $2^{-\left(n - k - \binom{k}{2} - 1\right)}$. There are $2^k - 1$ possible choices of $c$. Summing over these choices, Eq. (III.5) follows.

By a first moment bound, the probability that there is a nonzero vector of weight at most $\delta n$ in $\mathcal G_1$ is bounded by

$$2^{n H(\delta)} \cdot 2^k \cdot 2^{-\left(n - k - \binom{k}{2} - 1\right)}.$$

Similarly, the probability that there is a vector with weight at least $n - \delta n$ in $\mathcal G_1$ is bounded by the same quantity. For $k \le c\sqrt n$, the exponent reads $-n\left(1 - H(\delta) - c^2/2 - o(1)\right)$. The number of vectors with weight at most $\delta n$, or at least $n - \delta n$, is $O(2^{nH(\delta)})$. By choosing $\delta$ such that $H(\delta) < 1 - c^2/2$, the first moment bound gives a result which is exponentially small. We can instead choose $\delta = 1/2 - \epsilon(n)$, where $\epsilon(n)$ is some positive function going to zero, so that $H(\delta) \to 1$, and the first moment bound gives a result which is $o(1)$. ∎

###### Lemma 2.

Let $\delta$ and $k \le c\sqrt n$ be chosen as in Lemma 1. Let $\epsilon > 0$ be a constant. Then, the probability that the algorithm succeeds and that the (classical) minimum distance of $\mathcal M$ is smaller than $\epsilon n$ is at most

$$O\!\left(2^{n H(\epsilon)} \cdot 2^{\binom{k+1}{2} + 1} \cdot 2^{-\left(\delta n - k - \binom{k}{2} - 1\right)}\right) + o(1).$$

For sufficiently small $\epsilon$, there are $c, \delta$ such that this expression tends to zero for large $n$.

###### Proof.

We say that $\mathcal G_1$ has good distance if all nonzero vectors $v \in \mathcal G_1$ have $\delta n \le |v| \le n - \delta n$, where the $\delta$ term is from Lemma 1. By Eq. III.6, the probability that the algorithm succeeds and that $\mathcal G_1$ does not have good distance is $o(1)$.

Let $v \neq 0$ be given. We now bound the probability that the algorithm succeeds and that $\mathcal G_1$ has good distance and that $v \in \mathcal M$.

If $v \in \mathcal M$, then for some $k$-by-$k$ binary upper triangular matrix $B$ and for some $z \in \{0, 1\}$, we have

$$v = \sum_{a \le b \le k} B_{ab}\, g_a \wedge g_b + z \vec 1. \tag{III.7}$$

(Note that $g_a \wedge g_a = g_a$, so the diagonal of $B$ accounts for the rows themselves.)

We consider each of the possible nonzero choices of the matrix $B$ and each of the two choices of $z$, and bound the probability that Eq. (III.7) holds for the given choice.

Suppose $B \neq 0$ (the case $B = 0$ follows from this case by considering the vector $v + \vec 1$). For a given choice of $B$, let $i$ be the largest $b$ such that $B_{ab} \neq 0$ for some $a \le b$. Let $g_a$, $a < i$, be given; we compute the probability that $g_i$ is such that Eq. (III.7) holds. Eq. (III.7) imposes an inhomogeneous linear constraint on $g_i$ as

$$w \wedge g_i = u, \tag{III.8}$$

where

$$w = B_{ii} \vec 1 + \sum_{a < i} B_{ai}\, g_a, \qquad u = v + z \vec 1 + \sum_{a \le b < i} B_{ab}\, g_a \wedge g_b.$$

Assuming $\mathcal G_1$ has good distance, we have $|w| \ge \delta n$. Then, the linear constraint Eq. (III.8) has rank at least $\delta n$; in fact, it fixes at least $\delta n$ components of $g_i$. The vector $g_i$ is chosen randomly subject to at most $k + \binom{k}{2} + 1$ linear constraints. Hence, the probability that Eq. (III.8) holds is at most $2^{-\left(\delta n - k - \binom{k}{2} - 1\right)}$. Summing over all choices of $B$ and $z$, the probability that the algorithm succeeds and that $\mathcal G_1$ has good distance and that $v \in \mathcal M$ is bounded by

$$2^{\binom{k+1}{2} + 1} \cdot 2^{-\left(\delta n - k - \binom{k}{2} - 1\right)}.$$

The number of vectors $v$ with $|v| \le \epsilon n$ is (for $\epsilon \le 1/2$)

$$\sum_{j \le \epsilon n} \binom{n}{j} = O\!\left(2^{n H(\epsilon)}\right). \tag{III.9}$$

Hence, by a first moment argument, the probability that the algorithm succeeds and that $\mathcal G_1$ has good distance and that $\mathcal M$ has distance smaller than $\epsilon n$ is

$$O\!\left(2^{n H(\epsilon)} \cdot 2^{\binom{k+1}{2} + 1} \cdot 2^{-\left(\delta n - k - \binom{k}{2} - 1\right)}\right).$$

We know $H(\epsilon) \to 0$ as $\epsilon \to 0$. For small enough $c$ we have $\binom{k+1}{2} + k + \binom{k}{2} = O(c^2 n)$. Hence this probability is $o(1)$ for sufficiently small $\epsilon$ and $c$. ∎

Finally,

###### Lemma 3.

Let $k \le c \sqrt n$. Then, for sufficiently small $c$, the algorithm succeeds with probability $1 - o(1)$.

###### Proof.

Suppose the algorithm fails on step $i$. Then, the first $i - 1$ steps of the algorithm succeed and the vector $\vec 1$ must be in the span of the $g_a$ and $g_a \wedge g_b$ for $a \le b < i$. The probability that this happens is $o(1)$, as we can see using the same proof as in Lemma 2. There is one minor modification to the proof: Eq. (III.7) is replaced by

$$\vec 1 = \sum_{a \le b < i} B_{ab}\, g_a \wedge g_b. \tag{III.10}$$

Also, there is no need to sum over vectors $v$ as instead we are considering the probability that a fixed vector $\vec 1$ is in the span. Otherwise, the proof is the same. ∎

Hence,

###### Theorem 1.

We can choose $k = \Theta(\sqrt n)$ and $d = \Theta(\sqrt n)$, so that with high probability the algorithm succeeds and the triorthogonal matrix has distance at least $d$.

###### Proof.

By Lemma 3, the algorithm succeeds with high probability for sufficiently small $c$. By Lemma 2, for sufficiently small $\epsilon$, for $k \le c \sqrt n$, the distance of $\mathcal M$ is at least $\epsilon n$ with high probability. Now we condition on the event that the algorithm succeeds and $\mathcal M$ has linear distance.

The distance of the triorthogonal matrix can be bounded by a first moment bound. Since $\mathcal M$ has linear distance, the event that $v \in \mathcal M$ for any nonzero $v$ of weight $O(\sqrt n)$ does not happen. Then, we can apply Eq. III.4 using the fact that for any constant $c' > 0$, the number of vectors with weight at most $c' \sqrt n$ is $2^{O(\sqrt n \log n)}$. So, for $m - k$ sufficiently large, the first moment bound implies that the probability that there is $v \in \mathcal G_0^\perp \setminus \mathcal G^\perp$ of weight at most $d$ is $o(1)$. ∎

Note that in this regime, the distillation efficiency Bravyi and Haah (2012), defined as $\gamma = \log(n/k)/\log(d)$, converges to $1$ as $n \to \infty$.

### III.2 Randomized Construction of Generalized Triorthogonal Matrices

The randomized construction of triorthogonal matrices above immediately generalizes to a randomized construction of generalized triorthogonal matrices. In the previous randomized construction, each vector was chosen at random subject to certain linear constraints. Note that Eqs. (III.2, III.1) have the same left-hand side but different right-hand sides. These constraints were homogeneous for row vectors in $G_0$ (i.e., Eq. (III.2) has the zero vector on the right-hand side) and inhomogeneous for row vectors in $G_1$ (i.e., Eq. (III.1) has one nonzero entry on the right-hand side). For a generalized triorthogonal matrix, we follow the same randomized algorithm as before except that we modify the constraints on the vectors $g_i$. The vectors will still be subject to linear constraints that $M^{(i)} g_i^T$ is equal to some fixed vector, with the same $M^{(i)}$ as before. However, the fixed vector is changed in the generalized algorithm to obey the definition of a generalized triorthogonal matrix. This modifies the success probability of the algorithm, but one may verify that the algorithm continues to succeed with high probability in the regime considered before.

## IV Reed-Muller Code Based Distillation

In Refs. Eastin, 2013; Jones, 2013b, a construction was presented to distill a single Toffoli gate from eight $T$ gates, so that any single error in the $T$ gates is detected. More quantitatively, if the input $T$ gates have error probability $\epsilon$, the output Toffoli has error probability $O(\epsilon^2)$.

In this section, we present alternatives to these constructions using generalized triorthogonal codes based on Reed-Muller codes. The protocols of Refs. Eastin, 2013; Jones, 2013b will be similar to the smallest instances of our construction.

### IV.1 Review of classical Reed-Muller codes

The space of $\mathbb F_2$-valued functions over $m$ binary variables is a vector space of dimension $2^m$, and every such function can be identified with a polynomial in $x_1, \ldots, x_m$. We choose a bijection defined by

$$f \mapsto \left( f(p) \right)_{p \in \mathbb F_2^m}, \tag{IV.1}$$

where the right-hand side is the list of function values. In this bijection, the ordering of elements of $\mathbb F_2^m$ is implicit, but a different ordering is nothing but a different ordering of bits, and hence as a block-code it is immaterial. For example, the degree zero polynomial $1$ is a constant function, that corresponds to the all-1 vector of length $2^m$, and a degree 1 polynomial $x_i$ is a function that corresponds to a vector of length $2^m$ and weight $2^{m-1}$. Since the variables are binary, we have $x_i^2 = x_i$, and every polynomial function is a unique sum of monomials where each variable has exponent 0 or 1.

For an integer $r \ge 0$ the Reed-Muller code $\mathrm{RM}(r, m)$ is defined to be the set of all polynomials (modulo the ideal generated by $x_i^2 - x_i$) of degree at most $r$, expressed as the lists of function values:

$$\mathrm{RM}(r, m) = \left\{ \left( f(p) \right)_{p \in \mathbb F_2^m} \,:\, \deg f \le r \right\}. \tag{IV.2}$$

By definition, $\mathrm{RM}(r, m) \subseteq \mathrm{RM}(r + 1, m)$. For example, $\mathrm{RM}(0, m)$ is the repetition code of length $2^m$. A basis of $\mathrm{RM}(r, m)$ consists of monomials that are products of at most $r$ distinct variables. Hence, the number of encoded (classical) bits in $\mathrm{RM}(r, m)$ is equal to $\sum_{i=0}^{r} \binom{m}{i}$. The code distance of $\mathrm{RM}(r, m)$ is $2^{m-r}$, which can be proved by induction in $m$.

A property we make routine use of is that whenever a polynomial does not contain the monomial $x_1 x_2 \cdots x_m$ (the product of all variables), the corresponding vector of length $2^m$ has even weight. This allows us to see that the dual of a Reed-Muller code is again a Reed-Muller code, and direct dimension counting shows that

$$\mathrm{RM}(r, m)^\perp = \mathrm{RM}(m - r - 1, m). \tag{IV.3}$$
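These standard facts (dimension, distance, duality) are easy to confirm numerically for small parameters. The sketch below is our own illustration in plain Python; the helper names are not from the text.

```python
import itertools
from math import comb

def rm_generator(r, m):
    # rows = value lists of all monomials of degree <= r over F_2^m
    pts = list(itertools.product([0, 1], repeat=m))
    return [[int(all(p[i] for i in S)) for p in pts]
            for deg in range(r + 1)
            for S in itertools.combinations(range(m), deg)]

def span(rows):
    # all F_2 linear combinations (fine for small dimensions)
    words = {tuple([0] * len(rows[0]))}
    for g in rows:
        words |= {tuple(a ^ b for a, b in zip(w, g)) for w in words}
    return words

# dimension of RM(2,4) is C(4,0) + C(4,1) + C(4,2) = 11
assert len(rm_generator(2, 4)) == sum(comb(4, i) for i in range(3))
# distance of RM(1,3) is 2^(3-1) = 4
assert min(sum(w) for w in span(rm_generator(1, 3)) if any(w)) == 4
# RM(1,4)^perp = RM(2,4): all pairs of generators are orthogonal,
# and the dimensions add up: 5 + 11 = 16
for u in rm_generator(1, 4):
    for v in rm_generator(2, 4):
        assert sum(a & b for a, b in zip(u, v)) % 2 == 0
```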

In Reed-Muller codes, it is easy to consider the wedge product of two codes, which appears naturally in the triorthogonality. Namely, given two binary subspaces $\mathcal C$ and $\mathcal C'$, we define the wedge product as

$$\mathcal C \wedge \mathcal C' = \operatorname{span} \{ u \wedge v \,:\, u \in \mathcal C,\ v \in \mathcal C' \}, \tag{IV.4}$$

$$(u \wedge v)_j = u_j v_j. \tag{IV.5}$$

Since a code word of a Reed-Muller code is a list of function values, and the entry wise product of two value lists is the value list of the product polynomial, we see that

$$\mathrm{RM}(r, m) \wedge \mathrm{RM}(r', m) = \mathrm{RM}(r + r', m). \tag{IV.6}$$

It follows that $\mathrm{RM}(r, m)$ is a triorthogonal subspace if $3r < m$. (In fact, it is triply even.)
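As an illustration (our own sketch, plain Python): for $\mathrm{RM}(1, 4)$ we have $3 \cdot 1 < 4$, and a brute-force check over all $2^5$ code words confirms that every pairwise and triple-wise overlap is even, and that every code word weight is divisible by 8.

```python
import itertools

# basis of RM(1,4): the constant 1 and the four coordinate functions
pts = list(itertools.product([0, 1], repeat=4))
gens = [[1] * 16] + [[p[i] for p in pts] for i in range(4)]

words = {tuple([0] * 16)}
for g in gens:
    words |= {tuple(a ^ b for a, b in zip(w, g)) for w in words}

# pairwise and triple-wise overlaps are even since 2*1 < 4 and 3*1 < 4
for u, v in itertools.combinations(words, 2):
    assert sum(a & b for a, b in zip(u, v)) % 2 == 0
for u, v, w in itertools.combinations(words, 3):
    assert sum(a & b & c for a, b, c in zip(u, v, w)) % 2 == 0
# every code word weight is divisible by 8 (triply even)
assert all(sum(w) % 8 == 0 for w in words)
```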

Since a basis of Reed-Muller codes consists of monomials where each variable has exponent 0 or 1, it is often convenient to think of a monomial as a binary $m$-tuple that specifies which variables are factors of the monomial. For example, if $m = 3$, the constant function $1$ can be represented as $(0,0,0)$, the function $x_1$ can be represented as $(1,0,0)$, and the function $x_1 x_3$ can be represented as $(1,0,1)$. This $m$-tuple is called an indicator vector. (In contrast to what the name suggests, the “sum” of indicator vectors is not defined.) An indicator vector $d$ that defines a monomial corresponds to a code word $c(d)$. Under the wedge product of two code words, the corresponding two monomials are multiplied. In terms of indicator vectors, this amounts to taking the bit-wise OR operation which we denote by $\vee$:

$$c(d) \wedge c(d') = c(d \vee d'). \tag{IV.7}$$

For example, if $m = 3$, then $c((1,0,0)) \wedge c((1,0,1)) = c((1,0,1))$.
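Eq. (IV.7) can be checked mechanically by representing indicator vectors as bitmasks (our own sketch in plain Python; the function name is ours):

```python
import itertools

def eval_monomial(mask, m):
    # value list of the monomial prod_{i in mask} x_i over all points of F_2^m
    return [int(all((p >> i) & 1 for i in range(m) if (mask >> i) & 1))
            for p in range(2 ** m)]

m = 3
for a, b in itertools.product(range(2 ** m), repeat=2):
    va, vb = eval_monomial(a, m), eval_monomial(b, m)
    wedge = [x & y for x, y in zip(va, vb)]
    # wedge of code words  <->  bit-wise OR of indicator vectors
    assert wedge == eval_monomial(a | b, m)
```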

### IV.2 Triorthogonal codes for CCZ

Let $m$ be a multiple of 3. We consider building a generalized triorthogonal code on $2^m$ qubits, with $k_T = k_{CS} = 0$ but $k_{CCZ} > 0$. Since $\mathrm{RM}(m/3 - 1, m)$ is a triorthogonal subspace ($3(m/3 - 1) < m$), its generating matrix qualifies to be $G_0$. The $Z$-distance of the triorthogonal code is at least the distance of $\mathrm{RM}(m/3 - 1, m)^\perp = \mathrm{RM}(m - m/3, m)$, which is $2^{m/3}$. (In fact, it is exactly this.)

We choose $k_{CCZ}$ triples of logical operators specified by triples of indicator vectors $(d_1^{(a)}, d_2^{(a)}, d_3^{(a)})$. The generalized triorthogonality conditions can be summarized as follows.

$$\begin{aligned} |d_i^{(a)}| &< m, \\ |d_i^{(a)} \vee d_j^{(b)}| &< m, \\ |d_1^{(a)} \vee d_2^{(a)} \vee d_3^{(a)}| &= m \quad \text{for each triple } a, \\ |d_i^{(a)} \vee d_j^{(b)} \vee d_l^{(c)}| &< m \quad \text{unless } a = b = c \text{ and } \{i, j, l\} = \{1, 2, 3\}. \end{aligned} \tag{IV.8}$$

(A similar set of conditions for $T$ and controlled-$S$ logical gates should be straightforward.) We choose each indicator vector to have weight exactly $m/3$, so that the single and pairwise conditions above are automatically satisfied.

We will give three constructions of triples obeying these requirements. One construction will be analytic, one will be numerical, and one will be a randomized construction using the Lovász local lemma. It may be useful for the reader to think of an indicator vector as corresponding to a subset of some set $\Omega$ with $|\Omega| = m$. Then, a triple consists of three disjoint subsets of cardinality $m/3$ each.

The analytic construction is as follows:

(IV.9) |

where we labeled the triples by pairs of indices. Here, a concatenation of three bit strings of length $m/3$ denotes an indicator vector of length $m$, and $\bar s$ is the complement of a string $s$, so that $s \vee \bar s$ is the all 1s string. By construction, one can verify that $d_1 \vee d_2 \vee d_3 = (1, \ldots, 1)$ for any triple. Certain degenerate labels are excluded, for the triples to satisfy the other generalized triorthogonality conditions. Suppose that three rows of $G_{CCZ}$ are not all from the same triple. We need to check that the union of the corresponding indicator vectors does not cover all $m$ coordinates. The only potential violation occurs when two of the indicator vectors contain complementary strings on a common block, which happens only for the excluded labels.

In the particular case $m = 3$, this construction gives no valid triples. However, we can instead take a single triple of indicator vectors $(1,0,0), (0,1,0), (0,0,1)$, corresponding to polynomials $x_1, x_2, x_3$. The full generalized triorthogonal matrix is