In BKO we simply have:
-- NB: we drop/ignore terms from superpositions that have coeff == 0.
M1 |x1> => 3|y2>
M1 |x2> => 7|y1> + 6|y2>
M1 |x3> => |y1> + 4|y2>
M1 |x4> => |y1>
M1 |x5> => 6|y1> + 4|y2>
M1 |x6> => 4|y1> + 8|y2>
M1 |x7> => |y1> + 2|y2>
M2 |y1> => 6|z1> + 2|z2> + 7|z3> + 9|z4> + 5|z5>
M2 |y2> => 3|z2> + 4|z3> + |z5>

Now as matrices:
sa: matrix[M1]
[ y1 ] = [ 0     7.00  1.00  1.00  6.00  4.00  1.00 ] [ x1 ]
[ y2 ]   [ 3.00  6.00  4.00  0     4.00  8.00  2.00 ] [ x2 ]
                                                      [ x3 ]
                                                      [ x4 ]
                                                      [ x5 ]
                                                      [ x6 ]
                                                      [ x7 ]

sa: matrix[M2]
[ z1 ] = [ 6.00  0    ] [ y1 ]
[ z2 ]   [ 2.00  3.00 ] [ y2 ]
[ z3 ]   [ 7.00  4.00 ]
[ z4 ]   [ 9.00  0    ]
[ z5 ]   [ 5.00  1.00 ]

sa: matrix[M2,M1]
[ z1 ] = [ 6.00  0    ] [ 0     7.00  1.00  1.00  6.00  4.00  1.00 ] [ x1 ]
[ z2 ]   [ 2.00  3.00 ] [ 3.00  6.00  4.00  0     4.00  8.00  2.00 ] [ x2 ]
[ z3 ]   [ 7.00  4.00 ]                                              [ x3 ]
[ z4 ]   [ 9.00  0    ]                                              [ x4 ]
[ z5 ]   [ 5.00  1.00 ]                                              [ x5 ]
                                                                     [ x6 ]
                                                                     [ x7 ]

sa: merged-matrix[M2,M1]
[ z1 ] = [ 0      42.00  6.00   6.00  36.00  24.00  6.00  ] [ x1 ]
[ z2 ]   [ 9.00   32.00  14.00  2.00  24.00  32.00  8.00  ] [ x2 ]
[ z3 ]   [ 12.00  73.00  23.00  7.00  58.00  60.00  15.00 ] [ x3 ]
[ z4 ]   [ 0      63.00  9.00   9.00  54.00  36.00  9.00  ] [ x4 ]
[ z5 ]   [ 3.00   41.00  9.00   5.00  34.00  28.00  7.00  ] [ x5 ]
                                                            [ x6 ]
                                                            [ x7 ]

So I guess the take-home point is that it is easy to map back and forth between sw/BKO and matrices. Yeah, matrices are pretty to look at, but in general I find it easier to work with BKO. And certainly in the case of large sparse matrices, the sw format is far more efficient!
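To make the mapping concrete, here is a minimal Python sketch (not the actual BKO/sw implementation): each operator becomes a sparse map from input ket label to {output ket label: coeff}, and "merging" two operators is just sparse matrix multiplication. The names `M1`, `M2` and `merge` mirror the example above; the helper itself is hypothetical.

```python
# Sparse operator sketch: op[input_ket] = {output_ket: coeff, ...}
# Zero-coeff terms are simply absent, matching the NB above.
M1 = {
    'x1': {'y2': 3},
    'x2': {'y1': 7, 'y2': 6},
    'x3': {'y1': 1, 'y2': 4},
    'x4': {'y1': 1},
    'x5': {'y1': 6, 'y2': 4},
    'x6': {'y1': 4, 'y2': 8},
    'x7': {'y1': 1, 'y2': 2},
}
M2 = {
    'y1': {'z1': 6, 'z2': 2, 'z3': 7, 'z4': 9, 'z5': 5},
    'y2': {'z2': 3, 'z3': 4, 'z5': 1},
}

def merge(A, B):
    """Compose operators so that merge(A, B)|k> == A B |k>."""
    result = {}
    for k, sp in B.items():            # for each input ket of B
        out = {}
        for mid, c1 in sp.items():     # each intermediate ket
            for dest, c2 in A.get(mid, {}).items():
                out[dest] = out.get(dest, 0) + c1 * c2
        result[k] = out
    return result

M2M1 = merge(M2, M1)
print(M2M1['x3'])   # -> {'z1': 6, 'z2': 14, 'z3': 23, 'z4': 9, 'z5': 9}
```

Each column of merged-matrix[M2,M1] above can be read straight out of `M2M1`, with the zero entries dropped.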
Unlike standard matrices, where the input and output vectors usually aren't associated with labels, here they are.
eg, if we consider:
sa: matrix[M1,M2]
[    ] = [ 0  0  0  0  0 ] [ 6.00  0    ] [ y1 ]
                           [ 2.00  3.00 ] [ y2 ]
                           [ 7.00  4.00 ]
                           [ 9.00  0    ]
                           [ 5.00  1.00 ]

This is because the output from the M2 matrix is [z1,z2,z3,z4,z5], which has no knowledge of the M1 operator/matrix, since M1 has only been defined with respect to the |xn> kets.
The other thing to note is that M2 M1 does the right thing when applied to kets/superpositions.
sa: M2 M1 |x3>
6.000|z1> + 14.000|z2> + 23.000|z3> + 9.000|z4> + 9.000|z5>

sa: M2 M1 |x7>
6.000|z1> + 8.000|z2> + 15.000|z3> + 9.000|z4> + 7.000|z5>

which is exactly what we expect when we look at the merged M2 M1 matrix above.
Update: I think I should add a little more here. First up, the point I was trying to make above is that in standard maths, matrices are "anonymous". They don't care about the names of the incoming vector basis elements, nor the names of the outgoing vector basis elements. But in BKO there are no anonymous matrices at all! They are all defined with respect to kets. I guess in the neural network context this makes sense: matrices are defined with respect to particular neurons (ie, kets), rather than some unnamed collection of input neurons. This also means that if you apply a BKO matrix to the wrong superposition, you get the empty superposition as a result. eg, the matrix[M1,M2] example above.
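The "no anonymous matrices" point can be sketched in a couple of lines of Python. This is a hypothetical helper, not the actual BKO code: input kets the operator was never defined on simply contribute nothing, so the wrong basis yields the empty superposition.

```python
def apply(op, sp):
    """Apply a sparse operator {in_ket: {out_ket: coeff}} to a
    superposition {ket: coeff}. Unknown kets are silently skipped,
    and zero-coeff terms are dropped, as in BKO."""
    out = {}
    for k, c in sp.items():
        for dest, c2 in op.get(k, {}).items():  # unknown k -> no terms
            out[dest] = out.get(dest, 0) + c * c2
    return {k: v for k, v in out.items() if v != 0}

M1 = {'x1': {'y2': 3}, 'x2': {'y1': 7, 'y2': 6}}  # fragment of M1 above
print(apply(M1, {'z1': 5}))   # -> {} : M1 knows nothing about |z1>
print(apply(M1, {'x2': 1}))   # -> {'y1': 7, 'y2': 6}
```

So the operator carries its own basis labels with it, which is exactly why matrix[M1,M2] above came out empty.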
Also, I want to do another matrix example:
Say we have this matrix:
y = M x
[ y1 ]   [ 0  1  1  0 ] [ x1 ]
[ y2 ] = [ 4  0  2  3 ] [ x2 ]
[ y3 ]   [ 2  1  4  4 ] [ x3 ]
                        [ x4 ]

In BKO this is (yeah, in this example I included the 0-coeff kets too):
M |x1> => 0|y1> + 4|y2> + 2|y3>
M |x2> => |y1> + 0|y2> + |y3>
M |x3> => |y1> + 2|y2> + 4|y3>
M |x4> => 0|y1> + 3|y2> + 4|y3>

Now, let's show an example of matrix multiplication with a vector. Let's say x = (1,1,1,1); then we have:
sa: M (|x1> + |x2> + |x3> + |x4>)
2|y1> + 9|y2> + 11|y3>

ie, we interpret the resulting superposition as: y = (2,9,11).
Next, say x = (9,3,0,4):
sa: M (9|x1> + 3|x2> + 0|x3> + 4|x4>)
3|y1> + 48|y2> + 37|y3>

ie, y = (3,48,37).
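The same two matrix-vector products can be reproduced with the sparse-dict sketch from before. Again, this is a hedged illustration of the idea, not the real BKO engine; `apply` is a hypothetical helper that treats a superposition as a labelled vector.

```python
# The 3x4 matrix M above, stored column-by-column as a sparse operator.
# Zero entries are simply omitted, matching how BKO drops 0-coeff terms.
M = {
    'x1': {'y2': 4, 'y3': 2},
    'x2': {'y1': 1, 'y3': 1},
    'x3': {'y1': 1, 'y2': 2, 'y3': 4},
    'x4': {'y2': 3, 'y3': 4},
}

def apply(op, sp):
    """Matrix-vector product, phrased as operator applied to superposition."""
    out = {}
    for k, c in sp.items():
        for dest, c2 in op.get(k, {}).items():
            out[dest] = out.get(dest, 0) + c * c2
    return {k: v for k, v in out.items() if v != 0}

print(apply(M, {'x1': 1, 'x2': 1, 'x3': 1, 'x4': 1}))  # y = (2, 9, 11)
print(apply(M, {'x1': 9, 'x2': 3, 'x3': 0, 'x4': 4}))  # y = (3, 48, 37)
```

Note the |x3> term with coeff 0 in the second product contributes nothing, which is why the sparse representation can drop it without changing the answer.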