Saturday, 6 February 2016

learning a sequence using if-then machines

In the last post I claimed that we can easily learn sequences using if-then machines. This post gives a worked example of that.

Let's dive in:
context if-then machine learning a sequence

-- define our superpositions:
-- let's make them random 10 dimensional, with coeffs in range [0,20]
the |sp1> => absolute-noise[20] 0 range(|x: 1>,|x: 10>)
the |sp2> => absolute-noise[20] 0 range(|x: 1>,|x: 10>)
the |sp3> => absolute-noise[20] 0 range(|x: 1>,|x: 10>)
the |sp4> => absolute-noise[20] 0 range(|x: 1>,|x: 10>)
the |sp5> => absolute-noise[20] 0 range(|x: 1>,|x: 10>)
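
As an aside, here is a rough Python sketch of what these definitions amount to, assuming absolute-noise[20] acting on the zero superposition over |x: 1> .. |x: 10> simply assigns each basis ket a uniform random coefficient in [0, 20]. The names and the dict representation below are mine, purely for illustration:

import random

# model a superposition as a dict mapping ket labels to coefficients
def random_superposition(dim=10, max_coeff=20):
    # mimic: absolute-noise[20] 0 range(|x: 1>,|x: 10>)
    return {'x: %d' % i: random.uniform(0, max_coeff) for i in range(1, dim + 1)}

sp = {name: random_superposition() for name in ['sp1', 'sp2', 'sp3', 'sp4', 'sp5']}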

-- define our if-then machines:
-- ie, learn the sequence of superpositions
seq |node: 1: 1> => the |sp1>
then |node: 1: *> => the |sp2>

seq |node: 2: 1> => the |sp2>
then |node: 2: *> => the |sp3>

seq |node: 3: 1> => the |sp3>
then |node: 3: *> => the |sp4>

seq |node: 4: 1> => the |sp4>
then |node: 4: *> => the |sp5>

seq |node: 5: 1> => the |sp5>
then |node: 5: *> => |the finish line>
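
Continuing the Python sketch, a layer of if-then machines is just a list of (stored pattern, output) pairs, and one step of "then drop-below[0.9] similar-input[seq]" can be approximated as: find the best matching stored pattern, and if its similarity clears the threshold, return the corresponding output. Two caveats: my guess is that similar-input[op] uses a rescaled overlap (sum of pairwise minimums divided by the larger of the two coefficient sums), and the winner-take-all step below ignores the fact that the real then operator acts on everything that survives drop-below. Both are assumptions of the sketch, not claims about the actual code:

def simm(f, g):
    # assumed similarity: overlap of coefficients, rescaled so identical superpositions score 1
    keys = set(f) | set(g)
    overlap = sum(min(f.get(k, 0), g.get(k, 0)) for k in keys)
    norm = max(sum(f.values()), sum(g.values()))
    return overlap / norm if norm > 0 else 0

# one layer of if-then machines: seq patterns paired with their then outputs
machines = [
    (sp['sp1'], sp['sp2']),
    (sp['sp2'], sp['sp3']),
    (sp['sp3'], sp['sp4']),
    (sp['sp4'], sp['sp5']),
    (sp['sp5'], 'the finish line'),
]

def step(input_sp, threshold=0.9):
    # roughly: then drop-below[0.9] similar-input[seq] input
    pattern, output = max(machines, key=lambda m: simm(m[0], input_sp))
    if simm(pattern, input_sp) >= threshold:
        return output
    return None   # None plays the role of the empty ket |>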

-- define the input superposition:
the |input> => the |sp1>

-- see what we have:
table[node,coeff] 100 similar-input[seq] the |input>
+------+--------+
| node | coeff  |
+------+--------+
| 1: 1 | 100.0  |
| 5: 1 | 69.718 |
| 2: 1 | 65.306 |
| 3: 1 | 65.192 |
| 4: 1 | 62.993 |
+------+--------+

table[node,coeff] 100 similar-input[seq] then drop-below[0.9] similar-input[seq] the |input>
+------+--------+
| node | coeff  |
+------+--------+
| 2: 1 | 100    |
| 1: 1 | 65.306 |
| 3: 1 | 64.579 |
| 4: 1 | 62.829 |
| 5: 1 | 52.732 |
+------+--------+

table[node,coeff] 100 similar-input[seq] then drop-below[0.9] similar-input[seq] then drop-below[0.9] similar-input[seq] the |input>
+------+--------+
| node | coeff  |
+------+--------+
| 3: 1 | 100    |
| 5: 1 | 79.326 |
| 4: 1 | 73.162 |
| 1: 1 | 65.192 |
| 2: 1 | 64.579 |
+------+--------+

table[node,coeff] 100 similar-input[seq] then drop-below[0.9] similar-input[seq] then drop-below[0.9] similar-input[seq] then drop-below[0.9] similar-input[seq] the |input>
+------+--------+
| node | coeff  |
+------+--------+
| 4: 1 | 100    |
| 5: 1 | 76.359 |
| 3: 1 | 73.162 |
| 1: 1 | 62.993 |
| 2: 1 | 62.829 |
+------+--------+

table[node,coeff] 100 similar-input[seq] then drop-below[0.9] similar-input[seq] then drop-below[0.9] similar-input[seq] then drop-below[0.9] similar-input[seq] then drop-below[0.9] similar-input[seq] the |input>
+------+--------+
| node | coeff  |
+------+--------+
| 5: 1 | 100.0  |
| 3: 1 | 79.326 |
| 4: 1 | 76.359 |
| 1: 1 | 69.718 |
| 2: 1 | 52.732 |
+------+--------+

sa: then drop-below[0.9] similar-input[seq] then drop-below[0.9] similar-input[seq] then drop-below[0.9] similar-input[seq] then drop-below[0.9] similar-input[seq] then drop-below[0.9] similar-input[seq] the |input>
1.0|the finish line>
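
In terms of the toy Python sketch above, that whole chain is just five applications of step(), starting from sp1:

current = sp['sp1']          # the |input>
for _ in range(5):
    current = step(current)  # follow the sequence one link at a time
print(current)               # 'the finish line'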

-- finally, see what the ugly details look like:
sa: dump
----------------------------------------
|context> => |context: if-then machine learning a sequence>

the |sp1> => 12.363|x: 1> + 7.862|x: 2> + 4.541|x: 3> + 2.752|x: 4> + 15.782|x: 5> + 13.444|x: 6> + 8.522|x: 7> + 7.512|x: 8> + 11.056|x: 9> + 17.304|x: 10>
the |sp2> => 8.05|x: 1> + 4.543|x: 2> + 14.629|x: 3> + 3.443|x: 4> + 4.74|x: 5> + 1.059|x: 6> + 3.91|x: 7> + 17.714|x: 8> + 14.833|x: 9> + 11.074|x: 10>
the |sp3> => 19.846|x: 1> + 19.852|x: 2> + 19.825|x: 3> + 6.605|x: 4> + 10.253|x: 5> + 8.096|x: 6> + 1.937|x: 7> + 7.358|x: 8> + 12.041|x: 9> + 1.345|x: 10>
the |sp4> => 14.787|x: 1> + 10.035|x: 2> + 3.728|x: 3> + 16.038|x: 4> + 5.647|x: 5> + 3.857|x: 6> + 3.552|x: 7> + 7.227|x: 8> + 16.747|x: 9> + 1.412|x: 10>
the |sp5> => 18.044|x: 1> + 18.396|x: 2> + 8.64|x: 3> + 14.424|x: 4> + 19.749|x: 5> + 6.61|x: 6> + 7.26|x: 7> + 4.446|x: 8> + 9.583|x: 9> + 1.272|x: 10>

seq |node: 1: 1> => 12.363|x: 1> + 7.862|x: 2> + 4.541|x: 3> + 2.752|x: 4> + 15.782|x: 5> + 13.444|x: 6> + 8.522|x: 7> + 7.512|x: 8> + 11.056|x: 9> + 17.304|x: 10>
then |node: 1: *> => 8.05|x: 1> + 4.543|x: 2> + 14.629|x: 3> + 3.443|x: 4> + 4.74|x: 5> + 1.059|x: 6> + 3.91|x: 7> + 17.714|x: 8> + 14.833|x: 9> + 11.074|x: 10>

seq |node: 2: 1> => 8.05|x: 1> + 4.543|x: 2> + 14.629|x: 3> + 3.443|x: 4> + 4.74|x: 5> + 1.059|x: 6> + 3.91|x: 7> + 17.714|x: 8> + 14.833|x: 9> + 11.074|x: 10>
then |node: 2: *> => 19.846|x: 1> + 19.852|x: 2> + 19.825|x: 3> + 6.605|x: 4> + 10.253|x: 5> + 8.096|x: 6> + 1.937|x: 7> + 7.358|x: 8> + 12.041|x: 9> + 1.345|x: 10>

seq |node: 3: 1> => 19.846|x: 1> + 19.852|x: 2> + 19.825|x: 3> + 6.605|x: 4> + 10.253|x: 5> + 8.096|x: 6> + 1.937|x: 7> + 7.358|x: 8> + 12.041|x: 9> + 1.345|x: 10>
then |node: 3: *> => 14.787|x: 1> + 10.035|x: 2> + 3.728|x: 3> + 16.038|x: 4> + 5.647|x: 5> + 3.857|x: 6> + 3.552|x: 7> + 7.227|x: 8> + 16.747|x: 9> + 1.412|x: 10>

seq |node: 4: 1> => 14.787|x: 1> + 10.035|x: 2> + 3.728|x: 3> + 16.038|x: 4> + 5.647|x: 5> + 3.857|x: 6> + 3.552|x: 7> + 7.227|x: 8> + 16.747|x: 9> + 1.412|x: 10>
then |node: 4: *> => 18.044|x: 1> + 18.396|x: 2> + 8.64|x: 3> + 14.424|x: 4> + 19.749|x: 5> + 6.61|x: 6> + 7.26|x: 7> + 4.446|x: 8> + 9.583|x: 9> + 1.272|x: 10>

seq |node: 5: 1> => 18.044|x: 1> + 18.396|x: 2> + 8.64|x: 3> + 14.424|x: 4> + 19.749|x: 5> + 6.61|x: 6> + 7.26|x: 7> + 4.446|x: 8> + 9.583|x: 9> + 1.272|x: 10>
then |node: 5: *> => |the finish line>

the |input> => 12.363|x: 1> + 7.862|x: 2> + 4.541|x: 3> + 2.752|x: 4> + 15.782|x: 5> + 13.444|x: 6> + 8.522|x: 7> + 7.512|x: 8> + 11.056|x: 9> + 17.304|x: 10>
----------------------------------------
And if you follow the tables, it works exactly as expected. Note though that we chose our if-then machines to be exact matches (ie, 100%) to the input superposition. I did that for demonstration purposes. Now for a tweak on the above where that is not the case: we will use absolute-noise[1] to noisy up our superpositions:
-- define a new layer of input patterns seq2 (note, we don't need to (re)define the "then" operator, since we are using the same ones as above):
seq2 |node: 1: 1> => absolute-noise[1] the |sp1>
seq2 |node: 2: 1> => absolute-noise[1] the |sp2>
seq2 |node: 3: 1> => absolute-noise[1] the |sp3>
seq2 |node: 4: 1> => absolute-noise[1] the |sp4>
seq2 |node: 5: 1> => absolute-noise[1] the |sp5>

-- now put it to use:
table[node,coeff] 100 similar-input[seq2] the |input>
+------+--------+
| node | coeff  |
+------+--------+
| 1: 1 | 98.604 |
| 5: 1 | 70.492 |
| 3: 1 | 65.944 |
| 2: 1 | 65.777 |
| 4: 1 | 63.699 |
+------+--------+

table[node,coeff] 100 similar-input[seq2] then drop-below[0.9] similar-input[seq2] the |input>
+------+--------+
| node | coeff  |
+------+--------+
| 2: 1 | 98.698 |
| 1: 1 | 66.038 |
| 3: 1 | 65.322 |
| 4: 1 | 63.723 |
| 5: 1 | 53.78  |
+------+--------+

table[node,coeff] 100 similar-input[seq2] then drop-below[0.9] similar-input[seq2] then drop-below[0.9] similar-input[seq2] the |input>
+------+--------+
| node | coeff  |
+------+--------+
| 3: 1 | 98.775 |
| 5: 1 | 79.902 |
| 4: 1 | 73.249 |
| 1: 1 | 66.02  |
| 2: 1 | 65.681 |
+------+--------+

table[node,coeff] 100 similar-input[seq2] then drop-below[0.9] similar-input[seq2] then drop-below[0.9] similar-input[seq2] then drop-below[0.9] similar-input[seq2] the |input>
+------+--------+
| node | coeff  |
+------+--------+
| 4: 1 | 98.632 |
| 5: 1 | 76.244 |
| 3: 1 | 74.323 |
| 1: 1 | 64.251 |
| 2: 1 | 63.981 |
+------+--------+

table[node,coeff] 100 similar-input[seq2] then drop-below[0.9] similar-input[seq2] then drop-below[0.9] similar-input[seq2] then drop-below[0.9] similar-input[seq2] then drop-below[0.9] similar-input[seq2] the |input>
+------+--------+
| node | coeff  |
+------+--------+
| 5: 1 | 98.495 |
| 3: 1 | 80.222 |
| 4: 1 | 76.313 |
| 1: 1 | 70.211 |
| 2: 1 | 53.996 |
+------+--------+

sa: then drop-below[0.9] similar-input[seq2] then drop-below[0.9] similar-input[seq2] then drop-below[0.9] similar-input[seq2] then drop-below[0.9] similar-input[seq2] then drop-below[0.9] similar-input[seq2] the |input>
0.985|the finish line>
Note that, as desired, we again find the |node: 1: 1>, |node: 2: 1> ... |node: 5: 1> sequence, but this time with roughly a 98% match rather than a 100% match. Hopefully that makes my point.

A couple of comments:
1) if-then machines work with any superpositions.
2) the then operator can also have side effects, eg by using stored rules. This is a big deal, and makes if-then machines even more powerful.
3) the above are quite simple if-then machines in that there is no pooling, ie only one input superposition triggers each machine. A full if-then machine can have many "pooled" inputs.
4) once again, a whinge about my parser. If that were finished, we could short-cut the above using:
next (*) #=> then drop-below[0.9] similar-input[seq] |_self>
next2 (*) #=> then drop-below[0.9] similar-input[seq2] |_self>

-- after which we would use:
table[node,coeff] 100 similar-input[seq] next^k the |input>
table[node,coeff] 100 similar-input[seq2] next2^k the |input>
5) for any sufficiently long if-then machine sequence with matches below 100%, you will eventually reach a point where the result drops below the drop-below threshold t, leaving you with the empty ket |>. Which kind of makes sense: if you are reasoning with probabilities rather than certainties, then for a long enough chain you can't be sure of your conclusion. On the flip side, if your matches are pretty much 100%, as in the world of mathematics, then you should be able to have long sequences and still stay above the drop-below threshold.
6) I'm pretty sure temporal pooling and spatial pooling look the same, as far as if-then machines are concerned. In that case the difference is where the superpositions come from, not the structure of the if-then machine.

Finally, this is what we have now:
----------------------------------------
|context> => |context: fixed if-then machine learning a sequence>

the |sp1> => 12.363|x: 1> + 7.862|x: 2> + 4.541|x: 3> + 2.752|x: 4> + 15.782|x: 5> + 13.444|x: 6> + 8.522|x: 7> + 7.512|x: 8> + 11.056|x: 9> + 17.304|x: 10>
the |sp2> => 8.05|x: 1> + 4.543|x: 2> + 14.629|x: 3> + 3.443|x: 4> + 4.74|x: 5> + 1.059|x: 6> + 3.91|x: 7> + 17.714|x: 8> + 14.833|x: 9> + 11.074|x: 10>
the |sp3> => 19.846|x: 1> + 19.852|x: 2> + 19.825|x: 3> + 6.605|x: 4> + 10.253|x: 5> + 8.096|x: 6> + 1.937|x: 7> + 7.358|x: 8> + 12.041|x: 9> + 1.345|x: 10>
the |sp4> => 14.787|x: 1> + 10.035|x: 2> + 3.728|x: 3> + 16.038|x: 4> + 5.647|x: 5> + 3.857|x: 6> + 3.552|x: 7> + 7.227|x: 8> + 16.747|x: 9> + 1.412|x: 10>
the |sp5> => 18.044|x: 1> + 18.396|x: 2> + 8.64|x: 3> + 14.424|x: 4> + 19.749|x: 5> + 6.61|x: 6> + 7.26|x: 7> + 4.446|x: 8> + 9.583|x: 9> + 1.272|x: 10>

seq |node: 1: 1> => 12.363|x: 1> + 7.862|x: 2> + 4.541|x: 3> + 2.752|x: 4> + 15.782|x: 5> + 13.444|x: 6> + 8.522|x: 7> + 7.512|x: 8> + 11.056|x: 9> + 17.304|x: 10>
seq2 |node: 1: 1> => 13.351|x: 1> + 8.69|x: 2> + 4.772|x: 3> + 3.054|x: 4> + 16.642|x: 5> + 13.708|x: 6> + 9.148|x: 7> + 8.439|x: 8> + 11.983|x: 9> + 17.609|x: 10>
then |node: 1: *> => 8.05|x: 1> + 4.543|x: 2> + 14.629|x: 3> + 3.443|x: 4> + 4.74|x: 5> + 1.059|x: 6> + 3.91|x: 7> + 17.714|x: 8> + 14.833|x: 9> + 11.074|x: 10>

seq |node: 2: 1> => 8.05|x: 1> + 4.543|x: 2> + 14.629|x: 3> + 3.443|x: 4> + 4.74|x: 5> + 1.059|x: 6> + 3.91|x: 7> + 17.714|x: 8> + 14.833|x: 9> + 11.074|x: 10>
seq2 |node: 2: 1> => 8.568|x: 1> + 4.859|x: 2> + 15.537|x: 3> + 3.768|x: 4> + 5.416|x: 5> + 1.809|x: 6> + 4.228|x: 7> + 18.566|x: 8> + 15.799|x: 9> + 11.21|x: 10>
then |node: 2: *> => 19.846|x: 1> + 19.852|x: 2> + 19.825|x: 3> + 6.605|x: 4> + 10.253|x: 5> + 8.096|x: 6> + 1.937|x: 7> + 7.358|x: 8> + 12.041|x: 9> + 1.345|x: 10>

seq |node: 3: 1> => 19.846|x: 1> + 19.852|x: 2> + 19.825|x: 3> + 6.605|x: 4> + 10.253|x: 5> + 8.096|x: 6> + 1.937|x: 7> + 7.358|x: 8> + 12.041|x: 9> + 1.345|x: 10>
seq2 |node: 3: 1> => 20.38|x: 1> + 20.207|x: 2> + 20.263|x: 3> + 7.372|x: 4> + 10.786|x: 5> + 8.449|x: 6> + 2.488|x: 7> + 8.009|x: 8> + 12.633|x: 9> + 1.406|x: 10>
then |node: 3: *> => 14.787|x: 1> + 10.035|x: 2> + 3.728|x: 3> + 16.038|x: 4> + 5.647|x: 5> + 3.857|x: 6> + 3.552|x: 7> + 7.227|x: 8> + 16.747|x: 9> + 1.412|x: 10>

seq |node: 4: 1> => 14.787|x: 1> + 10.035|x: 2> + 3.728|x: 3> + 16.038|x: 4> + 5.647|x: 5> + 3.857|x: 6> + 3.552|x: 7> + 7.227|x: 8> + 16.747|x: 9> + 1.412|x: 10>
seq2 |node: 4: 1> => 15.785|x: 1> + 10.785|x: 2> + 3.955|x: 3> + 16.93|x: 4> + 5.953|x: 5> + 4.312|x: 6> + 3.647|x: 7> + 7.997|x: 8> + 17.241|x: 9> + 2.228|x: 10>
then |node: 4: *> => 18.044|x: 1> + 18.396|x: 2> + 8.64|x: 3> + 14.424|x: 4> + 19.749|x: 5> + 6.61|x: 6> + 7.26|x: 7> + 4.446|x: 8> + 9.583|x: 9> + 1.272|x: 10>

seq |node: 5: 1> => 18.044|x: 1> + 18.396|x: 2> + 8.64|x: 3> + 14.424|x: 4> + 19.749|x: 5> + 6.61|x: 6> + 7.26|x: 7> + 4.446|x: 8> + 9.583|x: 9> + 1.272|x: 10>
seq2 |node: 5: 1> => 18.631|x: 1> + 19.0|x: 2> + 9.594|x: 3> + 14.643|x: 4> + 20.408|x: 5> + 7.457|x: 6> + 7.315|x: 7> + 4.685|x: 8> + 10.152|x: 9> + 1.88|x: 10>
then |node: 5: *> => |the finish line>

the |input> => 12.363|x: 1> + 7.862|x: 2> + 4.541|x: 3> + 2.752|x: 4> + 15.782|x: 5> + 13.444|x: 6> + 8.522|x: 7> + 7.512|x: 8> + 11.056|x: 9> + 17.304|x: 10>
----------------------------------------
Update: now that we have all these superpositions, let's go on and show pooling.
-- define our if-then machine:
seq3 |node: 17: 1> => the |sp1>
seq3 |node: 17: 2> => the |sp2>
seq3 |node: 17: 3> => the |sp3>
seq3 |node: 17: 4> => the |sp4>
seq3 |node: 17: 5> => the |sp5>
then |node: 17: *> => |the SP sequence>
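
In terms of the earlier Python sketch, pooling just means several stored patterns share the same output. Reusing the sp and simm definitions from above (again an illustrative sketch, not the actual implementation):

# a pooled if-then machine: five stored patterns, one shared output
pooled = [(sp[name], 'the SP sequence') for name in ['sp1', 'sp2', 'sp3', 'sp4', 'sp5']]

def pooled_step(input_sp, threshold=0.9):
    pattern, output = max(pooled, key=lambda m: simm(m[0], input_sp))
    return output if simm(pattern, input_sp) >= threshold else None

Feeding in a noisy copy of any one of sp1 .. sp5 should usually return 'the SP sequence', which is what the console examples below show.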

-- now, let's do some testing first.
-- randomly pick one of {sp1,sp2,sp3,sp4,sp5}, add some noise, and see what we have:
sa: table[node,coeff] 100 similar-input[seq3] absolute-noise[5] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
+-------+--------+
| node  | coeff  |
+-------+--------+
| 17: 5 | 92.791 |
| 17: 3 | 80.064 |
| 17: 4 | 76.8   |
| 17: 1 | 73.887 |
| 17: 2 | 59.433 |
+-------+--------+
-- so the input must have been sp5

sa: table[node,coeff] 100 similar-input[seq3] absolute-noise[5] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
+-------+--------+
| node  | coeff  |
+-------+--------+
| 17: 4 | 93.223 |
| 17: 5 | 78.173 |
| 17: 3 | 72.651 |
| 17: 2 | 68.824 |
| 17: 1 | 67.369 |
+-------+--------+
-- the input must have been sp4

sa: table[node,coeff] 100 similar-input[seq3] absolute-noise[5] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
+-------+--------+
| node  | coeff  |
+-------+--------+
| 17: 4 | 91.116 |
| 17: 5 | 81.028 |
| 17: 3 | 77.603 |
| 17: 2 | 67.575 |
| 17: 1 | 66.832 |
+-------+--------+
-- sp4 again

sa: table[node,coeff] 100 similar-input[seq3] absolute-noise[5] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
+-------+--------+
| node  | coeff  |
+-------+--------+
| 17: 2 | 89.845 |
| 17: 1 | 74.979 |
| 17: 4 | 69.849 |
| 17: 3 | 68.51  |
| 17: 5 | 61.79  |
+-------+--------+
-- the input must have been sp2

-- now ramp up the noise from absolute-noise[5] to absolute-noise[10], and use the full if-then machine:
sa: then drop-below[0.9] similar-input[seq3] absolute-noise[10] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
|>

sa: then drop-below[0.9] similar-input[seq3] absolute-noise[10] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
|>

sa: then drop-below[0.9] similar-input[seq3] absolute-noise[10] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
0.901|the SP sequence>

sa: then drop-below[0.9] similar-input[seq3] absolute-noise[10] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
|>

sa: then drop-below[0.9] similar-input[seq3] absolute-noise[10] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
0.933|the SP sequence>
And there we have it! Pooling of {sp1,sp2,sp3,sp4,sp5}, and an output of "the SP sequence". Since we added so much noise, the result is only sometimes above the drop-below threshold t (in this case 0.9). And since it is all so abstract, this scheme is very general.

Note that pooling is a very important concept. Basically it means you can have multiple different representations of the same thing, even though they may look nothing alike. For example, in terms of pixels, a friend's face seen from different angles is very different, yet every view triggers the same "hey, that's my friend". Another example is the lyrics or the notes of a song: despite being different, they all map to the same song name.

Next, instead of adding noise to the incoming superposition, we project down from 10 dimensions to 9 (using pick[9]):
sa: then drop-below[0.9] similar-input[seq3] pick[9] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
0.987|the SP sequence>

sa: then drop-below[0.9] similar-input[seq3] pick[9] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
0.957|the SP sequence>

sa: then drop-below[0.9] similar-input[seq3] pick[9] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
0.983|the SP sequence>

sa: then drop-below[0.9] similar-input[seq3] pick[9] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
|>

sa: then drop-below[0.9] similar-input[seq3] pick[9] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
0.973|the SP sequence>
Next, I tried projecting down to 8 dimensions (using pick[8]), but the result was almost always below threshold. BTW, the current pick[n] code changes the order of the superposition. This of course has no impact on similar-input[op], and hence on if-then machines. Most of the time, changing the ordering of a superposition does not change the meaning of that superposition. Though some of the time it is of course useful to sort superpositions, and we have operators for that (ket-sort, coeff-sort, sort-by[], and so on).

Also, I should note that if-then machines are, in general, fairly tolerant of adding noise (using absolute-noise[t]) and of removing elements from the superposition (using pick[n]). They become more tolerant if you decrease the drop-below threshold t, and less tolerant if you increase it. Though you don't want t too small, or the machines will match more than you would like. And if you increase t to 0.98 or higher, then you are in the maths world of black and white, true and false.
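
For completeness, here is roughly what absolute-noise[t] and pick[n] are doing to a superposition, again as a Python sketch of my reading of their behaviour (uniform additive noise, and a random subset of elements) rather than the project's actual code:

import random

def absolute_noise(f, t):
    # add uniform random noise in [0, t] to every coefficient
    return {k: v + random.uniform(0, t) for k, v in f.items()}

def pick(f, n):
    # keep a random subset of n elements; note the ordering is not preserved
    keys = random.sample(list(f), n)
    return {k: f[k] for k in keys}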

Update: note that if you have a long line of code and don't fully understand it, you can always decompose that sequence into smaller steps. eg, given:
then drop-below[0.9] similar-input[seq3] pick[9] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
we can work through it step by step:
sa: split |sp1 sp2 sp3 sp4 sp5>
|sp1> + |sp2> + |sp3> + |sp4> + |sp5>

sa: pick-elt split |sp1 sp2 sp3 sp4 sp5>
|sp2>

sa: the |sp2>
8.05|x: 1> + 4.543|x: 2> + 14.629|x: 3> + 3.443|x: 4> + 4.74|x: 5> + 1.059|x: 6> + 3.91|x: 7> + 17.714|x: 8> + 14.833|x: 9> + 11.074|x: 10>

sa: pick[9] the |sp2>
4.74|x: 5> + 8.05|x: 1> + 14.833|x: 9> + 3.91|x: 7> + 4.543|x: 2> + 1.059|x: 6> + 11.074|x: 10> + 17.714|x: 8> + 3.443|x: 4>

sa: similar-input[seq3] (4.74|x: 5> + 8.05|x: 1> + 14.833|x: 9> + 3.91|x: 7> + 4.543|x: 2> + 1.059|x: 6> + 11.074|x: 10> + 17.714|x: 8> + 3.443|x: 4>)
0.826|node: 17: 2> + 0.692|node: 17: 1> + 0.663|node: 17: 4> + 0.526|node: 17: 3> + 0.512|node: 17: 5>

sa: drop-below[0.9] (0.826|node: 17: 2> + 0.692|node: 17: 1> + 0.663|node: 17: 4> + 0.526|node: 17: 3> + 0.512|node: 17: 5>)
|>

sa: then |>
|>
Note that random steps like pick[n] and pick-elt complicate this, in that each time you use them you get different answers. That is why I copied the superposition result from the previous line into the next step.
