So a really simple one today. The idea is simply that if you add noisy signals, the noise partly cancels out. OK. No great revelation there! But how much, and how rapidly, it cancels out is less obvious. So here are some quick tables to look into this.
Let's jump in:

-- define some random pattern, with values in [0,10], of length 20:
sa: the |pattern> => absolute-noise[10] 0 range(|1>,|20>)
-- take a look at it:
sa: the |pattern>
6.673|1> + 4.725|2> + 6.098|3> + 6.479|4> + 5.812|5> + 0.854|6> + 3.759|7> + 5.746|8> + 4.53|9> + 8.784|10> + 6.403|11> + 9.019|12> + 9.699|13> + 1.499|14> + 0.249|15> + 0.341|16> + 5.219|17> + 3.309|18> + 3.826|19> + 1.682|20>
-- now, create some new superpositions that add random noise to our starting pattern:
sa: noise |sample 1> => absolute-noise[7] the |pattern>
sa: noise |sample 2> => absolute-noise[7] the |pattern>
...
sa: noise |sample 40> => absolute-noise[7] the |pattern>
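In ordinary Python terms the setup can be sketched like this. Two assumptions on my part: a superposition over |1> .. |20> is just a length-20 vector of coefficients, and absolute-noise[t] adds independent uniform noise drawn from [0, t] to every coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed semantics of absolute-noise[t]: add independent uniform
# noise from [0, t] to each coefficient of the superposition.
def absolute_noise(t, sp):
    return sp + rng.uniform(0, t, size=sp.shape)

# the |pattern> => absolute-noise[10] 0 range(|1>,|20>)
# ie, start from all-zero coefficients and add noise in [0, 10]:
pattern = absolute_noise(10, np.zeros(20))

# noise |sample k> => absolute-noise[7] the |pattern>
samples = [absolute_noise(7, pattern) for _ in range(40)]
```

Note that with this kind of noise every sample sits at or above the pattern, coefficient by coefficient, which matters later.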
-- now look how similar they are to the original pattern:
sa: similarity |*> #=> round[3] push-float 100 ket-simm(the |pattern>,noise |_self>)
sa: table[node,similarity] rel-kets[noise]
+-----------+------------+
| node | similarity |
+-----------+------------+
| sample 1 | 83.61 |
| sample 2 | 81.608 |
| sample 3 | 88.067 |
| sample 4 | 90.453 |
| sample 5 | 86.422 |
| sample 6 | 83.649 |
| sample 7 | 86.753 |
| sample 8 | 88.894 |
| sample 9 | 87.55 |
| sample 10 | 85.516 |
| sample 11 | 87.474 |
| sample 12 | 84.176 |
| sample 13 | 85.012 |
| sample 14 | 85.964 |
| sample 15 | 84.734 |
| sample 16 | 84.48 |
| sample 17 | 89.777 |
| sample 18 | 87.501 |
| sample 19 | 81.542 |
| sample 20 | 83.873 |
| sample 21 | 83.79 |
| sample 22 | 85.758 |
| sample 23 | 85.052 |
| sample 24 | 85.927 |
| sample 25 | 89.127 |
| sample 26 | 82.649 |
| sample 27 | 87.104 |
| sample 28 | 82.397 |
| sample 29 | 87.262 |
| sample 30 | 88.925 |
| sample 31 | 86.829 |
| sample 32 | 84.271 |
| sample 33 | 89.35 |
| sample 34 | 91.182 |
| sample 35 | 88.751 |
| sample 36 | 87.211 |
| sample 37 | 87.2 |
| sample 38 | 85.382 |
| sample 39 | 86.935 |
| sample 40 | 88.285 |
+-----------+------------+
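For reference, here is a sketch of the similarity measure itself. The exact definition of ket-simm lives in the project code; the form I'm assuming (guided by the remark near the end of this post that simm normalizes before comparing) is: rescale both superpositions to unit currency, then sum the coefficient-wise minimum, giving a value in [0, 1].

```python
import numpy as np

def simm(f, g):
    # Assumed form of ket-simm: normalize each superposition so its
    # coefficients sum to 1, then sum the element-wise minimum.
    # Returns 1 for identical shapes, 0 for disjoint support.
    f = f / f.sum()
    g = g / g.sum()
    return np.minimum(f, g).sum()

rng = np.random.default_rng(1)
pattern = rng.uniform(0, 10, 20)
noisy = pattern + rng.uniform(0, 7, 20)

s = simm(pattern, noisy)  # typically somewhere around 0.8-0.9, as in the table
```

The push-float / round[3] / times-100 plumbing in the operator definition above is just presentation; the underlying number is this `s`.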
-- now average the first k of them, where k is a parameter to be filled in:
ave |result k> => noise select[1,k] rel-kets[noise] |>
-- more concretely:
ave |result 1> => noise select[1,1] rel-kets[noise] |>
ave |result 2> => noise select[1,2] rel-kets[noise] |>
...
ave |result 40> => noise select[1,40] rel-kets[noise] |>
-- now look how similar they are to the original pattern:
sa: averaged-similarity |*> #=> round[3] push-float 100 ket-simm(the |pattern>,ave |_self>)
sa: table[node,averaged-similarity] rel-kets[ave]
+-----------+---------------------+
| node | averaged-similarity |
+-----------+---------------------+
| result 1 | 83.61 |
| result 2 | 83.187 |
| result 3 | 85.838 |
| result 4 | 87.289 |
| result 5 | 88.01 |
| result 6 | 88.114 |
| result 7 | 88.402 |
| result 8 | 88.922 |
| result 9 | 89.445 |
| result 10 | 89.378 |
| result 11 | 89.484 |
| result 12 | 89.174 |
| result 13 | 89.083 |
| result 14 | 89.284 |
| result 15 | 89.079 |
| result 16 | 88.872 |
| result 17 | 89.173 |
| result 18 | 89.288 |
| result 19 | 89.139 |
| result 20 | 88.988 |
| result 21 | 88.885 |
| result 22 | 89.075 |
| result 23 | 89.038 |
| result 24 | 89.017 |
| result 25 | 89.138 |
| result 26 | 89.015 |
| result 27 | 89.172 |
| result 28 | 89.126 |
| result 29 | 89.211 |
| result 30 | 89.293 |
| result 31 | 89.367 |
| result 32 | 89.31 |
| result 33 | 89.467 |
| result 34 | 89.643 |
| result 35 | 89.769 |
| result 36 | 89.847 |
| result 37 | 89.933 |
| result 38 | 89.879 |
| result 39 | 89.935 |
| result 40 | 90.035 |
+-----------+---------------------+

So there we have it. Averaging noisy patterns only very slowly approaches the original pattern. I was hoping for 95% or so, but it seems that with 40 averaged patterns we only get roughly 90%. And even that number is probably highly dependent on our particular noisy patterns, and will change each time we run another trial.
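One plausible reason for the ~90% ceiling, rather than a steady climb toward 100%: averaging kills the zero-mean part of the noise at a rate of roughly 1/sqrt(k), but uniform noise on [0,7] also carries a constant offset with mean 3.5, and no amount of averaging removes that offset. Since the offset is additive, not multiplicative, it still distorts the normalized shape that simm sees. A sketch of the experiment, using my assumed form of simm (normalize to unit currency, sum the element-wise minimum):

```python
import numpy as np

def simm(f, g):
    # Assumed form of ket-simm: normalize, then sum the element-wise minimum.
    f, g = f / f.sum(), g / g.sum()
    return np.minimum(f, g).sum()

rng = np.random.default_rng(2)
pattern = rng.uniform(0, 10, 20)
samples = [pattern + rng.uniform(0, 7, 20) for _ in range(40)]

# ave |result k> sums the first k noisy samples; since simm normalizes,
# summing and averaging give identical similarities.
sims = [simm(pattern, np.mean(samples[:k], axis=0)) for k in range(1, 41)]

# The averaged noise converges to a constant ~3.5 per ket, which still
# flattens the normalized shape -- hence the plateau short of 100%.
residual = np.mean(samples, axis=0) - pattern
```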

Now for a couple of notes:

1) this has some relevance to my average-categorize code which has the line:

out_list[k] = out_list[k] + r*best_simm

ie, it essentially just averages together the patterns that are close enough, as measured by simm().

2) is there some method that extracts the pattern from the noise more efficiently, given that the only data you have is the list of noisy superpositions? If so, we might be able to test it out in average-categorize.

OK. Here is a quick version using union and intersection (ie, element-wise max and min respectively):

intersection |result 1> => common[noise] select[1,1] rel-kets[noise] |>
intersection |result 2> => common[noise] select[1,2] rel-kets[noise] |>
...
intersection |result 40> => common[noise] select[1,40] rel-kets[noise] |>
union |result 1> => union[noise] select[1,1] rel-kets[noise] |>
union |result 2> => union[noise] select[1,2] rel-kets[noise] |>
...
union |result 40> => union[noise] select[1,40] rel-kets[noise] |>
intersection-similarity |*> #=> round[3] push-float 100 ket-simm(the |pattern>,intersection |_self>)
union-similarity |*> #=> round[3] push-float 100 ket-simm(the |pattern>,union |_self>)
-- now, let's compare the three methods:
sa: table[node,averaged-similarity,union-similarity,intersection-similarity] rel-kets[ave]
+-----------+---------------------+------------------+-------------------------+
| node | averaged-similarity | union-similarity | intersection-similarity |
+-----------+---------------------+------------------+-------------------------+
| result 1 | 83.61 | 83.61 | 83.61 |
| result 2 | 83.187 | 83.27 | 82.812 |
| result 3 | 85.838 | 85.611 | 86.204 |
| result 4 | 87.289 | 86.062 | 90.052 |
| result 5 | 88.01 | 85.73 | 90.745 |
| result 6 | 88.114 | 85.989 | 92.401 |
| result 7 | 88.402 | 86.492 | 93.643 |
| result 8 | 88.922 | 86.668 | 94.909 |
| result 9 | 89.445 | 86.486 | 95.476 |
| result 10 | 89.378 | 86.072 | 96.314 |
| result 11 | 89.484 | 85.909 | 96.246 |
| result 12 | 89.174 | 85.7 | 96.246 |
| result 13 | 89.083 | 85.715 | 96.172 |
| result 14 | 89.284 | 85.782 | 97.877 |
| result 15 | 89.079 | 85.782 | 97.992 |
| result 16 | 88.872 | 85.782 | 97.982 |
| result 17 | 89.173 | 85.82 | 98.43 |
| result 18 | 89.288 | 85.997 | 98.43 |
| result 19 | 89.139 | 86.177 | 98.473 |
| result 20 | 88.988 | 85.927 | 98.473 |
| result 21 | 88.885 | 85.917 | 98.473 |
| result 22 | 89.075 | 86.008 | 98.404 |
| result 23 | 89.038 | 86.046 | 98.404 |
| result 24 | 89.017 | 86.049 | 98.404 |
| result 25 | 89.138 | 85.98 | 98.464 |
| result 26 | 89.015 | 85.978 | 98.704 |
| result 27 | 89.172 | 86.004 | 98.877 |
| result 28 | 89.126 | 86.004 | 98.973 |
| result 29 | 89.211 | 86.004 | 98.973 |
| result 30 | 89.293 | 85.903 | 99.063 |
| result 31 | 89.367 | 85.903 | 99.063 |
| result 32 | 89.31 | 85.903 | 99.063 |
| result 33 | 89.467 | 85.849 | 99.063 |
| result 34 | 89.643 | 85.849 | 99.162 |
| result 35 | 89.769 | 85.849 | 99.162 |
| result 36 | 89.847 | 85.849 | 99.208 |
| result 37 | 89.933 | 86.03 | 99.29 |
| result 38 | 89.879 | 85.965 | 99.29 |
| result 39 | 89.935 | 85.965 | 99.29 |
| result 40 | 90.035 | 85.965 | 99.29 |
+-----------+---------------------+------------------+-------------------------+

And we see immediately that union-similarity is the worst, averaged-similarity is not so bad, but intersection-similarity is great! Though we should expect intersection to do well here, since our noise is additive only. In the real world, where noise can either add to a signal or subtract from it, intersection will presumably do roughly as well as union, and average will presumably be a bit better than both.
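The coefficient picture makes the intersection result easy to see. With purely additive noise, every sample sits at or above the pattern, so the element-wise min over k samples is the pattern plus the minimum of k uniform [0,7] draws, and that minimum has expectation 7/(k+1), which heads to zero. The max instead climbs toward the constant ceiling of pattern + 7, and the average settles at pattern + 3.5. A sketch, again with my assumed form of simm:

```python
import numpy as np

def simm(f, g):
    # Assumed form of ket-simm: normalize, then sum the element-wise minimum.
    f, g = f / f.sum(), g / g.sum()
    return np.minimum(f, g).sum()

rng = np.random.default_rng(3)
pattern = rng.uniform(0, 10, 20)
samples = np.array([pattern + rng.uniform(0, 7, 20) for _ in range(40)])

# common[noise] -> element-wise min; union[noise] -> element-wise max.
intersection = samples.min(axis=0)   # ~ pattern + 7/41 per ket
union = samples.max(axis=0)          # ~ pattern + 7: a flattened shape
average = samples.mean(axis=0)       # ~ pattern + 3.5

s_int, s_uni, s_ave = (simm(pattern, x) for x in (intersection, union, average))
```

The ordering s_int > s_ave > s_uni reproduces the ordering in the table above.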

Anyway, nothing of real interest in this post! I just wanted to see some sample numbers.

Update: I tried simm-add[noise] (see next post for details on simm-add[op,p]), but it didn't help.

Here is the code:

-- learn the simm-add results:
sadd |result 1> => simm-add[noise,1] select[1,1] rel-kets[noise] |>
sadd |result 2> => simm-add[noise,1] select[1,2] rel-kets[noise] |>
...
sadd |result 40> => simm-add[noise,1] select[1,40] rel-kets[noise] |>
-- define the relevant operator:
sadd-similarity |*> #=> round[3] push-float 100 ket-simm(the |pattern>,sadd |_self>)
-- show the result:
table[node,averaged-similarity,sadd-similarity] rel-kets[sadd]
+-----------+---------------------+-----------------+
| node | averaged-similarity | sadd-similarity |
+-----------+---------------------+-----------------+
| result 1 | 83.61 | 83.61 |
| result 2 | 83.187 | 83.235 |
| result 3 | 85.838 | 85.716 |
| result 4 | 87.289 | 87.134 |
| result 5 | 88.01 | 87.889 |
| result 6 | 88.114 | 88.003 |
| result 7 | 88.402 | 88.326 |
| result 8 | 88.922 | 88.857 |
| result 9 | 89.445 | 89.359 |
| result 10 | 89.378 | 89.302 |
| result 11 | 89.484 | 89.415 |
| result 12 | 89.174 | 89.106 |
| result 13 | 89.083 | 89.02 |
| result 14 | 89.284 | 89.22 |
| result 15 | 89.079 | 89.016 |
| result 16 | 88.872 | 88.81 |
| result 17 | 89.173 | 89.115 |
| result 18 | 89.288 | 89.235 |
| result 19 | 89.139 | 89.094 |
| result 20 | 88.988 | 88.943 |
| result 21 | 88.885 | 88.842 |
| result 22 | 89.075 | 89.03 |
| result 23 | 89.038 | 88.995 |
| result 24 | 89.017 | 88.975 |
| result 25 | 89.138 | 89.1 |
| result 26 | 89.015 | 88.982 |
| result 27 | 89.172 | 89.137 |
| result 28 | 89.126 | 89.092 |
| result 29 | 89.211 | 89.178 |
| result 30 | 89.293 | 89.261 |
| result 31 | 89.367 | 89.336 |
| result 32 | 89.31 | 89.282 |
| result 33 | 89.467 | 89.441 |
| result 34 | 89.643 | 89.613 |
| result 35 | 89.769 | 89.74 |
| result 36 | 89.847 | 89.819 |
| result 37 | 89.933 | 89.903 |
| result 38 | 89.879 | 89.851 |
| result 39 | 89.935 | 89.906 |
| result 40 | 90.035 | 90.007 |
+-----------+---------------------+-----------------+

So in this case simm-add[op,p] is essentially useless, though I don't expect that to always be the case. When we are averaging over patterns with very different amounts of currency, simm-add should hopefully give markedly different results. Recall that currency is the sum of the coefficients of all the kets in a superposition.

OK. While I'm here, let's look a little into currency.

-- currency of our starting pattern:
sa: measure-currency the |pattern>
|number: 94.706>
-- currency of our noise samples:
sa: currency |*> #=> measure-currency noise |_self>
sa: table[sample,currency] rel-kets[noise]
+-----------+----------+
| sample | currency |
+-----------+----------+
| sample 1 | 167.999 |
| sample 2 | 172.956 |
| sample 3 | 160.047 |
| sample 4 | 152.672 |
| sample 5 | 172.701 |
| sample 6 | 155.887 |
| sample 7 | 167.372 |
| sample 8 | 155.599 |
| sample 9 | 168.741 |
| sample 10 | 164.362 |
| sample 11 | 154.837 |
| sample 12 | 172.713 |
| sample 13 | 158.58 |
| sample 14 | 162.794 |
| sample 15 | 164.182 |
| sample 16 | 163.241 |
| sample 17 | 159.071 |
| sample 18 | 166.364 |
| sample 19 | 171.454 |
| sample 20 | 181.715 |
| sample 21 | 165.119 |
| sample 22 | 159.125 |
| sample 23 | 171.61 |
| sample 24 | 161.908 |
| sample 25 | 171.42 |
| sample 26 | 156.008 |
| sample 27 | 164.333 |
| sample 28 | 153.859 |
| sample 29 | 166.588 |
| sample 30 | 155.331 |
| sample 31 | 151.047 |
| sample 32 | 165.557 |
| sample 33 | 169.732 |
| sample 34 | 149.886 |
| sample 35 | 164.184 |
| sample 36 | 156.08 |
| sample 37 | 165.818 |
| sample 38 | 173.588 |
| sample 39 | 155.877 |
| sample 40 | 152.572 |
+-----------+----------+
-- currency of our result superpositions:
sa: ave-currency |*> #=> measure-currency ave |_self>
sa: intersection-currency |*> #=> measure-currency intersection |_self>
sa: union-currency |*> #=> measure-currency union |_self>
sa: sadd-currency |*> #=> measure-currency sadd |_self>
sa: table[result,ave-currency,intersection-currency,union-currency,sadd-currency] rel-kets[ave]
+-----------+--------------+-----------------------+----------------+---------------+
| result | ave-currency | intersection-currency | union-currency | sadd-currency |
+-----------+--------------+-----------------------+----------------+---------------+
| result 1 | 167.999 | 167.999 | 167.999 | 167.999 |
| result 2 | 340.955 | 154.16 | 186.795 | 324.008 |
| result 3 | 501.003 | 136.63 | 197.991 | 463.304 |
| result 4 | 653.675 | 124.612 | 202.175 | 597.798 |
| result 5 | 826.376 | 121.733 | 207.534 | 753.802 |
| result 6 | 982.263 | 111.956 | 211.596 | 888.818 |
| result 7 | 1149.635 | 109.668 | 214.286 | 1037.844 |
| result 8 | 1305.235 | 106.742 | 215.125 | 1176.331 |
| result 9 | 1473.976 | 104.422 | 217.55 | 1328.532 |
| result 10 | 1638.338 | 103.111 | 219.204 | 1473.433 |
| result 11 | 1793.175 | 102.903 | 219.809 | 1611.589 |
| result 12 | 1965.888 | 102.903 | 220.581 | 1769.863 |
| result 13 | 2124.468 | 102.471 | 221.963 | 1912.522 |
| result 14 | 2287.262 | 99.923 | 222.33 | 2054.888 |
| result 15 | 2451.444 | 99.628 | 222.33 | 2204.07 |
| result 16 | 2614.685 | 99.585 | 222.33 | 2353.259 |
| result 17 | 2773.756 | 98.905 | 222.541 | 2497.065 |
| result 18 | 2940.12 | 98.905 | 223.524 | 2648.582 |
| result 19 | 3111.574 | 98.745 | 225.383 | 2798.261 |
| result 20 | 3293.289 | 98.745 | 226.327 | 2964.346 |
| result 21 | 3458.409 | 98.745 | 226.362 | 3112.555 |
| result 22 | 3617.534 | 98.499 | 226.873 | 3252.889 |
| result 23 | 3789.143 | 98.499 | 227.091 | 3407.895 |
| result 24 | 3951.051 | 98.499 | 227.108 | 3554.748 |
| result 25 | 4122.472 | 98.412 | 227.371 | 3710.721 |
| result 26 | 4278.48 | 97.804 | 227.378 | 3846.138 |
| result 27 | 4442.813 | 97.552 | 227.528 | 3990.011 |
| result 28 | 4596.672 | 97.367 | 227.528 | 4126.818 |
| result 29 | 4763.26 | 97.367 | 227.528 | 4275.894 |
| result 30 | 4918.59 | 97.253 | 227.914 | 4414.536 |
| result 31 | 5069.637 | 97.253 | 227.914 | 4551.732 |
| result 32 | 5235.194 | 97.253 | 227.914 | 4697.922 |
| result 33 | 5404.926 | 97.253 | 228.12 | 4851.832 |
| result 34 | 5554.812 | 97.107 | 228.12 | 4982.541 |
| result 35 | 5718.996 | 97.107 | 228.12 | 5130.646 |
| result 36 | 5875.076 | 97.032 | 228.12 | 5271.153 |
| result 37 | 6040.894 | 96.777 | 229.145 | 5414.854 |
| result 38 | 6214.482 | 96.777 | 229.391 | 5567.404 |
| result 39 | 6370.359 | 96.777 | 229.391 | 5703.832 |
| result 40 | 6522.931 | 96.777 | 229.391 | 5840.871 |
+-----------+--------------+-----------------------+----------------+---------------+

OK. Somewhat interesting. Though the only comment that comes to mind is that simm normalizes before doing its similarity calculation, so the fact that our patterns have very different currencies is invisible to simm. One consequence, for example, is that we don't need to divide our "average" by n; simm does that for us. The point being that the shape of a pattern is what is of interest, not its amplitude.
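This normalization point is easy to confirm concretely: multiplying a superposition by any positive constant changes its currency but leaves its simm against anything unchanged, which is why the ave-currency column growing linearly with k in the last table is harmless. A sketch, with the same assumed form of simm (normalize to unit currency, then sum the element-wise minimum):

```python
import numpy as np

def simm(f, g):
    # Assumed form of ket-simm: normalize to unit currency (the sum of
    # the coefficients), then sum the element-wise minimum.
    f, g = f / f.sum(), g / g.sum()
    return np.minimum(f, g).sum()

def measure_currency(sp):
    # Currency: the sum of the coefficients of all kets in a superposition.
    return sp.sum()

rng = np.random.default_rng(4)
pattern = rng.uniform(0, 10, 20)

# Summing 40 noisy copies roughly multiplies the currency by 40,
# but simm only sees the shape, so the scaling is invisible to it:
scaled = 40 * pattern
```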