Thursday 8 September 2016

predicting a cat vs dog based on their noises

So, a toy one today, but it is easy to extend the idea to more interesting examples. The idea is: given some noises, predict whether it is a cat or a dog at the door. Let's say a cat can miaow, purr or scratch at the door, and a dog can pant, sniff or scratch at the door. That's easy enough to learn:
sa: dump
----------------------------------------
|context> => |context: animal sounds>

sounds-it-makes |cat> => |purring> + |miaowing> + |scratching at the door>
sounds-it-makes |dog> => |panting> + |sniffing> + |scratching at the door>
----------------------------------------
Now input some sounds and make a prediction:
-- from scratching predict dog and cat equally:
sa: normalize similar-input[sounds-it-makes] |scratching at the door>
0.5|cat> + 0.5|dog>

-- scratching and sniffing, so predict a dog is more likely:
sa: normalize similar-input[sounds-it-makes] (|scratching at the door> + |sniffing>)
0.667|dog> + 0.333|cat>
Simple example, but yeah it works, and is trivial to extend to larger examples.
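As a sanity check on those numbers: they are consistent with a similarity measure of the form "sum of the minimum coefficients, divided by the larger of the two totals". Here is a minimal Python sketch of that idea (an assumption about how similar-input behaves, not the console's actual code; the names sounds_it_makes, simm and similar_input are just illustrative):

  # superpositions as plain dicts: ket label -> coefficient
  sounds_it_makes = {
      'cat': {'purring': 1, 'miaowing': 1, 'scratching at the door': 1},
      'dog': {'panting': 1, 'sniffing': 1, 'scratching at the door': 1},
  }

  def simm(a, b):
      # assumed similarity: sum of minimums over the larger total
      overlap = sum(min(v, b.get(k, 0)) for k, v in a.items())
      return overlap / max(sum(a.values()), sum(b.values()))

  def normalize(sp):
      total = sum(sp.values())
      return {k: v / total for k, v in sp.items()}

  def similar_input(heard):
      # score the input superposition against each animal's known sounds
      return {animal: simm(heard, sp) for animal, sp in sounds_it_makes.items()}

  print(normalize(similar_input({'scratching at the door': 1})))
  # {'cat': 0.5, 'dog': 0.5}
  print(normalize(similar_input({'scratching at the door': 1, 'sniffing': 1})))
  # {'cat': 0.333..., 'dog': 0.666...}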

Next, extend our predictions. Given a noise, predict what other noises may follow:
-- we hear panting, so predict other dog noises:
sa: sounds-it-makes similar-input[sounds-it-makes] |panting>
0.333|panting> + 0.333|sniffing> + 0.333|scratching at the door>

-- we hear purring, so predict other cat noises:
sa: sounds-it-makes similar-input[sounds-it-makes] |purring>
0.333|purring> + 0.333|miaowing> + 0.333|scratching at the door>

-- we hear scratching, so predict both cat and dog noises:
sa: sounds-it-makes similar-input[sounds-it-makes] |scratching at the door>
0.333|purring> + 0.333|miaowing> + 0.667|scratching at the door> + 0.333|panting> + 0.333|sniffing>
So it all works as you would expect and hope. And again we are at a point where my notation can encode an example, but we need a way to automate it. The CYC route of having humans input it all by hand is not practical for my project!
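Continuing the Python sketch above, the follow-on-sound prediction is just sounds-it-makes applied linearly to that similarity result: each animal's sound set is weighted by how well the animal matched the input, then everything is summed. Again, this is an illustration of the idea, not the console's implementation:

  def predict_next_sounds(heard):
      # weight each animal by its (un-normalized) similarity to the input,
      # then add up that animal's sounds with those weights
      weights = similar_input(heard)
      prediction = {}
      for animal, w in weights.items():
          for sound, coeff in sounds_it_makes[animal].items():
              prediction[sound] = prediction.get(sound, 0) + w * coeff
      return prediction

  print(predict_next_sounds({'panting': 1}))
  # dog sounds at 0.333 each (cat-only sounds come out at 0)
  print(predict_next_sounds({'scratching at the door': 1}))
  # every sound at 0.333, except scratching itself at 0.667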

Now, what if you object to concepts being represented by single kets? What if you want your concepts to be more robust to noise, and represented by SDR's (Sparse Distributed Representations, i.e., superpositions with all coefficients equal to 1)? Well, with a little more work we can reproduce the above example using SDR's.

Build up some knowledge:
-- define our context:
  context animal sound SDR's

-- encode our concepts as random 5-bits-on-out-of-100 SDR's:
-- the choice of 5 and 100 is itself fairly arbitrary; other values should work fine too.
  full |range> => range(|1>,|100>)
  encode |purring> => pick[5] full |range>
  encode |miaowing> => pick[5] full |range>
  encode |scratching at the door> => pick[5] full |range>
  encode |panting> => pick[5] full |range>
  encode |sniffing> => pick[5] full |range>

-- generate cat and dog noise SDR's by taking the union of the concept SDR's:
  sounds-it-makes |cat> => union[encode] (|purring> + |miaowing> + |scratching at the door>)
  sounds-it-makes |dog> => union[encode] (|panting> + |sniffing> + |scratching at the door>)
After loading that into the console we have:
sa: dump
----------------------------------------
|context> => |context: animal sound SDR's>

full |range> => |1> + |2> + |3> + |4> + |5> + |6> + |7> + |8> + |9> + |10> + |11> + |12> + |13> + |14> + |15> + |16> + |17> + |18> + |19> + |20> + |21> + |22> + |23> + |24> + |25> + |26> + |27> + |28> + |29> + |30> + |31> + |32> + |33> + |34> + |35> + |36> + |37> + |38> + |39> + |40> + |41> + |42> + |43> + |44> + |45> + |46> + |47> + |48> + |49> + |50> + |51> + |52> + |53> + |54> + |55> + |56> + |57> + |58> + |59> + |60> + |61> + |62> + |63> + |64> + |65> + |66> + |67> + |68> + |69> + |70> + |71> + |72> + |73> + |74> + |75> + |76> + |77> + |78> + |79> + |80> + |81> + |82> + |83> + |84> + |85> + |86> + |87> + |88> + |89> + |90> + |91> + |92> + |93> + |94> + |95> + |96> + |97> + |98> + |99> + |100>

encode |purring> => |43> + |75> + |38> + |20> + |26>
encode |miaowing> => |26> + |14> + |42> + |90> + |73>
encode |scratching at the door> => |97> + |89> + |58> + |65> + |82>
encode |panting> => |32> + |25> + |83> + |99> + |50>
encode |sniffing> => |20> + |8> + |4> + |24> + |84>

sounds-it-makes |cat> => |43> + |75> + |38> + |20> + |26> + |14> + |42> + |90> + |73> + |97> + |89> + |58> + |65> + |82>
sounds-it-makes |dog> => |32> + |25> + |83> + |99> + |50> + |20> + |8> + |4> + |24> + |84> + |97> + |89> + |58> + |65> + |82>
----------------------------------------
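For anyone who wants to play with the SDR version outside the console, here is a rough Python equivalent of the encode and union steps above (random 5-bits-on-out-of-100 patterns, with union as plain set union); the names and the exact bit values are illustrative and will differ from run to run:

  import random

  BITS_TOTAL, BITS_ON = 100, 5
  concepts = ['purring', 'miaowing', 'scratching at the door', 'panting', 'sniffing']

  # each concept becomes a random set of 5 "on" bits out of 100
  encode = {c: set(random.sample(range(1, BITS_TOTAL + 1), BITS_ON)) for c in concepts}

  # an animal's SDR is the union of the SDRs of the sounds it makes
  sounds_sdr = {
      'cat': encode['purring'] | encode['miaowing'] | encode['scratching at the door'],
      'dog': encode['panting'] | encode['sniffing'] | encode['scratching at the door'],
  }
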
Let's now redo the predictions, this time using the SDR representation:
-- from scratching predict dog and cat equally:
sa: normalize similar-input[sounds-it-makes] encode |scratching at the door>
0.517|cat> + 0.483|dog>

-- scratching and sniffing, so predict a dog is more likely:
sa: normalize similar-input[sounds-it-makes] encode (|scratching at the door> + |sniffing>)
0.609|dog> + 0.391|cat>
Note that the SDR's are noisier than the clean ket version, and so the probabilities are not as exact. But that is what we expect from a brain: brains are not perfect either.
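To see where that small skew comes from: in the dump above, purring and miaowing happen to share bit 26, so the cat SDR has only 14 bits on while the dog SDR has 15, and the overlap-over-the-larger-set measure (the set version of the simm sketch from earlier) then scores the two animals slightly differently. Using the exact bit patterns from the dump:

  cat_sdr = {43, 75, 38, 20, 26, 14, 42, 90, 73, 97, 89, 58, 65, 82}    # 14 bits
  dog_sdr = {32, 25, 83, 99, 50, 20, 8, 4, 24, 84, 97, 89, 58, 65, 82}  # 15 bits
  scratching = {97, 89, 58, 65, 82}

  def simm_sets(a, b):
      # set version of the assumed similarity: overlap over the larger set
      return len(a & b) / max(len(a), len(b))

  raw = {'cat': simm_sets(scratching, cat_sdr), 'dog': simm_sets(scratching, dog_sdr)}
  total = sum(raw.values())
  print({k: round(v / total, 3) for k, v in raw.items()})
  # {'cat': 0.517, 'dog': 0.483}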

Now redo the "predict related sounds" example:
sa: similar-input[encode] sounds-it-makes similar-input[sounds-it-makes] encode |panting>
0.333|scratching at the door> + 0.333|panting> + 0.333|sniffing> + 0.067|purring>

sa: similar-input[encode] sounds-it-makes similar-input[sounds-it-makes] encode |purring>
0.353|scratching at the door> + 0.309|purring> + 0.298|miaowing> + 0.115|sniffing> + 0.056|panting>

sa: similar-input[encode] sounds-it-makes similar-input[sounds-it-makes] encode |scratching at the door>
0.345|scratching at the door> + 0.212|purring> + 0.202|sniffing> + 0.179|miaowing> + 0.167|panting>
So it pretty much works, just a bit noisier. But we can filter out the noise using drop-below[t], just like a brain filters out noise with thresholds. And we can wrap it all up into a natural-language-like operator:
I-predict-from |*> #=> list-to-words drop-below[0.15] (similar-input[encode] sounds-it-makes similar-input[sounds-it-makes] encode |_self> + -|_self>)

sa: I-predict-from |panting>
|scratching at the door and sniffing>

sa: I-predict-from |purring>
|scratching at the door and miaowing>

sa: I-predict-from |scratching at the door>
|purring, sniffing, miaowing and panting>
That is kind of pretty. It reads like natural language. But really we are a long, long, long way from full natural language capabilities and of course full AGI. But I take our current results as a big hint that we are at least on the right path. Heh, we just need to scale it up a billion or trillion fold!
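And to round out the Python sketches, the final filter-and-word step of I-predict-from is easy to mimic: take the predicted superposition, remove the sound we actually heard, drop anything below the 0.15 threshold, and join what is left into an English list. This is a rough rendering of the operator, not its actual implementation (and it removes the heard sound rather than subtracting its ket, which comes to the same thing here):

  def i_predict_from(heard_sound, prediction, threshold=0.15):
      # prediction: dict of sound -> coefficient, e.g. the console output above
      # roughly: drop-below[threshold] (prediction + -|heard_sound>), then list-to-words
      kept = [s for s, v in prediction.items() if s != heard_sound and v >= threshold]
      if not kept:
          return ''
      if len(kept) == 1:
          return kept[0]
      return ', '.join(kept[:-1]) + ' and ' + kept[-1]

  # using the console prediction for |scratching at the door> above:
  prediction = {'scratching at the door': 0.345, 'purring': 0.212,
                'sniffing': 0.202, 'miaowing': 0.179, 'panting': 0.167}
  print(i_predict_from('scratching at the door', prediction))
  # purring, sniffing, miaowing and panting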

And I still think it is cool that notation that originated in quantum mechanics seems to be a good fit for describing what is going on in brains. And there seem to be hints of deeper overlap between the two systems. Just hints though. Let's try to list some of them:

1) The shared notation of operators, kets and superpositions.
 
2) Wavefunction collapse and measurement in my notation (see the sketch after this list):
some-measurement |some object/system> !=> normalize weighted-pick-elt (P1|state 1> + P2|state 2> + ... + Pn|state n>)

3) Path integrals. A particle takes all possible pathways through space-time from its starting point to its ending point, just as a spike train in a brain takes all possible brain pathways from the starting neuron to the end-point neuron. For this reason maybe call brains "brain-space".

4) Quantum foam. At the deepest level space-time is full of noisy virtual particles; likewise, at its deepest levels the brain is deeply noisy too.
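For item 2, here is a quick Python sketch of the measurement idea (an illustration of weighted-pick-elt followed by normalize, not the console's code): measuring a probability-weighted superposition picks one ket at random, weighted by its coefficient, and collapses the result to that single ket.

  import random

  def measure(superposition):
      # pick one ket at random, weighted by coefficient, then set its coefficient to 1
      states = list(superposition)
      weights = [superposition[s] for s in states]
      picked = random.choices(states, weights=weights, k=1)[0]
      return {picked: 1}

  print(measure({'state 1': 0.2, 'state 2': 0.5, 'state 3': 0.3}))
  # e.g. {'state 2': 1}, with probability 0.5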

But for now don't take these too seriously. They are just interesting hints of similarities. The above is hinting that there might be some similarity between particles in space-time and neural spike trains in brain-space. OK. But what is the brain-space equivalent of mass, charge, spin, particle type, momentum, energy and so on? It would be extremely bold to suggest space-time is a giant neural network, but it is more plausible that we can treat a spike train as some kind of quasi-particle, one that has particle-like interactions with other spike trains as it propagates through a brain.
