How we use coherence of concepts to build ideologies and make sense of our world.

Much of human cognition can be thought of as ‘constraint satisfaction’, according to philosopher Paul Thagard. For example, think of applying to a university. One college is in a beautiful setting, but another college has a professor who is an expert in your desired major. The first college is in a quaint town with a low crime rate, the second one is in a city with a high crime rate. You have a scholarship to the first college, but the second college charges less for tuition. And so forth.
Or suppose you are a detective in a murder case where the prime suspect is the daughter of the victim, a rich industrialist. The daughter was in line to inherit the family fortune. You interview the daughter, and find out she dedicates her spare time to helping the needy. Then you find out that her boyfriend is a fellow she rescued from jail. So again, there is information that leads you in conflicting directions.
One way to manage all the conflicts (or even just priorities) is by constraint satisfaction.

The following is a diagram of a simple situation. You are thinking of hiring a local carpenter named Karl, but you need to know whether you can trust him alone in your house. You know he’s a gypsy, and that the gypsy culture has allowed thievery from outsiders. So that knowledge would push you in one direction. But then you hear that he returned a lost wallet to your neighbor. So that pushes you in another direction. (I scanned the next figure, which illustrates the Karl scenario, from a small paperback, so the orientation is disturbing my coherence, but here it is.)

cohere1

The dotted lines are inhibitory, and connect incompatible nodes or hypotheses. The normal lines are excitatory. All connections are bidirectional – so that if node-A reinforces node-B, then node-B reinforces node-A also.

In this picture, the hypothesis of being honest is incompatible with being dishonest, so there is a dotted line between them. The action of returning the wallet is compatible – in fact is evidence for – honesty, and so there is a full line – an excitatory connection between them.

But decisions aren’t just made based on evidence; there is often an emotional component. Another diagram, a cognitive affective map, can show the influence of emotions:

cohere8

Ovals are used for positive valences (a positive emotion), so in this example the oval around ‘food’ indicates that food is a desirable concept. Hexagons have negative valences (and so the shape used in the diagram for hunger is a hexagon). Rectangles are neutral – you are neither pro- nor anti-broccoli in this example.

The diagrams can apply to political attitudes. For instance, in Canada, the law says you should refer to ‘trans’ people by their preferred pronoun (which might be neither ‘he’ nor ‘she’). Some Canadian libertarians, notably Jordan Peterson, have objected to this. Here are two diagrams from a 2018 article by Paul Thagard showing how a liberal, for whom equality is a paramount value, might look at the issue, versus how a libertarian might look at it.

cohere9

The green ovals with the strong borders show what the liberal prioritizes (equality) versus what the libertarian prizes (freedom). In the lower diagram, the libertarian considers freedom somewhat incompatible with regulation and with taxation, but compatible with private property and economic development. As a libertarian, you may take it as inevitable that economic development will result in income inequality, which is why the desirable value of ‘economic development’ has an inhibitory link with ‘income equality’ in the second diagram. As a liberal, prioritizing equality, you might see the positive links between capitalism and the negative nodes of ‘exploitation’ and ‘inequality’, so even though there is a positive link between ‘capitalism’ and ‘freedom’ in the first diagram, you might, after the various constraints interact and settle on a solution, want to modify capitalism.

One way of learning about an opponent’s perspective is to draw the diagrams of how you believe your opponent thinks – and then have him critique them and redraw them.

One advantage of such diagrams is that you can use an iterative (repetitive) process to spread the activations and find out, after the dust settles, which nodes are strongly activated.

You start by assigning activations to each node. We can assign all of the nodes an initial activation of .01, for example, except for ‘evidence nodes’, which can be clamped at the maximum value of 1 (the minimum an activation can take is -1). Evidence might be an experimental finding, an item in the newspaper, or an experience you had.

The next step is to construct a symmetric excitatory link for every positive constraint between two nodes (i.e., when they are compatible). For every negative constraint, construct a symmetric inhibitory link.

Then update every node’s activations based on the weights on links to other units, the activations of those other units, and the current activation of the node itself. Here is an equation to do that:

cohere7

Here d is a decay parameter (say 0.05) that decrements each unit at every cycle, min is the minimum activation (-1), and max is the maximum (1). ‘net’ is the net input to a unit: the sum, over the units it links to, of the link weight times that unit’s activation.

The net updates for several cycles, and after enough cycles have occurred, we can say that all nodes with an activation above a certain threshold are accepted. You could end up with the net telling you to go to that urban college, or the net telling you that the daughter of the industrialist is innocent, or that a diagnosis of Lyme disease is unwarranted, or that you should not trust Karl.
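To make this concrete, here is a minimal Python sketch of the settling process, assuming the update rule described above (decay plus a net-input term that pushes activation toward max or min). The function name, parameter defaults, and use of a plain weight matrix are my own choices, not code from Thagard’s programs.

import numpy as np

# weights: symmetric matrix; positive entries are excitatory links, negative are inhibitory
# clamped: dict {node_index: value} for evidence nodes held at a fixed value
def settle(weights, clamped, n_cycles=200, d=0.05, a_min=-1.0, a_max=1.0):
    n = weights.shape[0]
    a = np.full(n, 0.01)                 # small initial activation for every node
    for i, v in clamped.items():
        a[i] = v
    for _ in range(n_cycles):
        net = weights @ a                # net input: sum of weight * activation over links
        # push toward max when the net input is positive, toward min when it is negative
        delta = np.where(net > 0.0, net * (a_max - a), net * (a - a_min))
        a = np.clip(a * (1.0 - d) + delta, a_min, a_max)
        for i, v in clamped.items():     # evidence nodes stay clamped on every cycle
            a[i] = v
    return a                             # nodes above a chosen threshold are 'accepted'

For the Karl example, the weight matrix would have a positive entry between ‘returned wallet’ and ‘honest’, a positive entry between ‘gypsy culture’ and ‘dishonest’, and a negative entry between ‘honest’ and ‘dishonest’; whichever hypothesis node settles above threshold is the one the network ‘accepts’.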

There are several types of coherence, and they often interact. Professor Thagard gives an example:

In 1997 my wife and I needed to find someone to drive our six-year-old son, Adam, from morning kindergarten to afternoon day care. One solution recommended to us was to send him by taxi every day, but our mental associations for taxi drivers, largely shaped by some bizarre experiences in New York City, put a very negative emotional appraisal on this option. We did not feel that we could trust an unknown taxi driver, even though I have several times trusted perfectly nice Waterloo taxi drivers to drive me around town.
So I asked around my department to see if there were any graduate students who might be interested in a part-time job. The department secretary suggested a student, Christine, who was looking for work, and I arranged an interview with her. Very quickly, I felt that Christine was someone whom I could trust with Adam. She was intelligent, enthusiastic, interested in children, and motivated to be reliable, and she reminded me of a good baby-sitter, Jennifer, who had worked for us some years before. My wife also met her and had a similar reaction. Explanatory, conceptual, and analogical coherence all supported a positive emotional appraisal, as shown in this figure:

cohere3

Conceptual coherence encouraged such inferences as from smiles to friendly, from articulate to intelligent, and from philosophy graduate student to responsible. Explanatory coherence evaluated competing explanations of why she says she likes children, comparing the hypothesis that she is a friendly person who really does like kids with the hypothesis that she has sinister motives for wanting the job. Finally, analogical coherence enters the picture because of her similarity with our former baby-sitter Jennifer with respect to enthusiasm and similar dimensions. A fuller version of the figure would show the features of Jennifer that were transferred analogically to Christine, along with the positive valence associated with Jennifer.

If we leave out ‘emotion’, then we just spread activations and compute new ones. To include emotions, we assign a ‘valence’ (positive or negative) to the nodes as well. Valences are like activations in that they can spread over links, but with a difference – their spread is partly dependent on the activation spread.

Take a look at this diagram:

cohere2

There is now a valence node at the top that sends positive valence to ‘honest’ and negative valence to ‘dishonest’. When the net is run, the Karl node is activated first, which then passes activation to the two facts about him: that he is a gypsy, and that he returned a wallet. If ‘honest’ ends up with a large activation, then it will spread its positive valence to ‘returned wallet’ and then to Karl.

The equation for updating valences is just like the one for updating activations, except that the net input also multiplies in the valences of the linked nodes.
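As a rough sketch of my reading of that sentence (this is an assumption about the exact form, not Thagard’s published equation), the valence update could weight each neighbour’s contribution by both its activation and its valence:

import numpy as np

def update_valences(weights, activations, valences, d=0.05, v_min=-1.0, v_max=1.0):
    # net valence input: link weight * neighbour's activation * neighbour's valence
    net = weights @ (activations * valences)
    delta = np.where(net > 0.0, net * (v_max - valences), net * (valences - v_min))
    return np.clip(valences * (1.0 - d) + delta, v_min, v_max)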

Some interesting ideas emerge from this. One is the concept of ‘meta-coherence’. You could get a result with a high positive valence that is only just above threshold, so you are not sure of it, which could cause you distress. You might have to make a momentous decision that you really can’t be fully confident is the right one.
Another emotion, surprise, could result from many nodes switching from accepted to rejected or vice versa as the cycles progress. You may find that you had to revise many assumptions.
Humor is often based on a joke leading you toward one interpretation, and then ending up with a different one at the punch line. Professor Thagard says that the punch line of the joke shifts the system into another stable state distant from the original one.

In an actual brain, concepts are not likely to be represented by a single neuron; it is more likely that population codes (such as semantic pointers) would be used. So an implementation of the above relationships between concepts would be more complicated. Moreover, the model doesn’t explain how the original constraints between concepts are learned. I would guess that implementation details might modify the model somewhat. Coherence doesn’t mean that multiple rational people will come to the same conclusions on issues – even scientists who prize rationality often disagree with each other. Sometimes, even the evidence you will accept depends on a large network of assumptions and beliefs. What nodes do you include? What weights do you assign to the constraints?

Still, the model is intuitive and makes sense.

You can get a link to the various programs mentioned at http://cogsci.uwaterloo.ca/JavaECHO/jecho.html.   There is also  more info at PaulThagard.com.

Sources:
Thagard, P. (2000). Coherence in Thought and Action. MIT Press.
Emotional Consciousness: A neural model of how cognitive appraisal and somatic perception interact to produce qualitative experience
Thagard, P. (2018). Social equality: Cognitive modeling based on emotional coherence explains attitude change. Policy Insights from Behavioral and Brain Sciences., 5(2), 247-256.

 


Aha Moments, Creative Insight, and the Brain

In “The Eureka Factor – Aha Moments, Creative Insight, and the Brain”, authors John Kounios and Mark Beeman discuss insight – the kind of insight that might occur to you when taking a walk or taking a shower as opposed to trying to force a solution to a problem in your office under a deadline. (One creative inventor that they mention sets up his environment to encourage insights – at night he will sit on his armchair on his porch looking at the stars, with nondescript music in the background to drown out distracting noises.)
MRI experiments have shown that insight really does happen suddenly; it’s not just an illusion. (When it happens, there is a ‘gamma’ burst of activity in a part of the brain in the right hemisphere.) While ‘analytical thinking’ is a process that builds systematically to a conclusion, insight doesn’t work that way, though it benefits from the thinker having looked at the problem from all angles.

Here are a few conclusions by the authors:

  1. …perceptual attention is closely linked to conceptual attention. Factors that broaden your attention to your surroundings, such as positive mood, have the same effect on the scope of your thinking. Besides taking in lots of seemingly unrelated things, the diffuse mind also entertains seemingly unrelated ideas.

  2. if you question people, you’ll find that some see meaning everywhere, in events like the Japanese tsunami and in cryptic sayings like those above. They will give you impassioned explanations of the significance of such things. Other people deny any inherent meaning. “Stuff just happens. Live with it.”

    It was found that people who see meaning in so many life events are also people who trust their hunches and their intuition. Intuition is related to creative thought.

  3. Creative people can be odd:
    The book contains a quote by Ed Catmull, president of Walt Disney Animation Studios and Pixar Animation Studios. He said:

    “There’s very high tolerance for eccentricity; there are some people who are very much out there, very creative, to the point where some are strange.” He values that creative eccentricity and is willing to tolerate a lot of the weirdness that often accompanies it. But movies are made by teams of people and not by a single person, so he has to draw a line. “There are a small number of people who are, I would say, socially dysfunctional, very creative,” he said. “We get rid of them.”

So what are the neural underpinnings to the creative – insightful type?

The authors think the answer is reduced inhibition.

Inhibition, as a cognitive psychologist thinks of it, regulates emotion, thought, and attention. It’s a basic property of the brain.
…when you purposely ignore something, even briefly, it’s difficult to immediately shift mental gears and pay full attention to it, a phenomenon called “negative priming.” This can sometimes be a minor inconvenience, but it occurs for a reason. When you ignore something, it’s because you deemed it to be unimportant. By inhibiting something that you’ve already labeled as irrelevant, you don’t have to waste time or energy reconsidering it. More generally, inhibition protects you from unimportant, distracting stimuli.

To me (the blogger), it doesn’t make much sense that creative people would be more distractible. Or at least, I would think that creativity is not just a matter of casting a wide net to gather associations of little relevance to the problem at hand. That could be a part of it, of course.

Supporting that idea is the fact that insightfuls, in a resting state (when not solving problems) have more right-hemisphere activity and less left-hemisphere activity than normals. The right hemisphere differs from the left in that in many of its association areas, the neurons have larger input fields than do left hemisphere neurons. Specifically, right hemisphere pyramidal neurons have more synapses overall and especially more synapses far from the cell body. This indicates that they have larger input fields than corresponding left hemisphere pyramidal neurons. Because cortical connections are spatially organized, the right hemisphere’s larger input fields collect more differentiated inputs, perhaps requiring a variety of inputs to fire. The left hemisphere’s smaller input fields collect similar inputs, likely causing the neuron to respond best to somewhat redundant inputs.

Even the axons in the right hemisphere are longer, suggesting that more far-flung information is used.

Both hemispheres can work together to solve a problem, so you can have the best of both worlds – a narrowly focused approach, and a more diffuse, creative approach.

If you want to increase your own insights, the authors have various suggestions.

  1. Expansive surroundings will help you to induce the creative state. The sense of psychological distance conveyed by spaciousness not only broadens thought to include remote associations, it also weakens the prevention orientation resulting from a feeling of confinement. Even high ceilings have been shown to broaden attention. Small, windowless offices, low ceilings, and narrow corridors may reduce expenses, but if your goal is flexible, creative thought, then you get what you pay for.

  2. You should interact with diverse individuals, including some (nonthreatening) nonconformists.
  3. You should periodically consider your larger goals and how to accomplish them; merely thinking about this will induce a promotion mind-set. Reserve time for long-range planning. Thinking about the distant future stimulates broad, creative thought.
  4. Cultivate a positive mood…To put a twist on Pasteur’s famous saying, chance favors the happy mind.

So if you are tired of working at your desk, wave “The Eureka Factor” at your boss, and tell him that you need to hike in an alpine meadow with your eccentric friend with the guitar who never graduated high school, and maybe he’ll let you do it!

Sources:
The Cognitive Neuroscience of Insight – John Kounios and Mark Beeman
The Eureka Factor – John Kounios and Mark Beeman (2015)

The frustrating insula – or why Brain Books can’t match Shakespeare

Often popular books on the brain will tell you that a particular part of the brain is responsible for various human attributes, but there is no common thread that jumps out at you. You learn more about people reading a good novel than you do after reading 100 pages of bewildering functions of grey matter.

The Insula (see diagram below) is an example. I’ll list a few tantalizing conclusions from various studies, and if you find a common thread, add a comment and let me know.

insula

According to neuroscientists who study it, the insula is crucial to understanding what it feels like to be human.

They say it is the wellspring of social emotions, things like lust and disgust, pride and humiliation, guilt and atonement. It helps give rise to moral intuition, empathy and the capacity to respond emotionally to music.

So here are a few findings on this part of the brain:

  1. A conservative or left-wing brain? – liberals have higher insula activation:
    Researchers have long wondered if some people can’t help but be an extreme left-winger or right-winger, based on innate biology. To an extent, studies of the brains of self-identified liberals and conservatives have yielded some consistent trends. Two of these trends are that liberals tend to show more activity in the insula and the anterior cingulate cortex. Among other functions, the two regions overlap to an extent in dealing with cognitive conflict, in the insula’s case, while the anterior cingulate cortex helps in processing conflicting information. Conservatives, on the other hand, have demonstrated more activity in the amygdala, known as the brain’s “fear center.” “If you see a snake or a picture of a snake, the amygdala will light up.”
  2. Higher insula activation when thinking about risk is associated with criminality. In fact criminals think about risk in an opposite way to law-abiding citizens:
    A study has shown a distinction between how risk is cognitively processed by law-abiding citizens and how that differs from lawbreakers, allowing researchers to better understand the criminal mind. “We have found that criminal behavior is associated with a particular kind of thinking about risk,” said Valerie Reyna, the Lois and Melvin Tukman Professor of Human Development and director of the Cornell University Magnetic Resonance Imaging Facility. “And we have found, through our fMRI capabilities, that there is a correlate in the brain that corresponds to it.” In the study, published recently in the Journal of Experimental Psychology, Reyna and her team took a new approach. They applied fuzzy-trace theory, originally developed by Reyna to help explain memory and reasoning, to examine neural substrates of risk preferences and criminality. They extended ideas about gist (simple meaning) and verbatim (precise risk-reward tradeoffs), both core aspects of the theory, to uncover neural correlates of risk-taking in adults.

    Participants who anonymously self-reported criminal or noncriminal tendencies were offered two choices: $20 guaranteed, or to gamble on a coin flip for double or nothing. Prior research shows that the vast majority of people would choose the $20 – the sure thing. This study found that individuals who are higher in criminal tendencies choose the gamble. Even though they know there is a risk of getting nothing, they delve into verbatim-based decision-making and the details around how $40 is more than $20.

    The same thing happens with losses, but in reverse.

    Given the option to lose $20 or flip a coin and either lose $40 or lose nothing, the majority of people this time would actually choose the gamble because losing nothing is better than losing something. This is the “gist” that determines most people’s preferences.

    Those who have self-reported criminal tendencies do the opposite through a calculating verbatim mindset, taking a sure loss over the gamble.

    “This is different because it is cognitive,” Reyna said. “It tells us that the way people think is different, and that is a very new and kind of revolutionary approach – helping to add to other factors that help explain the criminal brain.”

    As these tasks were being completed, the researchers looked at brain activation through fMRI to see any correlations. They found that criminal behavior was associated with greater activation in temporal and parietal cortices, their junction and insula – brain areas involved in cognitive analysis and reasoning.

    “When participants made reverse-framing choices, which is the opposite of what you and I would do, their brain activation correlated or covaried with the score on the self-reported criminal activity,” said Reyna. “The higher the self-reported criminal behavior, the more activation we saw in the reasoning areas of the brain when they were making these decisions.”

    Noncriminal risk-taking was different: Ordinary risk-taking that did not break the law was associated with emotional reactivity (amygdala) and reward motivation (striatal) areas, she said.

    Not all criminals are psychopaths, but psychopaths show differences as well.
    A study of 80 prisoners used functional MRI technology to determine their responses to a series of scenarios depicting intentional harm or faces expressing pain. It found that psychopaths showed no activity in areas of the brain linked to empathic concern. The participants in the high psychopathy group exhibited significantly less activation in the ventromedial prefrontal cortex, lateral orbitofrontal cortex, amygdala and periaqueductal gray parts of the brain, but more activity in the striatum and the insula when compared to control participants, the study found. The high response in the insula in psychopaths was an unexpected finding, as this region is critically involved in emotion and somatic resonance. Conversely, the diminished response in the ventromedial prefrontal cortex and amygdala is consistent with the affective neuroscience literature on psychopathy. (This latter region is important for monitoring ongoing behavior, estimating consequences and incorporating emotional learning into moral decision-making, and plays a fundamental role in empathic concern and valuing the well-being of others.)

  3. Damaging the insula can cure addiction:
    The recent news about smoking was sensational: some people with damage to a prune-size slab of brain tissue called the insula were able to give up cigarettes instantly.
  4. The insula is responsible for the feeling of disgust:
    Insula activation was only significantly correlated with ratings of disgust, pointing to a specific role of this brain structure in the processing of disgust. This ties in somehow to what I cited before on political leanings. In one study, people of differing political persuasions were shown disgusting images in a brain scanner. In conservatives, the basal ganglia and amygdala and several other regions showed increased activity, while in liberals other regions of the brain increased in activity. Both groups reported similar conscious reactions to the images. The difference in activity patterns was large: the reaction to a single image could predict a person’s political leanings with 95% accuracy (this may be hard to believe, but it is according to Neuroscientist Read Montague, who works at Virginia Tech in Roanoke. It is reported in newscientist.com which in turn cites his research article).

I’ve listed all these items, many very interesting, but at the end of the day, what is going on?

Sources:

http://news.cornell.edu/stories/2018/09/criminal-behavior-linked-thinking-about-risk-study-finds

https://www.livescience.com/17534-life-extremes-democrat-republican.html

https://news.uchicago.edu/story/psychopaths-are-not-neurally-equipped-have-concern-others

Structural and Functional Cerebral Correlates of Hypnotic Suggestibility – Alexa Huber, Fausta Lui, Davide Duzzi, Giuseppe Pagnoni, Carlo Adolfo Porro

https://www.nytimes.com/2007/02/06/health/psychology/06brain.html

Neural Arithmetic Logic Units – getting backpropagation nets to extrapolate

Backpropagation nets have a problem doing math. You can get them to learn a multiplication table, but when you try to use the net on problems where the answers are higher or lower than the ones used in training, they fail. In theory, they should be able to extrapolate, but in practice, they memorize, instead of learning the principles behind addition, multiplication, division, etc.

A group at Google DeepMind in England solved this problem.
They did this by modifying the typical backprop neuron as follows:

  1. They removed the bias input
  2. They removed the nonlinear activation function
  3. Instead of just using one weight on each incoming connection to the neuron, they use two. Both weights are learned by gradient descent, but a sigmoid function is applied to one, a tanh (hyperbolic tangent) function is applied to the other, and then they are multiplied together. In standard nets, a sigmoid or tanh function is not applied to the weights at all; instead, these functions are applied to the activations. The opposite is true here.

Here is the equation for computing the weight matrix.  W is the final weight, and the variables M and W with the hat symbols are values that are combined to create that final composite weight:

nalu2b
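In code, the composite weight described by that equation comes out roughly as follows. This is a forward-pass-only sketch; in practice the two matrices would be trained by gradient descent in a deep-learning framework, and the names here are mine:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def composite_forward(x, W_hat, M_hat):
    W = np.tanh(W_hat) * sigmoid(M_hat)   # composite weight, biased toward -1, 0 and 1
    return x @ W.T                        # no bias input, no output nonlinearity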

So what is the rationale behind all this?

First let’s look at what a sigmoid function looks like:

sigmoid2

And now a hyperbolic tangent function (also known as ‘tanh’):

hypertangent2

We see that the sigmoid function ranges (on the Y axis) between 0 and 1. The tanh ranges from -1 to 1. Both functions have a high rate of change when their x-values are fairly close to zero, but that rate of change flattens out the farther they get from that point.

So if you multiply these two functions together, the most the product can be is 1, the least is -1, and there is a bias to the composite weight result – it’s less likely to be fractional, and more likely to be -1, 1, or zero.
Why the bias?
The reason is that near x = zero, the derivative being large actually indicates that the neuron would be biased to learn numbers other than that point (because it will take the biggest step sizes when the derivative is highest). Thus, tanh is biased to learn its saturation points (-1 and 1) and sigmoid is biased to learn its saturation points (0 and 1). The elementwise product of them thus has saturation points at -1, 1, and 0.

So why have a bias? As they explain:

Our first model is the neural accumulator (NAC), which is a special case of a linear (affine) layer whose transformation matrix W consists just of -1’s, 0’s, and 1’s; that is, its outputs are additions or subtractions (rather than arbitrary rescalings) of rows in the input vector. This prevents the layer from changing the scale of the representations of the numbers when mapping the input to the output, meaning that they are consistent throughout the model, no matter how many operations are chained together.

As an example, if you want the neuron to realize it has to add 5 and -7, you don’t want those numbers multiplied by fractions, rather in this case, you prefer 1 and -1. Likewise, the result of this neuron’s addition could be fed into another neuron, and again, you don’t want it multiplied by a fraction before it is combined with that neuron’s other inputs.

This isn’t always true though; one of their experiments was learning to calculate the square root, which required a weight to train to the value of 0.5.

On my first read of the paper, I wasn’t sure why the net worked, and so I asked one of the authors, Andrew Trask, who replied that it works for two reasons:

 

  1. because it encodes numbers as real values (instead of as distributed representations)
  2. because the functions it learns over numbers extrapolate inherently (aka… addition/multiplication/division/subtraction) – so learning an attention mechanism over these functions leads to neural nets which extrapolate

 

The first point is important because many models assume that any particular number is coded by many neurons, each with different weights. In this model, one neuron, without any nonlinear function applied to its result, does math such as addition and subtraction.

It is true that real neurons are limited in the values they can represent. In fact, neurons fire at a constant, fixed amplitude, and it’s just the frequency of pulses that increases when they get a higher input.

But ignoring that point, the units they have can extrapolate, because they do simple addition and subtraction (point #2).

But wait a minute – what about multiplication and division?

For those operations they make use of a mathematical property of logarithms. The log of (X * Y) is equal to log(X) + log(Y). So if you take logarithms of values before you feed them into an addition neuron, and then take the inverse of the log (the exponential) of the result, you have the equivalent of multiplication.

The log is differentiable, so the net can still learn by gradient descent.

So they now need to combine the addition/subtraction neurons with the multiplication/division neurons, and this diagram shows their method:

nalu1

nalu2c
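Putting the pieces together, here is a sketch of the combination as I understand it from the paper: a learned gate chooses between the additive path and a multiplicative path that runs the same accumulator in log space (using the log identity above). The matrices are assumed to be trained elsewhere by gradient descent; this shows only the forward pass, with my own variable names.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nalu_forward(x, W_hat, M_hat, G, eps=1e-7):
    W = np.tanh(W_hat) * sigmoid(M_hat)         # shared composite weight matrix
    a = x @ W.T                                 # addition/subtraction path
    m = np.exp(np.log(np.abs(x) + eps) @ W.T)   # multiplication/division path via logs
    g = sigmoid(x @ G.T)                        # gate deciding which path to favor
    return g * a + (1.0 - g) * m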

This fairly simple but clever idea is a breakthrough:

Experiments show that NALU-enhanced neural networks can learn to track time, perform arithmetic over images of numbers, translate numerical language into real-valued scalars, execute computer code, and count objects in images. In contrast to conventional architectures, we obtain substantially better generalization both inside and outside of the range of numerical values encountered during training, often extrapolating orders of magnitude beyond trained numerical ranges.

Source:
Neural Arithmetic Logic Units – Andrew Trask, Felix Hill, Scott Reed, Jack Rae, Chris Dyer, Phil Blunsom – Google DeepMind

Neurithmic Systems’ radical new way of thinking about memory and their program (Sparsey) that implements it

Professor Rod Rinkus  of Neurithmic Systems came up with a net (he calls it SPARSEY) that is about memory – storing memories and retrieving them.   No matter how many memories are already stored, the time to store a new memory, or to retrieve an old one stays the same.   There are some very promising aspects of his idea, and I will explain the general idea below.  If you want to delve further, his actual papers are at his website (sparsey.com).

Suppose you want to store a pattern, perhaps a number.   You could store it as follows:

radios

Here we will assume that only one bit can be ON at a time.   With this constraint, we could only present 5 different numbers to the neural net, and it might learn to associate them with different positions of the green dot.

This is called a ‘localist’ representation.   One disadvantage of associating patterns this way is that similarity is lost.   The number ‘1’ might be associated with a green dot at the second position, or it might be associated with a green dot at the fifth position.   Also, you can’t store many patterns this way unless you have many neurons.

A more compact way to store data is with a type of number system.   For instance, in everyday math we use base 10 numbers, with digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9), where a new digit position is added to represent 10, then 100, then 1000, etc.   If we only want to use ON and OFF, we can limit our digits to zero and one.   This gives us a base 2 number system.   In that case,

the number zero would be:

radioszero

The number one would be:

radiosone

The number two would require an additional digit just like 10 in base 10:

radiostwo

Three would be:

radiosthree

And four would be:

radiosfour

We have already represented 5 numbers, this time with only 3 places needed, and we can represent more; for instance, if all three dots are green (1,1,1) then we have the number ‘7’ in this binary system.

This system is compact, but still, similar numbers are not coded similarly.   Suppose we measured similarity by overlap: the number of dots in the same position that have the green color.   We would see that the number zero has no overlap with the number one, though it is close numerically.  We would see that four has no overlap with three, though three does have one item that overlaps with two.

An ideal memory would code similar items similarly.   The more different the items were, the more different their representations should be.

In the brain, a sofa and a chair should be represented more similarly than a sofa and a mailbox, for instance.   A robin should have a representation closer to a parrot than to a giraffe.

If this can be accomplished, there are several advantages, which I will list after showing one of Gerard Rinkus’s storage units below.

mac

The above is a representation of what he calls a MAC.   Each MAC is made up of several “Competitive Modules”.   In the brain, the Competitive Modules (which he abbreviates as CM) would correspond to cortical Minicolumns, and the MAC would correspond to a cortical MacroColumn.   In this illustration, we are looking at one MAC with three CMs in it.   Each CM has internal competition where only one neuron (in this case each CM has 3 neurons) can win the competition and turn on.   The others lose and turn off.

So what is the advantage of this?   First of all, since each CM can have 3 separate patterns, there are in total 3 * 3 * 3 patterns that can be represented – or 27.   This is more compact than a totally localist method (where of the 9 neurons only one can be on at a time).   It is not as compact as the example we gave of base 2 numbers.

A Sparse-Distributed Representation (SDR) is a representation where only a small fraction of neurons are on.   In the above Mac, only a third of neurons are ON, so it qualifies.   We can interpret the fraction of a feature’s SDR code that overlaps with the stored SDR of a memory (hypothesis) as the probability/likelihood of that feature (hypothesis).

Using Rinkus’s CMs and MACs, we can introduce similarity into our representations that reflects similarity in the world.

Take a look at these 2 MACs

mac mac2

Notice that they do overlap in 2 out of 3 of the CMs.   We could say that their similarity is 2/3.   If they were identical, their similarity would be 3/3.   And so forth.

What are the advantages of representations that reflect similarity?

Well if you have a net (such as Sparsey) that automatically represents similar inputs in a similar way, then you automatically learn hierarchies.   For example, by looking at the SDR for ‘cat’ and ‘dog’, versus the SDRs for ‘cat’ and ‘fish’, you would see that ‘cat’ and ‘dog’ are more similar.

Another interesting advantage is this.   Suppose the MAC on the left represents a cat, and the MAC  on the right represents a dog.    Now you are presented with a noisy input that doesn’t exactly match cat or dog, and gets a representation such as:

mac3

This MAC representation overlaps the MACs for dog and for cat equally.   We could say that the probability that what the net saw was a cat is equal to the probability that it saw a dog.

So the fact that similar inputs yield similar representations means that we can look at a SDR as a probability distribution over several possibilities.   This MAC overlaps the representation for dog and for cat by two CMs, but perhaps this MAC might overlap with just one CM a representation for ‘mouse’.
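A tiny sketch of that idea in Python, with each MAC code written as a tuple of winner indices, one per CM (the specific words and numbers are hypothetical, made up for illustration):

# Each code is a tuple of winner indices, one per CM (here Q = 3 CMs)
def overlap(code_a, code_b):
    # fraction of CMs in which the same neuron won
    return sum(a == b for a, b in zip(code_a, code_b)) / len(code_a)

stored = {'cat': (0, 2, 1), 'dog': (0, 2, 2), 'mouse': (1, 0, 2)}   # hypothetical codes
noisy = (0, 2, 0)                     # a new input's code, matching nothing exactly

scores = {name: overlap(code, noisy) for name, code in stored.items()}
total = sum(scores.values())
print({name: s / total for name, s in scores.items()})   # cat 0.5, dog 0.5, mouse 0.0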

The next figure is from an article by Prof. Rinkus.   Unlike my depictions, his MACS have a hexagonal shape and each CM is a cluster of little circles (neurons).

 

radical2

A single MAC can learn a sequence of patterns over time by having horizontal connections from every neuron in the MAC connect, with a time delay, to every neuron (including itself) in the same MAC.  Once it has learned these, then if an input pattern activates the first SDR, in the next time step it leads to the SDR that represents the second pattern that it was taught, and that in turn leads to the next.   Let us assume that the MAC has many CMs (maybe 70) and each CM might have 20 neurons.  20 to the power of 70 is a large number, and with this large capacity you can store many sequences without ‘crosstalk’ (or interference); also, when presented with the start of a sequence that is ambiguous, you can keep multiple possible endings of that sequence in the memory of the net at a particular time.   (This reminds me of quantum computing, where multiple possibilities are tried at the same time.)

Suppose Sparsey is trained on single words such as:

  1. THEN
  2. THERE
  3. THAT
  4. SAILBOAT

And so forth.

If the first four words it was trained on are the words above, and we now want to retrieve a word, we present ‘TH’ (the two letters are presented singly: “T” at time t1, “H” at time t2). In the above example there are then three possibilities for the stored sequence we are attempting to retrieve (THEN, THERE and THAT).   If the next letter that comes in is ‘E’, then there are only two possibilities (THEN and THERE).   If the next letter that comes in is ‘R’, then we are left with just the sequence of letters in the word THERE.   When we use the principle that similar inputs yield similar SDRs, and we also insist that when a pattern is learned every ON neuron in one MAC learns to increase weights on the same connections as every other neuron, then at any time all learned sequences stored in memory that match the input so far remain possible, until the ambiguity is resolved by the next input.
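The narrowing of possibilities can be mimicked with a toy prefix filter (this is only an illustration of the behaviour described, not Sparsey’s SDR mechanism):

learned = ['THEN', 'THERE', 'THAT', 'SAILBOAT']

def candidates(prefix, vocabulary):
    # return every stored sequence still consistent with the input so far
    return [w for w in vocabulary if w.startswith(prefix)]

print(candidates('TH', learned))     # ['THEN', 'THERE', 'THAT']
print(candidates('THE', learned))    # ['THEN', 'THERE']
print(candidates('THER', learned))   # ['THERE']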

Think about the letter ‘A’ in the above 4 words.   We see that ‘A’ occurs in THAT (one time) and SAILBOAT (in two places).   There are 3 instances of ‘A’, and they cannot be represented in exactly the same way (if they were, then there would be no clue of what comes next).   ‘A’ in the context of THAT does not have the same exact representation as the first ‘A’ in SAILBOAT, and neither has the same exact SDR as the second ‘A’ in SAILBOAT.    Nonetheless, they will have representations that are more similar to each other than to the letter ‘B’, for instance.

Remember that any sequence is represented as a series of time steps, with the position varying but not the sequence.   Think of your own brain.   Your past and your future are all compressed in the present moment.   The past can be retrieved, and the future can be predicted, but at the moment, all you have is the present: a snapshot of neural firings in your brain.   The same is true in Sparsey of a sequence such as SAILBOAT.   When you reach the first ‘A’ of SAILBOAT it has all the information needed to complete that word, assuming that only the above 4 words were learned.   There is no ambiguity.   But that is only true because the pattern for this ‘A’ is slightly different than for the other ‘A’s (such as the ‘A’ in THAT).  They don’t overlap completely.

So how does Sparsey achieve the property of similar inputs giving rise to similar memories?

First we need to know that each neuron in each CM of a particular MAC has exactly the same inputs.  It may not have the same weights applied to those inputs, but it has the same inputs.   The inputs come from a lower level, which might be a picture in pixels, or if we have a multilevel net, might be from another abstract level.   Initially, all weights are zero.

radical1

Sparsey’s core algorithm is called the Code Selection Algorithm (CSA).   We’ll say that in every CM there are K neurons.   In each MAC (there can be several per level in a multilevel net) there are Q CMs.

CSA Step 1 computes the input sums for all Q×K cells comprising the coding field.  Specifically, for each cell, a separate sum is computed for each of its major afferent synaptic projections.

The cells also have horizontal inputs from cells within some maximum perimeter around the MAC and may have signals also coming down on connections from a layer above them.   But we’ll focus on just the inputs coming from below. The following is a very simplified version of what happens:

As in typical neural nets, each neuron in a MAC has an activation ‘V’ equal to the sum of the product of weights on  a connection times the signal coming over that connection.

Then these sums are normalized so that none exceed one and none are less than zero, but they retain their relative magnitudes or ‘V’ values for each neuron.

Now find the max V in each CM and tentatively pick the neuron with that value to be the ON neuron in that CM.

Finally, a measure called G is computed as the average max-V across the Q CMs.  In the remaining CSA steps, G is used, in each CM, to transform the V distribution over the K cells into a final probability distribution from which a winner is picked.  G’s influence on the distributions can be summarized as follows.

  1. When high global familiarity is detected (G is close to 1), those distributions are exaggerated to bias the choice in favor of cells that have high input summations.
  2. When low global familiarity is detected (G is close to 0), those distributions are flattened so as to reduce bias due to local familiarity.

G does this indirectly, by modifying a ‘sigmoid’ curve that is applied to each neuron’s output.

The lower level in the next picture has a sigmoid curve (the red S-shaped curve to the right) that has a normal height.   The upper level has a sigmoid curve that has been flattened.   We can see that in the lower level’s sigmoid function, Y-axis values are farther apart (at least in the middle of the ‘S’) than in the upper level’s.   The lower level here, we assume, had a larger G than the upper level did, so the CSA calculates a taller sigmoid to apply to the neurons in that level.   If a sigmoid is flattened, so that the probability of the most likely neuron is brought closer to the probabilities of the second and third most likely, then there is a greater chance that a neuron other than the one with the highest weighted input summation is the one that will fire and become part of the new memory.   Since low G means low confidence (or low familiarity), we do want the new SDR to have some differences from whatever stored SDR the collection of V’s seems closest to.   Having probabilities that are close together makes such differences more likely.

Suppose you see a prototypical cat that is just like the pet cat owned by your neighbor.   You already have a memory that matches very closely (your G is high).   Now suppose you see an exotic breed of cat that you’ve never encountered.   It matches all stored traces of cats less well, and therefore the memory that the CSA creates for it should be somewhat different.   So even though the V’s may approximate a cat (or intersection of cats) that you’ve seen before, applying the flattened sigmoid and then using a toss of the dice on which neuron will win in each CM, will lead to at least some CMs with different neurons firing than in the prototypical cat representation.  The flatter the sigmoid, the more likely a CM is to have finally selected a different neuron than the favored one to be On.
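Here is a very rough sketch of that winner-selection step. I use a softmax whose sharpness grows with G as a stand-in for the G-modulated sigmoid the CSA actually uses, so the gamma parameter and the exact functional form are my own simplifications:

import numpy as np

def choose_code(V, gamma=10.0, rng=np.random.default_rng()):
    # V is a (Q x K) array of normalized input summations, one row per CM
    G = V.max(axis=1).mean()            # global familiarity, between 0 and 1
    sharpness = 1.0 + gamma * G         # nearly flat when G ~ 0, peaked when G ~ 1
    code = []
    for v in V:                         # pick one winning neuron per CM
        p = np.exp(sharpness * v)
        p = p / p.sum()
        code.append(rng.choice(len(v), p=p))
    return code, G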

The connections from the inputs in the receptive field of the MAC (in the lower level) will strengthen to those neurons finally chosen in the SDR in the level above it.   Synapses are basically binary, though their strengths can decay, and neuron activations are binary too.

radical12

Any finite net that stores many memories can run into a problem of interference, or “cross-talk”.   The problem is that there are so many learned links that you can have similar patterns that differ by very few neurons and can be confused with each other.   You can also get patterns that are hybrids of others and never were actually encountered in real life.   The CSA actually freezes the number of SDRs a MAC can learn after a critical period, to attempt to avoid this problem.   In a multilevel net this is not necessarily a limitation.

I sent a few questions about human mental abilities and weaknesses to Professor Rinkus, and he had interesting replies.

I asked about memories that are false, or partly false, and he said this:

Let’s consider an episodic memory example, my 10th birthday, with different features, where it was, who was there, etc.  That episodic memory as a whole is spread out across many of my macro-columns (“macs”), across all sensory modalities.  But those macs have been involved in another 50 more years of other episodic memories as well.  In general, the rate at which new SDR codes, and thus the rate at which crosstalk accrues, may differ between them.  So, say one mac M1 where a visual image of one of my friends at the party, John, is stored has had many more images of John and other people stored over the years, and is quite full (specifically, ‘quite full’ means that so many SDRs have been stored that the average Hamming distance between all those stored codes has gotten low).  But suppose another mac, M2, where a memory trace of some other feature of the party, say, “number of presents I got”, say 10, was stored ended up having far fewer SDRs stored in it over the years, and so, much less crosstalk.  (After all, the number of instances where I saw a person is vastly greater than the number of instances where I got presents, so the hypothetical example has some plausibility).  So now, when I try to remember the party, which ideally would mean reactivating the entire original memory trace, across all the macs involved, as accurately as possible, including with their correct temporal orders of activation, the chance of activating the wrong SDR in M1 (e.g., remembering image of other friend, Bill, instead of John), is higher than activating the wrong trace in M2…so I remember (Bill, 10) instead of (John, 10).   The overall trace I remember is then a mix of things that actually happened in different instances, e.g., confabulation.

He also said this:

Whenever you recognize any new input as familiar, reactivation of the original trace must be happening.  So, the act of creating new memories involves reactivation of old memories. But reactivating old memory traces becomes increasingly subject to errors due to increasing crosstalk.  So, if my macs are already pretty full, then as I create brand new memory traces, they could include components that are confabulations…i.e., the memories are wrong from inception.

So Professor Rinkus is saying that a false memory can be wrong not only due to an oversupply of similar memories that affects the retrieval process, but can be wrong even at the time it was stored!

I would add that some memories are false because you don’t remember the source.   If you are told at one point that as a child, you were lost in a mall, even if that’s not true, years later you may have a memory that you were, and you may even fill in details of how it happened and how you felt.

Then I asked this question:

According to Wikipedia: “Eidetic memory (sometimes called photographic memory) is an ability to vividly recall images from memory after only a few instances of exposure, with high precision for a brief time after exposure, without using a mnemonic device.”   In your theory it would seem that everyone should have this memory, since every experience leaves a trace.   Why then, do only a few people have this ability?

I include a part of his answer below:

My general answer is that when we are all infants/young and we have not stored much information (in the form of SDRs) in the macs comprising our cortex, and so the amount of crosstalk interference between memories (SDR codes, chains of SDRs, hierarchies of chains of SDRs) is low, we all have very good episodic memory, perhaps approaching eidetic to varying degrees and in various circumstances.  But as we accumulate experience, storing ever more SDRs into our macs, the level of crosstalk increases, and increasing mistakes (confabulations) are made.  From another point of view, since these confabulations are generally semantically reasonable, we can say that as we age, our growing semantic memory, i.e., knowledge of the similarity structure of the world, gradually becomes more dominant in determining our responses/behavior (we accumulate wisdom)….  I think those who retain extreme eidetic ability into their later years, and perhaps autistics, may have a brain difference  that makes the sigmoid stay much flatter than for normals, i.e., the sigmoid’s dependence on G is somehow muted.

His speculation makes sense because if the sigmoid is very flat, then new SDRs that are stored for new patterns will be less likely to overlap much with existing SDRs.   Every cat you encounter that is slightly different than an old cat, will have its own representation.

If you are interested in more details of the model (I’ve left out many), take a look at Professor Rinkus’s website (sparsey.com).

Sources:
(you can obtain both from the publications tab of Sparsey.com):
A Radically New Theory of how the Brain Represents and Computes with Probabilities – (2017)
Sparsey™: event recognition via deep hierarchical sparse distributed codes – (2014)

 

Making Neural Nets more decipherable and closer to Computers

In an article titled “Neural Turing Machines”, three researchers from Google DeepMind – Alex Graves, Greg Wayne, and Ivo Danihelka – describe a neural net that has a new feature: a memory bank. The system is similar in this respect to a Turing Machine, which was originally proposed by Alan Turing in 1936. His hypothetical machine had a read/write head that wrote on squares on a tape, and could move to other squares and read from them as well. So it had a memory. In theory, it could compute anything that modern computers can compute, given enough time.

One advantage of making a Neural Net that is also a Turing machine is that it can be trained with gradient descent algorithms.   That means it doesn’t just execute algorithms, it learns algorithms (though, if you want to be fanatical, you might note that since a Turing machine can simulate any recipe that a computer can execute, it could simulate a neural net that learns as well).

The authors say this:

Computer programs make use of three fundamental mechanisms: elementary operations (e.g., arithmetic operations), logical flow control (branching), and external memory, which can be written to and read from in the course of computation. Despite its wide-ranging success in modelling complicated data, modern machine learning has largely neglected the use of logical flow control and external memory.

Recurrent neural networks (RNNs) …are Turing-Complete and therefore have the capacity to simulate arbitrary procedures, if properly wired. Yet what is possible in principle is not always what is simple in practice. We therefore enrich the capabilities of standard recurrent networks to simplify the solution of algorithmic tasks. This enrichment is primarily via a large, addressable memory, so, by analogy to Turing’s enrichment of finite-state machines by an infinite memory tape, we dub our device a “Neural Turing Machine” (NTM). Unlike a Turing machine, an NTM is a differentiable computer that can be trained by gradient descent, yielding a practical mechanism for learning programs.

They add that in humans, the closest analog to a Turing Machine is ‘working memory’ where information can be stored and rules applied to that information.

…In computational terms, these rules are simple programs, and the stored information constitutes the arguments of these programs.

A Neural Turing memory is designed

to solve tasks that require the application of approximate rules to “rapidly-created variables.” Rapidly-created variables are data that are quickly bound to memory slots, in the same way that the number 3 and the number 4 are put inside registers in a conventional computer and added to make 7.

… In [human] language, variable-binding is ubiquitous; for example, when one produces or interprets a sentence of the form, “Mary spoke to John,” one has assigned “Mary” the role of subject, “John” the role of object, and “spoke to” the role of the transitive verb.

A Neural Turing Machine (NTM) architecture contains two components: a neural network controller and a memory bank.
turing1

Like most neural networks, the controller interacts with the external world via input and output vectors. Unlike a standard network, it also interacts with a memory matrix…. By analogy to the Turing machine we refer to the network outputs that parametrize these operations as “heads.”
Crucially, every component of the architecture is differentiable, making it straightforward to train with gradient descent. We achieved this by defining ‘blurry’ read and write operations that interact to a greater or lesser degree with all the elements in memory (rather than addressing a single element, as in a normal Turing machine or digital computer).

In a regular computer, a number is retrieved by fetching it at a given address.

Their net has two differences in retrieval from a standard computer.   First of all, they retrieve  an entire vector of numbers from a particular address.   Think of a rectangular matrix, where each row number is an address, and the row itself is the vector that is retrieved.

Secondly, instead of retrieving at just one address, there is a vector of weights that controls the retrieval at multiple addresses.    The weights in that vector add up to ‘1’.   Think of a memory matrix consisting of 5 vectors.   There will be 5 corresponding weights.

If the weights were:

0,0,1,0,0

then only one vector will be retrieved, the vector at the third row of the matrix.  This is similar to ordinary location based addressing in computers or Turing machines. You can also shift that ‘1’ each cycle, so that it retrieves an adjacent number each time (to the number retrieved before).

Now think of the following vector of weights:

0,0.3,0.7,0,0

In this case two vectors are retrieved (one from the 2nd row, and one from the third).   The first one has all its elements multiplied by 0.3, the second has all its elements multiplied by 0.7, and then the two are added.   This gives one resultant vector.  They say this is a type of ‘blurry’ retrieval.
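A small numerical sketch of that ‘blurry’ read (the memory contents here are made-up numbers, not from the paper):

import numpy as np

M = np.array([[1.0, 0.0, 2.0],      # memory matrix: one row per address,
              [0.0, 3.0, 1.0],      # each row is the vector stored there
              [5.0, 1.0, 0.0],
              [2.0, 2.0, 2.0],
              [0.0, 0.0, 4.0]])
w = np.array([0.0, 0.3, 0.7, 0.0, 0.0])   # read weighting, sums to 1

r = w @ M                  # 0.3 * row 1 plus 0.7 * row 2
print(r)                   # [3.5 1.6 0.3]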

They use the same idea when writing to memory – a vector is used to relatively weight the different values written to memory.

This vector-multiplication method of retrieval allows the entire mechanism to be trained by gradient descent.  It can also be thought of as an ‘attentional mechanism’ where the focus is on the vectors with relatively high corresponding weights.

Some other nets do a probabilistic type of addressing, where there is a probability distribution over all the vectors, and at each cycle the net uses the most probable one (perhaps with a random component).   But since Neural Turing Machines learn by gradient descent, the designers had to use the distribution to obtain a weighted sum of memory vectors instead.   This was not a bug, but a feature!

They say:

The degree of blurriness is determined by an attentional “focus” mechanism that constrains each read and write operation to interact with a small portion of the memory, while ignoring the rest… Each weighting, one per read or write head, defines the degree to which the head reads or writes at each location. A head can thereby attend sharply to the memory at a single location or weakly to the memory at many locations.

Writing to memory is done in two steps:

we decompose each write into two parts: an erase followed by an add.
Given a weighting w_t emitted by a write head at time t, along with an erase vector e_t whose M elements all lie in the range (0,1), the memory vectors M_{t-1}(i) from the previous time-step are modified as follows:

turing4

where 1 is a row-vector of all 1’s, and the multiplication against the memory location acts point-wise. Therefore, the elements of a memory location are reset to zero only if both the weighting at the location and the erase element are one; if either the weighting or the erase is zero, the memory is left unchanged.
Each write head also produces a length-M add vector a_t, which is added to the memory after the erase step has been performed:

turing5

The combined erase and add operations of all the write heads produces the final content of the memory at time t. Since both erase and add are differentiable, the composite write operation is differentiable too.
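Here is a small NumPy sketch of my own of that erase-then-add update (the function and variable names are mine, but the two steps follow the equations quoted above):

import numpy as np

def write(memory, w, erase, add):
    """NTM-style write: erase then add, as in the two equations above.

    memory : (N, M) matrix of N memory vectors of length M
    w      : (N,)   weighting over locations, entries in [0, 1], summing to 1
    erase  : (M,)   erase vector, entries in (0, 1)
    add    : (M,)   add vector
    """
    # Erase step: each row i is scaled element-wise by (1 - w[i] * erase).
    erased = memory * (1.0 - np.outer(w, erase))
    # Add step: each row i receives w[i] * add.
    return erased + np.outer(w, add)

memory = np.ones((5, 4))
w      = np.array([0, 0.3, 0.7, 0, 0])
erase  = np.array([0.9, 0.9, 0.9, 0.9])
add    = np.array([1.0, 2.0, 3.0, 4.0])
print(write(memory, w, erase, add))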

The network that outputs these vectors and that reads and writes to memory, as well as taking inputs and producing outputs, can be a recurrent neural network or a plain feedforward network. In either case, the vector retrieved from memory is then fed back, along with the inputs, into the net.

The authors trained their net on various problems, such as copying a sequence of numbers, or retrieving the next number in an arbitrary sequence given the one before it. It came up with algorithms such as this one, for copying sequences of numbers (in the following, a ‘head’ can be either a read head or a write head, and has a vector of weights associated with it that weights the various memory vectors for retrieval and combination, or for writing):

initialise: move head to start location
while input delimiter not seen do
    receive input vector
    write input to head location
    increment head location by 1
end while
return head to start location
while true do
    read output vector from head location
    emit output
    increment head location by 1
end while

This is essentially how a human programmer would perform the same task in a low-level programming language. In terms of data structures, we could say that NTM has learned how to create and iterate through arrays. Note that the algorithm combines both content-based addressing (to jump to start of the sequence) and location-based addressing (to move along the sequence).

The way the NTM solves problems is easier to understand than the inner workings of a standard recurrent neural net, because you can look at how memory is being addressed, and at what is being retrieved from and written to memory at any point.
There is more to the NTM than I have explained above, as you can see from the following diagram from their paper:

turing6

Take-home lesson: The Neural Turing Machine outperforms existing architectures such as LSTMs (neural nets where each unit has a memory cell, plus trainable gates that decide what to forget and what to remember), and it generalizes better as well. It is also easier to understand what the net is doing, especially if you use a feedforward net as the ‘controller’. The net doesn’t just passively compute outputs; it decides what to write to memory and what to retrieve from memory.

Sources:
Neural Turing Machines by Alex Graves, Greg Wayne and Ivo Danihelka – Google DeepMind, London, UK (https://arxiv.org/abs/1410.5401)

Making recurrent neural net weights decipherable – new ideas.

One problem with neural nets is that after training, their inner workings are hard to interpret.
The problem is even worse with recurrent neural networks, where the hidden layer’s outputs are fed back, along with the inputs, into the hidden layer at the next time step.

Before I talk about how the problem has been tackled, I should mention an improvement to standard recurrent nets, which its authors (Jürgen Schmidhuber and Sepp Hochreiter) called LSTM (Long Short-Term Memory). The inventors of this net realized that backpropagation isn’t limited to training a relation between two patterns; it can also be used to train gates that control the learning done by other parts of the net. One such gate is a ‘forget gate’. It applies a ‘sigmoid function’ to the weighted sum of its inputs. Sigmoid functions are shaped like a slanted letter ‘S’, and the bottom and top of the ‘S’ are at zero and 1 respectively. This means that if you multiply a signal by the output of a sigmoid function, at one extreme you could be multiplying by zero, so the product is zero too and no signal gets through the gate. At the other extreme, you would be multiplying by 1, so the entire signal gets through. Since sigmoid gates are differentiable, backpropagation can be used on them. In an LSTM, each unit has a cell state that holds a memory value, as well as one or more outputs. In addition to the standard training, you also train a gate to decide how much of the past ‘memory’ to forget on each time step as a sequence of inputs is presented to the net. A good explanation of LSTMs is at http://colah.github.io/posts/2015-08-Understanding-LSTMs/, but the point to remember is that you can train gates to control the learning process of other gates.
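As a rough illustration of the ‘trainable gate’ idea, here is a toy forget gate in Python (my own sketch, with made-up weight values – not the full LSTM equations of any particular library):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy LSTM-style forget gate for a single cell.
# W_f and b_f would be learned by backpropagation, just like ordinary weights;
# the values here are made up for illustration.
W_f, b_f = np.array([0.5, -1.0, 2.0]), 0.1

def forget_step(cell_state, inputs):
    # In a real LSTM the gate's inputs would be the current input concatenated
    # with the previous hidden state; here 'inputs' is just a generic vector.
    f = sigmoid(W_f @ inputs + b_f)   # between 0 (forget everything) and 1 (keep everything)
    return f * cell_state             # scale the remembered value by the gate

print(forget_step(cell_state=3.0, inputs=np.array([1.0, 0.5, -0.2])))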

So back to making sense of the weights of recurrent nets. One approach is the IndRNN (Independently Recurrent Neural Network). If you will recall, a recurrent net with 5 hidden nodes would not only feed forward 5 signals into each neuron of its output layer, but would also send the 5 hidden-node signals back as 5 extra ‘inputs’ to join the normal inputs at the next time step. If you had 8 inputs, then in total 13 signals would feed into every hidden node. Once a net like this is trained, the intuitive meaning of the weights is hard to unravel, so the authors asked: why not just feed each hidden node into itself, thus keeping the hidden nodes independent of each other? Each node still gets all the normal input signals it would normally get, but in the above example, instead of also getting 5 signals from the hidden layer’s previous time step, it gets just one extra signal – its own value at the previous time step. This may seem to reduce the power of the net, since there are fewer connections, but it actually makes the net more powerful. One plus is that with this connectivity, the net can be trained with many layers per time step. Another plus is that the neurons don’t have to use ‘S’-shaped functions; they can work with non-saturating activation functions such as ReLU (rectified linear unit – a function that is a diagonal line when the weighted sum of a neuron’s inputs is zero or above, and otherwise a horizontal line at zero).

susrectifiedlinearunit

It is easier to understand what a net like this is doing than a traditional recurrent net.
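Here is a minimal sketch of my own of the difference in the recurrence, following the IndRNN paper’s update rule (the weight values are random placeholders): in a standard recurrent layer every hidden unit receives the entire previous hidden vector, while in an IndRNN each unit only sees its own previous value.

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

n_in, n_hidden = 8, 5
rng = np.random.default_rng(0)
W = rng.normal(size=(n_hidden, n_in))      # input-to-hidden weights (both variants)
U = rng.normal(size=(n_hidden, n_hidden))  # standard RNN: full hidden-to-hidden matrix
u = rng.normal(size=n_hidden)              # IndRNN: one recurrent weight per unit

def standard_rnn_step(x, h_prev):
    # each hidden unit mixes all 5 previous hidden values with the 8 inputs
    return np.tanh(W @ x + U @ h_prev)

def indrnn_step(x, h_prev):
    # each hidden unit only sees its own previous value (element-wise product)
    return relu(W @ x + u * h_prev)

x, h = rng.normal(size=n_in), np.zeros(n_hidden)
print(standard_rnn_step(x, h))
print(indrnn_step(x, h))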

Another ingenious idea came from a paper titled Opening the Black Box: Low-dimensional dynamics in high-dimensional recurrent neural networks, by David Sussillo of Stanford and Omri Barak of the Technion.

A recurrent network is a non-linear dynamical system: at each time step, the output of a computation is used as input for the next time step, where the same computation is made. Once the weights are learned, you can write the computation of the net as one large equation. In the equation below, the J matrix holds the weights from the context (the hidden units feeding back), the B matrix holds the weights for the regular inputs, and h is a function such as the hyperbolic tangent. The vector x is the state of the hidden units, r = h(x) is the vector of their firing rates, and u is the vector of signals from the input neurons.

susrecur
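To give a feel for what that equation describes, here is a toy simulation of my own (assuming the standard continuous-time form dx/dt = -x + J·h(x) + B·u with h = tanh, integrated with small Euler steps; the matrices are random placeholders):

import numpy as np

N, n_in = 10, 2
rng = np.random.default_rng(1)
# scale the recurrent weights so the dynamics settle to a fixed point
J = rng.normal(scale=0.8 / np.sqrt(N), size=(N, N))  # recurrent (context) weights
B = rng.normal(size=(N, n_in))                       # input weights

def F(x, u):
    """Right-hand side of the dynamics: dx/dt = -x + J*tanh(x) + B*u."""
    return -x + J @ np.tanh(x) + B @ u

# Euler-integrate the dynamics for a constant input and watch the state evolve.
x, u, dt = rng.normal(size=N), np.zeros(n_in), 0.05
for _ in range(2000):
    x = x + dt * F(x, u)
print(np.linalg.norm(F(x, u)))   # near zero if the state has settled at a fixed point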

The systems described by these equations can have attractors, such as fixed points. You can think of a fixed point as sitting at the bottom of a basin in a landscape: if you roll a marble anywhere into the valley, it will roll to the bottom. In the space of patterns, all patterns in the basin will evolve over time toward the pattern at the bottom. Attractors do not have to be fixed points: they can be lines, or a repeating sequence of points (the sequence repeats as time goes by), or they can never repeat but still be confined to a finite region – those trajectories in pattern space are called ‘strange attractors’. A fixed point can be a point where all neighboring patterns eventually end up, or it can be a repeller, so that all patterns in its neighborhood evolve away from it. Another interesting type of fixed point is a saddle: patterns in some directions evolve toward the point, but patterns in other directions evolve away from it. Think of the saddle of a horse. You can fall off sideways (that is the ‘repelling’ direction), but if you were jolted forward or backward in the saddle, you would slide back toward the center (the attracting direction).

sussaddle

So Sussillo and Barak looked for fixed points in recurrent networks. They also looked for ‘slow points’ – points where the dynamics nearly, but not quite, come to a halt, so that trajectories linger near them before drifting away. I should mention here that, just as at the bottom of a basin, the dynamics in a small area around a fixed point are approximately linear. As patterns approach an attractor, they usually start off quickly, but the progress slows the closer they get to it.

The authors write:

Finding stable fixed points is often as easy as running the system dynamics until it converges (ignoring limit cycles and strange attractors). Finding repellers is similarly done by running the dynamics backwards. Neither of these methods, however, will find saddles. The technique we introduce allows these saddle points to be found, along with both attractors and repellers. As we will demonstrate, saddle points that have mostly stable directions, with only a handful of unstable directions, appear to be of high significance when studying how RNNs accomplish their tasks.

Why is finding saddles valuable?

A saddle point with one unstable mode can funnel a large volume of phase space through its many stable modes, and then send them to two different attractors depending on which direction of the unstable mode is taken.

Consider a system of first-order differential equations
susfx

where x is an N-dimensional state vector and F is a vector function that defines the update rules (equations of motion) of the system. We wish to find values around which the system is approximately linear. Using a Taylor series expansion, we expand F(x) around a candidate point in phase space:
sustaylorseries

(A Taylor expansion uses the idea that if you know the value of a function at a point x, you can approximate its value at a nearby point (x + delta-x) using its first-order derivatives, second-order derivatives, and so on up to n’th order derivatives at x.)
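Written out, the standard form of such an expansion around a candidate point x*, truncated after the second-order term, is:

F(x^{*} + \delta x) \approx F(x^{*}) + F'(x^{*})\,\delta x + \tfrac{1}{2}\,\delta x^{\top} F''(x^{*})\,\delta x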

The authors say that “Because we are interested in the linear regime, we want the first derivative term of the right hand side to dominate the other terms, so that

susfxlinearizeseries

They say that this observation “motivated us to look for regions where the norm of the dynamics, |F(x)|, is either zero or small. To this end, we define an auxiliary scalar function.” In the caption of the equation, they explain that there is an intuitive correspondence to speed in the real physical world:
susqenergy
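In code, the search for fixed and slow points amounts to minimizing q(x) = ½|F(x)|² over the state x. Here is a minimal sketch of my own for the same kind of toy dynamics as above, using scipy’s general-purpose minimizer rather than necessarily the authors’ exact optimization setup:

import numpy as np
from scipy.optimize import minimize

N = 10
rng = np.random.default_rng(1)
# a recurrent gain above 1 typically gives richer dynamics, with several
# fixed and slow points, which makes the search more interesting
J = rng.normal(scale=1.5 / np.sqrt(N), size=(N, N))

def F(x):
    # toy autonomous dynamics (input held at zero): dx/dt = -x + J*tanh(x)
    return -x + J @ np.tanh(x)

def q(x):
    # the auxiliary 'kinetic energy' function: zero at fixed points, small at slow points
    return 0.5 * np.sum(F(x) ** 2)

# Start from several random states; minima of q with q ~ 0 are candidate fixed
# points, and minima with small but nonzero q are candidate slow points.
for _ in range(3):
    x0 = rng.normal(size=N)
    result = minimize(q, x0, method="BFGS")
    print("q at candidate point:", result.fun)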

A picture that shows a saddle with attractors on either side follows:

suspicsaddlesattracsimple

The authors trained recurrent nets on several problems, and found saddles between attractors, which allowed them to understand how the net was solving problems and representing data. One of the more difficult problems they tried was to train a recurrent net to produce a sine wave given an input that represented the desired frequency. They would present an input whose amplitude encoded the desired frequency (the higher the amplitude of the input signal, the higher the frequency they wanted the output to oscillate at), and they trained the output neuron to oscillate at a frequency proportional to that input. When they analyzed the dynamics, they found that, even though fixed points were not reached,

For the sine wave generator the oscillations could be explained by the slightly unstable oscillatory linear dynamics around each input-dependent saddle point.

I’m not clear on what the above means, but it is known that you can have limit cycles around certain types of fixed points (unstable ones). In the sine-wave example, the locations of the attractors and saddle points differ depending on what input is presented to the network. In the other problems they trained the net on, the saddle point(s) stayed in the same place no matter what inputs were presented, because the analysis was done in the absence of input – perhaps because there the input was transient (applied for a short time), whereas in the sine-wave task it was always present. So in the sine-wave example, if you change the input, you change the whole attractor landscape.

They also say that studying slow points, as opposed to just fixed points, is valuable, since

funneling network dynamics can be achieved by a slow point, and not a fixed point

(as shown in the next figure):

susghosts

A mathematician who I’ve corresponded with told me his opinion of attractors.  He wrote:

I think that:
• a memory is an activated attractor.
• when a person gets distracted, the current attractor is destroyed and gets replaced with another.
• the thought process is the process of one attractor triggering another, then another.
• memories are plastic and can be altered through suggestion, hypnosis, etc.  Eye witness accounts can be easily changed, simply by asking the right sequence of questions.
• some memories, once thought to be long forgotten, can be resurrected by odors, or a musical song.

One can speculate that emotions are a type of attractor. When you are depressed, the thoughts you have are sad ones, and when you are angry at a friend, you dredge up memories of the annoying things they did in the past.

In the next post, I’ll discuss a different approach to understanding a recurrent network. It’s called a “Neural Turing Machine”. I’ll explain a bit about it here.

Kurt Gödel had found that there are true mathematical statements that cannot be proved from any given set of axioms.

There had been a half-century of attempts, before Gödel came along, to find a set of axioms sufficient for all of mathematics, but that effort ended when he proved his “incompleteness theorem”.

In hindsight, the basic idea at the heart of the incompleteness theorem is rather simple. Gödel essentially constructed a formula that claims that it is unprovable in a given formal system. If it were provable, it would be false. Thus there will always be at least one true but unprovable statement. That is, for any computably enumerable set of axioms for arithmetic (that is, a set that can in principle be printed out by an idealized computer with unlimited resources), there is a formula that is true of arithmetic, but which is not provable in that system.

In a paper published in 1936 Alan Turing reformulated Kurt Gödel’s 1931 results on the limits of proof and computation, replacing Gödel’s universal arithmetic-based formal language with hypothetical devices that became known as Turing machines. These devices wrote on a tape and then moved the tape, but they could compute anything (in theory) that any modern computer can compute. They needed a list of rules to know what to write on the tape in different conditions, and when and where to move it.
So Alex Graves, Greg Wayne and Ivo Danihelka of Google DeepMind in London came up with the idea of making a recurrent neural net with a separate memory section that could be looked at as a Turing machine with its tape. You can see their paper here: https://arxiv.org/abs/1410.5401. I’ve corresponded with one author, and hopefully can explain their project in my next post.

Sources:
Opening the Black Box: Low-dimensional dynamics in high-dimensional recurrent neural networks by David Sussillo and Omri Barak (https://barak.net.technion.ac.il/files/2012/11/sussillo_barak-neco.pdf)
and
Independently Recurrent Neural Network (IndRNN): Building A Longer and Deeper RNN – by Shuai Li, Wanqing Li, Chris Cook, Ce Zhu, Yanbo Gao (https://arxiv.org/abs/1803.04831)