CONNECTIONISM AND BEHAVIORISM
I would argue that the typical approach of recent machine learning, especially with Deep Neural Networks (DNN), is mostly based on Behaviorism. The problem is that Behaviorism already failed in early psychology: a direct connection between perception and action turned out to be neither necessary nor sufficient to explain human cognition. Something more was involved. Connectionism, which emerged as an alternative to Behaviorism, did capture that 'something more': a more complex process standing between perception and action. It also inspired DNN, which has achieved remarkable success in various areas. Yet DNN still inherits the core paradigm of Behaviorism: the assumption of a direct connection between perception and action.
Looking at this paradigm in more detail, the 'action' mentioned above refers to the firing patterns of motor neurons. The Behaviorist perspective can therefore be described as "the firing patterns of motor neurons are derived directly from sensory neurons". The Connectionist perspective, with a subtle difference, can be described as "the firing patterns of motor neurons are derived from interneurons, which are also connected with sensory neurons". DNN was inspired by Connectionism in many respects, such as its complex connection structure, parameterized connection weights, and gradual learning through parameter updates. Nevertheless, the current DNN learning rule optimizes 'sensory input - motor output' matching, which is exactly the paradigm of Behaviorism. Current DNN has failed to capture the essence of the Connectionist paradigm. Put bluntly, current DNN is just a slightly more sophisticated Behaviorist model with multiple hidden layers added.
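To make this concrete, here is a minimal sketch (in PyTorch, with hypothetical dimensions and dummy data of my own, not anything from this post) of what 'sensory input - motor output' optimization looks like in practice: hidden layers exist, but the only error signal is the mismatch at the motor output.

```python
import torch
import torch.nn as nn

# "sensory input" -> hidden ("interneuron") layers -> "motor output"
net = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 4),              # "motor output"
)
opt = torch.optim.SGD(net.parameters(), lr=1e-2)

sensory_input = torch.randn(8, 16)  # batch of sensory observations (dummy data)
motor_target = torch.randn(8, 4)    # desired motor output (dummy data)

# The only error is the mismatch between predicted and desired motor output;
# the hidden layers receive no target of their own.
motor_output = net(sensory_input)
loss = nn.functional.mse_loss(motor_output, motor_target)
opt.zero_grad()
loss.backward()
opt.step()
```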
MODIFIED BEHAVIORISM
Then what could be the alternative? Here I suggest a new paradigm, 'Modified Behaviorism'. Only two key factors are modified:
1. Involving 'thinking' in 'action'
2. Involving 'thinking' in 'perception'
These two modifications let DNN cover much more of Connectionism than before. As noted above, the problem with the current DNN learning rule is that optimizing 'sensory input - motor output' matching is Behavioristic in the 'perception - action' sense. What if we redefine 'action' as 'thinking or motor output', not motor output alone? Then the learning rule must also change: it should optimize the thinking process together with the motor output, not only the motor output as existing DNN does. In Modified Behaviorism, thinking is itself regarded as an action, so a Modified Behaviorist DNN would learn 'how to think' by back-propagating thinking errors. And what would its input be? The external sensory input and the internal thinking (unlike existing DNN, which takes sensory input only).
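One possible way to read this as a training objective is sketched below (in PyTorch). All module names, dimensions, and especially the source of the thinking target are my own hypothetical placeholders, not something the paradigm specifies: the network takes sensory input plus its previous thought, produces a new thought and a motor output, and both receive an error term.

```python
import torch
import torch.nn as nn

class ModifiedBehaviorNet(nn.Module):
    def __init__(self, sensory_dim=16, think_dim=8, motor_dim=4):
        super().__init__()
        # perception takes both external sensory input and the previous thought
        self.perceive = nn.Linear(sensory_dim + think_dim, 32)
        self.think = nn.Linear(32, think_dim)             # "thinking" treated as an action
        self.act = nn.Linear(32 + think_dim, motor_dim)   # behavior induced by perception + thought

    def forward(self, sensory, prev_thought):
        h = torch.relu(self.perceive(torch.cat([sensory, prev_thought], dim=-1)))
        thought = torch.tanh(self.think(h))
        motor = self.act(torch.cat([h, thought], dim=-1))
        return thought, motor

net = ModifiedBehaviorNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

sensory = torch.randn(8, 16)
prev_thought = torch.zeros(8, 8)
motor_target = torch.randn(8, 4)
# Placeholder thinking target: the post argues that no such supervised target
# exists yet, so this random tensor merely stands in for whatever "thinking
# error" signal a Modified Behaviorist learning rule would have to provide.
thought_target = torch.randn(8, 8)

thought, motor = net(sensory, prev_thought)
loss = nn.functional.mse_loss(motor, motor_target) \
     + nn.functional.mse_loss(thought, thought_target)   # behavior error + thinking error
opt.zero_grad()
loss.backward()
opt.step()
```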
COMPATIBILITY WITH CONNECTIONISM
This point of view is now fully compatible with Connectionism, because in Modified Behaviorism the whole optimization process can be described as matching 'sensory input and internal thinking (perception)' to 'induced thinking and behavior (action)'. Since the final behavior of the DNN is induced by its own thinking together with the sensory input (not by the sensory input alone), this optimization fits the Connectionist paradigm stated above: "the firing patterns of motor neurons are derived from interneurons, which are also connected with sensory neurons". The 'thinking' here may sound similar to the time-varying hidden state of Recurrent Neural Networks (RNN), but the RNN learning rule is still Behavioristic (not Modified Behavioristic) because it generates no thinking errors, only behavior errors. Generating some kind of thinking error signal would require techniques beyond the current ones, and this may be the key challenge in implementing a Modified Behaviorist DNN.
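For contrast, here is a minimal sketch of the standard RNN setup described above (again with hypothetical dimensions and dummy data): the hidden state evolves over time, yet the only error ever computed is on the output, i.e. a behavior error.

```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=16, hidden_size=32, batch_first=True)
readout = nn.Linear(32, 4)
opt = torch.optim.SGD(list(rnn.parameters()) + list(readout.parameters()), lr=1e-2)

sensory_seq = torch.randn(8, 10, 16)   # batch of sensory sequences (dummy data)
motor_target = torch.randn(8, 10, 4)   # desired output at every step (dummy data)

hidden_states, _ = rnn(sensory_seq)    # time-varying internal ("thinking"-like) states
motor_output = readout(hidden_states)

# Only a behavior error exists; gradients reach the hidden states indirectly,
# but no "thinking error" is ever defined on the hidden states themselves.
loss = nn.functional.mse_loss(motor_output, motor_target)
opt.zero_grad()
loss.backward()
opt.step()
```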