The Brain, A Decoded Enigma Part 4

As we already emphasized, MDT is just a tool, used here to see whether there is any possibility of evolving from an animal brain to a human brain. The theory itself neither supports nor rejects such a possibility.

Based on MDT, the main difference between a human brain and an animal brain is that only the human brain has the facility to make and operate symbolic models. What the two types of brain have in common is the facility to make and operate image models.

The evolution problem is to see whether changing some parameters in the structure of the image-model devices could produce the capability of making and operating symbolic models. On the other hand, any new hardware that would have to be added to the animal brain is considered incompatible with an evolutionary process.

As we saw in the previous section, the highest level reached by the animal brain is level 2. With a peak at level 5, the superiority of the human brain is huge.

Let's see some arguments that support the evolutionary process. For instance, let's analyze whether, by increasing the level of conceptualization of the models, it is possible to get closer to the ability to make and operate symbolic models. If a class of models is more and more conceptualized, such models should become so simplified that they come very close to a symbolic definition. Therefore, a change from level 2 to level 3 could be reached by evolution.

But let's analyze an example: "this apple", "an apple", "a fruit", "food". This is an increasing level of conceptualization, with the last two items as symbolic elements. Animals take a shortcut by making a model that tells them whether what they encounter is food or not. In this way, animals have a fast solution to their problems, based on image models, and there is no advantage in increasing the level of conceptualization. Thus evolution could be blocked by a fast solution based on image-models.
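
As an illustration, here is a minimal Python sketch (not from the source; the feature sets and names are invented) of the ladder above: each higher concept requires fewer features, so it covers more entities, while the animal shortcut answers the only question that matters to the animal with a single test.

```python
# Invented feature sets: higher levels require less, so they cover more.
LEVELS = [
    ("this apple", {"red", "round", "sweet", "on_my_table"}),
    ("an apple",   {"red", "round", "sweet"}),
    ("a fruit",    {"round", "sweet"}),
    ("food",       {"sweet"}),
]

def classify(entity_features):
    """Return every concept the entity falls under, least to most abstract."""
    return [name for name, required in LEVELS if required <= entity_features]

def animal_shortcut(entity_features):
    """The animals' fast image-model test: food or not, nothing in between."""
    return "sweet" in entity_features

# An unfamiliar pear matches only the abstract levels...
print(classify({"green", "round", "sweet"}))        # ['a fruit', 'food']
# ...but the shortcut already answers the animal's only question:
print(animal_shortcut({"green", "round", "sweet"})) # True
```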

Advanced conceptualization would have to be supported in a group of vulnerable animals, where communication could be decisive for survival. By increasing the level of conceptualization, communication could become more and more precise. This seems to be the only serious argument for increasing the level of conceptualization. On the other hand, there is already a system of communication at level 2: a sound or a combination of sounds is associated with a label-type model, which can activate any ZM-model. This type of communication is faster than communication based on symbolic models, and usually precise enough for the normal necessities of a group of animals. So here again we see no advantage in increasing the level of conceptualization.
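
A minimal sketch of this level-2 channel, with invented sound and model names: one association from sound to label, one from label to model, and no symbolic processing in between.

```python
# Invented associations: sound -> label-type model -> activated model.
SOUND_TO_LABEL = {"short_bark": "danger", "long_bark": "food_here"}
LABEL_TO_MODEL = {"danger": "flee_model", "food_here": "approach_model"}

def react(sound):
    """One lookup per step, no symbolic processing: fast but fixed."""
    return LABEL_TO_MODEL[SOUND_TO_LABEL[sound]]

print(react("short_bark"))   # flee_model
```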

But if, for a group of animals, a lot of information arrives in fast succession, the animals will be forced to make more and more simplified models, and this should force them to increase the level of conceptualization.

Let's see another example. A person goes somewhere in the desert. Without special equipment, his chance of survival is very low. But around him there could be animals which survive without special effort. For animals, it is more important "to invest" in "equipment" than to increase the level of conceptualization of their models.

Anyway, at least in theory, it is possible to evolve from an animal brain to a human brain through an increase in the level of conceptualization. Whether animals actually have any tendency to do this is another issue.

Let's analyze the evolution of the brain again. A concept model is a model which fits a large number of entities. It has to be recorded, perhaps, by the same hardware that records a normal image-model. Also, there must be a connection between a concept model and every particular model covered by it.

By increasing the level of conceptualization (e.g. from "apple" to "fruit"), the structure becomes very complex, and even more complex when it evolves from "fruit" to "food". In theory, an evolutionary process could produce this structure, but the increase in complexity is so huge that it is difficult to believe it could happen without specialized hardware.
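
A rough sketch of the wiring growth described here, with invented counts: each step up the ladder must link to every particular model it covers, so the connection count multiplies at every level.

```python
# Invented counts of recorded particular image-models per kind.
particulars = {
    "apple": 50,
    "pear":  30,
    "meat":  40,
    "bread": 20,
}

# "fruit" must connect to every particular apple and pear model:
fruit_links = particulars["apple"] + particulars["pear"]

# "food" must connect to every particular model of every kind of food:
food_links = sum(particulars.values())

print(fruit_links, food_links)   # 80 140
```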

Level 2 is very close to level 3, but, as we see, no animal has been able to reach level 3. Even the most advanced animals, like dolphins, show no tendency towards level 3.

The first drawings on cave walls are dated to about 150,000 years ago. Such drawings must have been produced by some long-range image-models. But such drawings are of no use without some explanations (symbolic messages), because the same drawing can be associated with many different situations. It is fair to consider that, at that moment, primitive human beings were able to use a symbolic model for communication (a primitive language).

One idea is that the increasing capacity of the brain to make long-range image-models also supported making symbolic models. Based on MDT, this idea cannot be supported.

Indeed, the drawings made by 5-to-12-year-old children are rather primitive. At that age, children have very few long-range models, yet they are able to make and operate symbolic models, including languages for communicating with computers.

Thus, it seems that long-range image models are not necessary for making symbolic models. This also supports the idea that symbolic models are made by specialized hardware.

The argument for the existence of specialized hardware is the following:

There is an image model and an associated label-model (a word). The word has a definition (based on other words). Clearly, there must be hardware to record the image-model and other (associated) hardware to record the definition. At level 4, the image model does not exist anymore.
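
A hedged sketch of the two storage structures this argument implies, with invented field names: at level 3 a word is still anchored to an image-model, while at level 4 it is held up only by other words.

```python
from dataclasses import dataclass

@dataclass
class Level3Entry:
    word: str
    image_model: bytes   # recorded by one piece of hardware
    definition: list     # other words, recorded by associated hardware

@dataclass
class Level4Entry:
    word: str
    definition: list     # no image-model at all

apple = Level3Entry("apple", b"<retinal pattern>", ["fruit", "round", "edible"])
justice = Level4Entry("justice", ["fairness", "law", "rights"])
```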

If this new hardware were built by evolution, it is difficult to understand why we see no intermediate stages. Dolphins, which are considered the most advanced animals, show no tendency to build symbolic models.

There are some experiments with monkeys which can be read as evidence that some monkeys are able to make symbolic models. Such cases could be generated by a software implementation of the function to build and operate symbolic models.

As we already know, a model in PSM is very efficient, but it blocks evolution (the model is transmitted unchanged, or with small changes, from one generation to another). If an animal builds, e.g. by accident, an advanced model of interaction with the external reality, such a model cannot be transmitted to the next generation. Only if a hardware implementation exists will a new model be transmitted to the next generation. This seems to be a big problem for the evolution of beings.

Without a hardware implementation, the solution is to transmit such models through education. If there were groups of monkeys which lived together for a very long time, a good model could be transmitted from one generation to another by education. In this way, a hardware implementation could also build up, if the available time is long enough.

After many generations of monkeys forced to build symbolic models, it is theoretically possible that some hardware emerges to support symbolic model building. This could be the process that generated the human brain through evolution.

The main argument against evolution from animals to humans is the fact that two-year-old children are able to build and operate symbolic models. At that age they have neither enough long-range models to understand the external reality, nor the capability to build such models. The maturity of a human being is reached around the age of 18, and thus the facility to build symbolic models is clearly a hardware facility.

Conclusions:
1. Long-range image-models are not an explanation for the occurrence of symbolic models.
2. Symbolic models could occur from image-models through a huge increase in the level of conceptualization, under very special conditions (e.g. large groups of monkeys which live together for a very long time).
3. Symbolic models are built and operated by specialized hardware.

There are two possibilities: either evolution, if conclusion 2 is valid, or external intervention, if it is not.

BASIC DESIGN DEFICIENCIES OF THE HUMAN BRAIN

The theory treats the brain as a technological product. So, the theory considers that a designer existed, who had to fulfil some design requirements. Any technological design has some deficiencies; we shall try to identify them in this section.

This theoretical and abstract designer is outside the theory, and we are not interested in it. It could be "Mother Nature" or God or an extraterrestrial civilization or anything else.

These deficiencies are described here mainly for the human brain, but some can also be found in the animal brain. The design deficiencies, as MDT can detect them, are:

XD1: The tendency to associate an image-model with any situation a person meets. This deficiency is explained by the "image nature" of the brain. It explains why so many persons "stay" at level 3, even though level 5 has been accessible for the last 100 years. This deficiency can be corrected by education.

XD2: There is no hardware protection to prevent an uncontrolled jump from one model to another during interaction with a complex external reality. Stability in a model is a quality parameter of a brain.

Long-range models can stabilize a person, but the XD2 deficiency is not related to them. XD2 concerns the capacity to stay in a model when faced with a complex external reality. This deficiency can be corrected by software (education, for instance).

The lack of stability in a model can induce the illness called schizophrenia, because this lack of stability tends to favor short-range models. Indeed, when there is no stability in a model, the brain will make a specialized model for any particular situation met in the external reality. Such models are not able to see that some different facts can be correlated; only a long-range model can detect such a correlation. So, stability in a model is a quality parameter of a brain, and the lack of stability indicates a low-quality brain.

This deficiency can be found in the animal world too. For example, a dog has to watch a perimeter. That dog can jump from the watch-model to the food-model if it gets food from strangers. Such a dog is a low-quality dog, due to its lack of stability in the model.
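
One way to picture XD2 is as a missing switching threshold; here is a sketch with invented numbers, where stability is exactly the threshold the low-quality dog lacks.

```python
def pick_model(scores, active, switch_cost):
    """Stay in the active model unless a rival beats it by switch_cost."""
    best = max(scores, key=scores.get)
    if best != active and scores[best] - scores[active] < switch_cost:
        return active        # stability supplied by hardware or education
    return best

scores = {"watch_model": 0.6, "food_model": 0.7}
print(pick_model(scores, "watch_model", 0.3))   # watch_model (stable dog)
print(pick_model(scores, "watch_model", 0.0))   # food_model  (XD2 dog)
```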

Dolphins have good stability in a model, which is why we consider them advanced animals.

For human beings, the lack of stability in a model is a major drawback. Such persons are not suited to any complex activity.

XD3: This is a basic deficiency. Let's start with its description, based on examples.

So, the brain interacts with an external reality and makes a harmonic model with 3 elements. If that external reality has, in fact, 4 elements, the missing element cannot be discovered based on the 3-element model. As a 3-element model produces a number of wrong predictions, it is not easy to see what the problem is from an analysis of the mistakes. The reason is that, once the 3-element model is activated, the reality is just the one generated by this model. There is no other reality! We cannot be outside of our active model. In such a case, the brain tries to correct the model, usually by changing the importance of some elements or relations. Sometimes this procedure works, and the brain will continue to use the 3-element model.
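
A small numerical sketch of this situation (all data invented): reality is generated by four factors, the model knows only three, and re-weighting the three reduces the mistakes without ever revealing the missing fourth.

```python
# Each observation: (three visible elements), hidden fourth element, outcome.
# Reality computes the outcome from all four; the model sees only three.
observations = [
    ((1, 2, 3), 1, 7),
    ((2, 0, 1), 4, 7),
    ((0, 1, 1), 2, 4),
    ((1, 1, 0), 3, 5),
]

def predict(weights, elems):
    return sum(w * e for w, e in zip(weights, elems))

def total_error(weights):
    return sum(abs(predict(weights, elems) - y)
               for elems, _hidden, y in observations)

print(total_error((1, 1, 1)))       # 10: the hidden element is unexplained
print(total_error((1.5, 1, 1.2)))   # ~7.2: smaller, but no weights reach zero
```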

Such a situation occurs when we do not have enough long-range models. In the above example, the situation can be corrected if there is a long-range model which contains the 3-element model as one of its elements. But even so, by analyzing the mistakes, it is not easy to understand what the problem is.

A brain affected by XD3A (defined below) is not able to predict that a model might be missing some elements. A person who can fight XD3A can predict such a situation and will treat any model as preliminary.

The brain makes models based on the available data. Such models are made in a harmonic/logical way, but the stability of a model is no guarantee that the model is good in interaction with a complex external reality.

We define XD3A as the design deficiency whereby a brain is not able to predict the possibility of a missing element or relation in a stable (harmonic or logical) model.

Another case: a brain has a stabilized model with 100 elements. This model has already generated a large number of correct predictions. At some moment, the external reality changes, and now there are 101 elements. As we know, to correct a model means to reconstruct everything from scratch, with or without components from the old model. This task could be so difficult that it exceeds the technical capacity of the brain. In such a situation the old model becomes fragmented, and the brain uses it in this way. Of course, this can produce a lot of negative effects, including induced psychiatric disorders.
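
A crude sketch of why reconstruction can exceed the brain's capacity (the cost function is invented for illustration): if every element must be re-related to every other one, the cost grows with the square of the model size, not with the size of the change.

```python
def rebuild_cost(n_elements):
    # every element must be re-related to every other one
    return n_elements * (n_elements - 1) // 2

print(rebuild_cost(3))     # 3    -- a small model is cheap to redo
print(rebuild_cost(101))   # 5050 -- one extra element reprices everything
```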

We define XD3B as the design deficiency whereby a brain is not able to reconstruct a model once the model is detected as wrong in association with a new external reality. We can also express this as the impossibility of a brain to correct an XD3A deficiency once it has been discovered.

XD3-deficiencies are widespread in the everyday activity of human beings. There is no reference by which to know that all the entities of the external reality are associated with the right YMs in the associated model. For us, the external reality exists only if it is associated with a model. Once we have activated such a model, the reality is what the model says. We cannot be outside of our active model.

Once we have a model associated with a specific external reality, the model is considered a good model based on the predictions already made. There is no guarantee that the model will continue to be good in every situation and at any time. A good-quality brain has to know this and to predict some negative effects associated with such a situation. So, this deficiency can be controlled by software (education, for instance).

XD4: This is a deficiency associated only with image-models. It does not exist in a symbolic-model environment.

For an image-model there is no way to know the importance of an element or relation; the brain will choose the importance in a more or less arbitrary way. A model can be harmonic (stable) for any importance assigned to its elements and relations.
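
A minimal sketch of XD4 (the element names and the harmony test are invented): the model passes its internal consistency check for any assignment of importance, so the check itself cannot select the right one.

```python
ELEMENTS = ("color", "shape", "smell")

def is_harmonic(weights):
    # internal consistency here demands only positive weights summing to 1,
    # a condition almost any importance assignment satisfies
    return all(w > 0 for w in weights) and abs(sum(weights) - 1) < 1e-9

for weights in [(0.8, 0.1, 0.1), (0.1, 0.8, 0.1), (1/3, 1/3, 1/3)]:
    print(dict(zip(ELEMENTS, weights)), is_harmonic(weights))   # all True
```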
