Psychology & Neuroscience Asked on March 4, 2021
Based on an article by UK Essays, we are nothing more than robots that operate on the basis of our past experience and other factors, such as the amounts of neurotransmitters, hormones, and other chemicals, trying to achieve optimal outcomes for ourselves.
Does this mean that every human mind can be reduced to (a very complex and nonperfect) function?
I think the key concept to tackle this question is to consider the concept of abstraction.
Abstract models are generalized models of some kind of "reality" that we are interested in, with the aim of describing some behavior of the system in question reasonably well. Often the abstraction should also apply to many instances of the entity we would like to describe: if you want an abstract model of a human, you would probably want it to be useful for describing many humans, not just one specific human. This might provide less detailed predictive power for individuals, but might be more useful when applied to populations.
When talking about the world or the reality in any kind of descriptive way, we are really always using abstract models to think, rationalize and communicate about it.
Now, for your question: of course you can make an abstract model of a human being, but you have to ask yourself what level of detail you want. Ignoring the practical difficulties of constructing such a model, the path to a model that matches "reality" as closely as possible is endless. For example, to model a brain successfully, you would in the end have to describe not only every atom of the brain, but the quarks, and perhaps end up all the way down in string theory, while at the same time capturing all the emergent behavior along this chain rather than taking a purely reductionist view.
Theoretically, I'd say the answer to your question is yes: it would be possible to build an abstract model of a human mind with "good enough" predictive power.
Everything we know about the real world is deterministic (except for some uncertainty at the level of quantum mechanics, which probably has no significant influence on a very high-level system like a brain anyway). There is no scientific support that non-deterministic notions like "free will" either exist or have any practical influence, so nothing except the sheer complexity of the matter, our knowledge, and our scientific processes limits how precisely we can describe something.
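The abstraction described above can be sketched in code. Below is a purely illustrative toy: a "mind" modeled as a deterministic function mapping (internal state, stimulus) to (new state, behavior). The state variables, thresholds, and behaviors are all invented for the sketch; there is no claim that real minds decompose this way.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    hunger: float   # 0.0 (sated) .. 1.0 (starving); invented toy variable
    fatigue: float  # 0.0 (rested) .. 1.0 (exhausted); invented toy variable

def mind(state: State, stimulus: str) -> tuple[State, str]:
    """Deterministic toy policy: same state + stimulus always -> same behavior."""
    if stimulus == "food" and state.hunger > 0.5:
        return State(hunger=0.1, fatigue=state.fatigue), "eat"
    if state.fatigue > 0.8:
        return State(hunger=state.hunger, fatigue=0.2), "sleep"
    # Default: needs slowly accumulate while the agent explores.
    return State(min(1.0, state.hunger + 0.1),
                 min(1.0, state.fatigue + 0.1)), "explore"

s = State(hunger=0.9, fatigue=0.3)
s, behavior = mind(s, "food")
print(behavior)  # -> eat
```

The point of the sketch is only that such a function is deterministic and arbitrarily refinable: the open question in the answer above is how much state and detail you would need before its predictions were "good enough".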
Correct answer by Alex on March 4, 2021
The final two paragraphs of that piece address this exact question.
Although understanding how neurons communicate with each other contributes to our understanding of behaviour at the level of biology, behaviour cannot be reduced to biological explanations.
In conclusion, the communication of neurons within the nervous system assists our understanding of behaviour; however, it is not the only contributing factor. Reducing explanations of behaviour to a biological level suggests that we are all robots.
The article states that if we reduced human behavior to only biological factors we would essentially be robots, but that we can't reduce behavior to biology alone. So, we cannot be essentially robots.
If we could develop a way to hold all other factors the same, then biology would work as a way to program humans.
Answered by Reed Rawlings on March 4, 2021
For a review of how this question is debated in cognitive science, look up Searle's Chinese Room thought experiment.
In the Chinese Room thought experiment, Searle argues that there is something fundamentally meaningful (semantic) about the internal state of a living being, and that this meaning cannot be approximated by a computer. Consequently, the human mind cannot be reduced to a function: any attempt will fail to capture this "meaning" and will be incomplete in some respect.
To justify this stance, he draws an analogy to a person in a room who receives questions in Chinese through a slot in a door. Although the person does not understand Chinese, thanks to reference books he can write the appropriate response on a piece of paper and slip it back under the door. In the analogy, the reference books are the database of rules, the person is the program, and the door is the input and output. Obviously the program (person) does not understand Chinese, or anything about the meaning of the input and output. How can any computer program (or, as you say in your question, "function") be said to approximate the human mind if it doesn't understand anything?
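The room's rule-following can be sketched as a pure lookup program. The entries below are invented placeholders, not Searle's examples; the point is that nothing in the program represents meaning, only symbol-to-symbol rules.

```python
# Toy sketch of the Chinese Room: a program that maps input symbols
# to output symbols via a rule book, with no representation of
# meaning anywhere. The rule-book entries are invented placeholders.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",   # "Do you speak Chinese?" -> "Of course."
}

def room(message: str) -> str:
    """Return the scripted reply, or a stock fallback; nothing here 'understands'."""
    return RULE_BOOK.get(message, "请再说一遍。")  # "Please say that again."

print(room("你好吗？"))  # -> 我很好，谢谢。
```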
There are many replies to this thought experiment. The most potent I've come across is the Systems Reply, which claims that although the person does not understand Chinese, the system as a whole does.
Searle replies to this argument by having the man internalize the books and wander outside into China; he claims that the man will still not understand Chinese in any "meaningful" manner.
I don't find this reply convincing, because when you look at what "meaningful" means to Searle and where it is supposed to come from biologically, he claims there is something in the synapses that cannot be captured by a computer. That is a weak argument, since it can't be proven or disproven by measurement or experiment. It is also undermined by Pylyshyn's replacement argument: on Searle's view, if you replaced every synapse, one at a time, with a functionally identical silicon component, at some point you would cease to have meaningful thought, which seems absurd.
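Pylyshyn's replacement argument can be made concrete with a toy sketch: swap each "biological" unit in a pipeline for a functionally identical "silicon" one and observe that input/output behavior is unchanged at every step. The units and the pipeline are invented for illustration; real neurons are not ReLUs.

```python
# Sketch of the replacement argument: if each replacement preserves
# the unit's input/output function, the whole system's behavior is
# preserved at every intermediate stage of the swap.

def bio_unit(x: float) -> float:
    return max(0.0, x)            # toy "biological" unit: rectified response

def silicon_unit(x: float) -> float:
    return x if x > 0.0 else 0.0  # identical function, different "substrate"

network = [bio_unit] * 5

def run(net, x: float) -> float:
    for unit in net:
        x = unit(x)
    return x

baseline = run(network, 0.7)
for i in range(len(network)):
    network[i] = silicon_unit              # replace one unit at a time
    assert run(network, 0.7) == baseline   # behavior unchanged at every step
```

If "meaning" tracked the substrate rather than the function, it would have to vanish at some particular swap above even though nothing observable changes, which is the absurdity Pylyshyn points at.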
Personally, I would argue that Searle's counter-argument to the Systems Reply is a limited metaphor. Give the person a chance to modify the books in their head, as well as to interact with Chinese in the wild, and undoubtedly they will come to understand Chinese. That's literally how people learn language!
Although I have clearly chosen a side in this debate, it is far from settled, as most things in philosophy remain, but hopefully this gives you a good starting point.
Answered by Seanny123 on March 4, 2021
I recently became aware of the field of evolutionary psychology and read the book "Why Beautiful People Have More Daughters" by Satoshi Kanazawa. It is quite enlightening: the field suggests that humans have built-in psychological programs and preferences.
Kanazawa suggests that when looking for a potential mate on the African savanna, without the concepts of age and calendars, the human male brain evolved to prefer traits associated with high fertility and youth.
These traits combine into a universal model of beauty, because youth maximizes reproductive success (a younger woman conceives more easily and will bear more children than a woman in her 30s). The primitive brain effortlessly and unconsciously reads such traits as "attractive", yet one would have a hard time rationally explaining why they are attractive.
To answer your question: I would say that these "lower-order" cognitive processes can indeed be modeled as a program or function.
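A "lower-order" preference like the one above can be sketched as a plain function: perceptual cues in, an unconscious score out. The cue names and weights below are invented for illustration and are not Kanazawa's model.

```python
# Hypothetical sketch: an unconscious preference heuristic as a
# weighted scoring function over perceptual cues. All names and
# weights are invented placeholders.

def attractiveness(cues: dict[str, float]) -> float:
    weights = {"youth_cue": 0.6, "health_cue": 0.4}  # invented weights
    return sum(weights[k] * cues.get(k, 0.0) for k in weights)

print(round(attractiveness({"youth_cue": 0.9, "health_cue": 0.7}), 2))  # -> 0.82
```

The person computing this "score" cannot introspect the weights, which matches the observation above that people struggle to rationally explain why a trait reads as attractive.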
Answered by Alex Stone on March 4, 2021