Sunday, December 18, 2011

My Problem With Modern Machine Functionalism (MORE BIG WORDS BELOW)

So my blog is called "I Suck and Philosophy" and I have so far written nothing about philosophy. Fitting, right? Well, my original intention was to have an outlet for my philosophical musings, so that people could feel free to critique them. The title's purpose is to create an air of openness: I'm not sitting here telling you how to view things, but proposing topics that are open for adjustment or abandonment. I'm also attempting to do so in language that any of my followers can follow (this one's for you guys, Billy and Andrew).

While I feel the prospect of layman-accessible philosophy is feasible, I will also hyperlink major topics and ambiguous phrases to Wikipedia, in case you don't understand something or want to read in greater depth! So, with that said, let's take a crack at it.

The topic I would like to talk about is a view of mind currently held by a great many philosophers, called machine functionalism. It stems from a broader theory that has been around for many years called functionalism, introduced by Hilary Putnam. Its basic propositions are as follows. The physical material that makes up our mind is not what gives rise to our consciousness; the structuring of it is. As long as a material can function the way a brain does, who are we to say it is not conscious? Further, emotions and feelings serve only as functions for responding to the input our body receives, be that pain, love, taste, or any other physical input. The simplified version: we receive input, our brain processes how to interpret it and how to react to it, and we then exhibit the behavior our brain dictates. The important part of this theory is how it believes our brain does this, namely that it works exactly like computer software. That is, a predetermined set of rules in our brain sorts out incoming information and determines the responses to that information. *Disclaimer: Please don't mistake me for believing this. Just illustrating my points.*
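To make that software analogy concrete, here is a toy sketch of the kind of "machine table" machine functionalism has in mind: mental states as entries in a lookup table mapping a current state and a stimulus to a next state and a behavior. The states and stimuli below are invented purely for illustration, not anything Putnam himself wrote down.

```python
# A toy "machine table" in the spirit of machine functionalism: a mental state
# is just a role in a table mapping (current state, stimulus) to
# (next state, behavior). States and stimuli here are made up for illustration.

MACHINE_TABLE = {
    ("calm", "sharp pain"):           ("distressed", "pull hand away"),
    ("calm", "taste of sugar"):       ("pleased", "keep eating"),
    ("distressed", "soothing voice"): ("calm", "relax"),
}

def respond(state, stimulus):
    """Apply the predetermined rule for this state/stimulus pair."""
    return MACHINE_TABLE.get((state, stimulus), (state, "do nothing"))

state = "calm"
state, behavior = respond(state, "sharp pain")
print(state, "->", behavior)  # distressed -> pull hand away
```

On this picture, anything that runs the same table, whether made of neurons, silicon, or tin cans and string, counts as having the same mental states. That is the whole point of the view, and the part I want to poke at below.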

There are obvious objections to this, such as "how does this account for consciousness?", "well then why aren't computers sentient?", and "yeah, but the amount of space and computing power needed to exhibit anything close to human consciousness seems prohibitive." While addressing these philosophical objections would not bore me one bit, I have a feeling it may bore you slightly. PLEASE let me know if you want to hear them, though. I could talk forever about this. Instead of addressing the issues, we can just look around right now. Modern processors execute individual operations far faster than neurons fire. Much faster. And storage space is no longer a problem. So what gives? Why no sentience? Probably because the classical model is wrong. Just putting that out there. It just is. No debate. The modern model, on the other hand...well, here's where it starts to get scary.

Our current knowledge of how the brain works is impressive compared to when functionalist theory was introduced. We know how our brain communicates (across synapses between neurons) and what it uses to do so (neurotransmitters, plus electrical signals driven by the movement of charged ions such as sodium and potassium). With this knowledge we have started building neural network computers, essentially machines with synthetic synapses. They do actually exist. One is a face detection system that was initially very poor, but after training it has become about as effective at recognizing faces as us. Humans. The scary thing is that it learned. The computer learned. How, you ask? Well.....we can't really say: the "knowledge" ends up spread across thousands of trained connection strengths rather than in rules anyone wrote down. My sentiments are as follows: "Way to go, computer science. Create a machine so advanced we don't know how it works. Real good stuff."
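If you've never seen what "the computer learned" actually means, here is a minimal sketch, a single artificial neuron nudging its own connection weights toward correct answers. The task (learning a simple logical OR) and all the numbers are stand-ins I've chosen for illustration; the point is just that the finished behavior lives in trained weights, not in rules a programmer typed out.

```python
# A minimal artificial "neuron" that learns by adjusting connection weights,
# loosely analogous to synapses strengthening or weakening with experience.

def step(x):
    # Fire (output 1) only if the weighted input crosses the threshold.
    return 1 if x > 0 else 0

# Training data: two inputs -> desired output (logical OR, a stand-in task).
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

for _ in range(20):                      # sweep over the data a few times
    for (x1, x2), target in examples:
        out = step(weights[0] * x1 + weights[1] * x2 + bias)
        error = target - out             # how wrong was the response?
        weights[0] += rate * error * x1  # nudge each "synapse" toward the answer
        weights[1] += rate * error * x2
        bias += rate * error

print(weights, bias)  # the learned "knowledge" is just these numbers
```

Run it and you get a handful of decimal numbers that happen to produce the right behavior. Nobody wrote the rule "output 1 if either input is 1"; the system settled into it. Scale that up to millions of weights and you get systems like the face detector, which work, but not in a way anyone can simply read off.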

Yet I am still not convinced that we have figured out what gives rise to consciousness, and furthermore, I don't think these neural network computers will be able to exhibit any kind of human behavior any time soon. Here's why. We still need software to run these machines. Software that somehow has to adapt to new situations, process information, and rewrite itself. To reflect upon itself. To derive new rules and information from its environment rather than from a programmer. How are we going to make this kind of autonomous software? That's not really the question I feel is facing current machine functionalism theory. What I think needs to be straightened out is whether it is even theoretically possible. And I really don't think it is.

What I have heard so often in my philosophy classes and in philosophical papers on this issue is that consciousness is only an engineering problem. Well, I will grant that building hardware capable of supporting consciousness may be possible. But that concession is nearly useless, because the hardware is such a small part of the battle. We have no clue how our brain actually does what it does. Even if we could figure out the brain and acquire a full understanding of it, we would still have to replicate that understanding in a computer program running on a system composed of completely different structures than ours.

While computers are getting incredibly fast and can display an impressive range of abilities, I still do not believe they can be an accurate model of consciousness.