Topic: Universal Morality - Trust/Truth
Redykeulous
Tue 09/07/10 09:08 PM
Edited by Redykeulous on Tue 09/07/10 09:14 PM
Definition 1.) Belief is that which is accepted as being true (accurately corresponding to reality).


Definition 1.) I don’t see how this definition can be used in the discussion without another qualifier. Belief does not always correspond to reality.

For example: Our brains multitask continuously and they do so below the level of consciousness. Our entire body functions as a biological unit. After consuming a large meal, we often feel the need to relax and rest. That’s not accidental, because a great deal of our energy is required to assist in the digestive process.

The brain is no different. The majority of its power is reserved for maintaining awareness of the immediate present – the conscious moment.

To accommodate the necessity of remaining ‘in the moment’, our brain has evolved so that below the level of consciousness generalized thought processes continue. Much of this process presents itself as heuristic cognition. Heuristic ability is an innate quality that most scientists consider a survival mechanism.

The problem with heuristics is that the process ‘programs’ itself (if you will) by utilizing data which the individual has categorized (tagged) as content on which survival may depend.

What happens is that individuals categorize incorrectly, allowing fear-based bias to be tagged as survival information. One word here should explain the rest – stereotyping. It is not based on reality, yet it becomes a belief of individuals. Nor is stereotyping the only example; superstition and fear of the unknown (fear of change) are miscategorized in the same way.

Therefore,

Definition 1.) Belief is that which is accepted by the individual as having some basis in reality.

But it does not necessarily ‘accurately correspond to reality’.

Definition 2.) Morality is innate and universally shared behavioral expectation.


The definition here is too far-reaching, as it encompasses two completely different ideas.
The first idea is – morality is innate and universally shared.
The second idea is – morality is universally shared behavioral expectations.

I don’t find these two ideas to be mutually inclusive.

I agree with Morality is innate and universally shared – to a point. For example, the survival of an infant is totally dependent on others. Therefore, innately, the infant MUST ‘trust’ whatever/whoever meets its needs.
But is that actually morality?

That ‘trust’ has limits, as has been demonstrated with experiments in which a baby is placed on a floor, or low table top, designed to look like it has a drop-off. On the other side of the drop-off, several feet away, is the baby’s mother, trying to get the baby to crawl to her – which means the baby must crawl over the apparent drop-off.

Almost all babies will stop, many will cry, and some can be coaxed enough so that they put their hand past the drop off. If they get that far, they ‘learn’ that the danger is not real and will usually no longer pay attention to the optical illusion.

As for a universally shared behavioral expectation, I can agree. Most morals are learned and instilled endemically, well before there is any concept that other cultures exist that adhere to their own set of morals.

So I would accept, for
Definition 2) Morality is universally shared as a culturally acceptable set of behavioral expectations.

Definition 3.) Moral belief is belief regarding acceptable/unacceptable behavior.


No need for Definition three, one and two should suffice.


Definition 1.) Belief is that which is accepted by the individual as having some basis in reality.

Definition 2) Morality is universally shared as a culturally acceptable set of behavioral expectations.



p1.) We all have moral belief.
p2.) We do not all hold a belief in 'God'.
C.) 'God' does not entail moral belief.


Ok by me

We are newborns in a real sense concerning secular 'morality'. Morality is not expressed through an ought utterance, belief about morality is. There exists a universal constant which applies to all humans, regardless of particulars, that amounts to being a behavioral expectation(which ethics and morality both converge upon) that is not being met to the degree that it necessarily instantiates itself prior to our having the ability to acquire adopted belief about morality('sense' of ought). This I intend to show.


You must first show that morality is innate? The argument I presented above indicates morality is not innate.



p1.) We are necessarily social creatures.

p2.) We are born void of belief.

p3.) We are born rational creatures.


p1 – ok by me

p2 – assumes a blank slate – that goes about as far as the second or third time a baby cries and is picked up, fed, or otherwise comforted. We know that babies learn to cry for attention, because those who receive no attention, stop crying even when in pain or starving. They ‘learn’ to ‘expect’ (believe) that their cries will fulfill their needs. So yes to p3 - we are born rational creatures.


Axiom1: I believe 'X' means I believe 'X' is true.
C.) It is humanly impossible to knowingly believe a falsehood.
C2.) It is humanly impossible to intentionally make a mistake.


Axiom1 – is that equivalent to “X is true only if I believe X is true” ? Meaning the truth of X is always based in a subjective belief? For an Axiom to be accepted it must be true in all cases for which it could apply. Because the ‘reality’ is that morality is also influenced by factual information, facts must be included in the Axiom.

I think Wux made reference to this. In other words, how do we fit ‘facts’ into this equation? Certainly morals are also guided by facts learned along the way, as much as they may be based in beliefs on which no facts have been made evident.

I can use C. and C2 as an example: Knowledge, based on fact (the ‘fact of the matter’), can be imparted to an individual. The facts can be denied for no other reason than that they may prove a prior belief to be false. We have falsely categorized embarrassment as a point of survival – it is heuristic, and thus we can ‘intentionally make a mistake’ by ‘knowingly’ treating a fact as false while continuing to believe a falsehood.

When Wux suggested changing C2 to “…intentionally making an unintentional mistake,” I agreed with it because we do intentionally make mistakes and often because we knowingly believe in a falsehood.

(Prime example: the young Earth theory when compared to facts.)

So - What if:

Axiom1: X is true if X is based on fact and not necessarily true when X is based on belief
C.) It is humanly impossible to knowingly base a belief on a known falsehood
C2) It is humanly impossible to intentionally make an unintentional mistake

Notice I have adjusted C.) I’m not sure I completely accept it, but it’s more palatable. Even though we can ‘continue’ to believe in a falsehood after enlightenment, I will accept that if an individual is shown facts, and can relate to those facts with understanding, it would be impossible to base a belief on false claims about them.

I’ll stop here – well after one other comment.

I have only skimmed most of the responses, (I’m late getting here) so I may have missed it, but I don’t see where you have explained what you think the ‘innate morality’ is that we are born with.

I note that you made the following comment:

Morality is not moral belief. Only when we treat those things as being equal does there seem to be a problem.


Morality is the act of exhibiting a behavior whose quality can be judged against individual or social ideals. In other words, you are right that individual behavior cannot expressly be used to gauge or even determine the moral beliefs of that individual.

But if you are categorizing morality as something other than a behavior which can be contrasted with a moral, it needs to be more clearly defined.

creativesoul
Wed 09/08/10 11:28 PM
Di,

Glad you could visit this thread, I had hoped that you would. Let's see what is going on here in our thoughts...

creative:

Definition 1.) Belief is that which is accepted as being true (accurately corresponding to reality).


Redy:

Definition 1.) I don’t see how this definition can be used in the discussion without another qualifier. Belief does not always correspond to reality.


You're right. Belief is not necessarily true (accurately corresponding to reality), nor does it need to be. It is believed to be, regardless of whether or not it is.

For example: Our brains multitask continuously and they do so below the level of consciousness. Our entire body functions as a biological unit. After consuming a large meal, we often feel the need to relax and rest. That’s not accidental, because a great deal of our energy is required to assist in the digestive process.

The brain is no different. The majority of its power is reserved for maintaining awareness of the immediate present – the conscious moment.

To accommodate the necessity of remaining ‘in the moment’, our brain has evolved so that below the level of consciousness generalized thought processes continue. Much of this process presents itself as heuristic cognition. Heuristic ability is an innate quality that most scientists consider a survival mechanism.

The problem with heuristics is that the process ‘programs’ itself (if you will) by utilizing data which the individual has categorized (tagged) as content on which survival may depend.

What happens is that individuals categorize incorrectly, allowing fear-based bias to be tagged as survival information. One word here should explain the rest – stereotyping. It is not based on reality, yet it becomes a belief of individuals. Nor is stereotyping the only example; superstition and fear of the unknown (fear of change) are miscategorized in the same way.

Therefore,

Definition 1.) Belief is that which is accepted by the individual as having some basis in reality.


Neat piece on heuristics... I am afraid, though, that I do not see a problem here concerning the definition of belief. This alternative definition has the same meaning and more words. "That which is accepted by the individual as having some basis in reality" means "that which is accepted as being true", doesn't it?

creative:

Definition 2.) Morality is innate and universally shared behavioral expectation.


Redy:

The definition here is too far reaching, as it encompasses two completely different ideas. Morality is innate and universally shared. The second idea is - morality is universally shared behavioral expectations.

I don’t find these two ideas to be mutually inclusive.

I agree with Morality is innate and universally shared – to a point. For example, the survival of an infant is totally dependent on others. Therefore, innately, the infant MUST ‘trust’ whatever/whoever meets its needs. But is that actually morality?


Not exactly. The definition needs some work, as previously admitted. There is much more to it. I'm seeking to use deduction. The most prevalent issue, for now, is that the term 'innate' creates coherency problems. From the premiss 'we are born void of belief', it necessarily follows that we are born void of belief about others. Expectation requires belief. Consider the term 'innate' incorrect, self-contradictory, and therefore removed. Morality, if held to be behavioral expectation, cannot be innately had. That which gives rise to it can be. Trust and truth (loose correspondence theory) are two such concepts.

Redy:

That ‘trust’ has limits, as has been demonstrated with experiments in which a baby is placed on a floor, or low table top, designed to look like it has a drop-off. On the other side of the drop-off, several feet away, is the baby’s mother, trying to get the baby to crawl to her – which means the baby must crawl over the apparent drop-off.

Almost all babies will stop, many will cry, and some can be coaxed enough so that they put their hand past the drop off. If they get that far, they ‘learn’ that the danger is not real and will usually no longer pay attention to the optical illusion.


Nice example which illustrates the concepts of trust and original belief affecting thoughts/behavior through the use of our innate rationality.

Redy:

As for a universally shared behavioral expectation, I can agree. Most morals are learned and instilled endemically, well before there is any concept that other cultures exist that adhere to their own set of morals.

So I would accept for

Definition 2) Morality is universally shared as a culturally acceptable set of behavioral expectations.


This is a normative/prescriptive claim, and those kinds of moral claims constitute the bulk of problems with moral discussions. I hold that it is such normatively subjective claims that have created the 'problem' of morality. In order for a universal morality to have weight, as it were, all individual and cultural ethical/moral codes and beliefs must converge upon the same things. My argument shows that one's learning common language necessarily instantiates several different concepts being put to use, prior to our knowing and/or having acquired a complex conceptual understanding of them. A few of those are traditionally held to be moral concepts.

We put them to use before knowing what they are. Necessarily so. Can we somehow logically conclude that they are essential to our being human? It seems tough from this particular approach, however, it is impossible from the common one.

creative:

Definition 3.) Moral belief is belief regarding acceptable/unacceptable behavior.


Redy:

No need for Definition three, one and two should suffice.


I disagree, Di. The historical conflation between moral belief and morality needs to be identified, separated, and adhered to in order to establish the mistake in thought which continues to this day when most people speak about morality. They equate it to utterances of ought, which are grounded by moral belief, because those (oughts) necessarily follow from moral belief (belief about 'good and evil' and 'right and wrong'). The ought is a behavioral expectation; the moral belief that grounds the ought is not.

Moral belief is not behavioral expectation, it is belief about(that grounds) it. The distinction here is vitally important. Example...

Moral belief 'A' is "murder is evil". Murder is the behavior. The moral belief is belief about that behavior. There is no behavioral expectation in the moral belief "murder is wrong". It is implied, though, and expressed through oughts. That cannot happen unless one presupposes its truth. It then provides grounds for an ought: "One ought not murder (because it is wrong)."

So, the ought statement is a behavioral expectation, but the moral belief which grounds the ought is not.

creative:

We are newborns in a real sense concerning secular 'morality'. Morality is not expressed through an ought utterance, belief about morality is. There exists a universal constant which applies to all humans, regardless of particulars, that amounts to being a behavioral expectation(which ethics and morality both converge upon) that is not being met to the degree that it necessarily instantiates itself prior to our having the ability to acquire adopted belief about morality('sense' of ought). This I intend to show.


Redy:

You must first show that morality is innate? The argument I presented above indicates morality is not innate.


Actually, Di, I admitted an earlier mistake and subsequently showed that morality cannot possibly be innate from the given premisses. However, that does not deny that the moral concepts of trust and truth are instantiated, as the argument shows. Innate morality is not necessary to show universally shared moral concepts being put to use in the same way prior to our mentally grasping the concepts.

creative:

p1.) We are necessarily social creatures.

p2.) We are born void of belief.

p3.) We are born rational creatures.


Redy:

p2 – assumes a blank slate – that goes about as far as the second or third time a baby cries and is picked up, fed, or otherwise comforted. We know that babies learn to cry for attention, because those who receive no attention, stop crying even when in pain or starving. They ‘learn’ to ‘expect’ (believe) that their cries will fulfill their needs. So yes to p3 - we are born rational creatures.


Actually, p2 does not assume a blank slate in the strictest sense. Blank slates do not do anything. Being born void of belief/knowledge about the world does not mean that we are born without genetic predisposition(s).

creative:

Axiom1: I believe 'X' means I believe 'X' is true.
C.) It is humanly impossible to knowingly believe a falsehood.
C2.) It is humanly impossible to intentionally make a mistake.


Redy:

Axiom1 – is that equivalent to “X is true only if I believe X is true” ? Meaning the truth of X is always based in a subjective belief? For an Axiom to be accepted it must be true in all cases for which it could apply. Because the ‘reality’ is that morality is also influenced by factual information, facts must be included in the Axiom.


No, it does not mean that, and there is no need for facts in an axiom. An axiom is an undeniable, irrefutable, self-evident truth. Belief is not sufficient for truth. It means 'X' is believed iff 'X' is believed to be true; as in being the case; as in accurately corresponding to reality.

Redy:

I think Wux made reference to this. In other words, how do we fit ‘facts’ into this equation? Certainly morals are also guided by facts learned along the way, as much as they may be based in beliefs on which no facts have been made evident.


We use known facts to either verify or deny. I attempt to use them as the basis for premisses. We must, however, use them properly. For instance, the fact that one holds a false belief does not mean that they know it is false, nor does the fact that they hold it make it true.

Redy:

When Wux suggested changing C2 to “…intentionally making an unintentional mistake,” I agreed with it because we do intentionally make mistakes and often because we knowingly believe in a falsehood.


How does "intentionally making an unintentional mistake" make sense? It is one or the other – it cannot be both simultaneously.

Intentionally taking an action is quite a bit different from knowingly making a mistake. It makes no sense to say that one can intentionally or knowingly make a mistake. That would be saying that we intentionally took action based upon an expected outcome other than the one we expected. It is incomprehensible.

If the outcome agrees with the expectation, it is not a mistake. Only when the outcome does not meet the expectation do we have a mistake. One's calling another's action a "mistake" has no bearing whatsoever upon whether or not it is one. The relationship between expectation and result is what determines whether or not something constitutes a mistake. Mistakes reflect flaws in volitional capability; they are not determined by personal value assessments. That is not to say that we cannot – after the fact – call something a mistake. Nor is it to say that one cannot say that another's future action would be a mistake based upon the expected outcome, or the possible violation of one's moral code. Those do not determine whether or not the action produces an unexpected outcome. That is all that constitutes a mistake: an unexpected or unwanted outcome.

Or, should we choose to shorten this point: what constitutes an intentional mistake? I cannot think of one, but would be very interested in seeing one explained.

Wow, this is a long post! Perhaps we could shorten these to more concise points?

Redykeulous
Thu 09/09/10 04:56 PM
Edited by Redykeulous on Thu 09/09/10 04:57 PM
Redy:

Ok – trying to consolidate here, so much of what is below will be concluded and we can move on.

creative:

This is a normative/prescriptive claim, and those kinds of moral claims constitute the bulk of problems with moral discussions. I hold that it is such normatively subjective claims that have created the 'problem' of morality. In order for a universal morality to have weight, as it were, all individual and cultural ethical/moral codes and beliefs must converge upon the same things.

My argument shows that one's learning common language necessarily instantiates several different concepts being put to use, prior to our knowing and/or having acquired a complex conceptual understanding of them. A few of those are traditionally held to be moral concepts.

We put them to use before knowing what they are. Necessarily so. Can we somehow logically conclude that they are essential to our being human? It seems tough from this particular approach, however, it is impossible from the common one.

creative:

Definition 3.) Moral belief is belief regarding acceptable/unacceptable behavior.


Redy:

No need for Definition three, one and two should suffice.


creative:
I disagree Di. The historical conflation between moral belief and morality needs to be identified, separated, and adhered to in order to establish the mistake in thought which continues to this day when most people speak about morality.

They equate it to utterances of ought, which are grounded by moral belief, because those (oughts) necessarily follow from moral belief (belief about 'good and evil' and 'right and wrong'). The ought is a behavioral expectation; the moral belief that grounds the ought is not.

Moral belief is not behavioral expectation, it is belief about(that grounds) it. The distinction here is vitally important. Example...

Moral belief 'A' is "murder is evil". Murder is the behavior. The moral belief is belief about that behavior. There is no behavioral expectation in the moral belief "murder is wrong". It is implied, though, and expressed through oughts. That cannot happen unless one presupposes its truth. It then provides grounds for an ought: "One ought not murder (because it is wrong)."

So, the ought statement is a behavioral expectation, but the moral belief which grounds the ought is not.


Ok – I think I have a better understanding now of where you are going with this.

creative:
We are newborns in a real sense concerning secular 'morality'. Morality is not expressed through an ought utterance, belief about morality is. There exists a universal constant which applies to all humans, regardless of particulars, that amounts to being a behavioral expectation(which ethics and morality both converge upon) that is not being met to the degree that it necessarily instantiates itself prior to our having the ability to acquire adopted belief about morality('sense' of ought). This I intend to show.


I will withhold comments on this as I read your further thoughts on the matter.

creative:


p1.) We are necessarily social creatures.

p2.) We are born void of belief.

p3.) We are born rational creatures.



Redy:


p2 – assumes a blank slate – that goes about as far as the second or third time a baby cries and is picked up, fed, or otherwise comforted. We know that babies learn to cry for attention, because those who receive no attention, stop crying even when in pain or starving. They ‘learn’ to ‘expect’ (believe) that their cries will fulfill their needs. So yes to p3 - we are born rational creatures.


creative:
Actually, p2 does not assume a blank slate in the strictest sense. Blank slates do not do anything. Being born void of belief/knowledge about the world does not mean that we are born without genetic predisposition(s).


Axiom1: I believe 'X' means I believe 'X' is true.
C.) It is humanly impossible to knowingly believe a falsehood.
C2.) It is humanly impossible to intentionally make a mistake.



Redy:
Axiom1 – is that equivalent to “X is true only if I believe X is true” ? Meaning the truth of X is always based in a subjective belief? For an Axiom to be accepted it must be true in all cases for which it could apply. Because the ‘reality’ is that morality is also influenced by factual information, facts must be included in the Axiom.



No, it does not mean that, and there is no need for facts in an axiom. An axiom is an undeniable, irrefutable, self-evident truth. Belief is not sufficient for truth. It means 'X' is believed iff 'X' is believed to be true; as in being the case; as in accurately corresponding to reality.

- OK - We’ll move on.

Redy:
I think Wux made reference to this. In other words, how do we fit ‘facts’ into this equation? Certainly morals are also guided by facts learned along the way, as much as they may be based in beliefs on which no facts have been made evident.


creative:
We use known facts to either verify or deny. I attempt to use them as the basis for premisses. We must, however, use them properly. For instance, the fact that one holds a false belief does not mean that they know it is false, nor does the fact that they hold it make it true.

I think it works the other way around – we use ‘belief’, in some individually arranged hierarchical order, to verify or deny facts. But I realize I’m invoking semantics here, because I do not attach the same power to the term ‘belief’ as the majority of people do. When I make the statement “I believe…”, it predominantly means: I have come to a conclusion based on the facts at my immediate disposal.

I could easily say “I surmise…” in its place, because I’ve learned that holding a firm conviction about anything makes it more difficult to adjust my thinking as new information/knowledge/facts are presented.

So, for now, I’ll allocate ‘belief’ as I see it portrayed in the context of your discussion.

Redy:
When Wux suggested changing C2 to “…intentionally making an unintentional mistake,” I agreed with it because we do intentionally make mistakes and often because we knowingly believe in a falsehood.


creative:
How does "intentionally making an unintentional mistake" make sense? It is one or the other – it cannot be both simultaneously.

Intentionally taking an action is quite a bit different from knowingly making a mistake. It makes no sense to say that one can intentionally or knowingly make a mistake. That would be saying that we intentionally took action based upon an expected outcome other than the one we expected. It is incomprehensible.


This was referring to cognitive functions, heuristics, and to what I wrote above – “we use ‘belief’, in some individually arranged hierarchical order, to verify or deny facts.”

We can have the facts but choose to take action ‘in spite of’ or ‘in denial of’ knowledge that might have, or possibly should have, prevented the ‘mistake’. In some cases the action/mistake was overruled by the hierarchical order of beliefs, while in others the mistake was a lack of critical analysis.

However, after reading the quote below, I think you might relegate my argument to “volitional capability”.
Let me continue below.

creative:
If the outcome agrees with the expectation, it is not a mistake. Only when the outcome does not meet the expectation do we have a mistake. One's calling another's action a "mistake" has no bearing whatsoever upon whether or not it is one. The relationship between expectation and result is what determines whether or not something constitutes a mistake. Mistakes reflect flaws in volitional capability; they are not determined by personal value assessments. That is not to say that we cannot – after the fact – call something a mistake. Nor is it to say that one cannot say that another's future action would be a mistake based upon the expected outcome, or the possible violation of one's moral code. Those do not determine whether or not the action produces an unexpected outcome. That is all that constitutes a mistake: an unexpected or unwanted outcome.

Or, should we choose to shorten this point: what constitutes an intentional mistake? I cannot think of one, but would be very interested in seeing one explained.


The mistake I refer to has been referenced above: a cognitive (psychosomatic) one. These are the type of mistakes people continuously make rather than face the dissonance of inconsistency between facts and beliefs. In that case, people will behave one way because of a belief based on a false premise, yet in conversation that same person will often quite clearly state the facts which refute the premise of their belief. Because they cannot come to terms with past behaviors, they continue to make the ‘mistakes’.

To them the outcome is as expected – but the outcome is not about the OUTSIDE. The only outcome that means anything to that individual is the outcome produced INSIDE – that is, behavior and belief are consistent.

I have to ask – is that what you refer to as “mistakes reflect flaws in volitional capability”?

I’m hoping to resolve this last issue because I’m very interested in where this discussion could go.

Exploring human ethics is of great interest to me, because I have often felt that we are quickly approaching a new era in modernity. This new era may be the best time for taking steps toward developing a coincidental human ethics philosophy that is broad enough for a wide range of acceptance and eventually expanded globally.

no photo
Thu 09/09/10 06:02 PM

wux,

I am not following your objections here. For one, you're not showing the flaws you claim exist. Broken down point by point, for ease of discussion...

creative:

p1.) We all have moral belief.


I support this only on the basis of having to have a belief of something or somebody once it has been named. A belief of "I don't believe" is a belief. A god is not a thing of learning; it is a conceptual thing over which we exercise a belief, and the belief is concerned with the existence or non-existence of god. Morality, ditto. There is no necessity that all of us must believe that morality exists, but some of us could say "I believe there is no such thing as morality." This still presupposes that the utterer knows what morality is – he or she knows the basic concept and its specific requirements – except the utterer believes that those are false specs and the concept need not exist.


How does this contradict the claim that we all have moral belief; belief about acceptable/unacceptable behavior?

creative:

C2.) It is humanly impossible to intentionally make a mistake.


wux:

I reject this. I think I could only accept it if you reworded it to
C2.) It is humanly impossible to intentionally make an unintentional mistake.

This is very, very important. Mistake by itself does not necessarily mean it is unintentional.

Many will argue that mistake can only be unintentional. I don't believe that. If I am wrong, then please consider that in my wrong opinion I accept the reworded version.

The reason intentional mistakes can be made is the reason of reference. Many acts are made; some are mistakes; but they are mistakes only in a certain perspective, while in other perspectives they are not mistakes, and all conscious acts are done intentionally. Sometimes it is a lot of work to establish that an act is a mistake, by finding the frame of reference in which it was an unintentional act causing a bad effect. All conscious acts are intentional, yet some aspects of intentional acts are unintentional, and they are not all mistakes. So instead of bogging ourselves down in separating frames of reference, I say that adding the word "unintentional" would nicely take care of this can of worms that can open every time we use the word "mistake" – which would be, after all, a big mistake to do so.

However, these two failures or mistakes in your presentation I won't even use in the description of my views. This was just a toccata.


Nothing written here contradicts the argument, wux. See if this helps: A mistake is a flaw in one's volitional capability, necessarily so. One intentionally takes action based upon foreseen consequences. Necessarily so. If the outcome meets the expectation, there is no mistake. Only when the outcome contradicts the desired expectation is a mistake had. The argument was also not presented in your objection, only that conclusion. It necessarily follows from the axiom and secondary premiss (first conclusion).

Axiom1: I believe 'X' means I believe 'X' is true.

The above is necessarily true. Therefore so is the below...

C.) It is humanly impossible to knowingly believe a falsehood.

Because the above two premisses are necessarily true, and we know that one takes action based upon volition, this necessarily follows, and is therefore also true.

C2.) It is humanly impossible to intentionally make a mistake.

wux:

However, I have huge problems with your question at the end of your post, or questions:

"Tell me then, why have we violated that universally shared human condition? Perhaps the better question is why ought we continue to do such a thing?"

THAT condition? SUCH a thing?

What condition? WHICH thing?

You actually forgot to name the condition that you say we keep violating, and you never named the thing that we continue to do.


No, I didn't forget, wux; it required looking back at the argument in question. Here it is again. Because morality/ethics is always about behavioral expectation, as per the preceding argument, that becomes the focus. The following represents a universally applicable behavioral expectation that must exist prior to having learned anything through language. As written, with the answer to your question bolded for additional clarity.

In order to even be able to learn a common language, one must necessarily place 'pure' faith(unquestionable trust/belief unimpeded by doubt) in the teacher of that language to be truthful in their testimony.

That is the universally applicable common denominator in human behavioral expectation(universal morality).


Sorry if you feel it was not referenced well enough. Perhaps a more careful reading of the entire argument will help. It is not a simple matter of investigation.

wux:

I am sorry, I don't mean to upset you, but you made this mistake here, and I can't be sure whether it was intentional or unintentional.


No worries wux, I am not upset. It is humanly impossible to intentionally make a mistake, as shown in the argument.


just want to jump in with a single observation. I do not agree that we all have some kind of concept of morality. I think most of us do, fortunately, but not all. I believe firmly that there are truly people out there who are totally amoral - not immoral (which is a belief system in and of itself) but amoral - they exist in an absence of morality, a moral void if you will. No concept, no caring about the constructs of right or wrong or good or bad/evil. And they are very dangerous.

Thorb's photo
Thu 09/09/10 06:22 PM
Interesting concept but

I believe your argument for naive realism is not how we learn.
trial and error is.

we do not naively believe this or that ...
we test it through trial and error and that is how we gain our belief in everything ... including language.

JMHO derived from my personal anecdotal experiences and memory of my childhood.


no photo
Thu 09/09/10 08:03 PM

Interesting concept but

I believe your argument for naive realism is not how we learn.
trial and error is.

we do not naively believe this or that ...
we test it through trial and error and that is how we gain our belief in everything ... including language.

JMHO derived from my personal anecdotal experiences and memory of my childhood.


Learning theory, which is an actual pedagogy or body of knowledge, will tell you that there are several modalities for learning and they are influenced by things like sensation, perception, and cognitive development. We learn best via multiple modalities in terms of memory and retention. But the best teacher, yes thorb, is experience. If one both reads and experiences something - tangible like ...an apple, or intangible like love or anger, they will have those dual modalities and tend to have learned what those things are.

If you just read about apples, love and anger, you will have heard of those things. If you experience apples, love, and anger you will know those things. Experience is the best teacher - it can teach without reading, but reading cannot truly be an effective teacher without experience. jmho in my little world (imlw)

Redykeulous's photo
Thu 09/09/10 08:55 PM


Interesting concept but

I believe your argument for naive realism is not how we learn.
trial and error is.

we do not naively believe this or that ...
we test it through trial and error and that is how we gain our belief in everything ... including language.

JMHO derived from my personal anecdotal experiences and memory of my childhood.


Learning theory, which is an actual pedagogy or body of knowledge, will tell you that there are several modalities for learning and they are influenced by things like sensation, perception, and cognitive development. We learn best via multiple modalities in terms of memory and retention. But the best teacher, yes thorb, is experience. If one both reads and experiences something - tangible like ...an apple, or intangible like love or anger, they will have those dual modalities and tend to have learned what those things are.

If you just read about apples, love and anger, you will have heard of those things. If you experience apples, love, and anger you will know those things. Experience is the best teacher - it can teach without reading, but reading cannot truly be an effective teacher without experience. jmho in my little world (imlw)


The extent of experiences that the overwhelming majority of humans actually have in a lifetime could not possibly account for the vast moral judgments and moral values that most people subscribe to.

It stands to reason that we do not come by our moral standard through individual experience alone.

So in what other ways or by what other means do we adopt the standards of what is right or wrong, and good or bad?

Taking that another step further, why don't humans, the world over, exhibit exactly the same adherence to exactly the same moral/ethical standards?

Could there even be such a thing as an inherent quality of morality or a human ethics that is universally shared?

Those questions are part of what has led to this discussion.
The replies are long and the differences of opinion make them harder to plod through, but it might be a little easier reading with those questions in mind.

I think it's good to get others' ideas. It keeps us all thinking and questioning.





Redykeulous's photo
Thu 09/09/10 09:17 PM

Interesting concept but

I believe your argument for naive realism is not how we learn.
trial and error is.

we do not naively believe this or that ...
we test it through trial and error and that is how we gain our belief in everything ... including language.

JMHO derived from my personal anecdotal experiences and memory of my childhood.



I think in this case (in this discussion) naive realism may be to the point. The OP, if I'm following correctly, through the use of certain assertions, is examining the implications of an innate human characteristic for morality.

In other words, this characteristic would be a universally encoded human ethics. If that’s the case, somehow we are overriding what is naively internal with external or objective data. In effect we are, basically, corrupting what should be a universally accepted ideal that is neither learned nor judged by any objective standard.

But I hadn’t thought of that until I read your post. Mmm makes it more interesting to think of it that way.

creativesoul's photo
Thu 09/09/10 09:23 PM
Edited by creativesoul on Thu 09/09/10 09:36 PM
creative:

We use known facts to either verify or deny. I attempt to use them as the basis for premisses. We must, however, use them properly. For instance, the fact that one holds a false belief does not mean that they know that it is false, nor does the fact that they hold it make it true.


Redy:

I think it works the other way around - we use ‘belief’, in some individually arranged hierarchical order, to verify or deny facts.


There are some issues here stemming from word usage/meaning. Let's see if I can correct my own ambiguity which played a role. When I said "We", I was speaking of people who realize and understand that we are fallible creatures, and that the most reliable method we have at our disposal to correct past mistakes of belief and 'knowledge' is using correspondence theory of truth. Throughout written human history, it holds good to say that we have, in fact, increased our knowledge through transitional periods of letting go of the old and replacing it with that which more accurately corresponds to reality.

You're describing cases of people who hold absolute conviction in demonstrably false belief, often in spite of overwhelming evidence to the contrary. That kind of unshakable conviction often stems from being loaded with religious belief, especially of the kind which claims that anything which contradicts it (the religion) is necessarily evil. It is in cases such as this, as Russell so clearly described, that one has no choice but to deny the facts at hand should they pose a contradiction to that kind of unshakable belief. That denial does not change the facts at hand.

In a rather curious way, even in these cases a correspondence theory of truth is referenced, albeit in a much different manner of use. It is quite correct to say that one who holds an unshakable belief in God believes that God represents an accurate correlation to/of reality.

Belief is insufficient for knowledge. That is a brute fact.

As a necessary consequence, false belief cannot possibly verify fact. It cannot deny the case as it is. A fact/knowledge describes the case as it is. Knowledge cannot be false. So, while I understand what you mean here, it is rather misleading to suggest that, just because there are cases in which conviction necessarily impedes rationality, it is the case that belief verifies knowledge, because that is not the case. False belief can impede and/or completely inhibit the acquisition of knowledge. However, just because someone does not believe that a piece of knowledge is true does not make it false. Belief is insufficient for truth as well.

Redy:

But I realize I’m invoking semantics here, because I do not attach the same power to the term ‘belief’, as the majority of people do. When making the statement “I believe…” it predominantly means, I have come to a conclusion based on the facts at my immediate disposal.

I could easily say “I surmise..” in its place, because I’ve learned that attaching firm conviction to anything makes it more difficult to adjust one's thinking as new information/knowledge/facts are presented.

So, for now, I’ll allocate ‘belief’ as I see it portrayed in the context of your discussion.


Semantics are all we have, sweetheart! We certainly cannot use gestures, moans, grunts, and the like and expect to get our points across.

To assert 'X' necessarily implies that 'X' is believed to be true. Belief(the concept of) indeed has power Di, and there is no need in denying that, for using that fact can be to our advantage.


Redy:

We can have the facts, but choose to take action ‘in spite of’ or ‘in denial of’ knowledge that might have, or possibly should have, prevented the ‘mistake’.


I would not disagree here as long as we are clear that 'having the facts' and mentally grasping the facts are two separate cases.

In some cases the action/mistake was overruled by the hierarchical order of beliefs, while in others the mistake was a lack of critical analysis.


Or both? For I cannot make a distinction between these.

Redy:

The mistake I refer to has been referenced above, as in a cognitive (psychosomatic) one. These are the type of mistakes people continuously make rather than face the dissonance of inconsistency between facts and beliefs. In that case people will behave one way because of a belief based on a false premise, but in conversation that same person will often, quite clearly, state the facts which refute the premise of their belief - yet they cannot come to terms with past behaviors, so they continue to make the ‘mistakes’.

To them the outcome is as expected - but the outcome is not about OUTSIDE - the only outcome that means anything to that individual is the outcome that’s produced INSIDE - and that is, behavior and belief are consistent.


I attempt to avoid too many particulars; however, this hypothetical (if there is indeed a mistake at hand) supports the claim being made. They are unknowingly (unintentionally) making the mistake. The distinction between mistaken and successful outcomes is had in one's volitional capability. You're correct here in your understanding of what I've written. A mistake reflects a flaw in volitional capability. It follows that that is a flaw in one's understanding of reality.

Redy:

Exploring human ethics is of great interest to me, because I have often felt that we are quickly approaching a new era in modernity. This new era may be the best time for taking steps toward developing a coincidental human ethics philosophy that is broad enough for a wide range of acceptance and eventually expanded globally.


Indeed, we are in dire need of and perhaps are on the verge of the mother of all paradigm shifts.

Your engagement here is most appreciated!


creativesoul's photo
Fri 09/10/10 02:04 PM
Thorb:

Interesting concept but

I believe your argument for naive realism is not how we learn.
trial and error is.

we do not naively believe this or that ...
we test it through trial and error and that is how we gain our belief in everything ... including language.

JMHO derived from my personal anecdotal experiences and memory of my childhood.


There are three basic methods of learning: 1.) from experience, 2.) through common language, 3.) a combination thereof.

Naive realism describes a frame of reference stemming from our being born into a world of which we have no experience, therefore no preconceived notions about it. Naive realism does not discount any particular method of learning, neither can it take into account the kind of learning that we experience, nor does it need to. It accounts for the basis from which we learn anything at all. It represents the necessary underwriting of all possible experience, including the world views which grow beyond and later discount it.

One can learn from experience alone that fire 'hurts' when it is touched without knowing how to articulate that kind of thought process with a common language. That would be learning through/from experience, as you've suggested and I readily agree with. One can learn through common language that the uncomfortable feeling is called "pain", and that the 'object' which produced the pain on contact is called "fire". We need not learn to make that kind of correlation between "pain" and ourselves; we do, however, need to learn how to come to terms with such. That begins the acquisition of conceptual understanding (the combination thereof), because language is the mother of all conceptual abstraction, including the different kinds of mature worldviews - all of which stem from naive realism.

Language provides the only means of complex conceptual abstract understanding. Understanding references experience, necessarily so, for it is all we have to reference. In other words, without complex language there is no complex conceptual understanding of what it is that we are experiencing, for the basis for grasping such is not had.

None of this can happen if we doubt the fact that we are having an experience. There can be no doubt that we, the pain, or the fire exist, and that the fire caused uncomfortable feelings when it was touched. Even if I were to grant an argument that it is possible for one to be fearful of touching the fire without ever having touched it, it still demands that the fire is being perceived as a real danger, that the fear is real, and that the fear affected the 'voluntary' decision making process. The existence of the fire is not doubted. Naive realism simply takes all of this into account. This case demonstrates our inability to doubt that reality is anything other than the way we see it at that time. The existence of that which is being perceived and correlated through cognitive faculty is necessarily being treated as though it is real. The existence of those things is not - cannot be - doubted, and neither can one doubt their own existence prior to having developed a rather complex set of beliefs which allows such thinking to be formulated.

Prior to having acquired a complex conceptual understanding of experience, and that entails my description of "pain" and "fire", one cannot possibly doubt whether or not we are perceiving, identifying, and correlating these things to ourselves and each other. In order to know that reality is not completely perceived, one must first be exposed to the building blocks which facilitate that kind of understanding. An innate belief in naive realism is not in question here, nor is it being claimed. That kind of belief cannot be had without complex understanding of what naive realism is. So, it is not to say that we believe in naive realism at birth; it is to say that we are indeed naive realists. Possessing knowledge of the distinction between our interpretation of reality and reality itself must be built upon rather complex understanding. Namely, it requires the acknowledgment that our perception is indeed fallible. Until one recognizes that this is the case, one cannot distinguish between that which is being perceived and our interpretation of it. Therefore, until this kind of complex conceptual understanding is had, one is necessarily born - not as believing in, but as a naive realist.


sweetest:

Learning theory, which is an actual pedagogy or body of knowledge, will tell you that there are several modalities for learning and they are influenced by things like sensation, perception, and cognitive development. We learn best via multiple modalities in terms of memory and retention. But the best teacher, yes thorb, is experience. If one both reads and experiences something - tangible like ...an apple, or intangible like love or anger, they will have those dual modalities and tend to have learned what those things are.


The above assessment seems questionable. Calling experience "best" is a case of applying a value of comparison (best/worst) to a single referent. Experience is the only way to learn, and it entails learning through language. Language constitutes the means by which we come to terms with experience. If one cannot come to terms with their own personal experience, then how can one make a claim about the way things are? The idea itself is incomprehensible.

sweetest:

If you just read about apples, love and anger, you will have heard of those things. If you experience apples, love, and anger you will know those things. Experience is the best teacher - it can teach without reading, but reading cannot truly be an effective teacher without experience. jmho in my little world (imlw)


Reading is experience.

A claim to knowledge engages the claimant to explain how it is that s/he has arrived at the claim itself. If one claims to know something or another, then it is reasonably expected that that entails one's understanding what it is that they claim to know. To claim knowledge is to claim that one understands the case as it is, and is always a claim to know something about reality (the way things are). Understanding necessarily presupposes a comparative assessment between different things. This invokes identification, correlation, and recognition (the use of innate rationality and/or common language). One cannot make a claim to knowing or knowledge unless one can describe these relationships at least to their own satisfaction (justification). A claim to knowledge is a claim to understand the way things are. However, one point being made here is that a claim of knowledge to another necessitates justification to the other. So while knowledge does not necessarily require justification to another to be had by the knowing individual, justification to another is the only means through which we can share such a thing.

Successfully sharing knowledge requires knowledge of how to do such a thing. That kind of knowledge invokes the subjective nature of human identity/thought. Sharing one's understanding requires the use of successful measures. That requires valuing another as a person first, and a believer second, and making that known. That being said, there is knowledge to be had concerning what is commonly called subjective 'knowledge'. I fear it is a misdiagnosis. The examples of love and anger are relevant here.

Love and anger are necessarily subject to individual perspective, preference, taste, etc. There is no universal description of what those things are, yet they all converge upon how it makes us feel to believe and/or know that another cares for us and holds our person in high regard. Therefore, to 'know' love and/or anger is nothing more than to know oneself. To know oneself is to understand what makes us 'tick', as it were. How we frame, relate, and react to reality is clearly illustrated by this. Therefore, how we think is who we are, and to know ourselves is to know how we think. Recognizing our emotions and our thoughts regarding those, in addition to coming to terms with how those things affect us through invoking a certain state of mind, is knowing ourselves. If we cannot predict our own behavior in any given set of hypotheticals to the degree that it comes to pass as predicted, should future reality allow, then we do not know ourselves. Understanding and believing that another values us for the same reasons that we value ourselves, and perhaps even more reasons which are not necessarily shared by us, is to 'know' what it feels like to be loved by another.

That is all very relevant to being human and is entailed by the given argument.

creativesoul's photo
Fri 09/10/10 02:17 PM
Edited by creativesoul on Fri 09/10/10 02:38 PM
sweetest:

just want to jump in with a single observation. I do not agree that we all have some kind of concept of morality. I think most of us do, fortunately, but not all. I believe firmly that there are truly people out there who are totally amoral - not immoral (which is a belief system in and of itself) but amoral - they exist in an absence of morality, a moral void if you will. No concept, no caring about the constructs of right or wrong or good or bad/evil. And they are very dangerous.


This refers to a set of complex beliefs, which comes after one's having developed a world-view. The argument here points at what necessarily precedes such a thing. That being said, the point is valid in that it certainly seems to be the case.

Not possessing moral belief does not equate to an absence of morality itself. Those are two different but related things. If one exists in a moral void, then they have somehow come to the conclusion that those things are unimportant, irrelevant, unnecessary and/or useless. That can be, and often is, the case with those who've thrown morality out along with God. Having no fear of repercussion and/or negative consequence after death can be considered 'good' reason to pursue one's own self interest at all costs, regardless of the harm to another. That constitutes a moral code, and reflects the underlying primary reason for ethics.

If one can be offended, they have morality. Forcefully take away the important things (whatever they are), and those kinds of people will instantiate their own version of moral belief. That always comes as a result of that which constitutes sufficient reason to believe, or warrant. It also presupposes truth, and necessarily stems from trusting in a source.

Not caring about anyone else, is to wholly care about oneself. Where there is care, there is moral belief, even if it is completely selfish belief. If one possesses no self-interest whatsoever, then they are dead.

no photo
Fri 09/10/10 03:41 PM

Thorb:

Interesting concept but

I believe your argument for naive realism is not how we learn.
trial and error is.

we do not naively believe this or that ...
we test it through trial and error and that is how we gain our belief in everything ... including language.

JMHO derived from my personal anecdotal experiences and memory of my childhood.


There are three basic methods of learning: 1.) from experience, 2.) through common language, 3.) a combination thereof.

Naive realism describes a frame of reference stemming from our being born into a world of which we have no experience, therefore no preconceived notions about it. Naive realism does not discount any particular method of learning, neither can it take into account the kind of learning that we experience, nor does it need to. It accounts for the basis from which we learn anything at all. It represents the necessary underwriting of all possible experience, including the world views which grow beyond and later discount it.

One can learn from experience alone that fire 'hurts' when it is touched without knowing how to articulate that kind of thought process with a common language. That would be learning through/from experience, as you've suggested and I readily agree with. One can learn through common language that the uncomfortable feeling is called "pain", and that the 'object' which produced the pain on contact is called "fire". We need not learn to make that kind of correlation between "pain" and ourselves; we do, however, need to learn how to come to terms with such. That begins the acquisition of conceptual understanding (the combination thereof), because language is the mother of all conceptual abstraction, including the different kinds of mature worldviews - all of which stem from naive realism.

Language provides the only means of complex conceptual abstract understanding. Understanding references experience, necessarily so, for it is all we have to reference. In other words, without complex language there is no complex conceptual understanding of what it is that we are experiencing, for the basis for grasping such is not had.

None of this can happen if we doubt the fact that we are having an experience. There can be no doubt that we, the pain, or the fire exist, and that the fire caused uncomfortable feelings when it was touched. Even if I were to grant an argument that it is possible for one to be fearful of touching the fire without ever having touched it, it still demands that the fire is being perceived as a real danger, that the fear is real, and that the fear affected the 'voluntary' decision making process. The existence of the fire is not doubted. Naive realism simply takes all of this into account. This case demonstrates our inability to doubt that reality is anything other than the way we see it at that time. The existence of that which is being perceived and correlated through cognitive faculty is necessarily being treated as though it is real. The existence of those things is not - cannot be - doubted, and neither can one doubt their own existence prior to having developed a rather complex set of beliefs which allows such thinking to be formulated.

Prior to having acquired a complex conceptual understanding of experience, and that entails my description of "pain" and "fire", one cannot possibly doubt whether or not we are perceiving, identifying, and correlating these things to ourselves and each other. In order to know that reality is not completely perceived, one must first be exposed to the building blocks which facilitate that kind of understanding. An innate belief in naive realism is not in question here, nor is it being claimed. That kind of belief cannot be had without complex understanding of what naive realism is. So, it is not to say that we believe in naive realism at birth; it is to say that we are indeed naive realists. Possessing knowledge of the distinction between our interpretation of reality and reality itself must be built upon rather complex understanding. Namely, it requires the acknowledgment that our perception is indeed fallible. Until one recognizes that this is the case, one cannot distinguish between that which is being perceived and our interpretation of it. Therefore, until this kind of complex conceptual understanding is had, one is necessarily born - not as believing in, but as a naive realist.


sweetest:

Learning theory, which is an actual pedagogy or body of knowledge, will tell you that there are several modalities for learning and they are influenced by things like sensation, perception, and cognitive development. We learn best via multiple modalities in terms of memory and retention. But the best teacher, yes thorb, is experience. If one both reads and experiences something - tangible like ...an apple, or intangible like love or anger, they will have those dual modalities and tend to have learned what those things are.


The above assessment seems questionable. Calling experience "best" is a case of applying a value of comparison (best/worst) to a single referent. Experience is the only way to learn, and it entails learning through language. Language constitutes the means by which we come to terms with experience. If one cannot come to terms with their own personal experience, then how can one make a claim about the way things are? The idea itself is incomprehensible.

sweetest:

If you just read about apples, love and anger, you will have heard of those things. If you experience apples, love, and anger you will know those things. Experience is the best teacher - it can teach without reading, but reading cannot truly be an effective teacher without experience. jmho in my little world (imlw)


Reading is experience.

A claim to knowledge engages the claimant to explain how it is that s/he has arrived at the claim itself. If one claims to know something or another, then it is reasonably expected that that entails one's understanding what it is that they claim to know. To claim knowledge is to claim that one understands the case as it is, and is always a claim to know something about reality (the way things are). Understanding necessarily presupposes a comparative assessment between different things. This invokes identification, correlation, and recognition (the use of innate rationality and/or common language). One cannot make a claim to knowing or knowledge unless one can describe these relationships at least to their own satisfaction (justification). A claim to knowledge is a claim to understand the way things are. However, one point being made here is that a claim of knowledge to another necessitates justification to the other. So while knowledge does not necessarily require justification to another to be had by the knowing individual, justification to another is the only means through which we can share such a thing.

Successfully sharing knowledge requires knowledge of how to do such a thing. That kind of knowledge invokes the subjective nature of human identity/thought. Sharing one's understanding requires the use of successful measures. That requires valuing another as a person first, and a believer second, and making that known. That being said, there is knowledge to be had concerning what is commonly called subjective 'knowledge'. I fear it is a misdiagnosis. The examples of love and anger are relevant here.

Love and anger are necessarily subject to individual perspective, preference, taste, etc. There is no universal description of what those things are, yet they all converge upon how it makes us feel to believe and/or know that another cares for us and holds our person in high regard. Therefore, to 'know' love and/or anger is nothing more than to know oneself. To know oneself is to understand what makes us 'tick', as it were. How we frame, relate, and react to reality is clearly illustrated by this. Therefore, how we think is who we are, and to know ourselves is to know how we think. Recognizing our emotions and our thoughts regarding those, in addition to coming to terms with how those things affect us through invoking a certain state of mind, is knowing ourselves. If we cannot predict our own behavior in any given set of hypotheticals to the degree that it comes to pass as predicted, should future reality allow, then we do not know ourselves. Understanding and believing that another values us for the same reasons that we value ourselves, and perhaps even more reasons which are not necessarily shared by us, is to 'know' what it feels like to be loved by another.

That is all very relevant to being human and is entailed by the given argument.


yes, reading is experience indeed, and what one gains from that experience is that they learn to be good readers. Not much else. But of course there is great value inherent for some, or many, in becoming a good reader because it can be a great jumping off point.

What is learned is different in each experience. The model is to learn or teach in direct parallel to what you wish to accomplish. If you wish to build a computer or a bridge, reading will not teach you that. But it is a good start if one has already become a good reader. Even mathematics is primarily a verbal skill - though most don't realize this.

and yes, my assessment is flawless.

no photo
Fri 09/10/10 03:48 PM



Interesting concept but

I believe your argument for naive realism is not how we learn.
trial and error is.

we do not naively believe this or that ...
we test it through trial and error and that is how we gain our belief in everything ... including language.

JMHO derived from my personal anecdotal experiences and memory of my childhood.


learning theory, which is an actual pedagogy or body of knowledge, will tell you that there are several modalities for learning and they are influenced by things like sensation, perception, and cognitive development. We learn best via multiple modalities in terms of memory and retention. But the best teacher, yes thorb, is experience. If one both reads and experiences something - tangible like ...an apple, or intangible like love or anger - they will have those dual modalities and tend to have learned what those things are.

If you just read about apples, love and anger, you will have heard of those things. If you experience apples, love, and anger, you will know those things. Experience is the best teacher - it can teach without reading, but reading cannot truly be an effective teacher without experience jmho in my little world (imlw)


The extent of experiences that the overwhelming majority of humans actually have in a lifetime could not possibly account for the vast moral judgments and moral values that most people subscribe to.

It stands to reason that we do not come by our moral standard through individual experience alone.

So in what other ways or by what other means do we adopt the standards of what is right or wrong, and good or bad?

Taking that another step further, why don't humans, the world over, exhibit exactly the same adherence to exactly the same moral/ethical standards?

Could there even be such a thing as an inherent quality of morality or a human ethics that is universally shared?

Those questions are part of what has led to this discussion.
The replies are long and the differences of opinion make it harder to plod through, but it might be a little easier reading with those questions in mind.

I think it's good to get others' ideas. It keeps us all thinking and questioning.





I agree. Morals are learned in a variety of ways that encompass our experiences, including those which are culturally based and which many accept on the "authority" of tradition - be it religious or otherwise. But u put words in my mouth, it seems, unless we have misunderstood one another. I haven't said anywhere that experience is the only way to learn. I believe it to be the best because I have the courage of my convictions. I do not play games with words. I say what it is. Yes, it is very much possible (ref. the OP) to put qualitative markers on learning and the best ways to do it. The proof is in the pudding, and the pudding is seldom on an internet thread. If you will excuse me, I need to go prepare for my next experience with some reading. Thank you for creating such an interesting discussion! flowerforyou

Abracadabra's photo
Fri 09/10/10 04:23 PM

sweetest:

just want to jump in with a single observation. I do not agree that we all have some kind of concept of morality. I think most of us do, fortunately, but not all. I believe firmly that there are truly people out there who are totally amoral - not immoral (which is a belief system in and of itself) but amoral - they exist in an absence of morality, a moral void if u will. No concept, no caring about the constructs of right or wrong or good or bad/evil. And they are very dangerous.


This refers to a set of complex beliefs, which comes after one's having developed a world-view. The argument here points at what necessarily precedes such a thing. That being said, the point is valid in that it certainly seems to be the case.

Not possessing moral belief does not equate to an absence of morality, itself. Those are two different but related things. If one exists in a moral void, then they have somehow come to the conclusion that those things are unimportant, irrelevant, unnecessary and/or useless. That can be, and often is, the case with those who've thrown morality out along with God. No fear of repercussion and/or negative consequence after death can be considered 'good' reason to pursue one's own self-interest at all costs, regardless of the harm to another. That constitutes a moral code, and reflects the underlying primary reason for ethics.

If one can be offended, they have morality. Forcefully take away the important things(whatever they are), and those kinds of people will instantiate their own version of moral belief. That always comes as a result of that which constitutes sufficient reason to believe, or warrant. It also presupposes truth, and necessarily stems from trusting in a source.

Not caring about anyone else, is to wholly care about oneself. Where there is care, there is moral belief, even if it is completely selfish belief. If one possesses no self-interest whatsoever, then they are dead.


I mean absolutely no disrespect here at all Michael, but oftentimes your arguments appear to me to be so utterly *trivial as to be unworthy of even noting, yet you often present them as if they are some sort of Grand Insight.

Note: I'm using the term "trivial" here to simply mean blatantly obvious or self-evident.

In other words, as far as I can see all you're basically attempting to make a case for is the idea that "all humans" are obviously born with an "innate ability for moral thought". And in this response to sweetestgirl you're basically just saying that all humans have a capacity to think "morally" whether they use that ability or not. Then you also press on in an attempt to make a case that even "amoral" people are actually using this ability to "dismiss" a need for moral or ethical values.

However, isn't this whole approach rather redundant?

I think it's obvious that all humans have an innate ability to think in terms of morality. I personally feel that this very concept would arise automatically in any animal that becomes fully sentient and self-aware.

After all, in a very real sense, ALL sense of morality necessarily comes from a sense of self. If we didn't first have the ability to recognize what it is that we like or dislike, then we'd have no reference point for morality at all.

For example, is it "bad behavior" to punch someone in the face? Well, unless you have some CLUE as to what that might feel like yourself, how could you even begin to moralize it?

I personally believe that the reason we have any sense of "morality" at all, stems entirely from the very basis of what we personally like or dislike.

If people didn't care whether they lived or died, then killing other people probably wouldn't seem like such an 'immoral thing' at all. The reason we think it's so highly immoral is simply because we don't want other people killing us. If we didn't mind dying we wouldn't think there was anything wrong with killing.

A sense of morality doesn't stem from a sense of ought, but rather it stems from a sense of what we think other people ought not do to us. flowerforyou

That's how I would view a sense of morality coming into the human picture.

creativesoul's photo
Fri 09/10/10 04:58 PM
sweetest:

yes, reading is experience indeed, and what one gains from that experience is that they learn to be good readers. Not much else. But of course there is great value inherent for some, or many, in becoming a good reader because it can be a great jumping off point.


Hmmmm. Reading is an experience which teaches one to be a good reader and not much else? That does not make much sense to me. I would certainly not advocate that reading alone, without direct experience, holds the same value that both reading and direct experience together have. However, I would most certainly have to say the same thing about the limited degree of understanding that direct experience without reading has to offer.

Surely we agree here, that possessing both offers a greater possibility for understanding than either one by itself possesses.

What is learned is different in each experience. The model is to learn or teach in direct parallel to what you wish to accomplish. If you wish to build a computer or a bridge, reading will not teach you that. But it is a good start if one has already become a good reader. Even mathematics is primarily a verbal skill - though most don't realize this.

and yes, my assessment is flawless.


I see this conclusion and explanation thereof as being a consequence of not taking into account the pivotal role that written language plays in our being able to perform tasks which are necessarily complex in nature. One can build a bridge by doing alone, no doubt. Our innate rationality affords us that, with or without having reading as an asset. However, the very idea of one's building a computer through direct experience alone is unfathomable. Written history was nearly 7,000 years old, if we are to include the oldest known pictorial representations of human memory, before we had come to acquire the vast amount of knowledge which facilitated the ability to build one. Remove that knowledge, which is completely contingent upon reading/writing, and you will also remove the very ability for one to hold the components in hand. Without having those, it is quite a stretch to say that we could build a computer anyway.


I haven't said anywhere that experience is the only way to learn. I believe it to be the best because I have the courage of my convictions. I do not play games with words. I say what it is. Yes, it is very much possible (ref. the OP) to put qualitative markers on learning and the best ways to do it. The proof is in the pudding, and the pudding is seldom on an internet thread. If you will excuse me, I need to go prepare for my next experience with some reading. Thank you for creating such an interesting discussion!


I am interested in reading an explanation which illustrates learning devoid of experience. The idea is incomprehensible and counter-intuitive. Experience is all we have. All knowledge is based upon appearances. That which has never directly appeared cannot be directly known. That which may or may not exist beyond our direct experience cannot be said to exist in any place other than that which the notion itself bookmarks: the unknown realm. The acknowledgment of that possibility is as far as we can go in our thoughts. That is where the line is drawn between known and unknown. It is, again, completely self-contradictory, incomprehensible, unintelligible, and counter-intuitive to claim that we can know anything about that which may or may not exist in the unknown realm. It is, after all... unknown.

You're welcome. Thank you for engaging.

creativesoul's photo
Fri 09/10/10 05:46 PM
Abra:

I mean absolutely no disrespect here at all Michael, but oftentimes your arguments appear to me to be so utterly *trivial as to be unworthy of even noting, yet you often present them as if they are some sort of Grand Insight.

Note: I'm using the term "trivial" here to simply mean blatantly obvious or self-evident.


Well if a self-evident truth or a simple fact stands in direct opposition of a claim, then evidently it wasn't being taken into consideration at the time the claim was being made. I would not hold that as being "trivial".

I suggest that we not confuse what we think constitutes the reasoning of another with what that reasoning actually is. How do you know that I present anything as some sort of "Grand Insight"?

Is that approach appropriate or even relevant to the matter at hand here? I mean, is talking directly about the author beneficial to the process of sharing understanding about morality?

Since the notion of "Grand Insight" has been invoked, albeit on a rather demeaning note, what would constitute a moral argument satisfying such a description?

Would successfully formulating the kind of moral argument which academic convention openly admits would be influential but has yet to be formulated constitute sufficient reason to hold the argument in high regard? Do we just compare these things, like what constitutes being "Grand Insight" (groundbreaking), by our own personal knowledge of the matter at hand, or do we seek out the academic understanding as well?

http://plato.stanford.edu/entries/moral-realism/

In other words, as far as I can see all you're basically attempting to make a case for is the idea that "all humans" are obviously born with an "innate ability for moral thought".


Nah. That much is obvious.

And in this response to sweetestgirl you're basically just saying that all humans have a capacity to think "morally" whether they use that ability or not. Then you also press on in an attempt to make a case that even "amoral" people are actually using this ability to "dismiss" a need for moral or ethical values.

However, isn't this whole approach rather redundant?


I thought so, but in making a public claim, it stands to reason that I accept the responsibility of addressing the concerns/objections as they arise. You're quoting an example of my doing just that.

I think it's obvious that all humans have an innate ability to think in terms of morality. I personally feel that this very concept would arise automatically in any animal that becomes fully sentient and self-aware.

After all, in a very real sense, ALL sense of morality necessarily comes from a sense of self. If we didn't first have the ability to recognize what it is that we like or dislike, then we'd have no reference point for morality at all.


I would not disagree that our likes and dislikes are personal and that they also represent a reference point. Just not the only one, because they only reference prescriptive moral concerns.

For example, is it "bad behavior" to punch someone in the face? Well, unless you have some CLUE as to what that might feel like yourself, how could you even begin to moralize it?


Well, we need not necessarily actively engage in all moral concerns to be able to comprehend the consequences. Analysis does not require emotional content bearing upon moral decision making, even if it is about that very thing. Again, this is talk about prescriptive claims.

A sense of morality doesn't stem from a sense of ought, but rather it stems from a sense of what we think other people ought not do to us.


In a prescriptive sense, that is the case. Most often discussions of morality lead in that direction because people equate their own personal moral belief to morality. Because one's likes and dislikes play a role in influencing moral belief those things come into play.

The argument being given is not a prescriptive/normative one, and therefore those things play no role in universal morality. It represents an active collection of self-evident truths being used as a basis for a necessitarian approach. And yes, it still needs work.

flowerforyou

no photo
Fri 09/10/10 05:56 PM

sweetest:

yes, reading is experience indeed, and what one gains from that experience is that they learn to be good readers. Not much else. But of course there is great value inherent for some, or many, in becoming a good reader because it can be a great jumping off point.


Hmmmm. Reading is an experience which teaches one to be a good reader and not much else? That does not make much sense to me. I would certainly not advocate that reading alone, without direct experience, holds the same value that both reading and direct experience together have. However, I would most certainly have to say the same thing about the limited degree of understanding that direct experience without reading has to offer.

Surely we agree here, that possessing both offers a greater possibility for understanding than either one by itself possesses.

What is learned is different in each experience. The model is to learn or teach in direct parallel to what you wish to accomplish. If you wish to build a computer or a bridge, reading will not teach you that. But it is a good start if one has already become a good reader. Even mathematics is primarily a verbal skill - though most don't realize this.

and yes, my assessment is flawless.


I see this conclusion and explanation thereof as being a consequence of not taking into account the pivotal role that written language plays in our being able to perform tasks which are necessarily complex in nature. One can build a bridge by doing alone, no doubt. Our innate rationality affords us that, with or without having reading as an asset. However, the very idea of one's building a computer through direct experience alone is unfathomable. Written history was nearly 7,000 years old, if we are to include the oldest known pictorial representations of human memory, before we had come to acquire the vast amount of knowledge which facilitated the ability to build one. Remove that knowledge, which is completely contingent upon reading/writing, and you will also remove the very ability for one to hold the components in hand. Without having those, it is quite a stretch to say that we could build a computer anyway.


I haven't said anywhere that experience is the only way to learn. I believe it to be the best because I have the courage of my convictions. I do not play games with words. I say what it is. Yes, it is very much possible (ref. the OP) to put qualitative markers on learning and the best ways to do it. The proof is in the pudding, and the pudding is seldom on an internet thread. If you will excuse me, I need to go prepare for my next experience with some reading. Thank you for creating such an interesting discussion!


I am interested in reading an explanation which illustrates learning devoid of experience. The idea is incomprehensible and counter-intuitive. Experience is all we have. All knowledge is based upon appearances. That which has never directly appeared cannot be directly known. That which may or may not exist beyond our direct experience cannot be said to exist in any place other than that which the notion itself bookmarks: the unknown realm. The acknowledgment of that possibility is as far as we can go in our thoughts. That is where the line is drawn between known and unknown. It is, again, completely self-contradictory, incomprehensible, unintelligible, and counter-intuitive to claim that we can know anything about that which may or may not exist in the unknown realm. It is, after all... unknown.

You're welcome. Thank you for engaging.
u r welcome - I like your topic(s) - multiple at this point, and my comment about a flawless assessment was just a little tongue in cheek winking - there's the clarification of a cute emoticon! A picture saves 1000 words?

Fascinating that u know the age/history of written language - very cool. With my own background I am a little embarrassed that I did not know the age of written language.

Building a computer can be done entirely without reading if one has experience and all of the correct components - to shop for components one must at least read specifications and part nos. But nonetheless I would be wary of advising even the most experienced player to build anything without a manual handy. Why invite problems (life certainly has enough without providing a breeding ground for them, yes)?

Once again, by reading about how to build a computer one does not become apt at building; one becomes knowledgeable of the act - has knowledge of it - and learns to read (in this case the instructions). But to claim a skill without having physically built the machine is a fool's errand jmho, imlw. One will not know how to build a computer by just reading about it - though it's a good start

would u hire a poor reader who's built 105,000 computers successfully, or a good reader who never has? If it was my machine... well, I think I made myself clear

Still, I value reading highly, primarily as a form of communication that aids learning

Abracadabra's photo
Fri 09/10/10 06:34 PM
Creative wrote:

In a prescriptive sense, that is the case. Most often discussions of morality lead in that direction because people equate their own personal moral belief to morality. Because one's likes and dislikes play a role in influencing moral belief those things come into play.


Yes, I see this. In fact, as far as I can see, the only "meaningful" discussions of morality necessarily must be based on one's likes and dislikes, because morality, IMHO, requires a judgment call of some kind to be made. In other words, the very concept of "morality" without attaching it to any sense of "right or wrong" is a meaningless concept. Therefore some 'criteria' for being able to measure what's "right or wrong" must be associated with the concept of morality and that requires a subjective judgment.

Again, this is just how I view the concept of morality, because for me the very concept is meaningless outside of this scope.

Therefore my conclusion would be that all morality is indeed subjective, and so the very notion of any "universal innate morality" seems to be an idea that couldn't exist, unless of course everyone has the same subjective experiences, and the same reactions to those experiences.


The argument being given is not a prescriptive/normative one, and therefore those things play no role in universal morality. It represents an active collection of self-evident truths being used as a basis for a necessitarian approach. And yes, it still needs work.

flowerforyou


Well, I guess I can see this in the purest philosophical sense, but it seems to me that even at its best, this idealization would necessarily be 'contaminated' with the subjective ideas and experiences of the person who's doing the idealizing. bigsmile

I mean, in the end, it would basically need to be a thesis that 'everyone' agreed with, otherwise how could it be claimed to be "Universal"?


creativesoul's photo
Fri 09/10/10 07:12 PM
Building a computer can be done entirely without reading if one has experience and all of the correct components


I think we're talking past one another here. One can feasibly claim that it is possible for one to learn how to build a computer without that person having learned to read. I think that this is what you mean, and I would agree. However, what I mean is that that scenario is impossible without someone, somewhere having acquired sufficient understanding of vast amounts of knowledge and logical inference. That is necessary in order for us to have the parts themselves. The parts are absolutely necessary for the act of building, and those are the product of written language, knowledge, understanding, and doing.

Thus, I weigh these kinds of considerations into an examination and/or conclusion of the value of written language and how that plays a role in learning. Reading, after all, is not just about learning words. One can know a word without having acquired an understanding of the concept it portrays, should it portray one.

Once again, by reading about how to build a computer one does not become apt at building; one becomes knowledgeable of the act - has knowledge of it - and learns to read (in this case the instructions). But to claim a skill without having physically built the machine is a fool's errand


I would agree that one cannot claim a skill without doing. Our thoughts diverge here though, I think. I hold that gaining an accurate understanding through reading is a skill, in and of itself. It is just a different kind of "doing".

Dragoness's photo
Fri 09/10/10 07:32 PM



Interesting concept but

I believe your argument for naive realism is not how we learn.
trial and error is.

we do not naively believe this or that ...
we test it through trial and error and that is how we gain our belief in everything ... including language.

JMHO derived from my personal anecdotal experiences and memory of my childhood.


learning theory, which is an actual pedagogy or body of knowledge, will tell you that there are several modalities for learning and they are influenced by things like sensation, perception, and cognitive development. We learn best via multiple modalities in terms of memory and retention. But the best teacher, yes thorb, is experience. If one both reads and experiences something - tangible like ...an apple, or intangible like love or anger - they will have those dual modalities and tend to have learned what those things are.

If you just read about apples, love and anger, you will have heard of those things. If you experience apples, love, and anger, you will know those things. Experience is the best teacher - it can teach without reading, but reading cannot truly be an effective teacher without experience jmho in my little world (imlw)


The extent of experiences that the overwhelming majority of humans actually have in a lifetime could not possibly account for the vast moral judgments and moral values that most people subscribe to.

It stands to reason that we do not come by our moral standard through individual experience alone.

So in what other ways or by what other means do we adopt the standards of what is right or wrong, and good or bad?

Taking that another step further, why don't humans, the world over, exhibit exactly the same adherence to exactly the same moral/ethical standards?

Could there even be such a thing as an inherent quality of morality or a human ethics that is universally shared?

Those questions are part of what has led to this discussion.
The replies are long and the differences of opinion make it harder to plod through, but it might be a little easier reading with those questions in mind.

I think it's good to get others' ideas. It keeps us all thinking and questioning.







Early morality is taught by those we trust. We take their word for it, basically. I believe that as we grow older, unless we just continue to believe what we were told, we question those "morals" taught in childhood. Some of them were taught to us with pain, which is a great teacher, making them stick to us really well.

As we get older we can see how the "moral" stands up and proves itself either right or wrong. If we follow this "moral" and we can see/feel that it harms others, we can discern that it is not as "right" as we were taught. So we alter it if we are of a free mind; if not, we just continue to use it and disregard its harmful results.

My humble opinion on this portion of the thread. My head hurts from reading so much:wink: