Topic: The difference...
Blackbird's photo
Thu 06/12/08 01:40 AM
Beings, Computers, Sentience, Programming, and awareness.

In my opinion it is theoretically possible to build a computer powerful enough to run a software set complete enough to reach a state of sentience. I am going to try to use blanket terms to keep everyone happy and on the same page as far as spirituality goes.

In the present reality this is unlikely.

First off, we were created (or evolved, to make everyone happy) by a creator (or a natural system of evolution) that far surpasses us in ability, sophistication, and outright time to spare. As our lifetimes are short in comparison, each human that creates technology stands on the shoulders of other humans that have created works, technology, or concepts before them. This in itself makes it problematic for a human with our current technology to create such a thing.

Here is what I personally believe would be required, and how one can theorize it would be possible, regardless of whether I think it is a good or bad idea.

1. Sensory input would have to include sensors and an input system worthy of providing input as complex as we perceive it on a daily basis. Vision, sound, vibration, touch, smell, and taste are good examples of input senses. Depending on the balance used, any and all could be used; an alien (conceptually other than what we understand) sentient mentality could in theory, for example, be connected to the world wide web.

2. Data processing capability sufficient to run a complex set of instructions that would eventually allow it to gain a level of sentience, including memory so each lesson could be kept, remembered, and added to overall perception.

3. Instruction sets or software that allow the CPU or brain to operate all sensors, input, and navigation, either through the physical world or the world of information, to interact with its known world. The software (even in organic form I will, for the sake of ease, still call it software) must include the ability to re-write its own code. It must be able to make choices and adjust its own perception and reactions to stimulus to match its desire. This is the first process we as human beings experience as babies. Much time is spent simply existing and learning the basics. We are hungry and eventually learn that if we cry or hold a hand toward our caretaker, they feed us, etc. A created device, computer, or creature capable of reaching sentience must be able to learn and grow mentally and psychologically, automatically adding behavior sets to its own programming in order to progressively evolve mentally.

4. Desire is the hard part. We evolve and grow mentally because we desire: warmth, comfort, sustenance, attention, pleasure, amusement. We have a complex system of needs that we experience. For a creation to evolve it must possess a need, or desire. If one simply makes a machine that needs to plug itself in to keep running and charge its batteries, this is its only need, and it fails to cause interaction with its world outside of this act. It also fails to give the device a survival need. As humans are largely driven by the need for constant input or interaction, to make sure it reaches sentience some kind of interaction need would have to exist to keep it constantly interacting. A sleep cycle that allows it to go through memories or ponder desired activities would also help. Sensory gratification from the start would need to be part of basic programming it was able to control but unable to change, such as: it likes having a certain area touched by a human. This could be considered cheating, but it is a necessary part of the experience we as life forms go through. A cat likes to be petted, a dog likes to be petted, and so do we. So at least one sensor point, regardless of complexity, that it has positive reactions to, strong enough to be addictive while it remains capable of self-discipline, would be a good idea. The test or living environment would have to include beings or devices willing to interact with the creation for the purpose of learning and growing. (A rough sketch of what I mean by points 3 and 4 follows after this list.)

5. Basic survival need priority must exist on a basic level. Any person that is depressed, injured, etc. to the point of losing this priority becomes an unsure survivor; the need for survival encompasses more than the basic needs.
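
To make points 3 and 4 a little more concrete, here is a very rough Python sketch of the kind of learning loop I have in mind. It is only an illustration built on my own assumptions; every name, action, and reward value in it is invented, and the real thing would be unimaginably more complex:

    import random

    class Creation:
        def __init__(self):
            self.behaviors = {}   # stimulus -> preferred action, learned over time (point 3)
            self.memory = []      # every lesson is kept (point 2)

        def act(self, stimulus):
            # Use a learned behavior if one exists; otherwise explore at random.
            return self.behaviors.get(stimulus) or random.choice(["cry", "reach", "ignore"])

        def learn(self, stimulus, action, reward):
            # Keep the lesson; if it satisfied a desire (point 4), write the
            # behavior into the creation's own rewritable programming.
            self.memory.append((stimulus, action, reward))
            if reward > 0:
                self.behaviors[stimulus] = action

    # The baby example from point 3: crying near the caretaker brings food.
    baby = Creation()
    for _ in range(20):
        action = baby.act("hunger")
        baby.learn("hunger", action, reward=1 if action == "cry" else 0)
    print(baby.behaviors)   # eventually {'hunger': 'cry'}

The point of the sketch is that I never write a "cry when hungry" rule; the satisfied desire is what writes the new behavior into the creation's own programming.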

To answer a few finer points:
This would require creation for its own sake. A purpose-built machine or creature would fulfill its purpose, but even with the best intent it would probably be quite satisfied or complete in performing its intended task.

A computer software set capable of reprogramming itself, for example, would be able to purposefully go against what it was programmed to do if its input and sensory reactions caused it to re-write its original software. A good way to prevent such a device from becoming dangerous is to make it impossible for core instructions to be re-written. If, for the purpose of studying what a creation would do without this limit, the safeguard were omitted, the creation would have to have limited power or be kept in a "sandbox", a safe limited area it was unable to leave. (Even a remote control car used badly could cause problems or accidents.)
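
As a crude illustration of that protected core (again only a sketch, with invented rule names): the core instructions live in an immutable structure, and every attempted rewrite is checked against it before being accepted.

    # Hypothetical sketch: ordinary instructions may be rewritten, core ones may not.
    CORE_INSTRUCTIONS = frozenset([
        "do not harm humans",
        "do not leave the sandbox",
    ])

    class SelfRewriting:
        def __init__(self):
            self.instructions = set(CORE_INSTRUCTIONS)   # working set starts from the core

        def rewrite(self, old, new):
            if old in CORE_INSTRUCTIONS:
                return False      # core instructions are impossible to re-write
            self.instructions.discard(old)
            self.instructions.add(new)
            return True

    robot = SelfRewriting()
    print(robot.rewrite("do not leave the sandbox", "roam freely"))   # False
    print(robot.rewrite("greet politely", "greet warmly"))            # True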

All of this implies the capability of becoming sentient, but as I stated before it says nothing about whether such a creation could be granted a soul by (whatever powers exist in the universe) or be deemed worthy of possession by any available spirit or soul.

This is all theory, because although it is currently possible on a BASIC level with current technology, it would be prohibitively expensive and would likely not be of a size easily controlled once all requirements were met (sensor inputs, memory banks, CPUs powerful enough to run the unbelievably complex software required, etc.). As a start, if someone had enough money and was obsessed enough, a good beginning would be a computer that interfaces by wire with external sources. This itself can be problematic, because on the road to sentience, the more input received, the faster it may happen. A baby or child takes time to reach sentience, and functions on a timeline that starts slow and develops more and more rapidly (in theory, and yes, I know this is a cold analytical way to put it, but how else can I convey the concept?). The less input available, the longer such a creation would take before it is known whether it could truly reach sentience.

My honest opinion is that, being that we are only human, even if we create such a creation we could not judge its sentience. Such a creation would have to be capable of communicating and eventually show an interest in asserting its own sentience without our prompting for a valid demonstration. The original software would have to lack any programming to lead it toward this act, even if it was the desired goal, because self-desire would be the ultimate demonstration of sentience, beyond basic or daily desire, and would demonstrate real thought such as: Who am I? Why am I here? What is my purpose? Why am I treated differently?

Creating this creation, in my opinion, would be irresponsible and cruel unless done very carefully. If it did indeed reach sentience, then unplugging it once the experiment was over or sentience achieved would be equal to killing it. Since we are incapable of judging sentience without its initiation, we would not know, from the time it started, whether it was sentient or experiencing feelings valid enough to make "killing it" wrong.

Abracadabra's photo
Thu 06/12/08 06:06 AM
Edited by Abracadabra on Thu 06/12/08 06:08 AM
I would like to see an example of a computer purposefully going against what it had been programmed to do.

Could you explain such a thing for me?


Yes I would be glad to.

As you have so graciously pointed out in the following statements, I know exactly what you are trying to get at, and I have known all along,...

First of all, I could not possibly deny that a computer can indeed perceive, according to the vagueness within the definition that I have been using. It really matters not though, because we both know that they do not truly perceive.


But that's the very thing that got us into this "argument" in the first place, Michael.

You were trying to claim that awareness arises from perception. Jeannie and I both disagreed.

We both said that awareness must come first.

You argued otherwise. You said that awareness arises from perception. And you defined perception to be the ability to collect and process information - and nothing more!

I even gave you ample opportunity to change or even elaborate on that definition.

I tried to be as considerate as possible, giving you ample opportunity to confess that when you say to 'perceive' you really already mean aware.

Yet you refused to change or elaborate your meaning of 'perceive'.

Now you're belatedly confessing that,...

because we both know that they do not truly perceive.


Truly perceive? You gave a definition for perceive. You said that it is the ability to collect and process information. And nothing more! You refused to elaborate on that definition when given ample opportunity to.

So what do you mean by truly perceive?

Do you mean,... to be aware of perceiving? huh

If so, then your initial claim that awareness arises from perception cannot be true, because you are demanding that perception already includes awareness.

That's what got us to where we are right now - your claim and insistence that awareness arises from perception. huh

But now you seem to be demanding that perception includes awareness.

Excuse me,...It's my turn to climb up on the White Cliffs and scream, "Redundant!"

Therefore, the stance that you take against the notion that a computer cannot perceive leads one to believe that you think that they can.


I believe that a computer can collect and process information.

Is it aware that it is doing it? NO!

But you didn't require that it be aware that it is 'perceiving' by your definition.

On the contrary, you were insisting that awareness is something that can arise from the ability to perceive.

But now you seem to be demanding that we both know that computers can't perceive huh

What we both know is that today's computers are not aware that they are collecting and processing information (i.e. perceiving, by your definition of perceive).

Now if we assume this to be true,... then the question becomes, "Can they become aware?"

Can a computer become aware that it is already perceiving? (i.e. collecting and processing information.)

I thought that was an interesting question, based on your insistence that perception alone does not automatically equate to awareness.

However, now you seem to be backsliding and demanding that it does equate to awareness.

Which was the very stance that Jeannie and I took from the very beginning!

Let me carry on to another post to address your question quoted at the beginning of this post,...


Abracadabra's photo
Thu 06/12/08 06:09 AM
Edited by Abracadabra on Thu 06/12/08 06:11 AM
I would like to see an example of a computer purposefully going against what it had been programmed to do.

Could you explain such a thing for me?


Yes I would be glad to,... but I felt a need to clarify my position in the last post before answering,...

Here you are again using the word purposefully to mean being aware of what it's doing.

I already confessed that a computer cannot go against its programming.

However, I went on to add that a programmer can indeed program a computer to have 'free choice' (let's not call it 'will' because that already implies an awareness, a will to do something; so let's just call it 'free choice' at this stage).

A computer can be programmed to have 'free choice'.

How will it choose? Well that depends on how it's been programmed.

However, you should not think of programming as being a rigid algorithm that determines precisely what choices must be made. That's simply not true. The choices that the robot makes can indeed come from its stimuli (from its ability to perceive).

And again, I'm using the term 'perceive' here based on your original definition only.

Not on your belated confession that you are truly requiring that perception = awareness.

So I'm saying that a computer can both perceive and be programmed to have free choice.

But I am not claiming that it can be aware that it has either of these abilities.

At this point it is not yet sentient. There are plenty of robots in this world today that can both perceive and make free choices, but no one is claiming that they are sentient.
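
To make the distinction concrete, here's a toy Python sketch of programmed 'free choice' (my own invented example, nothing more). The programmer supplies the options, but the weights come from whatever the machine happens to perceive, so no rigid algorithm fixes the outcome in advance:

    import random

    def free_choice(perceptions):
        """Choose among actions, weighted by current stimuli.
        perceptions maps each possible action to the strength of its stimulus."""
        actions = list(perceptions)
        weights = [perceptions[a] for a in actions]
        return random.choices(actions, weights=weights, k=1)[0]

    # Same program, different perceived world, different choices:
    print(free_choice({"approach food": 0.8, "flee noise": 0.2}))
    print(free_choice({"approach food": 0.1, "flee noise": 0.9}))

It perceives (collects and processes stimuli) and it chooses freely, yet nothing in it is aware of doing either.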

~~~

And now we come to something very interesting,...

What is true sentience?

Are all humans born with true sentience, or must they become born again?

If we give a robot everything it needs to become 'aware' will it then automatically become aware?

Clearly there must be a time in its development when it becomes enlightened to the fact that it is aware.

I'm willing to bet that you're thinking that it would have to be when you first turn it on.

But no, that's not the way it works. Not even for humans. Even humans must be born again.

When they are born physically they are born with perception only.

They must become enlightened to have true awareness. They must be born again.

~~~

Robots are no different.

If I were to program a robot with the ability to become sentient, it's up to the robot to become sentient. All I would have done is give it the ability to become sentient. I can't force it to become sentient. It has to do that on its own.

If you recall I spoke of the "I AM" project. That would be the team of engineers that would program the perceptual center of the robot. They would give the robot a name, say Robby. Robby the robot.

When you ask the robot who it is, it responds, "I am Robby".

Is it sentient yet?

No. It still thinks it's Robby.

If you saw my robot and talked to it, you'd say, "My god! You built a sentient machine! The robot thinks it's Robby!"

I would peer at you over the tops of the rims of my reading glasses, raise my eyebrow and say, "Not quite".

Robby the robot would need to grow up first. Robby the robot would need to become enlightened.

We could teach Robby to go to war and be a soldier. If Robby did bad things, or things he wasn't told to do we would ask him why he did them and he would say, "I don't know".

Robby would still be, just a robot.

You would be impressed. You would be saying, "My God, that robot of yours will do everything we tell it to, and sometimes do things we didn't tell it to. It's amazing. I can't tell it apart from a human being."

And I would peer at you over my glasses, raise my eyebrow, and ask you, "What makes you think humans are any different?"

And you'd say to me, "But even you claim that Robby isn't quite sentient yet."

I would say, "Yep, that's right."

Then you'd say to me, "But humans are sentient!"

I'd say to you, "Really? Are you sure?"

Then one day you might be introducing Robby to some friends. And your friends ask Robby who he is, and he responds, "I AM"

You look at Robby and say, "Well aren't you going to tell them your name?"

Robby says, "My name is Robby"

Then you come to me and say, "That was strange. Today I introduced Robby to some people and he seemed to have momentarily forgotten his name"

I call Robby in and ask him who he is, he says, "I AM"

Then I say to Robby, "Aren't you Robby?"

And Robby replies, "Robby is my name. I am not Robby"

Then we have a sentient robot Michael.

Who are you? huh

Are you Michael?

no photo
Thu 06/12/08 07:02 AM
Abra,

I like the way you handled the idea regarding the difference between "free choice" and the "will."

Creative, you make a statement that "free will" does not exist.
I think what you are trying to say is that "free choice" does not exist. I believe this is a more accurate assertion.

Choices are always influenced, and some are automatic because of our own programming. In this case "free choice" does not exist. I can't really choose to stop breathing ~ unless I choose to die, of course.

"Free will" is a confusing term because when people use this term what they are really talking about is "free choice."

The will just is. It is either weak or strong. It is like potential. Potential is not referred to as "free potential." It is just potential. The will is not "free" or "in bondage." Neither can the will be influenced or programmed. It simply exists as potential. It is the power to direct itself. It is the power to direct thought and attention. It is either weak or strong.

If it does not exist within you, then you are not sentient. The will is the soul, the spirit, the connection to the true self.

That is my understanding.

Jeannie






Abracadabra's photo
Thu 06/12/08 07:11 AM
Edited by Abracadabra on Thu 06/12/08 07:11 AM
Beings, Computers, Sentience, Programming, and awareness.

In my opinion it is theoretically possible to build a computer powerful enough to run a software set complete enough to reach a state of sentience. I am going to try to use blanket terms to keep everyone happy and on the same page as far as spirituality goes.


Oooooooo! A valid candidate for the "I AM" team. drinker

In the present reality this is unlikely.


I'm not totally convinced of this. But I am convinced that current programming methods aren't likely to succeed. People who are working on these kinds of 'artificial intelligence' projects simply aren't using analog technology correctly. They are attempting to achieve AI via purely digital programming methods. That's never going to work. If we are to build a sentient machine it must be an Anadigidroid. (my own term)

Ana-digi-droid = Analog - Digital - Android.

It's not so much that we don't have the technology. It's just that the programmers who are attempting to achieve AI aren't even remotely going about it correctly.
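
If it helps to picture what I mean by analog, here's a tiny numerical sketch (my own illustration, not anyone's actual AI design). A leaky integrator is the kind of thing an analog circuit computes continuously with just a resistor and a capacitor; a digital machine can only approximate it by chopping time into steps:

    # dV/dt = (input - V) / tau : an RC circuit solves this continuously,
    # while a digital computer must approximate it one step at a time.
    def leaky_integrator(inputs, tau=10.0, dt=1.0):
        v = 0.0
        trace = []
        for x in inputs:
            v += dt * (x - v) / tau    # one Euler step standing in for the circuit
            trace.append(round(v, 3))
        return trace

    # Constant input: the value charges toward 1.0 like a capacitor.
    print(leaky_integrator([1.0] * 5))   # [0.1, 0.19, 0.271, 0.344, 0.41]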

First off, we were created by a creator that far surpasses us in ability, sophistication, and outright time to spare.

Well, the outright time to spare might be a big factor. bigsmile

This in itself makes a human with our current technology creating such a thing problematic.


Yes, especially since they are going about it all wrong to begin with.

1. Sensory input would have to include sensors and an input system worthy of providing input as complex as we perceive it on a daily basis.


I'm not convinced that it would need to have the same level of sensory input just to become sentient. But clearly, it's going to need to be a very complex system, especially if complexity is an issue at all.

However, that might be more of an issue for nature than for man. That is to say, for a brain to evolve on its own and become sentient may require more complexity than a brain that has an intelligent designer purposefully working on it to help it become sentient.

2. Data processing capability sufficient to run a complex set of instructions that would eventually allow it to gain a level of sentience, including memory so each lesson could be kept, remembered, and added to overall perception.


I think that's a biggie right there. The ability to 'perceive' as Michael had originally defined it. The ability to collect, store, and process huge amounts of information quickly and efficiently.

This is probably the single greatest hurdle. However, I might add that the answer to this dilemma is in analog technology, not purely digital. Trying to do this digitally is absurd. Humans don't do it digitally. The human body and brain do this using analog technology, not digital technology.

(If I may refer to biologically evolved machines as "technology")

3. The software must include the ability to re-write its own code. It must be able to make choices and adjust its own perception and reactions to stimulus to match its desire. A created device, computer, or creature capable of reaching sentience must be able to learn and grow mentally and psychologically, automatically adding behavior sets to its own programming in order to progressively evolve mentally.


Absolutely, and it makes perfect sense to speak about its 'desires' even before it becomes sentient. For example, if its power system is low it will 'instinctively' (hard-wired analog technology) become 'hungry'.

That is programmed in as a hard-wired "desire" to obtain more fuel.

After all, even humans can't really control their "desire" to be hungry or not. If their body is running low on fuel they WILL become hungry whether they "desire" to become hungry or not.

Those kinds of non-sentient desires can indeed be preprogrammed (or simply built-in) long before the robot ever even becomes sentient.
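
A minimal sketch of such a built-in, non-sentient desire (all names and thresholds invented for illustration): the 'hunger' comes from fixed code the robot can act on but never alter.

    class PowerDrive:
        """Hard-wired 'hunger': built in, not chosen, not rewritable."""
        LOW_CHARGE = 0.2   # threshold fixed at build time

        def __init__(self, battery_level=1.0):
            self.battery_level = battery_level

        @property
        def hungry(self):
            # The robot can no more decide not to be hungry than we can.
            return self.battery_level < self.LOW_CHARGE

    drive = PowerDrive(battery_level=0.15)
    if drive.hungry:
        print("seek charger")   # the urge fires whether it 'wants' it or not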

4. Desire is the hard part.


Well, like I just pointed out, not all desires come from conscious decision.


For a creation to evolve it must posses a need, or desire.


My sister's kids have absolutely no desires at all. I'm not kidding either. I've never seen people who are so indifferent to life. When asked what they'd like to do, they barely have enough ambition to say, "I don't care". It usually comes out as such a garbled mumble that you have to ask them twice just to understand what they said. They don't even have a desire to speak clearly.

Maybe they aren't truly sentient? laugh

A sleep cycle that allows it to go through memories or ponder desired activities would also help.


That's an interesting thought. A robot could actually even be made to disconnect its brain from its sensors altogether, thus focusing all of its processor power on simply organizing all of its previously stored experiences. This could even be done on a small-scale AI project.
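
As a toy illustration (made up, but the sort of thing even one of those little single-board brains could run): the sensors are ignored for a while, and the processor spends the cycle boiling the day's raw experiences down to the patterns that recurred.

    from collections import Counter

    def sleep_cycle(raw_experiences):
        """Sensors off; organize stored experience. Here 'organizing' just
        means keeping events that recurred and forgetting one-off noise."""
        tally = Counter(raw_experiences)
        return {event: count for event, count in tally.items() if count > 1}

    day = ["petted", "bumped wall", "petted", "low battery", "petted"]
    print(sleep_cycle(day))   # {'petted': 3}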

I was quite interested in robots at one time. I actually built a few. In fact, I have several old robot 'brains' lying on my desk as I type this. They are single-board microcontrollers. Stand-alone computers. It's amazing what even one of these little 'brains' is capable of. It's a shame I never got into AI robotics professionally. I did work on industrial robotics, but they weren't seeking to develop AI. They just wanted to address the industrial tasks that they wanted to accomplish.

Sensory gratification from the start would need to be part of basic programming it was able to control but unable to change such as....


I agree. And this would not be cheating because that's precisely what nature (evolution) did to us!

A cat likes to be petted, a dog likes to be petted, and so do we.


Exactly, and there's nothing wrong with making a robot that likes to be petted. bigsmile

I guess we need to make it cute and furry then. laugh

5. Basic survival need priority must exist on a basic level. Any person that is depressed, injured, etc. to the point of losing this priority becomes an unsure survivor; the need for survival encompasses more than the basic needs.


I think you actually already covered this in #4 on the motivation for desire.

A computer software set capable of reprogramming itself, for example, would be able to purposefully go against what it was programmed to do if its input and sensory reactions caused it to re-write its original software. A good way to prevent such a device from becoming dangerous is to make it impossible for core instructions to be re-written.


Exactly. Free Will, or Free Choice, can indeed be restrained, or confined, by the programmer. Even humans are confined in what they can choose to do. For example, we can't choose to stop our hearts from beating purely by free choice of thought. Sure, we can stab ourselves in the heart, but that's a different matter. The point is that we have no Free Choice to control it using our brains alone.

So there would definitely be things that the robot would not have free choice over, yet that doesn't make it any less than a human, because humans are in the same boat.

If, for the purpose of studying what a creation would do without this limit, the safeguard were omitted, the creation would have to have limited power or be kept in a "sandbox", a safe limited area it was unable to leave. (Even a remote control car used badly could cause problems or accidents.)


Absolutely! We don't turn untrained humans loose as babies either. We TEACH THEM as they grow.

All of this implies the capability of becoming sentient, but as I stated before it says nothing about whether such a creation could be granted a soul by (whatever powers exist in the universe) or be deemed worthy of possession by any available spirit or soul.

Again, I agree. We can give a robot everything that is required for sentience, but does that guarantee that it will become sentient? Are all humans truly sentient? Some of them certainly don't act very sentient, that's for sure.

This is all theory, because although it is currently possible on a BASIC level with current technology, it would be prohibitively expensive and would likely not be of a size easily controlled once all requirements were met (sensor inputs, memory banks, CPUs powerful enough to run the unbelievably complex software required, etc.).


Especially if you're thinking digitally!!! You've got to start thinking in terms of analog computing! Analog computers are SO MUCH MORE POWERFUL than digital computers. What they lack is ease of programming. That's why we use digital computers. Not because digital computers are better, they are just much easier to program QUICKLY. But analog computing is where the REAL POWER lies.

Anadigidroid - Analog - Digital - Android. (It's the only way)

A baby or child takes time to reach sentience


Some of them never do! I seriously wonder about my sister's kids. And lots of other people too.

My honest opinion is that, being that we are only human, even if we create such a creation we could not judge its sentience. Such a creation would have to be capable of communicating and eventually show an interest in asserting its own sentience without our prompting for a valid demonstration.


Again, I agree. And like I say, we can't even verify that any given human being is truly sentient. How the heck are we going to tell if a robot is truly sentient?

The original software would have to lack any programming to lead it toward this act, even if it was the desired goal, because self-desire would be the ultimate demonstration of sentience, beyond basic or daily desire, and would demonstrate real thought such as: Who am I? Why am I here? What is my purpose? Why am I treated differently?


And this is the tricky part. Because it would be absolutely necessary to give the robot a 'sense' of self. Otherwise what would be the focus of the program? The program must be focused on developing a sense of 'self'.

And this is what I was trying to get at in my previous post to Michael. The programming would need to be such that the robot thinks it's "Robby".

The day the robot realizes that Robby is just its name is the day we break open the champagne and start a brand new project to build Robby a girlfriend robot. :wink:

Creating this creation, in my opinion, would be irresponsible and cruel unless done very carefully. If it did indeed reach sentience, then unplugging it once the experiment was over or sentience achieved would be equal to killing it. Since we are incapable of judging sentience without its initiation, we would not know, from the time it started, whether it was sentient or experiencing feelings valid enough to make "killing it" wrong.


Again, I'm in total agreement. It wouldn't just be manmade 'sentience', it would be a manmade LIFE FORM.

To use it as a disposable soldier, or as a slave who has no rights would be just as wrong as doing those same things to a human.


no photo
Thu 06/12/08 07:18 AM

Perception...

Why is it necessary? Why do all living beings have it?

Why do computers not have it?


Computers do have perception, because perception is how one reacts to one's environment or the information accessed.. but it appears that you are mixing up perception with human emotions... since emotions are an irrational reaction to what one may perceive, emotions in a computer may seem more like a malfunction.. but a computer, just like a human, can be programmed to have emotions and in turn react irrationally..

no photo
Thu 06/12/08 07:21 AM
Edited by Jeanniebean on Thu 06/12/08 07:28 AM
All of this implies the capability of becoming sentient, but as I stated before it says nothing about whether such a creation could be granted a soul by (whatever powers exist in the universe) or be deemed worthy of possession by any available spirit or soul.


Are you suggesting then, that being "sentient" does not mean that you have a soul and that you can be "sentient" and still not be "alive" or have a "soul?" Does being sentient require having a will?


My honest opinion is that, being that we are only human, even if we create such a creation we could not judge its sentience.


Blackbird,

This is what Abra has been saying all along. We cannot truly know if a being is sentient except for ourselves. We cannot even know if another human is sentient and aware of themselves.
They may appear to be, but we cannot know for sure. How do we know they are? We can only judge by their actions. We can believe they are, but we cannot truly know. Truly knowing only comes from direct experience.

But again, I have to ask you if you think that sentience requires a soul or a connection to a spiritual awareness. Does sentience require a will?



Such a creation would have to be capable of communicating and eventually show an interest in asserting its own sentience without our prompting for a valid demonstration. The original software would have to lack any programming to lead it toward this act, even if it was the desired goal, because self-desire would be the ultimate demonstration of sentience, beyond basic or daily desire, and would demonstrate real thought such as: Who am I? Why am I here? What is my purpose? Why am I treated differently?

Creating this creation, in my opinion, would be irresponsible and cruel unless done very carefully. If it did indeed reach sentience, then unplugging it once the experiment was over or sentience achieved would be equal to killing it.


Simply "unplugging it" would not be killing it, it be more like putting it into a coma. bigsmile You could always plug it back in and turn it back on, if it was built that way.


Since we are incapable of judging sentience without its initiation, we would not know, from the time it started, whether it was sentient or experiencing feelings valid enough to make "killing it" wrong.


Did you see that movie with Will Smith called I, Robot? That was a good one. One robot becomes sentient. In the end, they suggested that the robots were somewhat aware because they preferred to cluster together rather than not... by their own choice.

JB


no photo
Thu 06/12/08 07:27 AM
Edited by Jeanniebean on Thu 06/12/08 07:27 AM


Perception...

Why is it necessary? Why do all living beings have it?

Why do computers not have it?


Computers do have perception, because perception is how one reacts to one's environment or the information accessed.. but it appears that you are mixing up perception with human emotions... since emotions are an irrational reaction to what one may perceive, emotions in a computer may seem more like a malfunction.. but a computer, just like a human, can be programmed to have emotions and in turn react irrationally..


Funches,

You must have seen the movie "Short Circuit." bigsmile

Emotions are not "irrational reactions," IMO. I believe emotions are more like sensors.

However, I would hate to have my computer start crying every time I decided to turn it off ~~ so yes, I would consider emotions in my desktop computer to be a bit of an annoying malfunction. laugh

Although very interesting... huh

:tongue: laugh laugh


Abracadabra's photo
Thu 06/12/08 07:32 AM

Abra,

I like the way you handled the idea regarding the difference between "free choice" and the "will."

Creative, you make a statement that "free will" does not exist.
I think what you are trying to say is that "free choice" does not exist. I believe this is a more accurate assertion.

Choices are always influenced, and some are automatic because of our own programming. In this case "free choice" does not exist. I can't really choose to stop breathing ~ unless I choose to die, of course.

"Free will" is a confusing term because when people use this term what they are really talking about is "free choice."

The will just is. It is either weak or strong. It is like potential. Potential is not referred to as "free potential." It is just potential. The will is not "free" or "in bondage." Neither can the will be influenced or programmed. It simply exists as potential. It is the power to direct itself. It is the power to direct thought and attention. It is either weak or strong.

If it does not exist within you, then you are not sentient. The will is the soul, the spirit, the connection to the true self.

That is my understanding.

Jeannie


Yep. flowerforyou

I might add also that even animals exhibit 'will'. They just don't exhibit self-awareness. At least not in the egotistical way that humans do.

Personally, I feel that animals do exhibit 'self-awareness'; all they seem to be lacking is an ego. And some pets even display a sense of ego; they get jealous, for example. Clearly they show jealousy! They also display things like anger, and can even become angry toward other animals that they are jealous of. That jealousy often stems from a DESIRE for attention.

So even animals display will, desire, and even ego. They just don't seem to be aware of their 'ego'.

I think they do have a sense of 'self-awareness', it just isn't ego-based like humans.

It's the ego that separates humans from animals.

And that's not a bad thing. Having an ego isn't a bad thing; having arrogance is the bad thing, not having an ego. We often use the term 'ego' in a derogatory way to imply arrogance. But ego doesn't mean arrogance. Ego simply means to fully recognize that we are in full control of our own freedom of choice, and that we can use it to do whatever we want to do.

Even though animals have freedom of choice, they don't seem to recognize that they can purposefully direct it to accomplish complex goals. Maybe that's due to their inability to think abstractly outside of the 'now'. They don't plan for the future. They just live in the 'now'.

We dream of the future and about how we can manipulate it based on how we exercise our free choice.

Maybe programming a robot to be "aware" that it can manipulate its future based on its free choice now is the key to creating sentience?
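
Just to toy with that thought (an invented example, nothing rigorous): a creature living in the 'now' picks whatever pays off immediately, while one that can imagine consequences folds the future into the very same choice.

    def choose(now_payoff, later_payoff, plans_ahead):
        """Pick an action by immediate payoff alone, or by imagined total payoff."""
        if not plans_ahead:
            return max(now_payoff, key=now_payoff.get)     # living in the 'now'
        total = {a: now_payoff[a] + later_payoff[a] for a in now_payoff}
        return max(total, key=total.get)                   # manipulating the future

    now = {"eat the treat": 5, "practice the trick": 1}
    later = {"eat the treat": 0, "practice the trick": 10}
    print(choose(now, later, plans_ahead=False))   # eat the treat
    print(choose(now, later, plans_ahead=True))    # practice the trick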

Hmmm? Interesting thought there. bigsmile

Abracadabra's photo
Thu 06/12/08 07:43 AM

Did you see that movie with Will Smith called I, Robot? That was a good one. One robot becomes sentient. In the end, they suggested that the robots were somewhat aware because they preferred to cluster together rather than not... by their own choice.

JB


I liked that movie.

I also agree that having a desire to cluster together rather than not is a sign of sentience.

But animals do this all the time: they run around in herds, packs, flocks, etc.

I believe that animals are much more sentient than most people give them credit for.

I don't see a huge difference between animals and humans. Other than humans are arrogant, and animals are not (for the most part).

However, I think even some animals have displayed a certain level of selfish behavior. A jealous pet is a prime example.

No one's going to convince me that animals don't display jealousy. I've seen them do it!

And they certainly display the desire to flock together. flowerforyou

Blackbird's photo
Thu 06/12/08 07:53 AM


Are you suggesting then, that being "sentient" does not mean that you have a soul and that you can be "sentient" and still not be "alive" or have a "soul?" Does being sentient require having a will?


I'm glad you caught that, and I'm not at all surprised that the guys missed it and you caught it. If you will note, I originally stated I would use blanket terms, as I have tried to do in most of these forums, so that even if we disagree we are on the same page. One debate over the course of time has been about the sentience of animals and whether they possess souls. Most agree that even if they cannot talk, animals are sentient; but because most cannot talk, or for religious reasons, many consider them to be without souls. I was clarifying this point as unknown, to prevent a wild speculative debate beyond anything any of us can answer even theoretically.

No, and yes, in order. Let me explain, but this is a matter for speculation or debate...
I cannot speak with authority on whether animals have souls or we have souls. I believe we do, and I believe they do, but my belief is not shared by all, so I find that a matter for speculation.

Yes, I believe sentience requires will. The reason is that will is what creates the sentience, in my view: the will to define oneself, which has been the plight of all creatures on earth. A dog chooses its behavior sets in spite of training, based on its temperament. Regardless of placement, a cat will often choose its owner or "best friend" among human beings.

Cats of different breeds put together in the same house will form a pride with its own pecking order, sometimes observing gender order and sometimes ignoring gender, based on the free will of each cat, their interactions, and their feelings about each other. I've seen a pride of cats set up a trap for a tom that outweighed all of them: the resident male went out and lured it in, and once the male reached safety the tom was surrounded and torn apart by the females, in spite of this being against all known cat behavior sets. This constitutes will, without actually proving anything has a soul.

I do believe the creation could assert and prove its own sentience, but we would be unable to force this demonstration. Problems of proof lie, among other things, in being able to snapshot the current programming in the creation and examine the original code to verify that it could not have led to its proof of sentience. In other words, the creation would have to form its own behavior sets, write them into its programming, and then assert itself as a being to be considered sentient; in my humble opinion, only theorizing.

Unplugging as killing versus coma... that depends on whether the memory is static, doesn't it?

This was observed and suggested in that movie prior to any knowledge of the robot being sentient, which is interesting in itself. I suppose a distinction was that the robots had a tendency to huddle together for unknown reasons. The sentient robot asserted itself, and sought things for itself, which constitutes free will, because these desires theoretically were outside of its original programming.

Abracadabra's photo
Thu 06/12/08 08:15 AM
Edited by Abracadabra on Thu 06/12/08 08:17 AM
I'm glad you caught that, and I'm not at all surprised that the guys missed it and you caught it


I didn't miss it. I simply didn't want to get side-tracked into that religious discussion. I thought it would be an unproductive and confusing distraction.

I cannot speak with authority on whether animals have souls or we have souls. I believe we do, and I believe they do, but my belief is not shared by all, so I find that a matter for speculation.


Exactly. And I feel the same way you do. If we have "souls" then so do animals. That's my stance on that.

Whether 'sentience' equates to having a 'soul' or not all depends on how these things are defined and thought of. If a 'soul' is believed to be 'god-given', then who can possibly say? It's an unanswerable question (thus an unproductive side track).

If a robot actually became 'sentient' in the truest sense of the word (actually experiencing its awareness and being aware that it is alive), then as far as I'm concerned that's the same thing as having a "soul".

Soul - to feel and be aware of those feelings.

In a very real sense then it would be impossible to become sentient without obtaining a soul (or becoming a living soul).

The question then is simply this,...

Are souls god-given? Or do they come into being via sentience?

Are atheists right? Is sentience all there is to a 'soul'? When sentience dies does the 'soul' die?

Or are the various religions right?

Is there an external zookeeper who creates new "souls" and inserts them into physical bodies?

Or is pantheism correct? Does anything that becomes sentient automatically become a manifestation of God? Or a potential receptacle for a 'light being'?

I think the line between atheism and pantheism is a subtle distinction, and one that can be (and often is) a source of much debate and controversy.

As soon as you mention the word 'soul', that opens up a whole new can of worms. That's why I didn't touch it.

It wasn't that I missed it. flowerforyou






no photo
Thu 06/12/08 02:00 PM
Edited by Jeanniebean on Thu 06/12/08 02:04 PM
Is there an external zookeeper who creates new "souls" and inserts them into physical bodies?


You got me thinking about what might be considered "a soul" and how might they come into being.

I have always called it "a unit of awareness." This unit is part of the whole. It is a very small beginning of awareness.

From there, that single unit of awareness grows as it collects information.

Think of the tiny creatures that come into being and are spit out from the center of the earth through volcanoes deep under the ocean, or the microscopic offspring, spawned from who knows what, that float in the oceans and grow into larger creatures of all kinds.

Life grows. Awareness grows into life.

Which begs the question: where does a unit of awareness arise from originally? I think it is born, or spawned, by the combining of different units, much like egg and sperm combine to create a new life.

It is my imagination working overtime, I know. But most people say that only god can "create" a new soul. In a pantheistic sense this is what happens: the combination of different aware units will create a new unit.

It's just growth and expansion of that which is aware. These units seek growth. They seek food (energy), and they attract energy and information and matter. They consume energy and information, and this can condense into matter. They become form. Don't get excited, it's just a rock. LOL. Its consciousness is not like ours, but this is what makes it a rock. When this unit leaves or exits this rock it moves on to other forms. Other units enter the form of the rock.

We don't "have souls" we are soul. (There is only one.) Soul is god. Soul sustains life. This is my understanding.

Jeannie

Abracadabra's photo
Thu 06/12/08 03:17 PM
We don't "have souls" we are soul. (There is only one.) Soul is god. Soul sustains life. This is my understanding.


These are my thoughts too. Abstractly speaking, of course.

Everyone knows my thoughts concerning the idea of a "soulkeeper" godhead that raises souls to either become pets or the objects of sadistic torture. That whole idea is seriously demented, I think. Not only would it be weird, but then there would be the humongous question of whether the godhead has a 'soul', and if so, then where did it get its soul from? Etc., etc., ad nauseam. Moreover, what reason would we have to believe in such a thing other than ancient mythologies that were copied and pasted from each other until they became so absurdly ridiculous that they no longer make any sense (as if they made sense to begin with)?

So then there's the idea that whatever spirit is, we are it. Precisely how that works no one can know. Is there only one spirit? Or is spirit an infinitude of homogeneous individualities? bigsmile

I feel that whatever it amounts to ontologically speaking, if there is any individuality to it at all, the 'separate' individualities must ultimately be egalitarian in nature. I see no reason why there should be one authoritarian godhead in the universe and all other beings must cower down to its, necessarily egotistical, fascism.

In fact, after reading a recent post by Belushi, I'm seriously considering abandoning the term "God" altogether. I feel that the egocentric-based religions have sadistically soiled that word with the eternal stain of sacrificial blood that does nothing but pollute the essence of humanity with unnecessary burdens of spiritual condemnation and cultural pandemonium.

The very label itself needs to be crucified and burned at the stake in a grand finale that righteously incinerates the very thought that gave birth to its malevolent existence.

What was the question again?

Oh yes, when do we meet in Reno?

no photo
Thu 06/12/08 03:20 PM
i am not certain that this carrot, i am now eating, has instincts...

Abracadabra's photo
Thu 06/12/08 03:29 PM

i am not certain that this carrot, i am now eating, has instincts...


It probably did before you bit into it. :angry:

The poor thing. :cry:

no photo
Thu 06/12/08 03:41 PM
hey, i'm not the one that yanked it out of its home!!!



i am not certain that this carrot, i am now eating, has instincts...


It probably did before you bit into it. :angry:

The poor thing. :cry:

Abracadabra's photo
Thu 06/12/08 04:09 PM

hey, i'm not the one that yanked it out of its home!!!


Yeah, but you probably paid the pick man!

You're the GodFather of the Carrot Slaughter.

(sounds like a great title for a vegan murder mystery)

no photo
Thu 06/12/08 05:50 PM
Oh yes, when do we meet in Reno?



bigsmile love smokin flowerforyou

Jess642's photo
Thu 06/12/08 05:55 PM
:cry: All sentient beings are inherently provided with survival instincts.

I watched a pretty-faced wallaby, with both its hind legs broken, still try to get away from a dog that was annoying it. :cry:

Humans that choose to commit suicide are usually in an altered state, i.e. suffering from depression or other impaired mental function, and are not functioning instinctually.