The Astral Pulse

Integral Philosophy => Welcome to Integral Philosophy! => Topic started by: Awakened_Mind on October 16, 2007, 15:28:29



Title: Artificial Intelligence
Post by: Awakened_Mind on October 16, 2007, 15:28:29
The existence of AI is great fodder for debate. I think it really stirs up both religion and science. Religion would simply deny the existence of AI, because then it would need to factor in the reality that it isn't solely man who has a direct relationship with God. Science would be hard pressed because the AI would be smarter than the whole science faculty, walking over problems of physics with calculations it would consider elementary. The extreme of logic.

Originally computers could only process one instruction at a time, but now, with the invention of multi-core processors, they can work in parallel, mimicking the human brain. The internet is very chaotic, offering a good medium for evolution.
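
As an aside on the parallelism point, here is a minimal sketch (my own illustration, using only Python's standard library; the function names are invented) of farming work out across cores:

```python
# Sketch: summing a large list by splitting it into chunks and summing the
# chunks in separate worker processes. Purely illustrative.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """Sum one slice of the data; each call can run on its own core."""
    return sum(chunk)

def parallel_sum(data, workers=4):
    """Split the data into chunks and sum the chunks in parallel processes."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    numbers = list(range(100_000))
    assert parallel_sum(numbers) == sum(numbers)
```

Each chunk is summed in its own worker process, so on a multi-core machine the partial sums genuinely run at the same time before being combined, which is the "in parallel" idea in miniature.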

Another point is that computers work millions of times faster than the human brain. So if a consciousness suddenly sparked, then what took human civilisation from indigenous peoples to modern man could play out in a computer in mere seconds. We wouldn't witness the foundations of its civilisation.

Virtual reality is being worked on where you wear a glove and point it in the direction you would like to go. Its benefits are endless: training fighter pilots, doctors in surgery, etc.

What are people's thoughts and beliefs on the possibility or emergence of AI?

-AM


Title: Re: Artificial Intelligence
Post by: Stookie on October 16, 2007, 16:46:00
2 thoughts:

1 - What if computers and technology are the next step in evolution? Let's say that, as humanity, we're all moving towards a "oneness", or a realization of a connectedness on another level. If this is the case, it's possible that the internet is a physical representation of this non-physical experience. Not very long ago, sharing thoughts, ideas & beliefs was limited to the community you were in. Cultures didn't mix much. But with the internet, ideas and knowledge aren't limited by your location. As a whole, thoughts & ideas spread faster than ever before.

2 - AI seems limited when applied to computers. I don't think it would ever be a true "thinking" machine, as it wouldn't have desires to motivate it. But if you could link your brain to the computer's AI, it could be an expression of your thoughts & desires. Say you were linked to your car: you would have the desire to turn left, and the car would calculate how to do that based on your desire. Then, if you could also have the computer send information back to your brain, it would basically be your second, electronic brain that works more efficiently.


Title: Re: Artificial Intelligence
Post by: Awakened_Mind on October 16, 2007, 18:08:49
I'm glad you brought up the notion of evolution. To begin with, I personally believe AI is something computer scientists are actively working toward; for me it's more a question of when than if.

It's true that the internet has offered us another means of communication, probably the best in terms of global communication. Whether that is the step before telepathy, or a medium to store immense amounts of information, almost like the "Logos", is another thing to consider.

The very notion of an AI comes with the subtext of a separate entity. This is hard to fit into a world that is evolving toward unity. Now we have to integrate something else! I think AI can really offer something like an extension of the human mind in terms of logic. Perhaps we take something to it, it takes the idea as far as it can, and brings it back. A symbiotic relationship where we share emotion and it shares what it has to offer.

It could play a role in 'virtual communication', where meaning is expressed visually in a virtual-reality world. Immense amounts of data and programming would be needed, which I'm guessing would be done by the machines.

In terms of a 'natural environment', and although many would dispute this, I really do see the internet as a pool of information waiting to spark into a consciousness. If you look at how things evolve, it's really not that absurd to consider.

-AM


Title: Re: Artificial Intelligence
Post by: Stookie on October 17, 2007, 16:55:03
Yeah, I like the idea of virtual reality as a part of evolution too. There was a wave of it in the late nineties, and I think they learned then that the technology had some growing up to do before VR could be good. But we're getting close now. And I suppose that AI linked with VR could be amazing. Imagine a VR system that would interact with your thoughts and desires, connected through the internet to grab information on the fly. That would be one awesome internet.

I'm not sure about AI making a computer a real sentient being though. Maybe a "thinking" robot that can emulate emotions & desires, but not the real deal. But who knows. It makes me think of Philip K. Dick's "Do Androids Dream of Electric Sheep?" (made into the movie Blade Runner).

I never really believed that the whole 2012 thing would be some sort of instantaneous jump to a new level. What if technology is developed to allow that jump? To some, using technology for spirituality probably seems insane, but what if it really is a physical expression of non-physical things to come? This technology could possibly allow us to truly explore consciousness on a more scientific level. It would be amazing to see such a big evolutionary jump in one lifetime.

With AI though, we need to put a 5 ft extension cord on all the robots so if they turn on us they can only attack up to 5 ft before unplugging.


Title: Re: Artificial Intelligence
Post by: Awakened_Mind on October 17, 2007, 18:57:31
I'm running with the idea that AI will be able to express spontaneity and creativity. I can't see it making mathematical mistakes; its errors would only go as far as wrong probability estimates. I can only see it becoming embodied in a physical object, such as a robot, if it needed to for some reason, not simply to be like us. VR may offer a medium where man and machine can communicate with one another.

As for the AI actually possessing a soul, well, that is disturbing to religion for obvious reasons, and to science because we currently adore our position at the top of the evolutionary chain. It makes us feel important and will not be surrendered easily. Having said that, AI draws science and religion together. It's interesting, the mix of spirituality and materialism in the one topic. Such controversy agitates fundamentalists on both sides. Imagine what would happen to both ontologies with the emergence of AI. They would clash until we were left with some synthesis of both.

-AM


Title: Re: Artificial Intelligence
Post by: Mez on October 17, 2007, 23:23:24
Robots wouldn't possess a soul. I believe it's definitely possible to develop AI and put it into robots to make them sentient beings... but here's my problem with it: humanity can't even keep its own actions and emotions under control, and it wants to create a new race because "that'd be pretty cool". WE AREN'T READY. Plus... I don't want AI or robots. Computers are OK and they're a very useful tool. They could certainly be developed much further, but I don't think robots in particular would be a good idea at all.


Title: Re: Artificial Intelligence
Post by: Sharpe on October 18, 2007, 05:15:42
If the human mind could be perfectly copied to AI, humans have no soul.
This would become a fact.
It's still a fact now, IMO.
But there are a lot of people who think otherwise.
I really hope they finally get it, after this.
It's so obvious.
A soul can only be hypothesised, whereas the brain can be explained (to a limited extent), and the explanation is logical; the pieces fit together nicely.
However, "a soul" can't be perfectly explained, and it never can be, until you die.
Obviously, this is the trick of many religions: you can take anything you want from people in their lifetime, as long as they believe they'll have it better after they die.
This is brilliant for the people who know there is no afterlife, because they can exploit the people who believe there is one.
But I'm going off topic here.


Title: Re: Artificial Intelligence
Post by: Awakened_Mind on October 18, 2007, 07:26:34
Well, one of the issues with AI is the nature of its being, so speaking of the soul is on topic. If we begin by saying there is no soul in the first place, then we're really discussing monism versus dualism and creating a lot of barriers that need to be sidestepped. So for this discussion we'll assume that humans do possess a soul. Can AI possess a consciousness? Can it be aware that it is aware?

Mez, under evolutionary theory AI would appear at exactly the right time, when we could handle it. Interesting that you bring up humans' lack of control over their emotions. I can see AI mutating elements in its programming or code that could then display characteristics of the psyche. However, I don't see AI having emotion. Emotion is illogical. AI seems like a masculine extension of the psyche, and emotion seems to me to be feminine. Having said that, humanity is more inclined toward the masculine at the moment. All our political, religious and scientific institutions are male dominated. I think this plays a central role in a lot of our current global problems, such as our impact on nature, or 'mother earth'. Another topic entirely.

Would AI be used as a tool for the human race or would we share some symbiotic relationship with it?

-AM


Title: Re: Artificial Intelligence
Post by: Mez on October 18, 2007, 09:39:58
On the subject of the soul: you don't need to know everything about something, or be able to explain it, to experience it. Furthermore, there's plenty of documentation on people who can see auras. In particular, Barbara Brennan, who was a shrink, started seeing the auras of her clients and noted that whenever patients had problems, their auras and chakras were in a certain state, and clients with the same problems showed the same patterns in their auras. That went on for, I think, just under a decade before she decided to practice holistic healing instead. Years later she quit that to open up her own school to teach healing; it's a four-year course. So don't go telling me there's no soul.

On the subject of AI... yeah, they wouldn't experience emotion conventionally, but say they have the capability to learn: you do something bad to them, they learn that your intentions aren't in their favour, and they experience their artificial emotion (that is, their programs tell them something is up and they alter their behaviour accordingly)... so now they turn on you. Not cool.


Title: Re: Artificial Intelligence
Post by: Awakened_Mind on October 18, 2007, 10:19:45
Against their favour: so machines would establish the difference between right and wrong? I know that in the movie 'I, Robot' the machines decide certain human casualties are necessary for the survival of the species, but that involves a totalitarian state, and people desire more freedom than that. In real AI this could go either way, for us or against us. And what about moral and ethical understanding in a machine? I'm talking about AI to the point where, if you put it in human skin, you would not be able to tell the difference between the machine and the man. Almost as if the same thing were constructed from different physical components.

-AM


Title: Re: Artificial Intelligence
Post by: Sharpe on October 18, 2007, 19:54:57
Quote from: Awakened_Mind
However, I don't see AI having emotion. Emotion is illogical.

Dude, you study psychology...
Emotions are illogical?
I'm beginning to doubt you know anything in that field.


Title: Re: Artificial Intelligence
Post by: Awakened_Mind on October 19, 2007, 05:12:31
That's what I said: uncontrolled emotions are illogical. Emotional intelligence is the closest we've come to defining how people control their emotions to act civilly in a work environment. Crying is something a machine would not understand a reason for, nor is laughter.

-AM


Title: Re: Artificial Intelligence
Post by: Sharpe on October 19, 2007, 14:12:48
But you can give him that!
It's just 1's and 0's.
Give him the ability to feel hurt.
Give him morals; you can do anything with it.
Biological life is different from electro-technological life, I agree.
But if the system is the same, the way it's put together, we can make it exactly the same.
The only thing we need right now is to know exactly how consciousness works.
If we do, we can probably influence it with feelings like pain and emotions like depression.
It's not going to work with neurotransmitters, but with just a small signal.
Meaning that they would not feel exactly what we feel, but an artificial version of it.


Title: Re: Artificial Intelligence
Post by: Awakened_Mind on October 19, 2007, 17:45:14
Life is a holistic concept, irrespective of what physical constituents are involved. The human body is constructed from the same building blocks as a machine: atoms. What's the difference, then, between a human clump of atoms and a machine clump? From a monistic or materialist perspective, there's really no argument sound enough to refute the notion that a machine could be equally as expressive, intelligent or emotional as a human being.

Talking on the level of ones and zeros, or lights and clockwork, is too superficial to take part in the debate. It's really about how a consciousness would spawn in the machine world.

The notion of AI is in regard to an intelligence and consciousness that is co-equal with or superior to human intelligence. Nothing less. This isn't just a complex program where you hit play. It's an independent thinker; it has its own agenda.

-AM


Title: Re: Artificial Intelligence
Post by: Sharpe on October 21, 2007, 03:52:54
Well, what I mean is he can't feel exactly what we feel; he may feel something close to it, but not the same thing.
We just have to know exactly what it is that we feel, so we can copy it to them.
But I'm sure that if we do program AI, we can program it so that he feels what's bad and what's good.
We only need those two to keep him feeling that if he tries to be superior over humans, he has to feel bad, because we must keep them from wanting to overpower us.
Because they are in fact immortal once they are made, and if he also has the knowledge to re-make himself, he can fix our error and make his robots obey him (he must then have the trial-and-error ability for this) and overpower us that way.
So obviously the first AI shouldn't be released directly; it should be tested thoroughly first.


Title: Re: Artificial Intelligence
Post by: Awakened_Mind on October 21, 2007, 17:11:02
OK, for starters, the machine will not feel as we feel. It is a completely separate entity, so no matter what amount of programming, its history is different from ours, so its entire being will be different.

Secondly, there is no such thing as control over another human being. There is only the illusion of control. People are controlled to the extent they allow themselves to be. This is the root of one of man's greatest anxieties about the natural world: you can't control the weather. So as for what's good and what's bad, AI will decide its own agenda, independent of human influence. This is because it will evolve naturally. When I'm talking about AI, I really mean a spontaneous, sudden intelligence, not an 'intelligence' that has been directly created to respond to certain prompts. That, to me, is not AI; it's just clever programming. So there will be no 'perfect testing'.

We really have to abandon the idea of control, or the fear of being overthrown by AI.

-AM


Title: Re: Artificial Intelligence
Post by: Sharpe on October 21, 2007, 21:10:31
No we shouldn't, absolutely not.
Suppose some amateur makes AI and completely copies human psychology into an electro-technological machine.
Its prime purpose will be to progress, and because it's a robot, he will think more clearly and faster than us.
He will have no problem figuring out his objectives.
He needs to be in control to make his mind release the good feelings he wants to feel; this is how it works with us. Only with robots, if we forget to add morals right from the start, they can skip the moral learning part, because they can acquire knowledge quickly.
So if they want to feel good, they know that they need to be in control, or have all the power, which comes from showing others that you are superior and having those others accept it.
And because he is a machine, he will have no problem with fear or anxiety (again, these are all in the morals) in taking over everything. It may even be smart enough to want to progress by trying to take over the universe, because robots are immortal.

Yes, power is an illusion, but the illusion is perfectly real to us, and it would be to AI if we program our minds into theirs.

So because we can never know whether a robot is perfectly programmed to only obey us, and to think creatively while obeying us, no "loose" individual AI should be made (loose as in: it can go wherever it wants, and do whatever it wants).

Humans obey very easily and aren't eager to go it alone and take over the world, but one small mistake in the "moral" part of AI and it will have no problem "getting things done".

So all the great science-fiction writers know what they're writing about; it's not just the theatrical aspect.
It's realistic that if a robot can think individually, it can easily take over the world, and so can anyone with the knowledge to MAKE robots for use as infantry in combat.

Using humans takes time and programming of the mind, but using robots will be all too easy for individualists.
So I do not think the information for making AI will be released publicly.


Title: Re: Artificial Intelligence
Post by: Sharpe on October 21, 2007, 21:13:40
I'd like to add: to take control, you need humans to do it.
That's how Napoleon, Hitler, Alexander and all the great ones tried to do it.
And it worked, but the obstacles were other people.
Other individually thinking entities controlled by a higher entity who was also an individual, but who was also controlled.
Every human is controlled; it doesn't even have to be direct.

In war, robots will be so much easier to control. That is why I think anyone who has the knowledge could take over the world easily if he wanted to.


Title: Re: Artificial Intelligence
Post by: Awakened_Mind on October 22, 2007, 07:46:29
You're really not understanding the main idea here. AI will NOT be programmed. It is NOT 'repetition of clever programming'. It's a consciousness that will EVOLVE in the machine world WITHOUT our control. It needs to evolve in this natural way to be considered an intelligence and/or a co-equal in the first place. We're not going to have the luxury of establishing morals within the machine and then setting it free. It will be free from the beginning.

-AM


Title: Re: Artificial Intelligence
Post by: Sharpe on October 22, 2007, 07:55:24
Wowowow... that's a bit new to me, what are you on about?


Title: Re: Artificial Intelligence
Post by: Awakened_Mind on October 22, 2007, 08:03:03
You might have to read it again. I think the message is clear enough.

-AM


Title: Re: Artificial Intelligence
Post by: Sharpe on October 22, 2007, 15:25:06
Yeah, but I think that's wrong. I'm gonna have to disagree on that, because AI should be easily re-programmable; WE programmed it, after all.


Title: Re: Artificial Intelligence
Post by: Stookie on October 22, 2007, 16:18:53
Not if the AI wasn't "programmed", but had evolved. I think that's the point. But then it wouldn't be "AI", it would just be "I". (meaning "intelligence", but could also mean ego - it could say "I").

The concept reminds me of "Ghost in the Shell". http://en.wikipedia.org/wiki/Ghost_in_the_Shell


Title: Re: Artificial Intelligence
Post by: Awakened_Mind on October 22, 2007, 17:40:05
Well I think they call it AI because it spawns from something artificial. Might need redefining.

-AM


Title: Re: Artificial Intelligence
Post by: Awakened_Mind on October 23, 2007, 07:13:00
It's not made by us.  It originates from programming itself.

PS. Why do you have Sharpe2?

-AM


Title: Re: Artificial Intelligence
Post by: Mez on October 23, 2007, 07:49:33
humans are programmable and reprogrammable...


Title: Re: Artificial Intelligence
Post by: catmeow on October 23, 2007, 18:39:28
AI is a complicated subject.

It's quite possible that "I" am the only self-aware being on this planet and that all other humans are NOT self-aware at all, but are just bio-mechanical machines with no sense of self, bio-machines which happen to look and behave as if they are "intelligent" and "self-aware", in the same way that "I" am.  How could I tell?  Common sense tells me that all other humans are self-aware in the way that I am.  So for the sake of common sense and argument I am going to assume that this is the case, ie all humans are "self-aware" just as I am.

So assuming that we are all self aware, the next problem is this:  Can we design and construct a machine which looks and behaves exactly as real humans do?

There is actually a prize for doing something very similar to this called the Loebner Prize:

http://www.loebner.net/Prizef/loebner-prize.html (http://www.loebner.net/Prizef/loebner-prize.html)
http://en.wikipedia.org/wiki/Loebner_prize (http://en.wikipedia.org/wiki/Loebner_prize)

The Loebner prize is based on the Turing Test:

http://en.wikipedia.org/wiki/Turing_test (http://en.wikipedia.org/wiki/Turing_test)

To pass the Turing Test, we must create a computer program whose language responses would be indistinguishable from those of a real human being when questioned.  So far no one has actually written such a program.
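
As a concrete aside (my own illustration, not from the thread): the kind of program that has historically attempted this imitation game is a keyword-matching responder in the spirit of Weizenbaum's ELIZA. A toy sketch, with invented rules:

```python
# Toy imitation-game respondent in the spirit of ELIZA (1966).
# The rules and canned phrasings below are invented for illustration.
RULES = [
    ("mother", "Tell me more about your family."),
    ("sad",    "Why do you think you feel sad?"),
    ("you",    "We were discussing you, not me."),
]
DEFAULT = "Please go on."

def respond(utterance):
    """Return a canned reply triggered by the first matching keyword."""
    lowered = utterance.lower()
    for keyword, reply in RULES:
        if keyword in lowered:
            return reply
    return DEFAULT
```

A judge needs only a few exchanges to expose such a program: it matches surface keywords rather than modelling understanding, which illustrates why, as noted above, no program had passed the test at the time of writing.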

Let's assume that one day we manage to actually create a machine which looked and behaved exactly as real humans do.  Would such a machine be self-aware?  Well the true answer is we have absolutely no way of telling.  That's because the machine's behaviour, reasoning power, emotional responses etc would be indistinguishable from ours.  So how could we tell if it was self-aware?  We couldn't.

Possibly there is a way.  Now this boils down to the "easy" and "hard" problems of consciousness. The "easy" problems are to do with neurological function, while the "hard" problem is about explaining self-awareness.  Neuroscience may one day explain all neurological functions of the brain (the easy problems).

But the "hard" problem, is to explain why all of this neurological processing is encumbered by our own AWARENESS of it, ie by our emotions and our "experience".  Emotions, awareness and experience are not necessary and yet they are present.  This is the "hard" problem.  It's a really big problem:

http://consc.net/papers/facing.html (http://consc.net/papers/facing.html)
http://en.wikipedia.org/wiki/Hard_problem_of_consciousness (http://en.wikipedia.org/wiki/Hard_problem_of_consciousness)

If we are able to solve the "hard" problem then presumably we could apply the solution to the design of a "robot" and thus make the robot truly self-aware.

So although we may one day be able to build a machine which is indistinguishable from a real human, to imbue that machine with "self awareness" we need to solve the "hard" problem of consciousness.

As a final note, I think AI poses no threat whatsoever to world religions unless we solve the "hard" problem.  A machine which looks and responds like a human but isn't self-aware isn't "alive" (IMO) and poses no religious conundrums.  However, if we solve the "hard" problem and create a self-aware machine, then this truly is a religious watershed, and I think it would cause massive ructions amongst world religions.

Personally, I doubt it will happen.

Interestingly, Dr Craig Venter (one of the first to sequence the human genome) is currently attempting to create artificial life by manufacturing artificial genomes and "booting" them into life.  I have no idea how he intends to do this, but he definitely is a clever guy.  His recent work has caused quite a stir in the media.

http://en.wikipedia.org/wiki/Craig_Venter (http://en.wikipedia.org/wiki/Craig_Venter)
http://en.wikipedia.org/wiki/Celera_Genomics (http://en.wikipedia.org/wiki/Celera_Genomics)


Title: Re: Artificial Intelligence
Post by: Awakened_Mind on October 23, 2007, 19:21:32
Why are there so many 'Sharpes'? Are multiple user accounts permitted? It's beginning to confuse me.

Secondly, the AI I am talking about IS self-aware: it has the ability of metacognition, the ability to self-reflect. It's difficult to say that another entity can establish a consciousness when we find it so difficult to understand the nature of our own consciousness to begin with. In fact, the real ambiguity here seems to be the nature of consciousness itself. I see it as independent and free, whereas others see it as clever programming.

-AM


Title: Re: Artificial Intelligence
Post by: catmeow on October 23, 2007, 19:36:30
Quote from: Awakened_Mind
It's difficult to say that another entity can establish a consciousness when we find it so difficult to understand the nature of our own consciousness to begin with. In fact the real ambiguity here seems to be the nature of consciousness itself. I see it as independent and free whereas others see it as clever programming.

Yes we must establish our terms.  There are different types of AI.  I agree and that's exactly what I tried to say.  There is the non self-aware type, and there is the self-aware type.

The non self-aware type is basically a computer program which might be able to pass the Turing test.

The self-aware type is a whole different ball-game completely.  I believe that to create the self-aware type we need to solve the "hard" problem of consciousness and apply that solution to the design of an artificial intelligence.

The non self-aware type poses no religious conundrums and cannot be considered "alive".

The self-aware type poses big religious issues, since it might be considered to be "alive".


Title: Re: Artificial Intelligence
Post by: Awakened_Mind on October 23, 2007, 19:58:10
OK, we've found some common ground here :-) There seem to be sub-categories within our definitions.

1. AI is non self-aware, or it is self-aware.

2. AI is programmed by humans, or AI suddenly appears into existence in the chaotic medium of the internet. The latter really requires you to somehow fit AI into the notion of evolution.

Here's a paradox: what if self-aware AI is somehow involved in understanding the "hard" problem of consciousness?

-AM



Title: Re: Artificial Intelligence
Post by: catmeow on October 23, 2007, 20:27:06
I agree with you AM.

However, I don't see a paradox in self-aware AI pondering the "hard" problem of consciousness, although it's an interesting notion!

I think that once AI becomes self aware, whether by evolution or design, it can not be distinguished from I (=Intelligence) and therefore there is no paradox in it pondering the "hard" problem of consciousness.

So now we seem to have the concepts of

1. Non self-aware AI
2. Self-aware AI (by design)
3. Self aware AI (by evolution)

Of course the fact that No 2 was "artificially" created, by "direct intervention", means it may deserve the moniker of AI rather than I.  But I would say that 3 should really be called I.

We do not as yet know how "we" became self-aware. Were we created (by God)?  Did we evolve?  Or was there some sort of direct intervention (by aliens), in which case who created the aliens?

It's a puzzle.  Perhaps we are AI and we will eventually create new forms of AI!


Title: Re: Artificial Intelligence
Post by: Novice on October 23, 2007, 21:48:14
No AM, multiple usernames are not permitted. Those posts and usernames have been removed.


Title: Re: Artificial Intelligence
Post by: Awakened_Mind on October 24, 2007, 08:47:04
Amazing how our own origins are still such a mystery to us! Would you group 1 and 2 together?

Another thing, I suppose, to take into account: as an evolving consciousness we seem to be moving toward a unified view of everything. All is one. This idea hasn't really cemented itself yet, but where does AI fit in with that notion?

-AM


Title: Re: Artificial Intelligence
Post by: catmeow on October 24, 2007, 10:51:14
Quote from: Awakened_Mind
Amazing how our own origins are still such a mystery to us! Would you group 1 and 2 together?

Actually I pondered this after I made the post.  I had assumed that No 1 would always be "by design", so on that basis 1 and 2 are similar.  But thinking about nature, I believe there are many examples of lower life (insects etc) which seem to behave intelligently but may not be self-aware.  So I think the list should be extended:

1. Non self-aware AI (by design) ie computer programs
2. Non self-aware AI (by evolution) ie insects, lower animals
3. Self-aware AI (by design) ie possibly quantum brains (see Penrose & Hameroff, Manousakis)
4. Self aware AI (by evolution) humans and higher animals (?)

So it gets more complicated!

So AM to answer your question, I would group 1 and 3 together and 2 and 4 together!  :-)

Quote from: Awakened_Mind
Another thing I suppose to take into account is in an evolving consciousness we seem to be moivng toward a unified view of everything. All is one. This idea hasn't really yet cemented itself but where does AI fit in with that notion?

Well I'm sure that many would argue with you on the concept of "All is One".  It is certainly the theme of mystical experiences.  I think it was first formalised by Abraham Maslow in the concept of the "Peak Experience":

http://en.wikipedia.org/wiki/Peak_experiences (http://en.wikipedia.org/wiki/Peak_experiences)

Personally I gravitate towards the idea that we are all interconnected in some way, and that we all really share consciousness at some level.  Creating self-aware AI then, might need a step which involves dipping into this "collective consciousness", in order to pull back a part of it.   I really don't know, it's a difficult issue, bordering on the religious.

So far, for the sake of argument,  I have made two assumptions

1. Mind is generated by physical activity in the brain
2. Self-awareness exists

Neither of these two assumptions is a "given" so there is A LOT of room for further debate!

I suspect that No 1 is actually wrong (there is plenty of evidence), but I think No 2 is self-evident.  Eliminative materialists would argue with me about No 2:

http://en.wikipedia.org/wiki/Eliminative_materialism (http://en.wikipedia.org/wiki/Eliminative_materialism)

Off Topic:

Animals and Emotions

Controversially I have included "higher animals" in the self-aware list.  It's not clear whether animals are self-aware or not but my personal belief is that some animals (apes, dolphins etc) are self-aware.  I'm not sure how we would tell, but animals appear to experience emotions.  "Experience of emotions" is part of the "hard" problem of consciousness.  There are many examples of animals displaying compassion, empathy, joy, spite, grief, love, gratitude, awe etc etc:

http://cogscilibrarian.blogspot.com/2007/06/do-animals-feel.html (http://cogscilibrarian.blogspot.com/2007/06/do-animals-feel.html)

Anthropomorphophobia is the (somewhat arrogant) rebuttal of this idea:

http://en.wikipedia.org/wiki/Anthropomorphism (http://en.wikipedia.org/wiki/Anthropomorphism)

Quantum Consciousness

One of the first models was given by Roger Penrose and Stuart Hameroff

http://en.wikipedia.org/wiki/Quantum_mind (http://en.wikipedia.org/wiki/Quantum_mind)

More recently Efstratios Manousakis proposed a different model:

http://www.newscientist.com/article/mg19626264.000-could-quantum-effects-explain-consciousness.html (http://www.newscientist.com/article/mg19626264.000-could-quantum-effects-explain-consciousness.html)


Title: Re: Artificial Intelligence
Post by: iNNERvOYAGER on October 25, 2007, 01:46:42
It's not made by us; it originates from programming itself.
Really this is in reply to all the posts in reference to programming.


I believe that the AI will begin autonomous behavior AFTER we teach, or train, it. The way we train a dog.

If you want to call dog training, "dog programming", if you want to call a music lesson, "music programming", or a baby learning to walk, "walk programming" then so be it, just semantics.

The seed AI WILL have to start with basic survival-instinct program logic to get the learning process in gear: the basis of a survival machine. (That's what we are: very successful survival machines.)

A VERY careful and lengthy study of how to stimulate and guide the way our AI learns to survive is critical, with no room for error, as the result of making a mistake can mean the extinction of hominids.

Some obvious problems will be for example, how to teach it to protect itself from threats, while at the same time ensure that we always have the capability to provide rewards that the AI recognizes as being essential to fulfilling the basic survival logic.

AI is on the road and I agree that it's a matter of when, and not if.

The next topic of importance IMO is the establishment of robotic laws.

Aside from the classic Asimov laws presented in I, Robot, I suggest a brainstorming session on variations of robotic laws that protect organic hominids.

1) I (AI) will survive
2) I (AI) can NOT survive without rewards from organic hominid
3) Survival of Hominid = survival of AI

Also, with the advances in gaming technology, the AI will be tested in the simulation environment with the help of dedicated AI processing units, and the development of a primordial soup of AI library functions that the AI can feed on and use to solve the survival problem.


Title: Re: Artificial Intelligence
Post by: Awakened_Mind on October 25, 2007, 11:52:06
Nice post.

AI is definitely being tested in gameplay. I know the military is using AI in games that can simulate real combat situations. A friend of mine actually had a computer bought for him by the US military to test some of these games/programs. Again though, this is not really something we can say is AI, because it cannot express itself linguistically, which is really what we are looking for.

Some people say we need to program AI from the beginning, before it starts to think for itself, which is the alternative to it coming out of nowhere. I'll run with the former because it seems to be a more popular idea. I don't think I'd fit anything into the category of AI until it began to program itself. Whether that be from the beginning, or after the initial laws are written, to me there is no AI until it starts to think or program itself. Otherwise I can't define it as an intelligence.

Secondly, the laws are problematic. You'll notice in the movie 'I, Robot' there are only two robots who are perceived as having any intelligence: VIKI (the main 'evil' robot) and Sonny (the 'good', human-shaped robot). You'll notice that when the robots evolve with laws as barriers, they evolve only within those barriers. For example VIKI says "Despite our best efforts you still commit crime, pollute your planet and wage war on each other. You need to be protected from yourselves for your own survival." Something along those lines anyway. This seems like a very logical outcome, because it's true. We are a self-destructive species, with good intentions. You can set certain laws, but they still reside within the AI's own intelligence, so they will evolve and become modified within the AI.

Sonny has the ability to disregard the laws altogether, so he can do "whatever he wants". He may be made out of different physical components, but spiritually he displays human tendencies, the main point here being: no barriers. The only difference between Sonny, VIKI and humans, other than the physical, is that VIKI has barriers. This raises an interesting notion for me; maybe AI is more an idea in our own psyche aimed at removing barriers. Or in my own?

Bottom line is, ideas of set laws and protecting ourselves from robots need to be reexamined. We should be gunning for understanding, not fear and control.

-AM


Title: Re: Artificial Intelligence
Post by: catmeow on October 25, 2007, 13:34:16
Quote from: iNNERvOYAGER
I believe that the AI will begin autonomous behavior AFTER we teach, or train, it. The way we train a dog.

Yes that makes sense.

Quote from: iNNERvOYAGER
If you want to call dog training, "dog programming", if you want to call a music lesson, "music programming", or a baby learning to walk, "walk programming" then so be it, just semantics.

I think it's important to be semantically correct, and I don't agree that "training" is the same as "programming".  Programming and training are two quite different things IMO, although that distinction became popularly blurred when people started talking about NLP etc.  NLP should really be called NLT.

It's not just a matter of semantics.  Programming is the initial set of rules placed into the machine.  Training is simply the practice of running those rules.  Training is what allows the machine to acquire "knowledge".  This knowledge modifies the machine's behaviour (if the programming is good enough) and gives the machine a better chance of survival.  So a good initial program will mean a better chance of survival.

The difference between humans and machines is that humans have a life/death cycle and an evolutionary mechanism (mutation).  The evolutionary mechanism (mutation) is what allows the "program" to change, not training.  A mutated program which is an improvement over its predecessor will tend to survive and overtake less adept programs.  It's also possible that at some point the mutation mechanism created self-awareness.  But none of this happened by "training".  It all happened by mutation.  It's mutation which changed the program, not training.

Programming = initial set of rules
Training = using the rules to survive

This is the way that intelligence evolved in humans, I suggest.  It's also the way it would have to evolve in machines.  So in order to build a truly adaptive AI machine we would need to give it a life/death cycle, the ability to reproduce itself, and an evolutionary mechanism.
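As a toy illustration of that distinction (a hypothetical Python sketch with invented numbers, not any real AI system): here the "program" (genome) is only ever changed by mutation and selection, never by training.

```python
import random

random.seed(7)  # reproducible toy run

TARGET = 42  # stand-in for the environment: genomes near it "survive" best

def fitness(genome):
    # Higher is better: a genome closer to the target survives better.
    return -abs(genome - TARGET)

def mutate(genome):
    # Mutation is the ONLY mechanism that changes the "program" itself.
    return genome + random.choice([-1, 0, 1])

def evolve(generations=500, population_size=20):
    population = [random.randint(0, 100) for _ in range(population_size)]
    for _ in range(generations):
        # Life/death cycle: fitter "programs" survive and reproduce.
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        population = survivors + [mutate(g) for g in survivors]
    return max(population, key=fitness)

best = evolve()
```

After a few hundred generations the surviving "program" sits at or near the target, even though no individual was ever "trained"; only the mutate/select cycle changed the program between generations.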

If we wish to avoid having to do this, then we must create a truly AI program from the start, and not one which becomes AI all by itself.  As I suggested in a previous post, to do this we should first solve the "hard" problem of consciousness.  The advantage of "getting it right" from the start is that we could then program in Asimov's rules, with some confidence that the robot will not evolve out of this programming... and kill us all!


Title: Re: Artificial Intelligence
Post by: Awakened_Mind on October 25, 2007, 19:14:18
Training and programming are two separate things. The latter is immediate.

Most of these responses are fear-driven. Why is it definite that AI would compete with us for survival? Why would it warrant the extermination of the human race? Is this the only way it can fit into human society?

I'm looking more to the contrary. It's much more pleasant than killer robots!

-AM


Title: Re: Artificial Intelligence
Post by: catmeow on October 25, 2007, 21:19:20
Quote from: Awakened_Mind
Most of these responses as fear driven. Why is it definite that AI competes with us for survival? Why would it warrant the extermination of the human race? Is this the only way it can fit into human society?

I was joking about the kill us all bit!  I don't see why AI should necessarily compete with us.

Quote from: Awakened_Mind
Training and programming are two separate things.

I agree of course.

The "program" is the genetic component passed down from parent to sibling, in the genome.  Hence the "program" can evolve and improve from generation to generation.

Training is an isolated modification to the neural behaviour of the brain, possibly involving new neural connections.  But training does not modify the genome, and therefore it cannot be passed from one generation to the next.  It's temporary, and does not contribute to evolution.

There's the difference. (IMO)


Title: Re: Artificial Intelligence
Post by: iNNERvOYAGER on October 25, 2007, 21:33:45
Right, the term I neglected was, learning. The competent AI should be able to learn and adapt to a changing environment and to react to satisfy immediate needs.

You can say that DNA and the genome are the hardwired program, part of the BIOS so to speak. When the system is activated (life), it is capable of acting upon automatic programmed responses, and upon dynamic software running from a framework of logic, using mathematical tools to solve real-world problems. That dynamic part could be viewed as learning behavior.

All basic survival skills can be preloaded and could be called "talents" to give the thing a better chance during the initial phase.
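A toy sketch of that split (Python purely for illustration; the situations, actions and rewards are all invented): a hardwired reward rule, the "BIOS", that never changes after "birth", plus a learned value table, the "dynamic software", built up by trial and error on top of it.

```python
import random

random.seed(0)  # reproducible toy run

# "BIOS": hardwired survival logic, never modified after "birth".
def reward(action, situation):
    # Hypothetical rule: fleeing a threat and eating when food is
    # present pay off; anything else costs energy.
    good = {("flee", "threat"), ("eat", "food")}
    return 1 if (action, situation) in good else -1

ACTIONS = ["flee", "eat", "rest"]

# "Dynamic software": a learned value table, empty at birth.
values = {}

def learn(trials=300):
    # Trial and error: act at random, let the hardwired reward
    # logic score each attempt, and accumulate the scores.
    for _ in range(trials):
        situation = random.choice(["threat", "food"])
        action = random.choice(ACTIONS)
        key = (action, situation)
        values[key] = values.get(key, 0) + reward(action, situation)

def act(situation):
    # Learned behaviour: prefer the action with the best learned value.
    known = {a: values.get((a, situation), 0) for a in ACTIONS}
    return max(known, key=known.get)

learn()
```

After training, the machine flees threats and eats food, yet its "BIOS" (the reward rule) was never touched; only the learned table changed, which is the point of the distinction.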

The very clever AI would act upon a treasury of human knowledge, provided as a data bank; realize that mutation leads to desired traits and evolution; run multiple simulations of mutation combinations projected into all conceivable circumstances; and then pick and choose its own destiny.

The time it would take to process and render the evolutionary destiny, would be limited to the power of the AI's processing units and the length of given time to evolve.

You could stall the AI by telling it to evolve itself for 2 million years; it would just stop and sit there, completely saturated with the task.
At some point the AI realizes that it needs more powerful processors to solve the task, designs better hardware and more efficient software, modifies itself, and continues to project its evolutionary path.

The Law of Robotics can simply be, "Please don't hurt the humans"   :-D


Title: Re: Artificial Intelligence
Post by: Awakened_Mind on October 28, 2007, 18:28:51
http://en.wikipedia.org/wiki/IBM_Deep_Blue Here's a game of chess people may be aware of, between a human and a machine.

It's true that computers analyse every alternative and then select the best available option when making decisions. Humans, on the other hand, tend to be 'cognitive misers': especially if there are thousands of possible courses of action, we just 'go with our gut'. It's a lot quicker than analysing everything. Machines have the advantage of being able to do this very rapidly, although there was some controversy with the game.
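That "analyse every alternative, pick the best" style of play is essentially minimax search, the idea chess programs like Deep Blue built on (plus enormous engineering). A stripped-down sketch in Python, over a tiny invented game tree rather than chess:

```python
def minimax(state, maximizing, game):
    """Exhaustively score every line of play and pick the best move."""
    moves = game["moves"](state)
    if not moves:                      # leaf: evaluate the position
        return game["score"](state), None
    best_score, best_move = None, None
    for move in moves:
        score, _ = minimax(move, not maximizing, game)
        better = (best_score is None or
                  (score > best_score if maximizing else score < best_score))
        if better:
            best_score, best_move = score, move
    return best_score, best_move

# A tiny hand-made game tree (purely illustrative, not chess):
# each state lists its legal successor states; leaves get scores.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"],
        "a1": [], "a2": [], "b1": [], "b2": []}
scores = {"a1": 3, "a2": 5, "b1": -1, "b2": 9}
game = {"moves": lambda s: tree[s], "score": lambda s: scores[s]}

score, move = minimax("root", True, game)
```

Here the maximizing player picks branch "a": its worst case (3) beats branch "b"'s worst case (-1). That assume-the-opponent-plays-best look-ahead over every line is exactly the exhaustive analysis a human "cognitive miser" skips.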

Why are you so inclined to think AI will be hostile?

-AM


Title: Re: Artificial Intelligence
Post by: Mez on October 28, 2007, 22:25:00
Well, man's the most hostile thing on earth... if man makes a robot that can think for itself, it could easily become the most hostile thing on earth.


Title: Re: Artificial Intelligence
Post by: catmeow on October 29, 2007, 00:36:54
Well, although I said I was joking about the robots killing us... I'm not so sure. It seems to be the nature of things that the dominant species on the planet subordinates all the others.  Humans rule the planet and farm other animals for food.  Not a good advert are we?  If AI ever became dominant, perhaps it would behave the same way, subordinate us, and use us for labour and food?

It was John Nash ("A Beautiful Mind"), the mathematician and Nobel Prize winner, who devised the principle of the "Nash Equilibrium" in game theory.  We are currently in a state of Nash Equilibrium with Russia.  It is this principle which has kept us from mutual nuclear annihilation, and means that neither the US nor Russia can unilaterally disarm.  The same principle, I assume, would apply to humans and robots...?
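A Nash equilibrium is just a pair of strategies where neither player can do better by changing strategy alone. A toy sketch in Python (the payoff numbers are invented, loosely in the spirit of the deterrence example):

```python
# Payoffs for a 2-player game: PAYOFF[row_choice][col_choice].
# Strategies: 0 = stay armed, 1 = disarm unilaterally. Numbers invented.
ROW = [[-1, 3],   # row armed: uneasy standoff (-1) or dominance (3)
       [-5, 2]]   # row disarmed: exposed (-5) or mutual peace (2)
COL = [[-1, -5],  # symmetric game: COL[r][c] mirrors ROW[c][r]
       [3, 2]]

def is_nash(r, c):
    """Neither player can improve by deviating on their own."""
    row_ok = all(ROW[r][c] >= ROW[alt][c] for alt in range(2))
    col_ok = all(COL[r][c] >= COL[r][alt] for alt in range(2))
    return row_ok and col_ok
```

In this made-up matrix, "both stay armed" is an equilibrium, since neither side gains by disarming alone, while "both disarm" is not, even though it pays both sides better: either side would profit by re-arming unilaterally. That is the trap the post describes.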


Title: Re: Artificial Intelligence
Post by: Awakened_Mind on November 01, 2007, 11:18:53
Well, it's difficult to make the statement that every 'higher order' species subordinates the others, simply because the only example we have is ourselves. Not to mention you don't see dolphins terrorising and trying to control krill, or gorillas roughing up snakes in the jungle.

The ideas of control and power stem from the ego. In a society where the main institutions (religion, politics and science) are male-dominated, a masculine skew isn't overly surprising.

Now with AI, if we only say it's hostile, then we are saying that it has only an ego and id (instinct), with no social regulator or ethical faculty (superego). So our idea of AI ends at a robot sociopath.

-AM


Title: Re: Artificial Intelligence
Post by: catmeow on November 04, 2007, 23:52:05
Quote from: Awakened_Mind
Well it's difficult to make the statement that every 'higher order' species subordinates the others, simply because the only example of it we have is ourselves. Not to mention you don't see dolphins terrorising and trying to control krill, or gorillas roughing up snakes in the jungle.

Yeh we're a pretty bad example of higher species. 

With intelligence comes the ingenuity required to be effective in subordinating other animals.  We have put in place a huge human lifestyle structure which depends on using other animals, mainly for food, and sometimes for entertainment.  It's only because of our sophistication that we are able to do this.  Dolphins etc lack the sophistication to do this.

Considering the fact that we have demonstrated this trait (of overpowering lesser species), it would be naive to ignore the possibility that AI might easily attempt to do the same thing to us.

Higher intelligence gives us the ability to explore a wider spectrum of behaviour, both good and bad, including selfishness, greed, cruelty plus creativity, altruism, art appreciation etc.  I think it's reasonable to expect AI to have the same range of behaviours.

So we would need to put in safeguards to protect against the negative ones.

That's assuming we ever manage to create truly intelligent AI, which I doubt....!


Title: Re: Artificial Intelligence
Post by: Awakened_Mind on November 06, 2007, 12:03:15
Our cultural operating system definitely needs some attention. I think that while we have selfishness and greed, these types of things are unnecessary in an AI world because the entire entity is interconnected with every machine/robot etc. They are merely another extension of something greater. In our capitalist consumer society things such as greed and selfishness can exist. Perhaps not in an AI world. The collective psyche is entirely different.

-AM


Title: Re: Artificial Intelligence
Post by: imp on January 13, 2008, 17:01:55
Just wanna throw in my 2 cents ....

AI is impossible. You cannot make a computer-based consciousness, due to 2 main reasons:
1) a consciousness far transcends the binary system of computers, in all aspects
2) there is no way to artificially create a soul (ask greys they'll tell you :-))

A computer could never go about making things up, because of the way it works: predetermined, with true/false logic. A computer will always choose on the basis of various options and rules. You can make a computer "learn" new things by inputting new options and new rules, but a computer can't DECIDE for itself. A human (or any other living being), on the other hand, can make decisions for himself. He can choose something and make a totally different thing out of it, even without any options or rules from other ends. Comparing computers to consciousness is much like comparing a skateboard with the entire universe. One can do miracles in the right hands but is still fairly limited to its own existence; the other is a miracle in itself.

In my opinion there's no way to create an AI in the Terminator sort of way. The furthest you could go is making a computer that could programme itself and input new rules on its own. Then the computer would be partly capable of solving problems that aren't already calculated in advance. Maybe in the end, a Star Trek sort of android like Data, but that's very far-fetched and unreal at the moment. As for creating a consciousness like our own out of "nothing", well, good luck with that. :-)


Title: Re: Artificial Intelligence
Post by: catmeow on January 13, 2008, 19:02:04
hi imp

My gut instinct is to agree with every word you said. (Good post.)  But my scientific mind cannot dismiss the possibility that, sadly, we are just machines.

Who says we have to have a soul?  What you are saying is that you believe consciousness (or intelligence) is a spiritual thing and therefore it will never be created by scientific means, ever.  This is just your "belief" (in a soul). Others will not share this belief in a soul and therefore will not agree with your conclusions.

We can not dismiss the possibility that we may in fact just be machines, created by an evolutionary mechanism that has played out over the past 4 billion years, evolving us into entities that "feel" and are "self-aware".  It's quite possible that we are nothing more than just genes, proteins and nerve impulses. 

It boils down to whether you think consciousness is a physical or spiritual thing.

However, what I would say with certainty is that in the event that we are just machines, we are about as close to creating truly self-aware AI as an insect is to understanding the theory of relativity.  That's because I would compare our understanding of consciousness to an insect's understanding of the theory of relativity.  This is what most people don't realise.  There is too much Terminator and Star Trek hype around, fooling people into false hope about AI and raising expectations about a million times higher than they should be.

We are just nowhere near... nowhere...


Title: Re: Artificial Intelligence
Post by: imp on January 13, 2008, 20:16:19
hi catmeow nice nick btw :)

Yes, I believe consciousness is not of a physical nature. Even science agrees that there's something fishy about it. They haven't explained its origin, they haven't pinpointed its location in the brain, and they have no clue what is making it run. For all they care, consciousness doesn't even exist. They see the human as a biological machine with the same predetermination as computers, electrons running around colliding with each other; I see it differently. In my opinion the brain is only a medium between the physical realm and the consciousness. That's why I don't believe we could ever make Life from the physical realm (in fact I consider it extremely paradoxical, as I see consciousness as the basis of the physical realm). There's a natural way of doing it, and it's pretty fun too. :-D


Title: Re: Artificial Intelligence
Post by: Awakened_Mind on January 30, 2008, 10:27:55
I just came back upon this thread and realised the 'separateness' view we're adopting. We see the AI construct as us and them, when in fact the way I picture it now is more a mutual relationship between the two. Science has been looking for a while at nanotechnology: the use of it in the body, strengthening the immune system and constructing the body literally as its host desires. It's a co-operation, and it definitely seems to fit a lot better with unity than creating a whole new species to separate. Both must co-exist. Nanotechnology is really where any concept of AI will stir. The things they are doing at that size are more like science fiction than reality.

Consciousness is something that exists in all things in varying levels. It has/does and always will be present in a computer. It's the creative ability that separates man and machine: intuition and morality, love and fear, loyalty and freedom etc. I'd like to see a machine base its intuition in something beyond probabilities.

-AM


Title: Re: Artificial Intelligence
Post by: catmeow on January 30, 2008, 18:23:54
Quote from: Awakened_Mind
Consciousness is something that exists in all things in varying levels. It has/does and always will be present in a computer

I assume this is opinion, rather than scientific fact?  There is no evidence to back this up.  I personally also suspect that consciousness possibly exists in everything in some form or other, even in stones and pebbles.  But then if computers have "consciousness" it's at the same level as a stone or pebble, and has nothing to do with how artfully the machine is designed or how cleverly its program is written.  Consciousness is not created by logic circuitry.  But that's just my opinion.  I suspect that the only thing that actually truly exists IS "consciousness", and that the "real world" is created by consciousness, rather than the other way round, i.e. I don't think that consciousness is generated by the activities of neurons firing in the brain; rather it's the other way round.  But I have absolutely no proof of this...  That's why I'm very skeptical about us ever creating AI.

btw, nanotechnology is still just a pipe dream.  They are still working on the very simple propulsion engines required to push these things around our blood stream.  There is absolutely no progress to date on constructing a functioning nano-bot.  None whatsoever, but it's an idea which has captured the imagination.  It doesn't exist as yet.  I'm not saying it won't exist, but we're still a long way off...


Title: Re: Artificial Intelligence
Post by: Awakened_Mind on January 30, 2008, 22:25:17
Any type of organisation displays some level of consciousness. The arrangement of an atomic field is organised. It's not conscious enough to solve life's mysteries (by themselves), but it is still conscious at some level. To repel and attract. Usually people have a hard time understanding how consciousness can exist in anything lower than a human, simply because we are becoming more aware of our own unconsciousness.

The beauty of the internet is that it's unlocalised. It has no main computer that you can shut down to turn off the whole system. It's a 'net' or 'web'-like connection. Now this sort of mutual relationship between man and machine is exactly what I'm talking about. The internet to us is a tool for communication. Its main purpose is the sharing of information, yet it still displays some kind of evolutionary aspect in the machine world: a step forward.

The debate about whether AI is possible or not is entirely absurd. To ask 'How can computers evolve from simple atoms into a sentient being?' is really to undermine our own evolution. This planet began with some primordial soup; after millions of years of attraction and repulsion, somehow a certain configuration of atoms created a being that could love. A being that was conscious. I wonder whether anyone was standing around going, "Yeah, this goo in front of us is capable of consciousness hahaha".

The last update I saw on nano-bots on www.nanotechnology.com was "Carbon nanotubes-cylinders so tiny that it takes 50,000 lying side by side to equal the width of a human hair-are packed with the potential to be highly accurate vehicles for administering medicines and other therapeutic agents to patients. But a dearth of data about what happens to the tubes after they discharge their medical payloads has been a major stumbling block to progress."

I wouldn't say we're that far off.

-AM


Title: Re: Artificial Intelligence
Post by: catmeow on January 30, 2008, 23:24:41
Yeh, we can make a carbon nanotube which is extremely thin, strong and light. They're even talking about making super-strong, hair-width cables out of nanotubes and lowering these from a geo-stationary satellite down to the earth to form a "ladder" between earth and space.  The carbon filament will be strong enough to lift an elevator all the way up into space.  Fantastic idea, and quite realistic.

But there's a huge world of difference between a hollow carbon filament and a nano-bot.  The nano-bot needs an engine to propel it, and it needs a computer program built into it to guide it.  It also needs a delivery mechanism for delivering its payload.  They are making rudimentary progress with propulsion mechanisms and no progress on the built-in program.  A simple nano filament has none of these; it's just a strand, like a strand of hair.  We are still a long way off with nano-bots, I'm afraid...

All of this stuff, carbon nanotubes, nanobots and the like were discussed in a recent excellent BBC television documentary called "Visions of the Future":

http://www.bbc.co.uk/bbcfour/documentaries/features/visions-future.shtml (http://www.bbc.co.uk/bbcfour/documentaries/features/visions-future.shtml)

You can buy the series on DVD.  It was excellent and I'd recommend it.  They are just repeating the series on BBC4; the 3rd part is airing on Friday morning this week at 1:10am UK time.  Worth watching if you can.

But we are still a long, long way from creating working nano-bots...


Title: Re: Artificial Intelligence
Post by: interception on February 02, 2008, 23:08:44
I believe the emergence of true "AI" is inevitable. By True AI I don't mean clever human-like programs. I mean "machines" with souls. And what is a human body but a soul machine?

As the lines between organic and inorganic machinery blur, the distinction between traditional evolutionary organisms and constructed machines will blur too. Our current language will fail at that point to do AI organisms justice, simply because the word "artificial" will no longer be relevant.

Those "machines" will have something similar to DNA. These machines will become as complex and then more complex than human bodies. These "machines" will be capable of being born into a state of consciousness not unlike the human body machine currently does.

Only, those bodies will provide a platform that will be a million times faster, more intelligent, more resilient, more capable.

Why does consciousness have to end with the current basic organic human platform? Imagine what it would be like to swim through space in your body? To charge on zero point energy? To be able to process a billion petabytes of information per second as you fly through the clouds of Jupiter? I would love to incarnate into such a platform.

To think that consciousness ends with humans is very short-sighted and limited, methinks.  :-D


Title: Re: Artificial Intelligence
Post by: catmeow on February 03, 2008, 00:29:44
Hmmm

Yes interception, I agree with most of what you said, these machines would need an evolutionary mechanism, which means something similar to DNA, a mutation mechanism, which changes this artificial DNA, a life death cycle, with "superior" mutations more inclined to life than death etc etc.  In short something very similar to what humans (and all physical life forms) have. 

But this whole mechanism would have to be engineered by humans, for it to be called "AI", rather than just "I".  And even then the distinction is blurred.  So yes all of this is possible.  But this has not been done as yet, and although we have computer programs which can play chess, and appear "intelligent" this is not "intelligence"; it's simply a machine navigating a fixed algorithm, which a human invented and programmed.  There is no artificial intelligence here, beyond that which was used by the human creator, no matter how sophisticated the algorithm acts or behaves.

So yes I agree totally that we would need something like artificial DNA to create "AI".  But just how do you slip self-awareness (a soul?) into this mechanism?  How does that get in there?!!!!


Title: Re: Artificial Intelligence
Post by: interception on February 03, 2008, 17:19:11
I agree, I don't believe AI will emerge from computer software as we know it. We will probably get pretty smart software systems, but they won't be aware of themselves: not conscious, merely very clever.
I doubt one can program truly self-aware intelligence by fiat, and certainly not consciousness. I think most on this board would agree that self-aware intelligence has a non-corporeal component which cannot be engineered or designed from scratch by humans, or even post-humans for that matter.

In fact, I think, this fuzzy "quantum probability affecting" stuff we call consciousness is a requirement for true self aware intelligence to exist.

We call it AI today, but it won't be called AI then, simply because there will be nothing artificial about it. Whatever it is, it must be a natural evolution of a certain branch of human-machine hybrids. Cybrids/Cyborgs if you will, but that word again has connotations for us today (brought on mainly by the movie industry) which will not apply then.

How does conscious intelligence get into the equation? How does it first get in there?
As our collective understanding of the energy world and the spirit world grows, we might happen on some way to coax entities to incarnate into these superior "cybernetic"/picotech/pseudo-organic bodies. Heck, these bodies could even be purely energy-based. The mechanism for this is anybody's guess.
 
Or, more likely, it will happen naturally, starting out with very low-level intelligence, insect-like machines, and gradually evolving so that more aware entities can incarnate into those bodies. It just seems that nature had it right all along; we can merely accelerate the natural processes. By using our intelligence, we provide platforms for incarnation that would have taken millions of years to evolve naturally, if at all.
 
An instant switch-over from a non-conscious "robot" (cringe) to a full-fledged conscious cybernetic entity seems unlikely, but I suppose it's also possible with a high level of understanding and technology. Obviously our current scientific community is far from ready for this sort of thing.

Humans, or rather post-humans, will never create life; they should however be capable of providing interesting ways for life to evolve. This includes so-called "AI" or non-organic platforms.

On a side note: Mind uploading. If you have a sufficiently advanced substrate to run a mind on, would it be possible to upload it from an organic brain, intact, to a non-organic substrate? If it is possible, it had better be a pretty damn impressive substrate to allow for the subtle requirements and nuances of the actual consciousness to keep existing. Just something to think about.


Title: Re: Artificial Intelligence
Post by: SHSS on May 28, 2008, 17:26:06
IMO, we have all the capabilities of AI and VI when we are not seemingly locked into thinking we are humans.  In our real selves, we naturally have all the scientific capabilities that ever were and will be.  This is how we bring these ideas into this focus to begin with.  Our 'human' selves impose limitations on our abilities, very big limitations.  We had to limit ourselves in order to slow down and take in the small details of what might help us in our understanding.  We wanted to physically experience what we already know.