Artificial Intelligence


Mez

humans are programmable and reprogrammable...

catmeow

AI is a complicated subject.

It's quite possible that "I" am the only self-aware being on this planet and that all other humans are NOT self-aware at all, but are just bio-mechanical machines with no sense of self, bio-machines which happen to look and behave as if they are "intelligent" and "self-aware", in the same way that "I" am.  How could I tell?  Common sense tells me that all other humans are self-aware in the way that I am.  So for the sake of common sense and argument I am going to assume that this is the case, ie all humans are "self-aware" just as I am.

So assuming that we are all self-aware, the next problem is this:  can we design and construct a machine which looks and behaves exactly as real humans do?

There is actually a prize for doing something very similar to this called the Loebner Prize:

http://www.loebner.net/Prizef/loebner-prize.html
http://en.wikipedia.org/wiki/Loebner_prize

The Loebner prize is based on the Turing Test:

http://en.wikipedia.org/wiki/Turing_test

To pass the Turing Test, we must create a computer program whose language responses, when it is questioned, would be indistinguishable from those of a real human being.  So far no one has actually written such a program.
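
For the curious, here's a minimal Python sketch of the protocol itself. Everything here is invented for illustration (the canned "machine" is nowhere near a real entrant); it just shows the shape of the test: a judge questions two hidden respondents and must guess which is the program.

Code:
import random

def machine_reply(question):
    # A real entrant would need far more than canned answers.
    canned = {
        "what is your name?": "My name is Alice.",
        "do you like poetry?": "Yes, especially Keats.",
    }
    return canned.get(question.lower(), "I'd rather not say.")

def human_reply(question):
    # Stand-in for the real human respondent.
    return "Hmm, let me think about that..."

def run_trial(questions):
    # The judge questions two hidden respondents, A and B.
    respondents = {"A": machine_reply, "B": human_reply}
    if random.random() < 0.5:  # shuffle so the labels give nothing away
        respondents = {"A": human_reply, "B": machine_reply}
    for q in questions:
        for label, reply in respondents.items():
            print(f"Judge -> {label}: {q}")
            print(f"{label} -> Judge: {reply(q)}")
    # The program "passes" if, over many trials, judges identify
    # the machine no better than chance.

run_trial(["What is your name?", "Do you like poetry?"])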

Let's assume that one day we actually manage to create a machine which looks and behaves exactly as real humans do.  Would such a machine be self-aware?  Well, the true answer is that we have absolutely no way of telling.  That's because the machine's behaviour, reasoning power, emotional responses etc would be indistinguishable from ours.  So how could we tell if it was self-aware?  We couldn't.

Or possibly there is a way.  This boils down to the "easy" and "hard" problems of consciousness. The "easy" problems are to do with neurological function, while the "hard" problem is about explaining self-awareness.  Neuroscience may one day explain all the neurological functions of the brain (the easy problems).

But the "hard" problem, is to explain why all of this neurological processing is encumbered by our own AWARENESS of it, ie by our emotions and our "experience".  Emotions, awareness and experience are not necessary and yet they are present.  This is the "hard" problem.  It's a really big problem:

http://consc.net/papers/facing.html
http://en.wikipedia.org/wiki/Hard_problem_of_consciousness

If we are able to solve the "hard" problem then presumably we could apply the solution to the design of a "robot" and thus make the robot truly self-aware.

So although we may one day be able to build a machine which is indistinguishable from a real human, to imbue that machine with "self awareness" we need to solve the "hard" problem of consciousness.

As a final note, I think AI poses no threat whatsoever to world religions unless we solve the "hard" problem.  A machine which looks and responds like a human but isn't self-aware isn't "alive" (IMO) and poses no religious conundrums.  However, if we solve the "hard" problem and create a self-aware machine, then this truly is a religious watershed, and I think it would cause massive ructions amongst world religions.

Personally, I doubt it will happen.

Interestingly, Dr Craig Venter (the first person to sequence the human genome) is currently attempting to create artificial life by manufacturing artificial genomes and "booting" them into life.  I have no idea how he intends to do this, but he definitely is a clever guy.  His recent work has caused quite a stir in the media.

http://en.wikipedia.org/wiki/Craig_Venter
http://en.wikipedia.org/wiki/Celera_Genomics
The bad news is there's no key to the Universe. The good news is it's not locked. - Swami Beyondananda

Awakened_Mind

Why are there so many 'Sharpe's'? Are multiple user accounts permitted? It's beginning to confuse me.

Secondly, the AI I am talking about IS self-aware: it has the capacity for metacognition, the ability to self-reflect. It's difficult to say that another entity can establish a consciousness when we find it so difficult to understand the nature of our own consciousness to begin with. In fact the real ambiguity here seems to be the nature of consciousness itself. I see it as independent and free, whereas others see it as clever programming.

-AM
Truth exists beyond the dimension of thought.

catmeow

Quote from: Awakened_Mind
It's difficult to say that another entity can establish a consciousness when we find it so difficult to understand the nature of our own consciousness to begin with. In fact the real ambiguity here seems to be the nature of consciousness itself. I see it as independent and free, whereas others see it as clever programming.

Yes, we must establish our terms.  I agree, and that's exactly what I tried to say: there are different types of AI.  There is the non self-aware type, and there is the self-aware type.

The non self-aware type is basically a computer program which might be able to pass the Turing test.

The self-aware type is a whole different ball-game completely.  I believe that to create the self-aware type we need to solve the "hard" problem of consciousness and apply that solution to the design of an artificial intelligence.

The non self-aware type poses no religious conundrums and cannot be considered "alive".

The self-aware type poses big religious issues, since it might be considered to be "alive".
The bad news is there's no key to the Universe. The good news is it's not locked. - Swami Beyondananda

Awakened_Mind

OK, we've found common ground here :-) There seem to be sub-categories within our definitions.

1. AI is non-aware or is self-aware.

2. AI is programmed by humans, or AI suddenly comes into existence in the chaotic medium of the internet. The latter really needs you to somehow fit AI into the notion of evolution.

Here's a paradox: what if self-aware AI is somehow involved in the understanding of the 'hard' problem of consciousness?

-AM

Truth exists beyond the dimension of thought.

catmeow

#30
I agree with you AM.

However, I don't see a paradox in self-aware AI pondering the "hard" problem of consciousness, although it's an interesting notion!

I think that once AI becomes self-aware, whether by evolution or design, it cannot be distinguished from I (= Intelligence), and therefore there is no paradox in it pondering the "hard" problem of consciousness.

So now we seem to have the concepts of

1. Non self-aware AI
2. Self-aware AI (by design)
3. Self aware AI (by evolution)

Of course the fact that No 2 was "artificially" created, by "direct intervention", means it may deserve the moniker of AI rather than I.  But I would say that No 3 should really be called I.

We do not as yet know how "we" became self-aware. Were we created (by God)?  Did we evolve?  Or was there some sort of direct intervention (by aliens), in which case who created the aliens?

It's a puzzle.  Perhaps we are AI and we will eventually create new forms of AI!
The bad news is there's no key to the Universe. The good news is it's not locked. - Swami Beyondananda

Novice

No AM, multiple usernames are not permitted. Those posts and usernames have been removed.
Reality is what you perceive it to be.

Awakened_Mind

Amazing how our own origins are still such a mystery to us! Would you group 1 and 2 together?

Another thing to take into account, I suppose, is that as an evolving consciousness we seem to be moving toward a unified view of everything. All is one. This idea hasn't really cemented itself yet, but where does AI fit in with that notion?

-AM
Truth exists beyond the dimension of thought.

catmeow

#33
Quote from: Awakened_Mind
Amazing how our own origins are still such a mystery to us! Would you group 1 and 2 together?

Actually I pondered this after I made the post.  I had assumed that No 1 would always be "by design", so on that basis 1 and 2 are similar.  But thinking about nature, I believe there are many examples of lower life (insects etc) which seem to behave intelligently but may not be self-aware.  So I think the list should be extended:

1. Non self-aware AI (by design) ie computer programs
2. Non self-aware AI (by evolution) ie insects, lower animals
3. Self-aware AI (by design) ie possibly quantum brains (see Penrose & Hameroff, Manousakis)
4. Self aware AI (by evolution) humans and higher animals (?)

So it gets more complicated!

So AM to answer your question, I would group 1 and 3 together and 2 and 4 together!  :-)

Quote from: Awakened_Mind
Another thing I suppose to take into account is in an evolving consciousness we seem to be moivng toward a unified view of everything. All is one. This idea hasn't really yet cemented itself but where does AI fit in with that notion?

Well I'm sure that many would argue with you on the concept of "All is One".  It is certainly the theme of mystical experiences.  I think it was first formalised by Abraham Maslow in the concept of the "Peak Experience":

http://en.wikipedia.org/wiki/Peak_experiences

Personally I gravitate towards the idea that we are all interconnected in some way, and that we all really share consciousness at some level.  Creating self-aware AI then, might need a step which involves dipping into this "collective consciousness", in order to pull back a part of it.   I really don't know, it's a difficult issue, bordering on the religious.

So far, for the sake of argument,  I have made two assumptions

1. Mind is generated by physical activity in the brain
2. Self-awareness exists

Neither of these two assumptions is a "given" so there is A LOT of room for further debate!

I suspect that No 1 is actually wrong (there is plenty of evidence), but I think No 2 is self-evident.  Eliminative materialists would argue with me about No 2:

http://en.wikipedia.org/wiki/Eliminative_materialism

Off Topic:

Animals and Emotions

Controversially, I have included "higher animals" in the self-aware list.  It's not clear whether animals are self-aware or not, but my personal belief is that some animals (apes, dolphins etc) are self-aware.  I'm not sure how we would tell, but animals appear to experience emotions, and "experience of emotions" is part of the "hard" problem of consciousness.  There are many examples of animals displaying compassion, empathy, joy, spite, grief, love, gratitude, awe etc:

http://cogscilibrarian.blogspot.com/2007/06/do-animals-feel.html

Anthropomorphophobia is the (somewhat arrogant) rebuttal of this idea:

http://en.wikipedia.org/wiki/Anthropomorphism

Quantum Consciousness

One of the first models was given by Roger Penrose and Stuart Hameroff:

http://en.wikipedia.org/wiki/Quantum_mind

More recently Efstratios Manousakis proposed a different model:

http://www.newscientist.com/article/mg19626264.000-could-quantum-effects-explain-consciousness.html
The bad news is there's no key to the Universe. The good news is it's not locked. - Swami Beyondananda

iNNERvOYAGER

#34
Quote from: Awakened_Mind on October 23, 2007, 02:13:00
It's not made by us.  It originates from programming itself.
(This is really a reply to all the posts about programming.)


I believe that the AI will begin autonomous behavior AFTER we teach or train it. The way we train a dog.

If you want to call dog training "dog programming", a music lesson "music programming", or a baby learning to walk "walk programming", then so be it; it's just semantics.

The seed AI WILL have to start with a basic survival-instinct program logic to get the learning process in gear. The basis of a survival machine. (That's what we are: very successful survival machines.)

A VERY careful and lengthy study of how to stimulate and guide the way our AI learns to survive is critical, with no room for error, as the result of making a mistake could mean the extinction of hominids.

Some obvious problems will be, for example, how to teach it to protect itself from threats while at the same time ensuring that we always have the capability to provide rewards that the AI recognizes as being essential to fulfilling its basic survival logic.
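
To make the reward idea concrete, here's a toy Python sketch of reward-driven training (the actions, numbers and reward scheme are all invented; this is a bare-bones learning loop, not any real system). The "survival logic" is simply: prefer whichever action has earned the most reward, so the trainer shapes behavior by choosing what to reward, dog-training style.

Code:
import random

ACTIONS = ["approach_human", "guard_power_supply", "ignore_human"]
value = {a: 0.0 for a in ACTIONS}   # learned estimate of each action's worth
LEARNING_RATE = 0.1
EXPLORATION = 0.2

def trainer_reward(action):
    # The human trainer rewards cooperative behavior.
    return 1.0 if action == "approach_human" else -0.5

for step in range(1000):
    # Occasionally explore, otherwise exploit the best-known action.
    if random.random() < EXPLORATION:
        action = random.choice(ACTIONS)
    else:
        action = max(value, key=value.get)
    reward = trainer_reward(action)
    # Nudge the estimate toward the reward just received.
    value[action] += LEARNING_RATE * (reward - value[action])

print(value)   # "approach_human" ends up valued highest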

AI is on the road and I agree that it's a matter of when, and not if.

The next topic of importance IMO is the establishment of robotic laws.

Aside from the classic Asimov laws presented in I, Robot, I suggest a brainstorming session on variations of robotic laws that protect organic hominids.

1) I (AI) will survive
2) I (AI) can NOT survive without rewards from organic hominid
3) Survival of Hominid = survival of AI

Also, with the advances in gaming technology, the AI will be tested in simulation environments with the help of dedicated AI processing units, and with the development of a primordial soup of AI library functions that the AI can feed on and use to solve the survival problem.

Awakened_Mind

Nice post.

AI is definitely being tested in gameplay. I know the military is using AI in games that can simulate real combat situations. A friend of mine actually had a computer bought for him by the US military to test some of these games/programs. Again, though, this is not really something we can say is AI, because it cannot express itself linguistically, which is really what we are looking for.

Some people say we need to program AI from the beginning, before it starts to think for itself, which is the alternative to it coming out of nowhere. I'll run with the former because it seems to be the more popular idea. I don't think I'd fit anything into the category of AI until it began to program itself. Whether that happens from the beginning or after the initial laws are written, to me there is no AI until it starts to think or program itself. Otherwise I can't define it as an intelligence.

Secondly, the laws are problematic. You'll notice in the movie 'I, Robot' there are only two robots who are perceived as having any intelligence: VIKI (the main 'evil' robot) and Sonny (the 'human'-shaped robot; 'good'). You'll notice that when the robots evolve with laws as barriers, they evolve only within those barriers. For example, VIKI says "Despite our best efforts you still commit crime, pollute your planet and wage war on each other. You need to be protected from yourselves for your own survival." Something along those lines, anyway. This seems like a very logical outcome, because it's true: we are a self-destructive species with good intentions. You can set certain laws, but they still reside within the AI's own intelligence, so they will evolve and become modified within the AI.

Sonny has the ability to disregard the laws altogether, so he can do "whatever he wants". He may be made out of different physical components, but spiritually he displays human tendencies; the main point here being: no barriers. The only difference between Sonny, VIKI and humans, other than the physical, is that VIKI has barriers. This raises an interesting notion for me: maybe AI is more an idea in our own psyche aimed at removing barriers. Or in my own?

Bottom line is, ideas of set laws and protecting ourselves from robots need to be re-examined. We should be gunning for understanding, not fear and control.

-AM
Truth exists beyond the dimension of thought.

catmeow

Quote from: iNNERvOYAGER
I believe that the AI will begin autonomous behavior AFTER we teach or train it. The way we train a dog.

Yes that makes sense.

Quote from: iNNERvOYAGER
If you want to call dog training "dog programming", a music lesson "music programming", or a baby learning to walk "walk programming", then so be it; it's just semantics.

I think it's important to be semantically correct, and I don't agree that "training" is the same as "programming".  Programming and training are two quite different things IMO, although that distinction became popularly blurred when people started talking about NLP etc.  NLP should really be called NLT.

It's not just a matter of semantics.  Programming is the initial set of rules placed into the machine.  Training is simply the practice of running those rules.  Training is what allows the machine to acquire "knowledge".  This knowledge modifies the machine's behaviour (if the programming is good enough) and gives the machine a better chance of survival.  So a good initial program will mean a better chance of survival.

The difference between humans and machines is that humans have a life/death cycle and an evolutionary mechanism (mutation).  The evolutionary mechanism (mutation) is what allows the "program" to change, not training.  A mutated program which is an improvement over its predecessor will tend to survive and overtake less adept programs.  It's also possible that at some point the mutation mechanism created self-awareness.  But none of this happened by "training".  It all happened by mutation.  It's mutation which changed the program, not training.

Programming = initial set of rules
Training = using the rules to survive

This is the way that AI has evolved in humans, I suggest.  It's also the way it would have to evolve in machines.  So in order to build a truly adaptive AI machine, we would need to give it a life/death cycle, the ability to reproduce itself, and an evolutionary mechanism.
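
As a rough illustration of that life/death/mutation loop, here's a toy genetic algorithm in Python. Every name and number is invented (each "program" is just a bit string, and the fitness function is a stand-in for "chance of survival"); the point is only that the program changes by mutation across generations, never by training within one lifetime.

Code:
import random

GENOME_LEN = 20
POP_SIZE = 50
MUTATION_RATE = 0.01

def fitness(genome):
    # Toy stand-in for "adaptedness": more 1s is better.
    return sum(genome)

def reproduce(parent):
    # Copy the parent's "program", occasionally flipping a bit.
    return [(1 - g) if random.random() < MUTATION_RATE else g
            for g in parent]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for generation in range(100):
    # Death: the less adapted half of the population is removed.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # Reproduction: survivors pass on their (possibly mutated) program.
    population = survivors + [reproduce(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print(max(fitness(g) for g in population))  # climbs over the generations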

If we wish to avoid having to do this, then we must create a truly AI program from the start, and not one which becomes AI all by itself.  As I suggested in a previous post, to do this we should first solve the "hard" problem of consciousness.  The advantage of "getting it right" from the start is that we could then program in Asimov's rules, with some confidence that the robot will not evolve out of this programming... and kill us all!
The bad news is there's no key to the Universe. The good news is it's not locked. - Swami Beyondananda

Awakened_Mind

Training and programming are two separate things. The latter is immediate.

Most of these responses are fear-driven. Why is it definite that AI competes with us for survival? Why would it warrant the extermination of the human race? Is this the only way it can fit into human society?

I'm looking more to the contrary. It's much more pleasant than killer robots!

-AM
Truth exists beyond the dimension of thought.

catmeow

Quote from: Awakened_Mind
Most of these responses are fear-driven. Why is it definite that AI competes with us for survival? Why would it warrant the extermination of the human race? Is this the only way it can fit into human society?

I was joking about the "kill us all" bit!  I don't see why AI should necessarily compete with us.

Quote from: Awakened_Mind
Training and programming are two separate things.

I agree of course.

The "program" is the genetic component passed down from parent to sibling, in the genome.  Hence the "program" can evolve and improve from generation to generation.

Training is an isolated modification to the neural behaviour of the brain, possibly involving new neural connections.  But training does not modify the genome, and therefore it cannot be passed from one generation to the next.  It's therefore temporary, and does not contribute to evolution.

There's the difference. (IMO)
The bad news is there's no key to the Universe. The good news is it's not locked. - Swami Beyondananda

iNNERvOYAGER

Right, the term I neglected was learning. The competent AI should be able to learn and adapt to a changing environment, and to react to satisfy immediate needs.

You could say that DNA and the genome are the hardwired program, part of the BIOS so to speak. When the system is activated (life), it is capable of acting upon automatic programmed responses, and also upon dynamic software running from a framework of logic, using mathematical tools to solve real-world problems; that dynamic part could be viewed as learning behavior.

All basic survival skills can be preloaded and could be called "talents" to give the thing a better chance during the initial phase.

The very clever AI would act upon a treasury of human knowledge provided as a data bank, realize that mutation leads to desired traits and evolution, run multiple simulations of mutation combinations projected into all conceivable circumstances, and then pick and choose its own destiny.

The time it would take to process and render this evolutionary destiny would be limited by the power of the AI's processing units and the length of time given to evolve.

You could stall the AI by telling it to evolve itself for 2 million years; it would just stop and sit there, completely saturated with the task.
At some point the AI realizes that it needs more powerful processors to solve the task, designs better hardware and more efficient software, modifies itself, and continues to project its evolutionary path.

The Law of Robotics can simply be, "Please don't hurt the humans"   :-D

Awakened_Mind

Here's a game of chess people may be aware of, between a human and a machine: http://en.wikipedia.org/wiki/IBM_Deep_Blue

It's true that computers analyse every alternative and then select the best available option when making decisions. Humans, on the other hand, tend to be 'cognitive misers': especially if there are thousands of possible courses of action, we just 'go with our gut'. It's a lot quicker than analysing everything. Machines have the advantage of being able to do this very rapidly, although there was some controversy with the game.
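
For anyone curious what "analyse every alternative and pick the best" looks like in practice, here's a toy minimax search in Python, the brute-force style of search chess programs like Deep Blue are built on (vastly simplified; the "game" here is invented purely so the sketch runs):

Code:
def minimax(state, depth, maximizing):
    if depth == 0 or not moves(state):
        return evaluate(state)
    results = [minimax(apply_move(state, m), depth - 1, not maximizing)
               for m in moves(state)]
    # Assume both sides always play their best option.
    return max(results) if maximizing else min(results)

# Minimal stand-ins: a "game" where each move adds to a running score.
def moves(state):
    return [+1, -1] if abs(state) < 3 else []

def apply_move(state, move):
    return state + move

def evaluate(state):
    return state

best = max(moves(0), key=lambda m: minimax(apply_move(0, m), 3, False))
print(best)   # the move with the best guaranteed outcome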

Why are you so inclined to think AI will be hostile?

-AM
Truth exists beyond the dimension of thought.

Mez

Well, man's the most hostile thing on earth... if man makes a robot that can think for itself, it could easily become the most hostile thing on earth.

catmeow

Well, although I said I was joking about the robots killing us... I'm not so sure. It seems to be the nature of things that the dominant species on the planet subordinates all the others.  Humans rule the planet and farm other animals for food.  Not a good advert, are we?  If AI ever became dominant, perhaps it would behave the same way: subordinate us, and use us for labour and food?

It was John Nash ("A Beautiful Mind"), the mathematician and Nobel Prize winner, who devised the principle of the "Nash Equilibrium" in game theory.  We are currently in a state of Nash Equilibrium with Russia.  It is this principle which has kept us from mutual nuclear annihilation, and it means that neither the US nor Russia can unilaterally disarm.  The same principle, I assume, would apply to humans and robots....?
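
A quick Python sketch shows why such an equilibrium "locks in" (the 2x2 arms-race game and its payoff numbers are invented for illustration). A pair of choices is a Nash Equilibrium when neither side can do better by switching alone, which is exactly why unilateral disarmament never happens even though mutual disarmament pays better:

Code:
CHOICES = ["ARM", "DISARM"]

# payoffs[(a, b)] = (payoff to side A, payoff to side B)
payoffs = {
    ("ARM", "ARM"):       (1, 1),   # costly standoff
    ("ARM", "DISARM"):    (4, 0),   # the armed side dominates
    ("DISARM", "ARM"):    (0, 4),
    ("DISARM", "DISARM"): (3, 3),   # best overall, but unstable
}

def is_nash(a, b):
    # Neither player can gain by unilaterally switching choice.
    best_a = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0] for alt in CHOICES)
    best_b = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1] for alt in CHOICES)
    return best_a and best_b

for a in CHOICES:
    for b in CHOICES:
        if is_nash(a, b):
            print(a, b)   # prints: ARM ARM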
The bad news is there's no key to the Universe. The good news is it's not locked. - Swami Beyondananda

Awakened_Mind

Well it's difficult to make the statement that every 'higher order' species subordinates the others, simply because the only example of it we have is ourselves. Not to mention you don't see dolphins terrorising and trying to control krill, or gorillas roughing up snakes in the jungle.

The ideas of control and power stem from the ego. In a society where the main institutions (religion, politics and science) are male-dominated, a masculine skew isn't overly surprising.

Now with AI, if we only say it's hostile, then we are saying that it has only an ego and id (instinct), with no social regulator or ethical faculty (superego). So our idea of AI ends at a robot sociopath.

-AM
Truth exists beyond the dimension of thought.

catmeow

Quote from: Awakened_Mind
Well it's difficult to make the statement that every 'higher order' species subordinates the others, simply because the only example of it we have is ourselves. Not to mention you don't see dolphins terrorising and trying to control krill, or gorillas roughing up snakes in the jungle.

Yeah, we're a pretty bad example of a higher species.

With intelligence comes the ingenuity required to be effective in subordinating other animals.  We have put in place a huge human lifestyle structure which depends on using other animals, mainly for food and sometimes for entertainment.  It's only because of our sophistication that we are able to do this; dolphins etc lack that sophistication.

Considering  the fact that we have demonstrated this trait (of overpowering lesser species), it would be naive to ignore the possibility that AI might easily attempt to do the same thing with us.

Higher intelligence gives us the ability to explore a wider spectrum of behaviour, both good and bad: selfishness, greed and cruelty, as well as creativity, altruism, art appreciation and so on.  I think it's reasonable to expect AI to have the same range of behaviours.

So we would need to put in safeguards to protect against the negative ones.

That's assuming we ever manage to create truly intelligent AI, which I doubt....!
The bad news is there's no key to the Universe. The good news is it's not locked. - Swami Beyondananda

Awakened_Mind

Our cultural operating system definitely needs some attention. I think that while we have selfishness and greed, these things would be unnecessary in an AI world, because the entire entity is interconnected with every machine/robot etc; each is merely another extension of something greater. In our capitalist consumer society things such as greed and selfishness can exist; perhaps not in an AI world. The collective psyche is entirely different.

-AM
Truth exists beyond the dimension of thought.

imp

Just wanna throw in my 2 cents ....

AI is impossible. You cannot make a computer-based consciousness, for 2 main reasons:
1) a consciousness far transcends the binary system of computers, in all aspects
2) there is no way to artificially create a soul (ask the greys, they'll tell you :-))

A computer could never go about making things up, because of the way it works: predetermined, with true/false logic. A computer will always choose on the basis of various options and rules. You can make a computer "learn" new things by inputting new options and new rules, but a computer can't DECIDE for itself. A human (or any other living being), on the other hand, can make decisions for himself. He can choose something and make a totally different thing out of it, even without any options or rules supplied from outside. Comparing computers to consciousness is much like comparing a skateboard with the entire universe. One can do miracles in the right hands but is still fairly limited to its own existence, and the other is a miracle in itself.

In my opinion there's no way to create an AI in the Terminator sort of way. The furthest you could go is making a computer that could program itself and input new rules on its own. Then the computer would be partly capable of solving problems that aren't already calculated in advance. Maybe in the end you'd get a Star Trek sort of android, like Data, but that's very far-fetched and unreal at the moment. As for creating a consciousness like our own out of "nothing", well, good luck with that. :-)
"All paths are the same, leading nowhere. Therefore, pick a path with heart!" -Carlos Castaneda

catmeow

#47
hi imp

My gut instinct is to agree with every word you said (good post).  But my scientific mind cannot dismiss the possibility that, sadly, we are just machines.

Who says we have to have a soul?  What you are saying is that you believe consciousness (or intelligence) is a spiritual thing and therefore will never be created by scientific means, ever.  But this is just your "belief" (in a soul). Others will not share this belief and therefore will not agree with your conclusions.

We cannot dismiss the possibility that we may in fact just be machines, created by an evolutionary mechanism that has played out over the past 4 billion years, evolving us into entities that "feel" and are "self-aware".  It's quite possible that we are nothing more than genes, proteins and nerve impulses.

It boils down to whether you think consciousness is a physical or spiritual thing.

However, what I would say with certainty is that, in the event that we are just machines, we are about as close to creating truly self-aware AI as an insect is to understanding the theory of relativity.  That's because I would compare our understanding of consciousness to an insect's understanding of the theory of relativity.  This is what most people don't realise.  There is too much Terminator and Star Trek hype around, fooling people into false hope about AI and raising expectations about a million times higher than they should be.

We are just nowhere near...... nowhere......
The bad news is there's no key to the Universe. The good news is it's not locked. - Swami Beyondananda

imp

#48
Hi catmeow, nice nick btw :)

Yes, I believe consciousness is not of a physical nature. Even science agrees that there's something fishy about it: they haven't explained its origin, they haven't pinpointed its location in the brain, and they have no clue what is making it run. For all they care, consciousness doesn't even exist. They see the human as a biological machine with the same predetermination as computers, electrons running around colliding with each other. I see it differently. In my opinion the brain is only a medium between the physical realm and the consciousness. That's why I don't believe we could ever make Life from the physical realm (in fact I consider it extremely paradoxical, as I see consciousness as the basis of the physical realm). There's a natural way of doing it, and it's pretty fun too. :-D
"All paths are the same, leading nowhere. Therefore, pick a path with heart!" -Carlos Castaneda

Awakened_Mind

I just came back to this thread and realised the 'separateness' view we're adopting. We see the AI construct as us and them, when in fact the way I picture it now is more a mutual relationship between the two. Science has been looking for a while at nanotechnology: its use in the body, strengthening the immune system and constructing the body literally as its host desires. It's a co-operation, and it definitely seems to fit a lot better with unity than creating a whole new species to separate. Both must co-exist. Nanotechnology is really where any concept of AI will stir. The things they are doing at that scale are more like science fiction than reality.

Consciousness is something that exists in all things in varying degrees. It has been, is, and always will be present in a computer. It's the creative ability that separates man and machine: intuition and morality, love and fear, loyalty and freedom etc. I'd like to see a machine base its intuition on something beyond probabilities.

-AM
Truth exists beyond the dimension of thought.