Transhumanism, superintelligence and a global change


Blackstream

Quote
That's one of the problems. My programming language is about 100 times slower than a normal one. The ability to modify itself at any time without ever crashing costs a lot.
A factor of 100 isn't really that significant. You could almost overcome that with better existing technology, depending on what you have right now. It's said that, with memory doubling every 18 months, within 30 years we'll have enough memory to match the human brain: about 1048576 (2^20) times more than we've got right now. Once we hit that range, we'll be able to make neural nets with as many neurons as the human brain, and then God knows what will happen. They've already done a lot with just a little (like learning to fly flight simulators).

QuoteNo. They said the event occurs AFTER smarter-than-human AI is created. Humans can make better AI than existing AI, so a smarter-than-human AI can do at least that.
That's not necessarily true. Just because we can still improve upon ourselves doesn't mean a machine can figure out how to do the same. There may be hard limits, for example, on what a computer can actually do. And that's assuming we can make smarter-than-human AIs at all, which is pretty theoretical, since we haven't created anything close to human-level AI yet.

QuoteMine will be random at first, but code can always have the ability to make other code, so it should non-randomly determine the direction of its evolution when it gets smarter.
You'll have problems then if the code-generation part itself can change. I almost guarantee you that, most of the time, the code-generation machinery will get corrupted by random changes and the program will "neuter" itself, unless you have some safeguards built in to prevent that.
There is no spoon

MisterJingo

I have to agree with Blackstream here. My degree was centered on mathematics and AI, my university is at the forefront of AI research in the UK, and my PhD will be in the AI field, so I am reasonably up to date on AI research. Trust me (or look it up yourself): we are nowhere near producing software that could be mistaken for intelligence (for the basics, look up the Turing test). What AI research is really showing us is how amazing the human brain is at natural language processing, pattern recognition, and the other abilities we take for granted. AI has not been able to reproduce these areas even at a child's level so far (and that's just an example; a child has sentience, while a software package is rule-bound at heart, and even NNs are).
Strong AI might actually be an impossibility; it definitely is now and will be for the foreseeable future.

-lines-

The problem with making AI is that you work from scratch, whereas the human brain is created with a set of rules and guidelines built in, the same as how the body is created. Instincts and the basics we are not in charge of are all pre-programmed, so to speak. The basic programming of the human brain has evolved (albeit more slowly than AI would) over many, many generations. It may be simpler to let the thing program itself from the very basics of awareness, starting with an extremely simple brain.

Seeing as the programmers would need to set how the AI's intelligence/brain passes through generations of code and mutates, it would be faster to have them code less of the program before firing it up. It would also give the AI more freedom to change, since it would presumably have fewer constraints to adhere to at the start.

As far as the self-neutering goes, it could be prevented by running multiple AI 'brains' simultaneously and having 'parent' AIs form merged code as a separate entity. The ones that self-sterilize wouldn't affect the rest, of course.
- Skeptical until proven otherwise -

Blackstream

The only real way to get AI to evolve that way, even with multiple AIs, would be some sort of "natural selection", I'd think, where the main program picks the "fittest" of the programs and "mates" them with the other programs, combining their code with the occasional "mutation" (random code). That's basically how genetic algorithms work; in this case, every line of code would be a gene. Of course, then you have to decide how to select said programs. That's easy when you are building programs for a purpose: flying a flight simulator, moving from A to B, finding resources, matching patterns, or whatever. When there is no goal, natural selection is much harder to set up.
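
To make the idea concrete, here is a minimal genetic-algorithm loop sketched in C++. Everything in it (the gene format, the toy fitness function, the rates) is a made-up stand-in for illustration, not anyone's actual project code:

// A minimal genetic-algorithm sketch: each program is a list of "lines of
// code", and each line is a gene.
#include <algorithm>
#include <cstdlib>
#include <ctime>
#include <string>
#include <vector>

typedef std::vector<std::string> Program;    // one gene per line of code

double fitness(const Program &p) {           // stand-in: replace with a real task
    return (double)p.size();
}

std::string randomGene() {                   // stand-in random "line of code"
    return "op" + std::to_string(std::rand() % 100);
}

// One-point crossover: a prefix of one parent joined to a suffix of the other.
Program crossover(const Program &a, const Program &b) {
    Program child(a.begin(), a.begin() + std::rand() % (a.size() + 1));
    child.insert(child.end(), b.begin() + std::rand() % (b.size() + 1), b.end());
    return child;
}

// Mutate each gene with a small, adjustable probability.
void mutate(Program &p, double rate) {
    for (std::size_t i = 0; i < p.size(); ++i)
        if (std::rand() / (double)RAND_MAX < rate)
            p[i] = randomGene();
}

int main() {
    std::srand((unsigned)std::time(0));
    std::vector<Program> pop(50, Program(10, "nop"));
    for (int gen = 0; gen < 100; ++gen) {
        // "Natural selection": sort fittest first, keep the top half,
        // and refill the bottom half by mating random survivors.
        std::sort(pop.begin(), pop.end(),
                  [](const Program &a, const Program &b) {
                      return fitness(a) > fitness(b);
                  });
        for (std::size_t i = pop.size() / 2; i < pop.size(); ++i) {
            pop[i] = crossover(pop[std::rand() % (pop.size() / 2)],
                               pop[std::rand() % (pop.size() / 2)]);
            mutate(pop[i], 0.01);            // the occasional random change
        }
    }
    return 0;
}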

Should you solve the neutering problem and somehow give your programs direction (rather than waiting a thousand years for them to evolve one; we humans didn't need to evolve a direction, survival was our direction, and computers don't really have to worry about that), you'll still probably need about a million times the hard drive space you've got right now before you can get human-level AI.

At any rate, I am curious how things are going right now with your program.
There is no spoon

beavis

BlackStream: It doesn't matter whether it's possible for a computer to be smarter than a human, though I'm sure it is. I said that, theoretically, if it were smarter, it should be able to do the mental tasks humans do, only better. One such task is improving AI. It is AI, so it can improve itself. That is what will happen if smarter-than-human AI is built.

About code "neutering" itself: even if the code tried to mess itself up with bad code, the worst that could happen is that it becomes useless. Anything that could cause a crash is checked for.

MisterJingo: I know how limited AI is, but smarter AI does not have to be derived from existing AI.

-lines-
QuoteIt may be simpler to let the thing program itself from the very basics of awareness, starting with an extremely simple brain. ... it would be faster to have them code less of the program before firing it up. It would also give the AI more freedom to change, since it would presumably have fewer constraints to adhere to at the start.

It's not simpler or faster (not even close), but it's much more flexible, and that's why I do it that way. And it won't start with just "an extremely simple brain": it will start with a few different types of "brains" and other tools pretending to be "brains". They're like parts it can copy and use to build new "brains".

QuoteThe problem with making AI is that you work from scratch, whereas the human brain is created with a set of rules and guidelines built in

Not a problem. Brains evolved their rules from scratch too, and AI being built from scratch is one of the best reasons I want to make AI. Nobody knows exactly how a human brain works, but if I made truly smart AI, we could just look at its code to see how it works... though if it's that smart, its code would probably be as confusing as decoding DNA (a sequence of one of four molecules).

QuoteAs far as the self-neutering goes, it could be prevented by running multiple AI 'brains' simultaneously and having 'parent' AIs form merged code as a separate entity. The ones that self-sterilize wouldn't affect the rest, of course.

Yes, they can be separated and copied, so they can be used independently by other AIs that might modify them.

BlackStream, maybe later I can make some self-observing, brain-like software to define a direction, but currently I define goals this way: I have a function for each goal that feeds the problem to the AI and uses the AI's output however it wants, to do some task. It might do more input and output if it wants more interaction or a more detailed answer. The function returns one number for how well that AI did the task that one time. Then the main program tells the AI its score, and the AI can think about it and try to learn from the experience. I haven't specified what the task is or how the AI should do it; it can be anything, assuming the AI is smart enough to understand it. I also didn't specify how the AI learns. That is specific and internal to each AI, and the main program doesn't need to know about it.
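
As a sketch, that contract between the main program, the goal functions, and the AIs could look something like this in C++. All the names here (Ai, Goal, DoublingGoal, TweakingAi) are hypothetical; beavis's real system is written in his own language and surely differs:

// The main program sees only I/O and a score; the task lives inside the goal,
// and how the AI learns is its own business.
#include <cmath>
#include <cstdio>
#include <vector>

struct Ai {
    virtual std::vector<double> respond(const std::vector<double> &in) = 0;
    virtual void learnFromScore(double s) = 0;  // internal; main never sees how
    virtual ~Ai() {}
};

struct Goal {                                 // one function per goal
    virtual double run(Ai &ai) = 0;           // poses the task, returns 1 number
    virtual ~Goal() {}
};

// Toy goal: ask the AI to double 21; the score is negative error (0 is perfect).
struct DoublingGoal : Goal {
    double run(Ai &ai) {
        std::vector<double> out = ai.respond(std::vector<double>(1, 21.0));
        return -std::fabs(out[0] - 42.0);
    }
};

// Toy AI: multiplies input by k and keeps nudging k while its scores improve.
struct TweakingAi : Ai {
    double k, lastScore;
    TweakingAi() : k(0.0), lastScore(-1e18) {}
    std::vector<double> respond(const std::vector<double> &in) {
        return std::vector<double>(1, k * in[0]);
    }
    void learnFromScore(double s) {
        if (s > lastScore) k += 0.1;          // crude, but internal to the AI
        lastScore = s;
    }
};

int main() {
    TweakingAi ai;
    DoublingGoal goal;
    for (int i = 0; i < 50; ++i) {
        double score = goal.run(ai);          // main knows the task only by score
        ai.learnFromScore(score);
        std::printf("try %d: score %f\n", i, score);
    }
    return 0;
}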

Blackstream

Quote from: beavisBlackStream: It doesn't matter whether it's possible for a computer to be smarter than a human, though I'm sure it is. I said that, theoretically, if it were smarter, it should be able to do the mental tasks humans do, only better. One such task is improving AI. It is AI, so it can improve itself. That is what will happen if smarter-than-human AI is built.
You're still assuming that even if we make smarter-than-human AI, it can be made smarter still. Smarter also doesn't mean more creative. There are computers out there that can hand me my butt in chess, but I'll be damned if they can actually formulate a strategy better than I can; they just happen to be far better at tactics and at evaluating the current position. Likewise, a computer may have more knowledge and technical expertise, yet being creative enough to renovate how AI works and improve on it may still be out of its reach. Or maybe it won't be. All I'm saying is that your statement is built on quite a few assumptions that may or may not be true.


QuoteAbout code "neutering" itself: even if the code tried to mess itself up with bad code, the worst that could happen is that it becomes useless. Anything that could cause a crash is checked for.
I realize that, but I wasn't talking about crashes. I was talking about your program most likely halting its evolution by neutering itself over and over, making progression impossible: every time it kills itself, you have to start over (or find the last usable program). And I have a feeling that if your code-generation process can be randomly modified, the program won't go more than 2-5 iterations before you have to start over, and certainly no more than a hundred or so before something gets corrupted.

QuoteBlackStream, maybe later I can make some self-observing, brain-like software to define a direction, but currently I define goals this way: I have a function for each goal that feeds the problem to the AI and uses the AI's output however it wants, to do some task. It might do more input and output if it wants more interaction or a more detailed answer. The function returns one number for how well that AI did the task that one time. Then the main program tells the AI its score, and the AI can think about it and try to learn from the experience. I haven't specified what the task is or how the AI should do it; it can be anything, assuming the AI is smart enough to understand it. I also didn't specify how the AI learns. That is specific and internal to each AI, and the main program doesn't need to know about it.

You are a step away from genetic algorithms, then; all you have to do is promote the programs that score higher than the others. Still, I'm curious about your results so far. Have your programs evolved to show any kind of ability yet? Are they "neutering" themselves, or just milling around randomly?
There is no spoon

beavis

BlackStream
QuoteYou're still assuming that even if we make smarter-than-human AI, it can be made smarter still. Smarter also doesn't mean more creative. There are computers out there that can hand me my [edit] in chess, but I'll be damned if they can actually formulate a strategy better than I can.

It's not an assumption; it's a logical truth. I wouldn't call them smarter-than-human unless they could do that too.

Quoteyou have to start over (or find the last usable program). And I have a feeling that if your code-generation process can be randomly modified, the program won't go more than 2-5 iterations before you have to start over, and certainly no more than a hundred or so before something gets corrupted.

I can adjust the probability of certain instructions, so it changes code more or less often.

Everything has some number value. If it's "corrupted", it will usually be the wrong number, but it can always continue its calculations with the new number. Depending on the probabilities of what its builder and other ancestors usually build, I'll later make algorithms to combine the pieces in smarter ways, and those algorithms will become pieces that can be used to build new things. Most things it builds will be crap, but a very small fraction will be better.

QuoteYou are a step away from genetic algorithms, then; all you have to do is promote the programs that score higher than the others.

I know how genetic algorithms work. But I've built it all in a standard way, so adding a new kind of AI is usually easy, and the existing algorithms don't have to be changed to use it.

QuoteStill, I'm curious about your results so far. Have your programs evolved to show any kind of ability yet? Are they "neutering" themselves, or just milling around randomly?

It hasn't done any evolving yet; I'm still getting that part ready. I've been writing code that can later be evolved and testing its ability to make new code. For now, only the neural networks are fully set up. When I add a few more things, the genetic algorithms will work with the neural networks and the other types of AI.

Rob

Fascinating, Beavis!! I have heard of people building neural-network computing for AI generation, but didn't think we had a programmer here doing it! Cool. What I have often wondered is: how do you set up the neural network, and why is it better than simply simulating one? Is it just processing power?
Please do keep us updated on the work you're doing, I'm intrigued!
My own perspective on building an AI is that, given any sufficiently complex system, life will be able to manifest in it, in exactly the same way as it does in the brain. But I don't think that means that life starts and ends there; I still see the brain as a kind of antenna, with many of the critical functions hidden on other layers. Therefore, it might make sense, when you are building these systems, to add in some sort of totally random generator, e.g. from a quantum source. Although how much it matters is debatable: experiments to detect mind/matter interaction are very strange and don't seem to depend necessarily on the equipment used. But there has to be a mechanism that works best in our heads, which are extremely complex, so maybe it'll appear on its own as computing systems increase in complexity. It'll be interesting to find out! I assume you are building in random components anyhow?
Also, the thought of building a thought-form construct and binding it to the AI mechanism has kept me wondering for a long time now :shock:  :D

Rob
(!!!Formerly known as Inguma!!!)
You are the Alpha and the Omega. You are vaster than the universe and more powerful than a flaring supernova. You are truly incredible!!

beavis

Quotehow do you set up the neural network, and why is it better than simply simulating one? Is it just processing power?

First, because we don't know enough about real neurons to simulate them much better. Second, it would be too slow.

QuoteMy own perspective on building an AI is that, given any sufficiently complex system, life will be able to manifest in it,

True, but that doesn't mean it has a chance bigger than .00000000000000000000000000000000000000000000001% of having life.

Quotein exactly the same way as it does in the brain,

I don't see why it should have to think similarly to brains. I'm not trying to make it think like us.

Quotebut I don't think that means that life starts and ends there; I still see the brain as a kind of antenna, with many of the critical functions hidden on other layers.

Then, when I learn what they are, I'll add simulations of those layers as new parts to build AIs with.

QuoteTherefore, it might make sense, when you are building these systems, to add in some sort of totally random generator, e.g. from a quantum source. ... I assume you are building in random components anyhow?

There are a few random number generators that use radioactive decay (a quantum process) to generate numbers, but they charge money for those numbers. The computer's own random number generator can be seeded with a time from its internal clock. The numbers after that are deterministic, but you can reseed from the clock a short time later, and that's effectively random because the read could come a few milliseconds earlier or later than expected. But I don't need perfect randomness; the appearance of chaos is enough.
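
The clock-seeding idea is only a few lines in C++; a trivial sketch of the pattern just described, not beavis's actual generator:

// Clock-seeded pseudo-randomness.
#include <cstdio>
#include <cstdlib>
#include <ctime>

int main() {
    // Seed from the clock: everything after this is deterministic, but the
    // seed itself depends on exactly when the program happened to run.
    std::srand((unsigned)std::time(0));
    for (int i = 0; i < 5; ++i)
        std::printf("%d\n", std::rand());     // "appearance of chaos" is enough

    // Reseeding later from a finer clock picks up real-world timing jitter,
    // since the read can land a few ticks earlier or later than expected.
    std::srand((unsigned)std::clock());
    std::printf("after reseed: %d\n", std::rand());
    return 0;
}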

QuoteAlthough how much it matters is debateable - experiments to detect mind/matter interacting are very strange and dont seem to depends necessarily on the equipment used. But there has to be a mechanism that works best, in our head, which is extremely complex, so maybe it'll appear on its own as the computing systems increase in complexity. Be interesting to find out!

It would be hard for a soul to keep track of exactly where in a computer (or across many computers on the internet) a piece of software is. It could more easily control the CPU, since that doesn't move. It's possible.

Rob

QuoteFirst, because we don't know enough about real neurons to simulate them much better

So how does attaching a load of computers together help, apart from speed? If you don't know, you don't know.

QuoteTrue, but that doesn't mean it has a chance bigger than .00000000000000000000000000000000000000000000001% of having life

Doesn't mean it doesn't, either way. If you want to speculate, we can all throw random numbers around; maybe we could input them.

QuoteI don't see why it should have to think similarly to brains. I'm not trying to make it think like us

Neither am I, but surely it's a good idea to draw conclusions from the only true specimen you have, if we're talking hypotheticals here and all.

QuoteThen, when I learn what they are, I'll add simulations of those layers as new parts to build AIs with

I think you are missing the point, but then that rests on belief. You are thinking in terms of numbers and code to simulate; personally, I prefer to look directly at the source of our consciousness, observe it with the best tools I have, and then draw conclusions about its nature. There are always opposing intellectual viewpoints, but as far as I can see they are only just that.


QuoteThere are a few random number generators that use radioactive decay (a quantum process) to generate numbers, but they charge money for those numbers.

Yes, and there are many other ways; even the dripping of a tap is essentially chaotic. And I've built code that obeys simple laws but does different things every time; it seemed chaotic. It's just a thought, one avenue I want to explore some day. It might even allow a very unintrusive kind of cyborg link; I'm not so interested in that, but it could be useful.

QuoteIt would be hard for a soul to keep track of exactly where in a computer (or across many computers on the internet) a piece of software is. It could more easily control the CPU, since that doesn't move. It's possible.

Spatial coordinates don't seem to be the issue; it's the structure and frequencies, IMO.

Rob
(!!!Formerly known as Inguma!!!)
You are the Alpha and the Omega. You are vaster than the universe and more powerful than a flaring supernova. You are truly incredible!!

beavis

QuoteSo how does attaching a load of computers together help, apart from speed? If you don't know, you don't know.

We have a lot of conflicting ideas about brains to try in different combinations. But I don't rely on making something brain-like. Carbon, hydrogen, and oxygen atoms are good at building certain things, and computer hardware is good at different things. One could simulate the other, but at very low efficiency. I try to build intelligent things that aren't copies of human behavior.

QuoteNeither am I, but surely it's a good idea to draw conclusions from the only true specimen you have, if we're talking hypotheticals here and all.

I draw some conclusions from it, but brains are not well enough understood to be much help right now.

QuoteI think you are missing the point, but then that rests on belief. You are thinking in terms of numbers and code to simulate; personally, I prefer to look directly at the source of our consciousness, observe it with the best tools I have, and then draw conclusions about its nature. There are always opposing intellectual viewpoints, but as far as I can see they are only just that.

If I can build something that acts the same way as that "source", then, ignoring the source's possible interactions with external things, I do not recognize any difference between my simulation and the real source.

Blackstream

QuoteIt's not an assumption; it's a logical truth. I wouldn't call them smarter-than-human unless they could do that too.

I've already given reasons why a smarter-than-human AI may not be able to improve upon itself: limitations in the technology, inability to be creative, etc. If I can come up with scenarios in which something is smarter than human yet unable to improve itself, then your logical truth is just an assumption (unless you can prove my examples false).

If you are going to define smarter-than-human as something that is smarter than a human AND can make itself smarter, then your logic is circular: of course something that fits the definition of being able to improve itself is able to improve itself. But the phrase smarter-than-human doesn't necessarily carry that connotation. You can be smart without being creative. Even if you made an AI that was smarter AND more creative than any human could possibly be, it might still not be able to figure out how to make itself better; the next leap in AI programming might require 100000x the creativity the AI has, or technology that doesn't exist, or can't exist, in our world.

All this is theory anyway, and I'm not sure why I'm arguing it, but I just wanted to point out that one thing may not lead to another.

QuoteIt hasn't done any evolving yet; I'm still getting that part ready. I've been writing code that can later be evolved and testing its ability to make new code. For now, only the neural networks are fully set up. When I add a few more things, the genetic algorithms will work with the neural networks and the other types of AI.

That's cool; I really want to know how this turns out. Please note that all my talk about this is not meant to derail your project by any means, but to help it work out.
There is no spoon

beavis

Maybe human isn't a good standard to judge AI by. Instead I make this claim: some intelligence level Z exists (maybe far above humans) such that, if software gets that high, that software will be able to improve itself better than humans could improve it.

Blackstream

Quote from: beavisMaybe human isn't a good standard to judge AI by. Instead I make this claim: some intelligence level Z exists (maybe far above humans) such that, if software gets that high, that software will be able to improve itself better than humans could improve it.

I can flow with that. Now the only question is: is that intelligence level attainable? :P I'd settle for an AI that can hold a conversation right now; an AI that can optimize itself better than a human is pretty crazy.
There is no spoon

beavis

It's attainable. It's software, so it has to fit in a finite number of gigabytes. If I had the computer build every combination of bits up to that many, it would eventually be built. Of course that isn't practical... but AI does seem to get smarter every decade.

Quotean AI that can optimize itself better than a human is pretty crazy

Many compilers (tools that build software from text input) already optimize and recompile themselves, but only in ways defined and tested by humans.

Very few programmers write assembly code anymore, even though assembly is the only code a computer can run. Any time it runs a higher-level language, like C++ or Java, the computer converts it into assembly code first. That software optimizes C++ code (as assembly code) better than 99% of human programmers can, so we let the computer write that part of the code. Assembly is much bigger than the equivalent C++ code, so most code today is, in that sense, written by software itself, without any human input while it writes. I challenge any of you to take the same C++ text the software gets and write assembly that does the same thing and is faster than the assembly the computer wrote.

The "JIT" part of Java already modifies java programs as they run to make them faster, measuring lots of things and rearranging them sometimes. But we dont see it because its in the background changing things but getting the same program output.


Example...
start algorithm...
x = 5 + (9*3/4456.2234) - getSystemTime();   // constant subexpression, foldable
y = x * 2;                                   // dead: y is overwritten below
x = -37;                                     // x always ends up -37
y = x;                                       // y always ends up -37
...end algorithm

The optimizing software would first calculate 5 + (9*3/4456.2234) and replace it with 5.0060589, so the code just loads that number instead of recalculating it. It would look inside getSystemTime() and see that it only reads things, never changes them. Then it would fold the second line into y = 2 * (5.0060589 - getSystemTime()). Finally it would see that x and y both end up at -37 no matter what, so the first two lines are dead, and it would reduce all four lines to just: y = x = -37;

Basic logic for a human, but the computer can apply the same steps across millions of lines of code simultaneously. It doesn't forget or lose its place.

Blackstream

Like you said, the computer optimizes the software based on rules and guidelines given to it by human programmers. At no time does it use any kind of AI or creativity of its own. It's merely executing a series of instructions that the humans themselves could have carried out by hand, if they wanted to take quite a bit longer.

Several nitpicks, though. Computers do not run assembly code; computers run machine code. Machine code is 010100010101011110101001010. Assembly code is add ax, ax, 1.

Computers also do not convert C++ to assembly when they run programs. First off, all executables are composed of nothing but machine code. What you are describing is an interpreted language like Java. Compilers convert C++ to assembly and then to machine code, and stuff that into an exe; then the computer runs it, and no further conversion is needed. Assembly code is bigger, but it's exactly equivalent to the corresponding C++ code, something like 3-5 lines per instruction, I'd gather. All the compiler is doing is translating the code into something the machine can understand. If you wrote me a complex math algorithm and I converted all of it into binary, would you then say I did most of the math? No, because I did none of the creative work; I just did the grunt work of translating it.

I haven't looked into JIT, but I'd be willing to bet it doesn't do anything intelligent either: it just follows a few hard-coded rules given to it and goes from there.

By the way, I have written assembly code, and I have also written code to turn C++ into assembly. It's not that hard or that bad. I can easily write assembly on the same order as my C++ code, which is all that really matters, and given enough time I'm sure I could optimize as well as a computer.
There is no spoon

CaCoDeMoN

Quote
Computers also do not convert C++ to assembly when they run programs. First off, all executables are composed of nothing but machine code. What you are describing is an interpreted language like Java. Compilers convert C++ to assembly and then to machine code, and stuff that into an exe; then the computer runs it, and no further conversion is needed. Assembly code is bigger, but it's exactly equivalent to the corresponding C++ code, something like 3-5 lines per instruction, I'd gather. All the compiler is doing is translating the code into something the machine can understand. If you wrote me a complex math algorithm and I converted all of it into binary, would you then say I did most of the math? No, because I did none of the creative work; I just did the grunt work of translating it.

I haven't looked into JIT, but I'd be willing to bet it doesn't do anything intelligent either: it just follows a few hard-coded rules given to it and goes from there.

By the way, I have written assembly code, and I have also written code to turn C++ into assembly. It's not that hard or that bad. I can easily write assembly on the same order as my C++ code, which is all that really matters, and given enough time I'm sure I could optimize as well as a computer.
Yes, but I was talking about dynamic recompilation. When a language is used for a genetic algorithm, it has either to be interpreted or dynamically recompiled. If it is interpreted, it will be slow; if it is dynamically recompiled, it has to be recompiled after every change in the code. With a typical compiler the whole genetic code would have to be recompiled, so the easiest way is to write a small compiler that compiles only the command that changed and patches the program directly in RAM. Machine code and assembler have to be used for such a thing.
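
To make "patch the program directly in RAM" concrete, here is a minimal sketch, assuming x86-64 Linux. The opcode bytes and mmap flags are platform-specific illustration, not CaCoDeMoN's actual design:

// Emit "mov eax, imm32; ret", call it, patch the immediate in RAM, call again.
#include <cstdio>
#include <cstring>
#include <sys/mman.h>

int main() {
    // Machine code for: mov eax, 7 (B8 07 00 00 00) ; ret (C3).
    unsigned char code[] = { 0xB8, 0x07, 0x00, 0x00, 0x00, 0xC3 };

    // Ask for a page that is writable AND executable so we can modify it live
    // (hardened systems may refuse this combination).
    void *mem = mmap(0, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) return 1;
    std::memcpy(mem, code, sizeof(code));

    int (*fn)() = (int (*)())mem;
    std::printf("before patch: %d\n", fn());   // prints 7

    // "Recompile" just the one changed instruction: overwrite the immediate.
    int newValue = 42;
    std::memcpy((unsigned char *)mem + 1, &newValue, 4);
    std::printf("after patch:  %d\n", fn());   // prints 42

    munmap(mem, 4096);
    return 0;
}
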
MEAT=MURDER.

beavis

QuoteLike you said, the computer optimizes the software based on rules and guidelines given to it by human programmers. At no time does it use any kind of AI or creativity of its own. It's merely executing a series of instructions that the humans themselves could have carried out by hand, if they wanted to take quite a bit longer.

http://www.technologyreview.com/articles/01/08/wo_knorr080301.asp

Origin of the Patents
Eric Knorr, August 3, 2001
The adjective "Darwinian" is often applied to the dog-eat-dog software industry. But get ready for software that orchestrates natural selection among computer programs so that researchers can harvest the surviving code.
 

That striking notion underlies genetic programming, a branch of artificial intelligence in which a computer solves problems in biomedicine, electronics and even robotic soccer playing by evolving its own software. Researchers have been working with the concept for many years and have implemented it in a few high-end engineering design applications.
At Stanford, genetic programming research seeks to produce results that are competitive with human problem solving, and to test that by evaluating success in duplicating or infringing upon existing U.S. patents. The research effort now has accomplished just that for patents generated as recently as last year. The work may eventually deliver automated "invention machines," says John Koza, genetic computing guru and consulting professor of biomedical informatics at Stanford.

Environment for Evolution

Koza says the recipe for perfection of a species goes like this:

Create a rules-based environment defined by a problem you want to solve, such as designing an analog audio filter, finding a metabolic pathway or creating an antenna design.
Add a "primordial ooze" of half-a-million or so randomly generated programs, each designed to take a stab at the problem.
Step back and let the population evolve generation after generation. The programs electronically die off, mutate, crossbreed and adapt to the environment, "until the fittest individual in a late generation solves, or approximately solves, the problem," Koza says.
Not surprisingly, such forced evolution could bring almost any computing system to its knees. Fortunately, Koza's resources include a 1,000-node parallel computing cluster at Genetic Programming Inc. in Los Altos, CA, where Koza is president.

Patent Progress

Koza, who began his work in genetic programming in the 1980s, has long been fascinated by the question, how do you get a computer to do what needs to be done without telling it how to do it?

Until recently, his organic software-breeding method was able to dream up only inventions that duplicated or infringed on patents filed as early as 1917 and as late as 1974. But Koza has set his sights forward, focusing on patents filed in the year 2000 or later, which demonstrate the state of the art for humans assisted by conventional engineering software.

"Five months ago, we didn't have any results where we duplicated or infringed on very recent patents," he says. "Now, we've got two in our sack and four others that are in various stages of programming."

Concentrating on what he calls "the black arts," areas where there's no known mathematical method to solve the problem quickly, Koza's recent focuses include controllers, analog circuits and cellular automata.


QuoteSeveral nitpicks, though. Computers do not run assembly code; computers run machine code. Machine code is 010100010101011110101001010. Assembly code is add ax, ax, 1.

On the computer I wrote assembly for, "add ax, 1" is made of 3 parts: add, ax, and 1. Each (and most of the others) has a single translation into bits. It's an abbreviation (of 10101101, for example), not a different language.

QuoteComputers also do not convert C++ to assembly when they run programs. First off, all executables are composed of nothing but machine code. What you are describing is an interpreted language like Java. ... Compilers convert C++ to assembly and then to machine code, and stuff that into an exe.

Java .class files aren't machine code for your computer, but parts of them do get converted to machine code.

QuoteThen the computer runs it, and no further conversion is needed.

DOS programs running in Windows XP must be run on a SIMULATED DOS, so the program runs inside another program, but it's still plain DOS machine code.

QuoteIf you wrote me a complex math algorithm and I converted all of it into binary, would you then say I did most of the math? No, because I did none of the creative work; I just did the grunt work of translating it.

I wasn't talking about how the original code was written. There is work being done in translating one code into the other, judged by how fast the resulting code runs, and that work requires limited intelligence. The computers beat us at that second part.

If my boss tells me "make a program to calculate its own size" ("return 16;" lol), I translate that into a more technical description (code). That's also grunt work, by your account of what the compiler does.

QuoteI haven't looked into JIT, but I'd be willing to bet it doesn't do anything intelligent either: it just follows a few hard-coded rules given to it and goes from there.

Yes, it's hard-coded, but it has many rules, more than the user of the computer would think of, and of higher quality.

QuoteI can easily write assembly on the same order as my C++ code,

You couldn't make a 3D per-polygon collision system for a game in assembly, and if you did, it would be slow and disorganized.

Quotewhich is all that really matters, and given enough time I'm sure I could optimize as well as a computer.

Give the computer more time and it will beat you again. If you want to play by time, you'll lose. You want to give the computer one day to think while you get a year? It would still win. BTW, you can't cheat by looking at any computer's solution while constructing yours.

There is no limit to how far it can extend its knowledge of which parts of the code do what and where, and therefore what it is allowed to change.

CaCoDeMoN: you solved your own proposed problem: dynamic partial compiling.

Slow languages are OK for AI, because if the AI pays by running 100 times slower but learns 10% better (or a lot more: higher-quality knowledge) thanks to the increased flexibility, it will eventually know more than it would have if it had greedily taken the normal language that is 100 times faster. What it learns grows faster than linearly, but the 100 stays constant.

Blackstream

That Darwinism piece is just a general genetic-algorithms article; I didn't see anything in it about compiling optimized programs using GAs or NNs. Even if it did, I'm talking about the compilers of today, which are responsible for our super-optimized programs. I'm sure someday we'll have GA-produced programs that can out-optimize ours, although I'd be a little unsure of such a program's stability. Bug-testing programs optimized in that manner would be a nightmare: is the bug due to an inaccuracy in the fuzzy logic of the optimizer, or is it a programming bug? And if it's the optimizer, how do you go about fixing it? I'd imagine hackers (or even GA-created hacking programs) would have a heyday with such programs.

QuoteOn the computer I wrote assembly for, "add ax, 1" is made of 3 parts: add, ax, and 1. Each (and most of the others) has a single translation into bits. It's an abbreviation (of 10101101, for example), not a different language.

The key word is translation. Yes, assembly is one step above machine code, but it is not machine code. There are different assembly languages out there; they have preprocessor directives such as file includes, they have whitespace, and so on. Assembly is FAR from runnable on its own, and it's definitely not just an abbreviation: the existence of preprocessor commands like include and, I believe, define proves that. For example, the mov keyword does not translate into ASCII m, ASCII o, ASCII v; it translates into something like a one-byte code such as 5A (I don't know what it really is). Also, the fact that there are several different assemblers (MASM and TASM and a few others, I believe) proves that assembly is more than directly translated machine code. Not by much, as there is usually a one-to-one correspondence between assembly instructions and machine-code instructions, but assembly IS a language of its own.

QuoteJava .class files aren't machine code for your computer, but parts of them do get converted to machine code.
I never said .class files were machine code. I said Java was an interpreted language, which means its code is never turned into an executable. The .class file is what's called byte code; the Java Virtual Machine takes that byte code, interprets it, and turns it into machine code at run time. There is a way, by the way, to compile Java into an exe, but then it isn't interpreted anymore, obviously.
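
"Interpreting byte code" just means a loop that reads numeric instructions and acts on them. A toy sketch in C++ with made-up opcodes (real JVM byte code is far richer, but the principle is the same):

// A stack-based interpreter for a tiny made-up instruction set.
#include <cstdio>
#include <vector>

enum Op { PUSH, ADD, PRINT, HALT };

void interpret(const std::vector<int> &code) {
    std::vector<int> stack;
    std::size_t pc = 0;                       // "program counter"
    while (pc < code.size()) {
        switch (code[pc]) {
        case PUSH:                            // next cell holds the value
            stack.push_back(code[pc + 1]);
            pc += 2;
            break;
        case ADD: {                           // pop two, push their sum
            int b = stack.back(); stack.pop_back();
            int a = stack.back(); stack.pop_back();
            stack.push_back(a + b);
            ++pc;
            break;
        }
        case PRINT:
            std::printf("%d\n", stack.back());
            ++pc;
            break;
        default:                              // HALT or anything unknown
            return;
        }
    }
}

int main() {
    // "Byte code" for: print(2 + 40). None of this is machine code; the loop
    // above decides at run time what each number means.
    int prog[] = { PUSH, 2, PUSH, 40, ADD, PRINT, HALT };
    interpret(std::vector<int>(prog, prog + 7));
    return 0;
}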

QuoteDOS programs running in Windows XP must be run on a SIMULATED DOS, so the program runs inside another program, but it's still plain DOS machine code.

All programs that run on an operating system are, technically, interpreted by the operating system; a machine-code file for something written for Linux, for example, won't run on Windows. You can keep going down, and I think the lowest level is the CPU, which also interprets the commands sent to it and then does whatever with its many NAND gates. That doesn't change the fact, however, that executables are composed not of assembly but of machine code.

QuoteI wasn't talking about how the original code was written. There is work being done in translating one code into the other, judged by how fast the resulting code runs, and that work requires limited intelligence. The computers beat us at that second part.
But it does that through rules and guidelines given to it by mathematicians and optimization specialists. If they printed out their rules, any person could do the exact same thing as the computer, admittedly much more slowly. Computers beat us at speed right now, not ability. If computers were coming up with these rules of optimization themselves, you'd have a point, and it wouldn't matter that we could still print out the rules and optimize just as well. But they are our rules, which we came up with, so anything the computer does is a result of our intelligence, not its own.

QuoteIf my boss tells me "make a program to calculate its own size" ("return 16;" lol), I translate that into a more technical description (code). That's also grunt work, by your account of what the compiler does.
Translating C++ to assembly is exact; translating client instructions to code is not. Saying "I want sparkles that spin" and turning "return a;" into assembly are a little bit different, and that little bit of inexactness is what allows intelligence to enter the mix. Although honestly, for a description as simple as the one you gave, writing that program really is just grunt work, since there isn't much room for creative thought.

QuoteYes, it's hard-coded, but it has many rules, more than the user of the computer would think of, and of higher quality.
They are still human-made rules, not computer-invented rules.

QuoteYou couldn't make a 3D per-polygon collision system for a game in assembly, and if you did, it would be slow and disorganized.

Sure I could. If I wanted to take the time to make a 3D game in assembly, I would just write it first in C++ and then, using a set of rules for converting C++ to assembly, convert it over. If I can make it in C++, then given enough time I can convert it to assembly. It would be much faster to write a simple program to do the conversion for me, but doing it by hand would be possible. Yes, it would look somewhat disorganized, but all assembly code looks like that, ESPECIALLY after being optimized.

QuoteGive the computer more time and it will beat you again. If you want to play by time, you'll lose. You want to give the computer one day to think while you get a year? It would still win. BTW, you can't cheat by looking at any computer's solution while constructing yours.

No it won't, because the computer's results are not based on thinking for x amount of time, but on a few hard-coded rules. If I followed the same rules as the computer, I'd come to the same conclusion, just more slowly. I also may or may not find things along the way that the computer wouldn't have caught.

Quote
Slow languages are OK for AI, because if the AI pays by running 100 times slower but learns 10% better (or a lot more: higher-quality knowledge) thanks to the increased flexibility, it will eventually know more than it would have if it had greedily taken the normal language that is 100 times faster. What it learns grows faster than linearly, but the 100 stays constant.

All those slow languages are at heart running as machine-code programs, so logically anything made with those slow languages can be made with faster assembly programs. Although it's possible that in order for those programs to be as flexible as needed, they would have to be 100x slower; who knows.
There is no spoon

beavis

QuoteI'm sure someday we'll have GA-produced programs that can out-optimize ours, although I'd be a little unsure of such a program's stability. Bug-testing programs optimized in that manner would be a nightmare: is the bug due to an inaccuracy in the fuzzy logic of the optimizer, or is it a programming bug? And if it's the optimizer, how do you go about fixing it? I'd imagine hackers (or even GA-created hacking programs) would have a heyday with such programs.

Humans don't write perfect code either.

QuoteI wasn't talking about how the original code was written. There is work being done in translating one code into the other, judged by how fast the resulting code runs, and that work requires limited intelligence. The computers beat us at that second part.

QuoteComputers beat us at speed right now, not ability. If computers were coming up with these rules of optimization themselves, you'd have a point, and it wouldn't matter that we could still print out the rules and optimize just as well. But they are our rules, which we came up with, so anything the computer does is a result of our intelligence, not its own.

They also beat us at short-term memory. A computer can remember millions of things and recall any of them in milliseconds. Well-defined things (not vague ones like "over there"), but there's no human who can remember even 1000 of anything (vague or not) after seeing them only once.

This huge memory is what lets them beat us at optimizing. The code might really be thousands of lines long, and the computer can consider all of those lines together as it moves code around.

Computers do not beat us because they do more calculations per second; they do about a million times fewer calculations per second than we fire synapses per second. Brains are big compared to CPUs, and CPUs keep getting smaller at the same speed. We are fast too, just in different ways.

Even if we knew how brains worked, we'd still need a million computers to simulate one at real-time speed. So we should give the computer the advantage in time.

QuoteTranslating C++ to assembly is exact; translating client instructions to code is not.

C++ --> assembly isn't exact. You could even change the objects, structs, and functions into something that looks completely unrelated but does the same thing in the end.

QuoteSure I could. If I wanted to take the time to make a 3D game in assembly, I would just write it first in C++ and then, using a set of rules for converting C++ to assembly, convert it over.

Are you admitting that the rules are smarter than you?
Why don't you code in assembly directly?

QuoteAll those slow languages are at heart running as machine-code programs, so logically anything made with those slow languages can be made with faster assembly programs. Although it's possible that in order for those programs to be as flexible as needed, they would have to be 100x slower; who knows.

Yes, theoretically anything that can ever run on a computer can be written in assembly. But you still have almost no chance of writing a big program well in assembly. You could theoretically build an apple one atom at a time, but it's probably faster to plant an apple tree; you'd spend decades running around in circles, confused by the DNA code in the apple.

As for the rest, I don't want to argue over semantics. We mostly agree.

Blackstream

QuoteHumans don't write perfect code either.
True; I mentioned that the programmer's own code could be one cause when I said "or is it a programming bug?" My point was that if code is being created/optimized by fuzzy logic and there's a bug, it's going to be a LOT harder to figure out the source of the bug and fix it. Even if you work out that the problem is with the GA optimizer, how do you fix the bug? Do you manually go in and tweak it (every time it messes up)? Do you run another generation of GA optimizers and pray it fixes itself? It could get kind of hairy.

QuoteThey also beat us at short-term memory. A computer can remember millions of things and recall any of them in milliseconds. Well-defined things (not vague ones like "over there"), but there's no human who can remember even 1000 of anything (vague or not) after seeing them only once.
That's what paper is for. I don't believe many optimization routines involve thousands of lines of code, but feel free to prove me wrong. I believe most optimizations involve removing redundant loops, making effective use of the cache and register variables, and things like that: things any human can do without much strain. Even then, a human-optimized program and a computer-optimized program won't differ by much, if at all, in order of magnitude, which is what really matters in terms of computing power. The difference between O(n^2) and O(n) is very important. The difference between O(244n^2) and O(243n^2) (when you rearrange 244 lines of code into a more efficient 243) is less significant; it matters, but usually only for extremely performance-critical programs.

QuoteComputers do not beat us because they do more calculations per second; they do about a million times fewer calculations per second than we fire synapses per second. Even if we knew how brains worked, we'd still need a million computers to simulate one at real-time speed. So we should give the computer the advantage in time.
You are right... except that computers can devote their entire brain to doing one thing much faster than we can. I'd like to see how fast you could do even a simple addition sequence in your head. Figure out 2+2+5+10+25. By the time you've even registered that the first number is a 2, the computer has already figured out the answer is 44 and done the problem thousands of times over. Likewise, they can compile a program much faster than we can. If we could devote our entire brain to figuring out how to compile a program... who knows. It might be kind of scary.


Quotec++ --> assembly isn't exact. You could even change the objects, structs, and functions into something that looks completely unrelated but does the same thing in the end.
The same goes for assembly to machine code, by the way.

However, what I meant, although I didn't say it right, is this:

Say my boss tells me to make a program that shows its own size. What font do I show it in? Fullscreen or windowed? DOS, Windows, or Linux? Should there be a menu? Sound effects? Accompanying graphics? And so on. There are quite a few ways to go about it that result in potentially completely different programs.

However, if I write C++ code to display the text, there is going to be one look and feel that comes out. I can translate that to assembly however I want, but if it doesn't run EXACTLY the same, it wasn't translated correctly. Furthermore, the way C++ was designed, it can always be translated into assembly that runs exactly the way C++ stipulates. In other words, if I write
for(i=5;i<10;i++) {
 a += i;
}

one compiler is not going to compile this and play little Windows sound effects on every loop completion while another compiler turns it into a little rotating "Processing" message while the loop runs.

In other words, for every C++ program there is only one output that a correctly translated assembly version can produce. I'm sure there's a simpler way to word that, but that's my point in a nutshell.

And with rules existing to turn C++ into assembly, no creative thought at all is required to do it. Thus, I consider translating C++ into assembly grunt work. Even the optimization is nothing but grunt work if all the compiler does is follow a rigid set of rules, no matter how complex they are. Until the computer actually thinks and comes up with creative solutions of its own, it's just doing grunt work that any moron could do given the instructions and enough time. As for the short-term-memory problem... nothing a lot of paper couldn't solve. It'd be pretty tedious, though.

QuoteAre you admitting that the rules are smarter than you?
Why don't you code in assembly directly?

I'm admitting that the people who created the rules are smarter than me. Most likely I cannot, on my own, optimize a program better than a state-of-the-art compiler, because people much smarter than me programmed the compiler. However, it is still people who put those rules in, not the program, which is why it's not the program that is smarter but the humans.

Why don't I code in assembly directly? Because it's much more efficient (for me, anyway) to go top-down. I start with my overall program plan. Then I write pseudocode. Then I create the main architecture of my program. Then I fill in the functions. If I didn't have a compiler, from there I would turn the code into assembly and then into machine code. However, it's more efficient to have a computer do that part for me, both because the end result will be better and because it will arrive faster. Yes, if I had the rules the compiler writers put in, I could do that last part myself and produce a program of the same quality, but it would still come much more slowly.

I could start programming in assembly immediately, but diving right into programming at such a low level is pretty dangerous, as it would be easy to tangle yourself up unless you REALLY planned out your program. That's one of the advantages of higher-level languages, but I'm sure you knew that already.
There is no spoon

beavis

Quoteif you work out that the problem is with the GA optimizer, how do you fix the bug?

We'd need a better way to tell the GA what the problem is. Until then, only let it optimize code if it can prove it gives the exact same output for the same input.

QuoteI don't believe many optimization routines involve thousands of lines of code, but feel free to prove me wrong. I believe most optimizations involve removing redundant loops, making effective use of the cache and register variables, and things like that: things any human can do without much strain.

float bbb(){
   return squareRoot(y) - 3;
}
float optimizeMe(){
   return y / ((bbb() + 3) * squareRoot(y));
}

These two functions could be separated by any huge number of lines, but if you searched enough combinations, or searched smarter, you'd find that optimizeMe should just return 1.0: bbb() + 3 is squareRoot(y), so the whole expression is y / (squareRoot(y) * squareRoot(y)) = y / y. Compilers can keep track of what values thousands of variables must have at certain places.

QuoteEven then, a human-optimized program and a computer-optimized program won't differ by much, if at all, in order of magnitude,

Orders of magnitude better than unoptimized would require better-than-perfect optimization. I judge the intelligence of the coder exponentially higher as they get closer to the perfect minimum running time.

QuoteYou are right... except that computers can devote their entire brain to doing one thing much faster than we can. I'd like to see how fast you could do even a simple addition sequence in your head. Figure out 2+2+5+10+25. By the time you've even registered that the first number is a 2, the computer has already figured out the answer is 44 and done the problem thousands of times over.

You skipped some things the computer would have to do first if it wanted to compete with us on equal terms. It must read a video image, not a sequence of text. The screen could have anything on it; right now it happens to show a window with some text in it, and the computer must figure out that that is what it's looking at. With today's best image-interpretation software, that would take minutes. It could be looking at any of millions of things.

If C++ --> assembly is grunt work, then pick a harder language to translate.

Forget free will; this is theoretical and only to prove a point. Assume it's true... If tomorrow I found an equation that exactly described how you would act in every situation, that would not change how smart you are. Rules exist for the compiler, but it is still smarter at compiling than most humans.


QuoteAre you admitting that the rules are smarter than you?
Why don't you code in assembly directly?

QuoteI'm admitting that the people who created the rules are smarter than me.

If the rules consistently outperform you, somebody or something must exist that is smarter than you about compiling.

Theoretical again... What if I rolled dice to choose each letter to type next, and just by chance that entire compiler got written? I didn't write it; the dice did. Then whom can you blame for it consistently outperforming you? Unless dice can have intelligence, the RULES must be smarter.

Blackstream

Quote from: beavisWe'd need a better way to tell the GA what the problem is. Until then, only let it optimize code if it can prove it gives the exact same output for the same input.
Agreed. There is a mathematical way of proving code correct, for sure, but it's really, really REALLY hairy, and it blows up almost exponentially as the code size increases. Doing it the poor man's way, so to speak (feeding the two versions a bunch of inputs and checking that they give the same outputs) sort of works, but since you can't feed in every possible input, you'll never know for sure whether they'd differ on one of the infinitely many inputs you didn't try.
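
For the record, the poor man's check is only a few lines. A sketch, reusing beavis's optimizeMe example from earlier in the thread (the function names are stand-ins, and this tests equivalence rather than proving it):

// Hammer the original and the "optimized" function with random inputs
// and compare outputs.
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <ctime>

double original(double y)  { return y / ((std::sqrt(y) - 3 + 3) * std::sqrt(y)); }
double optimized(double y) { return 1.0; }    // what the optimizer claims is equal

int main() {
    std::srand((unsigned)std::time(0));
    for (int i = 0; i < 1000000; ++i) {
        // Random y > 0 only; both versions are meaningless at y <= 0.
        double y = 1.0 + 1000.0 * std::rand() / RAND_MAX;
        if (std::fabs(original(y) - optimized(y)) > 1e-9) {
            std::printf("mismatch at y = %f\n", y);
            return 1;
        }
    }
    std::printf("no mismatch found, but that is NOT a proof\n");
    return 0;
}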

Quotefloat bbb(){
   return squareRoot(y) - 3;
}
float optimizeMe(){
   return y / ((bbb() + 3) * squareRoot(y));
}

These two functions could be separated by any huge number of lines, but if you searched enough combinations, or searched smarter, you'd find that optimizeMe should just return 1.0. Compilers can keep track of what values thousands of variables must have at certain places.
It wouldn't matter if these functions were separated by huge stretches of code. As soon as the compiler went into optimizeMe() and, working through the return line, reached bbb(), it would consider what bbb returns and end up optimizing the whole line, assuming the optimizer knows how to do that (we'll assume it does). Theoretically, it would inline bbb, substitute its body into the math expression, try to simplify, and find that it reduces to 1.

At that point, after it removed the reference to bbb(), it would probably do another scan of the program to see if there is any other reference to bbb, and if not, take bbb() out of the final executable.
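Spelled out as hand-applied rewrite steps on that example (using std::sqrt for the squareRoot above; note that a real compiler would normally refuse the last algebraic step under strict IEEE floating-point rules unless something like -ffast-math is enabled):

#include <cmath>

float y = 16.0f;  // some global, as in the quoted example

// Step 0: the original pair of functions.
float bbb()        { return std::sqrt(y) - 3; }
float optimizeMe() { return y / ((bbb() + 3) * std::sqrt(y)); }

// Step 1: inline bbb() at its call site.
float step1() { return y / (((std::sqrt(y) - 3) + 3) * std::sqrt(y)); }

// Step 2: constant-fold: the -3 and +3 cancel.
float step2() { return y / (std::sqrt(y) * std::sqrt(y)); }

// Step 3: algebraic simplification: sqrt(y)*sqrt(y) == y for y >= 0,
// so the whole expression collapses to y / y == 1.
float step3() { return 1.0f; }

// Step 4: nothing calls bbb() anymore, so dead-code elimination
// drops it from the final executable.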

QuoteOrders of magnitude better than unoptimized would require better than perfect. I judge the intelligence of the coder as exponentially higher the closer they get to the perfect minimum running time.

Better than perfect? It could be as simple as detecting a simple bubble sort and turning it into a quicksort. That alone would cut the running time of that loop by orders of magnitude on large inputs. It could be turning

for(x = 1 to 200)
  for(y = 1 to 200)
    for(z = 1 to 200)
      array[x][y][z] = 0
    next z
  next y
next x

into

ZeroMemory(array, 8000000);
or
memset(array, 0, 8000000); (the 0 and the 8000000 may need to be switched, but you get the idea).

Something that goes from 8 million loop iterations down to one quick, small action. That is a crazy orders-of-magnitude reduction, and it doesn't require better than perfect. However, I would tend to agree that the closer to perfect an optimizer (human or machine) gets, the more skilled it probably is at optimizing.
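In C++ the two versions would look something like this (assuming a 200x200x200 array of bytes, so the sizes match the 8000000 above; for what it's worth, memset's real argument order is pointer, value, byte count, so the order shown above is already correct):

#include <cstring>

unsigned char array[200][200][200];  // 200^3 = 8,000,000 bytes

void zeroWithLoops() {
    // Touches every element individually, plus all the loop bookkeeping.
    for (int x = 0; x < 200; ++x)
        for (int y = 0; y < 200; ++y)
            for (int z = 0; z < 200; ++z)
                array[x][y][z] = 0;
}

void zeroWithMemset() {
    // One call over the whole contiguous block; the library version is
    // free to use wide block stores instead of byte-at-a-time writes.
    std::memset(array, 0, sizeof(array));
}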

QuoteYou skipped some things the computer would have to do first, if it wants to compete in an equal competition with us. It must read a video image, not a sequence of text. This video screen can have anything on it, but right now it has a monitor with a window with some text on it. The computer must figure out that's what it's looking at. With today's best image interpretation software, that would take minutes. It could be looking at any of millions of things.

I realize there are a few things that computers cannot do faster than us, at least right now. However, doing simple math and compiling programs are not among them. You try compiling a program by hand; I guarantee you the computer will get done first. Computers have the advantage of speed in matters like this. I don't know if we could beat them if we did some kind of total-focus thing, but the fact of the matter is, computers are faster than us at these tasks. They can count to a million in less than a second, they can compile a program in a few seconds and optimize it too, and they can show 16 planets flying around affecting each other with gravity in real time. They simply do some things much faster than us. When it comes to processing certain inputs, we are faster, but they don't need to process those inputs to do those tasks.


QuoteIf C++ --> assembly is grunt work, then pick a harder language to translate.

No one designs a computer language that doesn't have a series of rules to turn it into assembler. No matter how high-level or complex the computer language, the developers have a series of grammar rules which can then be used to turn it into assembler. I'm talking about compiled languages here, obviously, not interpreted languages, as those don't necessarily get converted to assembly. Even the interpreted languages have their rules for getting converted to byte code or maybe even assembly, if the developers are smart. No one, and I mean no one, should make a computer language that doesn't have a series of rules to convert it to machine code.
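As a toy illustration of what "grammar rules that turn a language into assembler" means, here is a deliberately tiny, made-up translator in C++ that compiles expressions like 1+2*3-4 into instructions for an imaginary stack machine; every construct in the grammar has a fixed rule for the code it emits:

#include <cstdio>

// Grammar:  expr := term (('+'|'-') term)*
//           term := digit ('*' digit)*     (kept tiny on purpose)
const char* src;  // cursor into the source text

void emitPush(int v) { std::printf("    PUSH %d\n", v); }
void emitOp(char op) {
    if (op == '+')      std::printf("    ADD\n");
    else if (op == '-') std::printf("    SUB\n");
    else                std::printf("    MUL\n");
}

int digit() { return *src++ - '0'; }  // consume one digit 0-9

void term() {
    emitPush(digit());
    while (*src == '*') { ++src; emitPush(digit()); emitOp('*'); }
}

void expr() {
    term();
    while (*src == '+' || *src == '-') {
        char op = *src++;
        term();
        emitOp(op);  // both operands are already on the stack
    }
}

int main() {
    src = "1+2*3-4";
    expr();  // prints: PUSH 1, PUSH 2, PUSH 3, MUL, ADD, PUSH 4, SUB
    return 0;
}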

Quote
Forget free will. This is theoretical and only to prove a point. Assume it's true... If tomorrow I found an equation that exactly described how you would act in every situation, that would not change how smart you are. Rules exist for the compiler, but it's still smarter at compiling than most humans.
These rules would have to involve me learning and growing and improving myself. If the compiler could do that too, then I'd agree. NNs have a series of rules for how they work, but I still count them as "intelligent" because they learn and improve themselves, as opposed to having someone else hand-feed them the rules.


QuoteTheoretical again... What if I rolled dice to choose the letter to type next, and just by chance that entire compiler was written? I didn't write it. The dice did. Then who can you blame for it consistently outperforming you? Unless dice can have intelligence, the RULES must be smarter.

Isn't that what evolution is? :p  The dice roll and somehow I become smarter.

Yes, the rules are smarter, and I don't really have anyone to blame for writing the program. There is absolutely no intelligence involved in the matter. The rules beat me, but not through being more intelligent; it's through blind luck.

It may just be a difference of opinion for us, but I don't think something is intelligent unless it can actually learn and improve itself on its own. If it does the exact same thing for all of eternity, it's just a mindless automaton.
There is no spoon

beavis

QuoteAgreed. There is a mathematical way of proving code is correct for sure. But it's really really REALLY hairy. It grows almost exponentially as the code size increases. The poor man's way, so to speak, of feeding it a bunch of inputs and seeing if it gives the same outputs, sort of works, but since you can't feed in every single possible input, you'll never know for sure whether it would produce the same output for one of the infinite number of inputs you didn't try.

true

QuoteIt wouldn't matter if these lines were separated by many lines of code. As soon as the compiler went into optimizeMe(), started on the return line, and got to bbb(), it would consider what bbb() returns and end up completely optimizing that line, assuming the optimizer knew how to do that (we'll assume it does).

Maybe for that simple example it would, but equal-valued pieces of code do not have to be so literally defined. It could be several pieces of code scattered through a single 300-line block, with an exponential number of possible combinations to check.
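For instance (a contrived pair of my own), these two fragments compute the same value for every non-negative n, but nothing about them looks alike, so an optimizer would have to prove the equality rather than pattern-match it:

int sumLoop(int n) {
    // Adds 1 + 2 + ... + n one term at a time.
    int total = 0;
    for (int i = 1; i <= n; ++i) total += i;
    return total;
}

int sumFormula(int n) {
    // Same sum in closed form; the equality needs a proof (induction),
    // not a textual match -- and with hundreds of interleaved lines, the
    // number of candidate pairings to test explodes combinatorially.
    return n * (n + 1) / 2;
}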

QuoteBetter than perfect? It could be as simple as detecting a simple bubble sort and turning it into a quicksort. That alone would cut the running time of that loop by orders of magnitude on large inputs. It could be turning

for(x = 1 to 200)
  for(y = 1 to 200)
    for(z = 1 to 200)
      array[x][y][z] = 0
    next z
  next y
next x

into

ZeroMemory(array, 8000000);
or
memset(array, 0, 8000000); (the 0 and the 8000000 may need to be switched, but you get the idea).

Something that goes from 8 million loop iterations down to one quick, small action. That is a crazy orders-of-magnitude reduction, and it doesn't require better than perfect. However, I would tend to agree that the closer to perfect an optimizer (human or machine) gets, the more skilled it probably is at optimizing.
That's not close to an order of magnitude faster. They both set the value of 200^3 things. ZeroMemory/memset only saves the loop bookkeeping, but you can't shrink it to less than about 1/3 of that duration, since there are only 3 loops, and only a stupid programmer would nest many more loops than that.

Bubble sort? That goes from O(n^2) to O(n*log(n)), but it's usually not practically useful, because most people only use bubble sort for quantities of numbers so small it doesn't matter. 50 numbers or less, for example.

QuoteEven the interpreted languages have their rules for getting converted to byte code or maybe even assembly, if the developers are smart. No one, and I mean no one, should make a computer language that doesn't have a series of rules to convert it to machine code.

What if they don't want that language to be tied to any one type of machine? I don't! Mine currently compiles only to Java objects in memory, and I'm working on compiling it to a string of Java code, but the only binary it's stored as is specified by Java. I let the default lower-level systems deal with how it gets to assembly.

Quote from: beavisForget free will. This is theoretical and only to prove a point. Assume it's true... If tomorrow I found an equation that exactly described how you would act in every situation, that would not change how smart you are. Rules exist for the compiler, but it's still smarter at compiling than most humans.

These rules would have to involve me learning and growing and improving myself. If the compiler could do that too, then I'd agree. NNs have a series of rules for how they work, but I still count them as "intelligent" because they learn and improve themselves, as opposed to having someone else hand-feed them the rules.

Yes, the rules are smarter, and I don't really have anyone to blame for writing the program. There is absolutely no intelligence involved in the matter. The rules beat me, but not through being more intelligent; it's through blind luck.

That's a better kind of intelligence, but not necessary to be intelligent at all.

I wouldn't be surprised if some compiler used neural networks. The same compilers that consistently beat humans at translating C++ to assembly efficiently.

You say they have no intelligence, but admit they beat you at compiling. The smartest one at compiling compiles the best, creating the most efficient translations to assembly. If the rules beat humans by luck, why do they win on average and so consistently? Or did you mean they were lucky to be smarter than us in that area?

QuoteIt may just be a difference of opinion for us, but I don't think something is intelligent unless it can actually learn and improve itself on its own. If it does the exact same thing for all of eternity, it's just a mindless automaton.

I agree that's better in general, but I'll not allow one to judge its own mindless repetitiveness, nor would I believe it can prove its own will is truly "free". How could I know if its "will" was "programmed" to think it's free but really is not?

I do not accept exact knowledge of something's actions as a reason it's not intelligent.

I've not looked much into it, but I've read that Gödel's theorem proves that no sufficiently powerful system of rules can prove its own consistency, and if it does, it must be inconsistent, so the proof is worthless.
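For reference, the usual statement of Gödel's second incompleteness theorem is roughly this (my paraphrase):

\textbf{G\"odel's second incompleteness theorem (informal).}
If $T$ is a consistent, effectively axiomatized formal system strong
enough to encode basic arithmetic, then $T \nvdash \mathrm{Con}(T)$;
that is, $T$ cannot prove the sentence asserting its own consistency.
Conversely, if $T \vdash \mathrm{Con}(T)$, then $T$ is inconsistent.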

Blackstream

QuoteMaybe for that simple example it would, but equal-valued pieces of code do not have to be so literally defined. It could be several pieces of code scattered through a single 300-line block, with an exponential number of possible combinations to check.
Most likely, a programmer is not going to purposely design his code so obscurely that a few related lines are widely separated by many lines with no obvious way of optimizing them together. This assumes the programmer is halfway decent and keeps like code together. This is a mostly theoretical discussion anyway, since neither of us is really privy to top-end optimizing algorithms, afaik.

QuoteThat's not close to an order of magnitude faster. They both set the value of 200^3 things. ZeroMemory/memset only saves the loop bookkeeping, but you can't shrink it to less than about 1/3 of that duration, since there are only 3 loops, and only a stupid programmer would nest many more loops than that.

I thought that computers have ways of setting whole blocks of data at once, as opposed to setting them piece by piece, which is why memset was preferred. I can't find anything about it, so I'm probably wrong on that.

QuoteBubble sort? That goes from O(n^2) to O(n*log(n)), but it's usually not practically useful, because most people only use bubble sort for quantities of numbers so small it doesn't matter. 50 numbers or less, for example.
Not useful, you say, but what if, for example, it was an algorithm for sorting polygons from back to front in order to draw them in that order? You may be sorting thousands or even millions of polygons, and if you are trying to do this 60 frames a second, you can see why an optimization like that would be nice.
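To put rough numbers on that (my own back-of-envelope figures, assuming comparisons dominate):

\[
n = 10^6:\qquad n^2 = 10^{12}\ \text{comparisons, versus}\qquad
n\log_2 n \approx 10^6 \times 20 = 2\times 10^{7},
\]

roughly a 50,000x difference in work per frame, and at 60 frames per second there are only about 16.7 ms to spend on each frame.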

QuoteWhat if they don't want that language to be tied to any one type of machine? I don't! Mine currently compiles only to Java objects in memory, and I'm working on compiling it to a string of Java code, but the only binary it's stored as is specified by Java. I let the default lower-level systems deal with how it gets to assembly.
If your language has a grammar, then it should be fairly simple to come up with a way to turn it into assembly. If one has to use creativity and cannot come up with rules to turn your language into assembly, then what you have is most likely an inexact language, such as pseudocode.

QuoteI wouldn't be surprised if some compiler used neural networks. The same compilers that consistently beat humans at translating C++ to assembly efficiently.
Right now? I'll believe it when I see a link to a professional-level compiler that uses NNs. There may be some amateur compilers out there, but until they consistently produce more optimized code that doesn't bug itself out (not from human error but from optimizer error), I won't believe it.

QuoteYou say they have no intelligence, but admit they beat you at compiling. The smartest one at compiling compiles the best, creating the most efficient translations to assembly. If the rules beat humans by luck, why do they win on average and so consistently? Or did you mean they were lucky to be smarter than us in that area?

It's by luck that the computer program was created. Obviously, once it's created, it'll keep doing what it does best: nicely optimizing programs. It's not like the rules are being randomly rolled every time it compiles. I still attribute the result to luck and not intelligence, because the computer didn't figure it out; it was just given the ability.

QuoteI agree that's better in general, but I'll not allow one to judge its own mindless repetitiveness, nor would I believe it can prove its own will is truly "free". How could I know if its "will" was "programmed" to think it's free but really is not?
I've not mentioned free will for a reason: it's pretty hairy. I'm not talking about free will, though, but intelligence. A little bit different.

QuoteI do not accept exact knowledge of something's actions as a reason it's not intelligent.
Theoretically, someone smart enough could examine all the nodes of an NN and figure out exactly what it'll do in any given instance. However, the NN is still learning and growing and improving itself, and thus I consider that intelligence.

I don't define exact knowledge as proof of unintelligence. I define the inability to learn or change as proof of unintelligence. One proof, anyway.
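As a concrete instance of fixed rules that nonetheless learn (a minimal made-up sketch in C++, not any particular NN package): a single perceptron trained on logical AND. The update rule never changes, but applying it changes the weights, so the same program answers differently over time:

#include <cstdio>

int main() {
    // Training data for logical AND: inputs and target outputs.
    int inputs[4][2] = {{0,0},{0,1},{1,0},{1,1}};
    int targets[4]   = {0, 0, 0, 1};

    double w0 = 0, w1 = 0, bias = 0;   // weights start knowing nothing
    const double rate = 0.1;

    // The "rules" below never change -- but applying them changes the
    // weights, so future answers from the same code change too.
    for (int epoch = 0; epoch < 20; ++epoch) {
        for (int i = 0; i < 4; ++i) {
            int out = (w0*inputs[i][0] + w1*inputs[i][1] + bias > 0) ? 1 : 0;
            int err = targets[i] - out;
            w0   += rate * err * inputs[i][0];
            w1   += rate * err * inputs[i][1];
            bias += rate * err;
        }
    }

    // After training, the perceptron reproduces the AND table.
    for (int i = 0; i < 4; ++i)
        std::printf("%d AND %d -> %d\n", inputs[i][0], inputs[i][1],
                    (w0*inputs[i][0] + w1*inputs[i][1] + bias > 0) ? 1 : 0);
    return 0;
}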

QuoteI've not looked much into it, but I've read that Gödel's theorem proves that no sufficiently powerful system of rules can prove its own consistency, and if it does, it must be inconsistent, so the proof is worthless.
I've never heard of that... although it's not like I really read much about these things. Is this theorem trying to say that if I use a mathematical proof to prove a program produces the correct outputs, my program and my proof are completely worthless, since both are within the realm of mathematics?
There is no spoon