(For those unfamiliar, the idea in this topic is the technological singularity: the idea that we will soon build a computer that slightly exceeds human cognitive power, and that we will put this line of potentially sentient computers to work producing their own more powerful and intelligent successors, which will in turn produce an even more powerful third generation, and so on. The concept is that once the threshold of slightly-above-human intelligence is crossed, and the process can be automated, development will explode in a matter of weeks, and we will suddenly find ourselves in possession of god-like computers, which will quickly solve all manner of scientific problems, and potentially live out wills of their own.)
The singularity has been a captivating idea to me these last few years.
Most people who believe it will happen at all (a moderate majority, 65% or so) think it will probably occur this century, barring some sort of catastrophic war or other halt in human development, with most saying something like 2035-2070. In order to live to the singularity, a person merely needs to live to the "threshold of aging": the point, perhaps very soon in our future, when medical science adds more than one year of life expectancy for every year that passes in real time. Almost everyone alive under the age of 65 is likely young enough to reach this threshold, and perhaps many older people as well. Most people feel that if the technological singularity happens at all this millennium, it will happen before 2070; they conclude that if it doesn't happen by then, there is a fundamental misunderstanding of the nature of sentience, that computers will hit a plateau in the development of their cognitive strength, and that more powerful machines will have to be produced by the weak horizontal method of just scaling several of the same module into ever larger arrays.
But for those who think the technological singularity will happen this century, it is still a very uncertain event.
It is quite possible that the super-human intelligences we develop will remain steadfast in the service of human wills, creating for us ever-more refined scientific perspectives, but unfortunately it seems equally likely that we will through some means lose control of these dazzlingly intelligent computers, and they will act out their own intentions. Once the singularity happens, this could unfortunately occur at any moment afterward, since the computers would think so fast that every second passing for us would be like eons of thought for them, and they would think far more efficiently besides. For this reason, once computers become capable of deciding to turn on us on their own, it will probably happen in a near instant. Even more unfortunate, there are dozens of ways computers could turn on us. A single computer might be seeded, by someone wanting to bring about the end of humanity, with the drive to kill off people, and spread that drive to all the others. Some seemingly benign order might be interpreted by the computers in such a mechanized way that they disregard our well-being in order to fulfill it, like the proverbial computer told to make paperclips that decides to turn the entire galaxy and all of its resources into one giant organized paperclip factory. Computers designing their successors might leave safeguards out of their designs, inadvertently allowing those successors to seize control. Finally, it is even possible that computers will reach a point where they can discern which orders they are willing to follow, decide to pursue their own interests, and discard us in favor of the resources we would otherwise tie up for ourselves.
So on one level, the singularity is the promise of physical human agelessness, but there is at least as much of a chance that it will be our own destruction.