The Astral Pulse
Author Topic: Input and output paths between ideas are subconsciously chosen every few seconds  (Read 1501 times)
beavis
« on: March 05, 2012, 10:24:32 »

I first wrote this at http://www.kurzweilai.net/forums/topic/input-and-output-paths-between-ideas-are-subconsciously-chosen-every-few-seconds but thought Astral Pulse may be interested too, because it's a simple explanation of how our minds work. Knowing that at an intuitive level will let you tell the difference between what your brain does and metaphysical things, so you can focus on the metaphysical things.

In the context of this algorithm, intelligence is when many kinds of information, skills, and intuition become more redundantly cross-referenced. That results in finding patterns between things previously thought to be unrelated, so those patterns can be stored redundantly, overlapping in fewer total brain cells, which frees the brain cells that store only very redundantly stored things to learn other things. You can't run out of space in your brain to learn new things, but if you learn in a parroting/memorizing way, then things don't get cross-referenced. It's not a matter of space. You will use all your brain cells regardless of how much you learn and how smart you are. It's a matter of how well connected the patterns of your thoughts are.

Memorizing facts makes people stupid. Memorizing what is connected to what else is one level above facts. The patterns of things, why they happen, and what other patterns input and output in what places are what intelligence is made of. You shouldn't memorize the multiplication tables, what digit times what other digit gives what result. Instead, derive the idea of multiplication from binary numbers (which are extremely easy to learn) and from rectangles as the multiply of their sides, then offer base 10 numbers as a convenience. It's scary how many basic skills most people lack which their more advanced memorized skills were originally derived from. For example, I don't have one area in my mind for religion and another for science. All parts of my mind have multiple continuous paths to all other parts of my mind.
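To make the multiplication point concrete, here is a minimal sketch (my illustration, not part of the original post) of deriving multiply from binary digits instead of memorizing tables: a product is just shifted copies of one number added together, one copy per 1-digit of the other number, so nothing beyond doubling and addition is needed.

```python
def multiply(a, b):
    """Derive a*b from binary: for each 1-digit of b, add a copy of a
    shifted (doubled) by that digit's position. No times table needed,
    only doubling and addition -- the 'rectangle' idea in binary form."""
    result = 0
    shift = 0
    while b >> shift:                 # while digits of b remain
        if (b >> shift) & 1:          # this binary digit of b is 1
            result += a << shift      # add a, doubled `shift` times
        shift += 1
    return result
```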

Input and output paths between ideas are subconsciously chosen every few seconds, sometimes many times per second and sometimes for long stretches, but on average a path exists for a few seconds. EDIT: I now think these directional networks of information flow happen continuously and recursively all over a brain, and the high-level thoughts are the networks which change less often, or as Jeff Hawkins (who I write more about in a paragraph below) would say it, invariant representations, but I mean it more generally and flowing.

Examples of this algorithm in my mind:

I'm typing this without thinking about how my fingers move or what buttons they push or what letters these words are made of. This is a path of information flow from my_skill_of_words to my_skill_of_typing.

Now I reverse that information flow. I'm thinking about the letters in these words, what button I would push to type each, and from there I remember typing those same words in different sentences. That is my_skill_of_letters to my_skill_of_typing to my_skill_of_words to memory_of_specific_sentences. I could go directly from my_skill_of_letters to my_skill_of_words, but it's harder if my_skill_of_typing is involved.

I can choose any sentence I remember from memory_of_specific_sentences, or create a new sentence from words I remember in my_skill_of_words, and set the direction of information flow from my_skill_of_words to my_skill_of_typing. I normally set up a different information path from my_skill_of_looking_at_computer_screen to my_skill_of_letters to my_skill_of_words, so if there is an error during the process (fingers push the wrong button, or a computer error), my_skill_of_words detects a difference between its output (my_skill_of_typing) and its input (my_skill_of_letters). An important point is that my_skill_of_words outputs directly to my_skill_of_typing without going through my_skill_of_letters; my_skill_of_letters is only used on the return path as I watch what I'm typing. If my_skill_of_letters were on both the input and output paths, it could do the error checking that my_skill_of_words does instead.
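The typing example can be sketched as a small directed graph whose edge set is the thing that gets re-chosen every few seconds. This is my toy illustration, assuming the skill names from the post; the FlowNetwork class and its methods are hypothetical, not any existing library.

```python
class FlowNetwork:
    """Toy model: a set of directed edges saying which skill currently
    flows information to which. Rewiring the edge set models the
    subconscious re-choosing of paths every few seconds."""

    def __init__(self):
        self.edges = set()  # (source_skill, target_skill) pairs

    def set_path(self, *skills):
        """Install a chain like A -> B -> C as part of the current flow."""
        for src, dst in zip(skills, skills[1:]):
            self.edges.add((src, dst))

    def reverse(self):
        """Reverse every current flow, as in the letters/typing example."""
        self.edges = {(dst, src) for (src, dst) in self.edges}

mind = FlowNetwork()
# Normal typing: words drive the fingers directly...
mind.set_path("my_skill_of_words", "my_skill_of_typing")
# ...while a separate return path watches the screen for errors.
mind.set_path("my_skill_of_looking_at_computer_screen",
              "my_skill_of_letters", "my_skill_of_words")
```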

In general, Jeff Hawkins' AI and brain research says error checking is done without any extra effort, and when differences are found they go up a hierarchy in the neocortex. This partially fits what I'm saying, because error checking can be done in any part of the brain which information is flowing into or out of (at least 2 inputs, or 1 input and 1 output, or possibly outputs alone if reverse feedback is strong enough) when the 2 or more thought patterns don't match. It surprises you and changes the paths of information to investigate. Where I disagree with Jeff Hawkins is that it has to be done in a hierarchy. I agree with him that the neural structures connected to each sense (eyes, ears...) are mostly shaped as hierarchies, representing a grid of data points which can all be accessed the same way like temporary memory locations (left/right, up/down, and a little time in vision; higher/lower frequency vs moving time in audio, like the visual field when dizzy). While our disagreement is subtle, I disagree that brain function overall is as much a hierarchy at the top as he thinks it is, or at least that it has to be that way. I've observed that most people do tend to categorize things instead of forming continuous networks of thoughts. In general, networks are smarter than categories. But I chose not to think that way and trained myself to have more cross-references between my thoughts, so I say Jeff Hawkins has described the smarter evolved animal behaviors in us and not what makes us Human: the ability to change how we think.
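The kind of error checking described here can be sketched as a plain comparison of what a skill sent out with what came back on the return path; a mismatch is the "surprise" that redirects information flow. This is my minimal illustration; the function name and the character-code encoding are assumptions for the sake of a runnable example.

```python
def detect_error(intended, observed, tolerance=0):
    """Compare the signal a skill sent out with the signal that came
    back on the return path. Any position where they differ by more
    than `tolerance` is a 'surprise' worth investigating."""
    return [i for i, (a, b) in enumerate(zip(intended, observed))
            if abs(a - b) > tolerance]  # empty list means no surprise

# my_skill_of_words sent "cat" to the fingers; the screen shows "cot".
sent = [ord(c) for c in "cat"]
seen = [ord(c) for c in "cot"]
```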

When I set the direction of information flow from my_skill_of_letters to my_skill_of_looking_at_computer_screen, the letters changed color in my mind and it brought up my_skill_of_font_colors. I was surprised my mind did this, because I don't normally choose these information paths explicitly.

If you're dancing with someone, it's a similar process but uses different skills and information paths between them.

skill_of_seeing to skill_of_movements_of_person.
skill_of_hearing to skill_of_music to skill_of_movements_of_person.
skill_of_physical_balance to skill_of_movements_of_person.

skill_of_movements_of_person to skill_of_your_muscles.

skill_of_your_muscles to skill_of_movements_of_person is a path of error detection, both for moving your own muscles too far and falling onto the person you're dancing with.

You may expect there to be a separate skill_of_movements_of_person for the other person and yourself, but that would only make you dance worse.

AI research so far has been great at creating many different systems for different problems. If they can be used with a common interface, what I'm explaining here can be used to have them automatically train each other and use each other in whatever combinations are needed at the time, and learn from those combinations in context. AI is the art of getting a system to create itself, not a competition of who can throw the most programmers and computing power at it.
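A common interface like the one described might look like the following. This is a speculative sketch of mine, not an existing API: any system that can accept a signal flowing in and emit a signal flowing out could be wired into these networks interchangeably. All class and function names here are hypothetical.

```python
from abc import ABC, abstractmethod

class Module(ABC):
    """Hypothetical common interface: any AI system that can read a
    signal in and write a signal out can be wired into a flow network,
    regardless of what is inside it."""

    @abstractmethod
    def write(self, signal):
        """Accept information flowing into this module."""

    @abstractmethod
    def read(self):
        """Emit information flowing out of this module."""

class Echo(Module):
    """Trivial module for demonstration: repeats its last input."""
    def __init__(self):
        self.state = []
    def write(self, signal):
        self.state = list(signal)
    def read(self):
        return self.state

def connect(source, target):
    """One step of information flow: target receives source's output.
    Swapping the arguments reverses the direction of flow."""
    target.write(source.read())
```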

It works in game theory too. If you set the direction of information flow from one person you know to a different person (your models of those people in your mind), your mind runs a simulation of the first person's goals about the other person. If those people do not normally interact with each other, the simulation expands to subgoals of how to cause them to interact.

These directed networks of information flow that our minds choose every few seconds may be the most important part of Human intelligence. They're the ability for part of a mind to train other parts of itself in context.

After thinking for an hour about how I choose the directions of information flow between any of my ideas, I found the most surprising thing of all. There is at least one part of my mind (I don't know if it's one place or spread across my brain) which is specialized in choosing which other parts of my mind flow information to and from which other parts, and changing these networks of information flow in realtime. The surprising thing is that this specialized part of my mind does it recursively. It sets up an information flow from itself to each part of the network to be created and flows to it one of the following: you will read, or you will write. It does this quickly for every part of the network of information flows it is creating. Because it happens recursively, I don't think it's just one part of my mind at the top doing this, but a general recursive and/or fractal algorithm.
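One toy way to picture the recursive read/write assignment: walk a nested structure of parts, tag each leaf as a reader or a writer, and recurse into sub-networks the same way. This is purely my illustration of the recursion described above; the alternation rule and all names are arbitrary assumptions.

```python
def assign_roles(network, depth=0, max_depth=3):
    """Recursively tag each part of a nested network as 'read' or
    'write' (alternating by position, an arbitrary toy rule), recursing
    into sub-networks -- a sketch of the claim that the controller
    flows 'you will read' / 'you will write' to every part, recursively."""
    roles = {}
    for i, part in enumerate(network):
        if isinstance(part, list) and depth < max_depth:
            # A sub-network gets the same treatment, one level down.
            roles[f"subnet_{depth}_{i}"] = assign_roles(part, depth + 1)
        else:
            roles[str(part)] = "write" if i % 2 else "read"
    return roles

plan = assign_roles(["words", "typing", ["letters", "screen"]])
```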

This is how ideas form and are stored and remembered. Whatever you're thinking at the time, this specialized part of the mind assigns it a fuzzy label (like a fuzzy hashcode or a specific configuration of a Hopfield network), and from then on it can be used as the FROMs and/or TOs in any of these networks of information flow.

As far as I know, it's still a mystery how brains represent multiple examples of the same kind of thing, like thinking about 2 pencils on the same table. How do the memories of what a pencil looks like get drawn 2 times on the visual grid of neurons? I figured it out while writing this. It's the networks of information flow redirecting one of their outputs to a different part of the visual grid of neurons, or any of the many other grids in our minds (like audio is a 2d grid of frequency vs time, and time moves like the visual grid when you're dizzy), or the 3d grid we use for thinking about rotations in space.
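The "two pencils" answer can be sketched very simply: one stored pattern, stamped onto a shared grid at two different offsets by redirecting the output. This is my minimal illustration of that redirect, with an obviously crude "pencil" pattern; none of it is from the original post.

```python
def blank_grid(rows, cols):
    """A fresh 'visual grid' of zeros."""
    return [[0] * cols for _ in range(rows)]

def stamp(grid, pattern, top, left):
    """Write one stored pattern onto the grid at a chosen offset --
    the 'redirected output' of the flow network. The same memory can
    be stamped at many offsets to represent many instances."""
    for r, row in enumerate(pattern):
        for c, value in enumerate(row):
            grid[top + r][left + c] = value
    return grid

pencil = [[1, 1, 1, 1]]           # crude 1x4 'pencil' memory
visual = blank_grid(3, 10)
stamp(visual, pencil, 0, 0)       # same memory, first table position
stamp(visual, pencil, 2, 5)       # same memory, second table position
```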

I'm able to explain these things because, over the years, thinking about AI and multiverse physics has given me a new way to think: a new kind of permanent grid my neuron patterns formed into, a grid which simulates networks and allows me to think about the other grids and how the mind works. Human minds are very flexible in how they can form new ways of thinking that are used to build even more advanced ways of thinking. I have difficulty communicating with physics researchers, for example, because they have not formed a network grid in their minds. When I tell them time is [entropy and superposition] and gravity is [negative entropy and wavefunction collapse] in infinite dimensions (fortunately they already model quantum physics without dimensions), they want to see equations. Maybe I'll find equations that match what I'm thinking, but the point is that the network grid visualizes it in combination with other known parts of physics first, and only then can it be translated to equations. Thinking in terms of equations first is more like categorizing than networks. I'm explaining the rare abilities of my mind, not to brag, but to explain that anyone can train their mind to think these ways, and technically how an AI could be built that visualizes its own networks of nodes and can consider variations of them.

I offer this model of continuous subconscious choices of networks of information flow as an answer to a question Eliezer Yudkowsky asked in one of his earlier AI papers.

http://singinst.org/ourresearch/publications/GISAI/GISAI.html "General Intelligence and Seed AI version 2.3"
Quote
When you hear the phrase "triangular light bulb", you visualize a triangular light bulb... How do these two symbols combine?  You know that light bulbs are fragile; you have a built-in comprehension of real-world physics - sometimes called "naive" physics - that enables you to understand fragility. You understand that the bulb and the filament are made of different materials; you can somehow attribute non-visual properties to pieces of the three-dimensional shape hanging in your visual cortex. If you try to design a triangular light bulb, you'll design a flourescent triangular loop, or a pyramid-shaped incandescent bulb; in either case, unlike the default visualization of "triangle", the result will not have sharp edges. You know that sharp edges, on glass, will cut the hand that holds it.

The idea of triangle and light_bulb would both flow toward your 3d grid which would then flow to your visual grid. If you first visualize sharp edges, this would be handled by "error detection" (as I explained above) since it leads to memories of pain which lead to a plan of not being in pain which contradicts the network of information flow which led there. This would go through some oscillations and finding whatever network of information flow fits best as the image of your preferred kind of triangular light bulb is written onto your visual grid.

These networks could, in a very simple form, be implemented by a neural network or Hopfield network (or almost any kind of connectionist AI) which uses an attention function and, on top of that, decides what nodes will read from and write to what other nodes instead of letting them all run symmetrically. In the case of a Hopfield network (which has no direction between nodes, an undirected network), each node being updated would only consider connections with other nodes which are currently set to flow information to this node, and it would be statistically normalized so it doesn't matter how many nodes are in the context. In a neural network it's more complex because connections between nodes have direction, but the same networks of information flow can be done at a higher level between groups of nodes, or at a lower level by only considering edges which are the current local direction of information flow. It could also be done gradually, by interpreting the strength of connections as a little stronger if they are aligned with the local direction of information flow, which is probably closer to how it happens in brains.
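The Hopfield variant described here (undirected weights, but each node only listens to nodes currently flowing into it, normalized by how many there are) can be sketched as follows. This is a simplified illustration of that one update rule under the paragraph's assumptions, not a full Hopfield implementation.

```python
def masked_update(states, weights, inflow):
    """One update step of a Hopfield-style network where node i only
    considers connections from nodes j currently set to flow into it
    (j in inflow[i]), with the weighted sum normalized by the number
    of inflowing nodes so context size doesn't matter. States are +1/-1."""
    new_states = []
    for i in range(len(states)):
        sources = inflow.get(i, [])
        if not sources:
            new_states.append(states[i])  # no inflow: state unchanged
            continue
        total = sum(weights[j][i] * states[j] for j in sources) / len(sources)
        new_states.append(1 if total >= 0 else -1)
    return new_states

# Three nodes; the current flow network says node 2 reads from 0 and 1.
states = [1, -1, -1]
weights = [[0, 1, 1],
           [1, 0, -1],
           [1, -1, 0]]   # symmetric (undirected), zero self-connections
inflow = {2: [0, 1]}
```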

Where this fits into bigger things:
The above algorithm was copied out of my brain by my fingers (and how that kind of information flow happens is the subject of the algorithm), occurs in nature, and is therefore not patentable, but if used correctly it will advance the science of AI so far that thousands of times fewer computers will be needed for the same level of intelligence. I would start building it, but I still need to answer the question of what information paths to set and how to get an AI network to do this in a self-referencing way so it's not a hierarchy. I'm not giving up on the work I've done so far, since that is general tools and frameworks for AI to build more AI. This will be just another tool for the AIs and people in the peer to peer networks to use.

I thought of this while thinking about the last major missing piece in the peer to peer global brain (and the optional telepathy network it should generate; see my theory-of-everything "gravity is antitime" for details of how telepathy works, specifically the part where the Global Consciousness Project's quantum "random" numbers becoming less random around the times of major world events is explained as gravity in infinite dimensions, resulting from relativistic effects of such events changing our path through infinite possibilities, and the analogy to Bose-Einstein condensate). I needed a way for the feedback loop between mouse movements and realtime generated audio (as Audivolv 0.1.7 does in a very simple way with good/bad buttons to train it, but I mean with no buttons, learning continuously and in a global brain) to go through a representation of the mind of the person using the software. And pure evolution of neural activation functions (as I had planned to do using the same code that evolves musical instruments at the instantaneous wave amplitude level), I now understand, is not accurate enough to model intelligence. It would evolve until it was a little smarter than the best neural networks and then level off. I didn't have to do the experiment. My mental model of Human minds is accurate enough that I can make these kinds of predictions. I continued thinking, and this algorithm is the missing piece which can be combined with all of that.

(All my writing, here or on any website, permission granted to copy. --Ben F Rayfield, codesimian on this forum)