NAND physics - A technical description of how we progress toward becoming light

beavis

You may want to skip some of this; it gets more intuitive in the last half, which builds on the science explained in the first half.

In quantum computing they call it a Toffoli gate (also called a CCNOT gate), and it is known to be a universal computer for calculating anything that can be calculated, but because of their assumption of Copenhagen (that time exists and moves forward in one direction as you quantum observe) they have expanded the z variable into an XOR and added 2 more variables to mirror x and y, which still doesn't balance the original problem of NAND having 3 times more weight on one side when viewed from one of its 3 dimensions. True, you need an XOR to balance it in the most basic way, so each dimension is weighted half and half, but it is the nature of NAND not to be exactly balanced between all possibilities. Still, it is the simplest calculation which can, and in very practical ways on computers does, simulate physics.

The laws of physics are exactly z=not(and(x,y)), because all digital technology is completely made of that calculation in combinations, and physics simulations run on that digital technology confirm theories of physics which are much bigger than that equation. Therefore, by Occam's Razor, I cut off their specific equations and keep the equation which calculated them.

It is a scientific fact that the laws of physics are Turing Complete, because of the repeated empirical observation of computers which run inside physics. The definition of a Turing Machine says that anything which calculates a Turing Machine is itself a Turing Machine. All Turing Machines are equal to combinations of z=not(and(x,y)), as we see in the circuits of computers, for example. All Turing Machines can calculate all other Turing Machines. Therefore, the laws of physics are exactly that.
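
For example, here's a minimal sketch in the same style as the Java further down, showing how the other basic gates reduce to nothing but NAND (the method names are just mine for illustration):

// NOT, AND, OR, and XOR each built only from NAND
static boolean nand(boolean x, boolean y){ return !(x && y); }
static boolean not(boolean x){ return nand(x, x); }
static boolean and(boolean x, boolean y){ return nand(nand(x, y), nand(x, y)); }
static boolean or(boolean x, boolean y){ return nand(nand(x, x), nand(y, y)); }
static boolean xor(boolean x, boolean y){
    boolean m = nand(x, y);
    return nand(nand(x, m), nand(y, m));
}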

Of course z=not(and(x,y)) is done in unlimited dimensions with rotations, instead of every variable being its own dimension in flat space (like we normally calculate), but this should make sense because of how things naturally rotate and conserve momentum in space. It's a constraint, a timeless equation which explains all of physics as it rotates around high dimensions trying to satisfy all the constraints which haven't reached its light-cone yet. That generates time-dilation, because the entropy of satisfying constraints that are moving away near the speed of light approaches infinity.

Time is entropy and therefore moves in all directions at once, depending on your local laws of physics which you're viewing NANDs through, which are themselves made of NANDs.

Of course light cone means something very different in unlimited dimensions, like a binary tree is faster to search than a linked list.

Length-contraction, which comes with time-dilation, is hypersphere rotation, since the Lorentz factor is the equation of a circle when you do some algebra on the equations it's used in. Everything is rotation of a hypersphere, on which NANDs are calculated at angles to each other; Fourier math, for example, is one way to do it.

The test: the way NAND divides into 2, 3, or 1 things depending on which dimension you view it from is the reason the Standard Model of physics, those particle types including the Higgs boson, the electron, and the hypothetical graviton, for example, has all those 1/2, 1/3, and 2/3 values (and their negatives) for spin and charge. That's NANDs fitting together.

The Lorentz Factor of relativity also looks a lot like NAND when you graph 1-x*y. Lorentz is 1/squareRoot(1-velocity^2/c^2). If velocity squared is viewed as x*y, then the graph of Lorentz squared looks a lot like the graph of NAND in z, y, x dimensions. Three quarters are low. One quarter is high. I'm not too sure about this part, but in any case Lorentz is the equation of a circle when you do some algebra on the equations it's used in.
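
To spell out that circle algebra: if gamma = 1/squareRoot(1 - velocity^2/c^2), then 1/gamma^2 = 1 - velocity^2/c^2, so (velocity/c)^2 + (1/gamma)^2 = 1, which is the unit circle in the coordinates (velocity/c, 1/gamma).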

Quantum physics, relativity, and M-Theory/String-Theory are unavoidable results of throwing a bunch of random NANDs together at angles to each other and trying to satisfy them as 3-bit timeless constraints.

z=not(and(x,y)) There's all of physics. We can stop doing research and start using it in practical technology now. Where's my warp drive? Do I really have to build it myself?

----

Here's the only other place I've seen NAND used in an equation instead of as an input-2-bits, output-1-bit operation:

http://en.wikipedia.org/wiki/Wolfram_axiom

    Wolfram axiom
    From Wikipedia, the free encyclopedia

    The Wolfram axiom is the result of a computer exploration undertaken by Stephen Wolfram[1] in his A New Kind of Science looking for the shortest single axiom equivalent to the axioms of Boolean algebra (or propositional calculus). The result[2] of his search was an axiom with six Nand's and two variables equivalent to Boolean algebra:

    ((a.b).c).(a.((a.c).a)) = c

    With the dot representing the Nand logical operation (also known as the Sheffer stroke), with the following meaning: p Nand q is true if and only if not both p and q are true. It is named for Henry M. Sheffer, who proved that all the usual operators of Boolean algebra (Not, And, Or, Implies) could be expressed in terms of Nand. This means that logic can be set up using a single operator.

    Wolfram's 25 candidates are precisely the set of Sheffer identities of length less or equal to 15 elements (excluding mirror images) that have no noncommutative models of size less or equal to 4 (variables)[3].

    Researchers have known for some time that single equational axioms (i.e., 1-bases) exist for Boolean algebra, including representation in terms of disjunction and negation and in terms of the Sheffer stroke. Wolfram proved that there were no smaller 1-bases candidates than the axiom he found using the techniques described in his NKS book. The proof is given in two pages (in 4-point type) in Wolfram's book. Wolfram's axiom is therefore the single simplest axiom by number of operators and variables needed to reproduce Boolean algebra.

    Sheffer identities were independently obtained by different means and reported in a technical memorandum[4] in June 2000 acknowledging correspondence with Wolfram in February 2000 in which Wolfram discloses to have found the axiom in 1999 while preparing his book. In[5] is also shown that a pair of equations (conjectured by Stephen Wolfram) are equivalent to Boolean algebra.

----

What does Wolfram mean by that equation? There are 8 combinations of possible values of the 3 boolean variables, and all 8 combinations are solutions. No possibilities are excluded, so as a constraint the equation reduces to TRUE, or more practically to NAND itself. Does the equation mean that boolean algebra reduces to NAND and doesn't need to be thought of as separate rules of calculation?

// NAND: true unless both inputs are true
static boolean nand(boolean x, boolean y){
    return !(x && y);
}

/** Checks whether (a,b,c) satisfies ((a.b).c).(a.((a.c).a)) = c, with the dot meaning NAND */
static boolean isSolutionToWolframAxiom(boolean a, boolean b, boolean c){
    return nand( nand(nand(a,b),c), nand(a,nand(nand(a,c),a)) ) == c;
}

public static void main(String[] args){
    // enumerate all 8 assignments of the 3 boolean variables
    for(int i=0; i<8; i++){
        boolean a = (i&4) != 0;
        boolean b = (i&2) != 0;
        boolean c = (i&1) != 0;
        boolean isSolution = isSolutionToWolframAxiom(a,b,c);
        System.out.println((a?1:0)+" "+(b?1:0)+" "+(c?1:0)
            +(isSolution?" is a solution to ((a.b).c).(a.((a.c).a)) = c":" is not a solution"));
    }
}

0 0 0 is a solution to ((a.b).c).(a.((a.c).a)) = c
0 0 1 is a solution to ((a.b).c).(a.((a.c).a)) = c
0 1 0 is a solution to ((a.b).c).(a.((a.c).a)) = c
0 1 1 is a solution to ((a.b).c).(a.((a.c).a)) = c
1 0 0 is a solution to ((a.b).c).(a.((a.c).a)) = c
1 0 1 is a solution to ((a.b).c).(a.((a.c).a)) = c
1 1 0 is a solution to ((a.b).c).(a.((a.c).a)) = c
1 1 1 is a solution to ((a.b).c).(a.((a.c).a)) = c

----

It's not well understood how NAND fits into quantum physics, even though it is an empirical observation in the circuits of computers all the time. There can be no real difference between quantum and Newtonian physics, only confusion, because Newtonian physics is the calculus limit of large-scale quantum calculations as many of them flow together. We may find it hard to see in multiverse directions, but there are more things we find hard to see than not, like our evolved ability to see only light between 400 and 700 nanometer wavelengths, and even with machines we still can't see the frequency of a black hole, which would be a function of its mass-energy as if it were a particle. Instead, we can only see it indirectly, like gravity. If we could see a black hole directly, or any large object of any kind, it would be a single wave like any other, strong enough to burn our wave senses, but still everything is one wave if seen as it really is. There is NAND at the core of physics. We just have to imagine more ways to find that 3-bit timeless constraint. I know NAND is nonunitary and therefore appears to violate the most basic rules of how quantum physics works, but since physics is Turing Complete, NAND must somehow be derivable from the unitary transformations of quantum/relativity physics. We just need to find where it fits in.

Here's my best theory so far, to explain the empirical observation that physics is Turing Complete and is therefore definable in terms of NAND...

Every lightcone is an automorphism of all possibilities, a different perspective of the one universe. Lightcones are timeless points in the universe, not something that moves through time. Time is any gradient of entropy from that point, but entropy is also Turing Complete, so there are many such gradients from each lightcone. We move along all gradients of entropy simply from random movements of the timeless NAND constraints toward more of them being satisfied (a lower energy state) locally from that lightcone. This process of a lightcone moving toward a lower energy state spreads its changes to nearby lightcones (in past, future, multiverse, space, and all other directions at once, whatever is touching it), which then find themselves at a higher energy state because they are now touching constraints which are not as satisfied as they were. Fluctuating constraints spread at the speed of light.
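
Here's a minimal toy sketch (in Java, like the code above) of what "random movements toward more constraints being satisfied" could look like computationally; the array layout and method names are just mine for illustration, and this is a toy search, not a physics simulation:

// Each constraint is 3 variable indexes {x, y, z}, meaning the timeless
// constraint vars[z] == !(vars[x] && vars[y]). Random flips are kept only
// if they satisfy at least as many constraints as before (a lower energy state).
static int satisfiedCount(boolean[] vars, int[][] constraints){
    int count = 0;
    for(int[] c : constraints)
        if(vars[c[2]] == !(vars[c[0]] && vars[c[1]])) count++;
    return count;
}

static void relax(boolean[] vars, int[][] constraints, int steps){
    for(int i = 0; i < steps; i++){
        int v = (int)(Math.random() * vars.length);
        int before = satisfiedCount(vars, constraints);
        vars[v] = !vars[v];                                                // try a random flip
        if(satisfiedCount(vars, constraints) < before) vars[v] = !vars[v]; // undo if it got worse
    }
}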

Every lightcone is a wavefunction, a quantum vector, an automorphism of all possibilities, in as many dimensions as you find in those quantum matrices, normally each particle and all its unique properties being its own dimension, minus as many dimensions as can be folded and found to be identical shapes, a dimensionality reduction by observing their automorphism. Even the "constants" and how particle types interact with each other are part of a lightcone, so you can get time-dilation and length-contraction and normal-space calculations along a path through different laws of physics. All of those things I'll call a lightcone. The point is to solve this puzzle of many areas of science and fit them all together as one math operator, which we'll call lightcone.

NAND is there...

observe = not(superposition).

Superposition means the AND of perpendicular lightcones, possibilities that are incompatible with each other, like the ket vectors |0> and |1> describing the two basis dimensions of a qubit; you can be in both states at once, having a sum of magnitude from both.
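
As a plain textbook sketch of that "sum of magnitude from both" (the standard qubit picture only; the perpendicular-lightcone reading of it is the speculative part):

// Textbook picture: a qubit is a0*|0> + a1*|1> with complex amplitudes a0 and a1,
// and the squared magnitudes of the two amplitudes sum to 1.
static double magnitudeSquared(double re, double im){ return re * re + im * im; }

static void equalSuperpositionExample(){
    double a0re = 1 / Math.sqrt(2), a0im = 0;   // amplitude on |0>
    double a1re = 1 / Math.sqrt(2), a1im = 0;   // amplitude on |1>
    double total = magnitudeSquared(a0re, a0im) + magnitudeSquared(a1re, a1im);
    System.out.println("total magnitude (should be 1.0): " + total);
}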

Here's where the automorphism stuff comes in...

To have magnitude in a lightcone is to observe that lightcone. If you're also observing another lightcone which is an automorphism of this lightcone, you don't have to pay extra magnitude for it. They are the same shape and can be mapped 1-to-1. There is no way you could define any kind of measurement (regardless of whether you actually do the measurement) which would measure differently between 2 things which are automorphic to each other. That's the definition of automorphic. I say automorphic instead of isomorphic because everything in the universe is connected.

If z, y, and x are lightcones, and z is automorphic to x (as observed from z), and z is automorphic to y (as observed from z), but x and y (as observed from x and/or from y, as a Feynman path from z instead of as they are now, but still being relative once you get there) are not automorphic to each other (which you can only know by simultaneously observing them with your magnitude), then z = not(and(x,y)).

Observe means to be in a lightcone with magnitude on the observed thing. Magnitude is never created or destroyed, only moves between combinations of lightcones in the timeless space of all possibilities.

z can be automorphic to x and y while x and y are not automorphic to each other because observation is part of a lightcone. Observation is not something you do. It's part of the state of the universe (a lightcone). Therefore, these words "x", "y", and "z" refer to different lightcones depending on which lightcones have magnitude observing which other lightcones. This is a timeless network of lightcones with cycles. Observing something doesn't change that thing. Nothing ever changes in a timeless universe. There is no time, period. Observe or not observe is the same kind of thing as move through space or not. Therefore it is no contradiction for z to be automorphic to x and y individually while x and y are not automorphic to each other, because z is a lightcone that has no magnitude on AND(x,y). z automorphically overlaps its magnitude with x, and z automorphically overlaps its magnitude with y, and this consumes none of z's magnitude because no information is added, no extra dimensions to solve constraints in.

The amount by which z's magnitude moves toward AND(x,y) is the amount they are observed to be perpendicular and therefore need a Quantum Phase Shift Gate to move some or all of z's magnitude to a circle of radius equal to the amount of magnitude removed from z. That circle is a branch of the coordinate system of the z lightcone, branching magnitude from one dimension into 2, expanding that magnitude into a qubit of magnitudeRemovedFromZ*(cosine(angle)*x + squareRoot(-1)*sine(angle)*y).

I've left out the normalizing (divide by squareRoot(2), square or square root of magnitude, etc) because understanding how dimensions fit together in the context of observe, superposition, and NAND is more important than the exact sizes for now, but that math does need to be perfected in the simulation.

I think I'm being scientific enough here, defining it in a way useful toward physics research, in my conclusion...

The Quantum Phase Shift Gate, the one with e^(i*angle) and 1 in the 2x2 matrix, can be used continuously this way to move magnitude between observation amounts of multiple lightcones...

magnitudeRemovedFromZ*(cosine(angle)*x + squareRoot(-1)*sine(angle)*y)
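
A minimal sketch of the standard part of that (the phase-shift gate itself, applied to a qubit's two complex amplitudes; the mapping of those amplitudes to lightcone magnitudes is the speculative part above):

// Standard phase-shift gate [[1, 0], [0, e^(i*angle)]]: the |0> amplitude is
// unchanged and the |1> amplitude is multiplied by e^(i*angle).
// Amplitudes are passed as (real, imaginary) pairs.
static double[] phaseShift(double re0, double im0, double re1, double im1, double angle){
    double c = Math.cos(angle), s = Math.sin(angle);
    double re1rotated = re1 * c - im1 * s;   // complex multiply by cos(angle) + i*sin(angle)
    double im1rotated = re1 * s + im1 * c;
    return new double[]{re0, im0, re1rotated, im1rotated};
}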

NAND is at the core of physics. observe(z,z)=not(and(observe(z,x),observe(z,y))), where observe(z,z) is automorphic to observe(z,x), and observe(z,z) is automorphic to observe(z,y), but observe(observe(z,x),y) and observe(observe(z,y),x) are not automorphic to each other, because if they were, the double-slit experiment wouldn't work as longer paths of them.

This describes a continuous path between observations and is affected by small differences in automorphism accuracy that build up as many simultaneous paths of entropy gradients are travelled. There is no need for entropy to increase over time, but it will usually appear to locally at each lightcone, for the same reason a randomized Boltzmann machine oscillates around various patterns which each locally appear higher entropy than the last; gradient search doesn't imply heat-death is the general direction. Instead, in this timeless model of infinitely-manyworlds-multiverse-space, the universe is constant (the total of all possibilities, which cancels out to zero overall, but we experience each possibility individually), and we travel paths through possibilities, calculus derivatives between light cones but not changing the light cones.

It's important that lightcones be stateless and include the local laws of physics and the amounts of observation of specific other lightcones (conserving observation overall), so lightcones can be reused in Feynman paths at smaller magnitude but the same shape as if they were magnitude 1, which is a here-and-now. Relativity is experiencing each light-cone at magnitude (of the complex number coefficient of its ket vectors) 1, as if it is a here-and-now, regardless of how you got there or where you're going, or whether it's in a Feynman path from somewhere else or the root lightcone you're experiencing now. Consciousness is the magnitude of those complex number coefficients of the ket vectors. A ket vector, a light cone, and consciousness are the same thing, only 1 math operator here, and it can be a number (magnitude of a complex number), a vector, a multiverse, and every other kind of math in physics. The lightcone is a very flexible kind of math, as I use it more generally than how physics normally defines it.

It is because of this generality of branching lightcones into qubits on Feynman paths into all possibilities of infinitely-manyworlds-multiverse-space that, with enough practice and skill, your brainwaves can explore into the bizarre patterns of the multiverse which science hasn't explored yet. When I move a psi wheel with my mind, the path between it and my brain is subject to length-contraction, time-dilation, and recursions of quantum phase as paths between observe and superposition, the same as any other path of light-cones through infinitely-manyworlds-multiverse-space.

It's an equation defining the movement of magnitude (of the complex ket vector coefficients at a lightcone) based on observation. The most common example of NAND in physics is between observe and superposition. That is because of the way it's 3 times heavier on one side than the other, if we think of it like a cube with possibilities constrained to 4 of the 8 corners, 3 on one face and 1 on the opposite face.

Quantum physics researchers will find it practically useful when said this way instead: the Toffoli Gate, excluding the parts added when trying to balance its NAND-like properties, is isomorphic to the relationship between the Quantum Phase Shift Gate and observation of the thing which would collapse that phase into a point and move the magnitude to the observed thing, in the context of lightcones observing each other.

There is only 1 particle type, NAND, which is how infinitely-manyworlds-multiverse-space can be most simply simulated as a smooth manifold. Expect an infinite variety of Higgs fields, where Higgs means, in general, the patterns of NANDs fitting together as timeless constraints on patterns of lightcones, and the gradient search (like rolling down a hill) toward locally higher entropy at all lightcones locally causes fluctuations in the amount by which constraints are satisfied. Constraints being more satisfied is a lower energy state and higher entropy. Time is the random tendency toward locally higher entropy (gradient search which gets lost looking for heat-death). Gravity is the opposite direction, toward patterns falling together in infinitely-manyworlds-multiverse-space (like the patterns of specific particle types in the Standard Model and their "constant" decimal numbers of mass). Gravity is the direction of negative time. Time is gradient search toward entropy. If Higgs is defined as any field which changes the mass of other fields, then every field is a Higgs field, and there are an infinite variety of fields overlapping each other in infinitely-manyworlds-multiverse-space. The specific kind of Higgs fields found so far are not the bottom or top of anything. Instead, an infinite variety of overlapping NAND fields form into the patterns we see in physics, a network of overlapping fields instead of any being completely above or below the others. There is no smallest particle/wave type, no absolute location in the Standard Model, for the same reason there is no absolute location in space. The laws of physics are relative, and NAND is one of the simpler kinds of Turing Machine, so NAND-like patterns form in the analog space more often.

A low power warp drive can be built by oscillating some of those infinite variety of overlapping NAND fields until their magnitude (of the complex coefficients of the ket vectors of light cones in those fields) increases, gradually changing the local laws of physics of nearby lightcones, so if you're low on electricity but have lots of another particle type or pattern of particles, you create a new kind of Higgs field which vibrates the kind of thing you have in a way you can use as power. Artificial intelligence in combination with Fourier and Bayesian math should be able to operate capacitors, inductors, lasers, optical sensors, crystals, and/or various other quantum equipment, and automatically learn how to vibrate new kinds of Higgs fields into your lightcone's local laws of physics. Just give the AI senses and controls and a goal of changing the behaviors of a few chunks of mass of the types you want the new kind of Higgs field to operate on, and let the AI navigate infinitely-manyworlds-multiverse-space recursively and learn how to vibrate it on that path to create the specific kind of Higgs field you want. To move faster than light is simply to take many shortcuts continuously, in an oscillating, stretching way, instead of the longer path we see in 3+1 dimensions. Or to say it in the style of Star Trek may be simpler: you program the AI to continuously create subspace around your ship, a space where you're far more superpositioned than normal. It would appear to outside observers that your ship mostly disappears and a blob of space replaces it and spreads out until it becomes mostly flat, and anywhere in the universe you vibrate toward (everything is connected; distance and time are only an approximation), your ship would emerge out of such a blob of space. It's like a wormhole, except instead it's many very small wormholes, one for each particle your ship and its contents are made of, which are caused to form by the new kind of Higgs field you have the AI create in your local laws of physics. The same way, you could walk through a solid wall, but let's leave that until we get it working reliably. It's a research path.

observe(z,z)=not(and(observe(z,x),observe(z,y))), where observe(z,z) is automorphic to observe(z,x), and observe(z,z) is automorphic to observe(z,y), but observe(observe(z,x),y) and observe(observe(z,y),x) are not automorphic to each other, because if they were, the double-slit experiment wouldn't work as longer paths of them.

----

The more I think about it, the more it makes sense. I did a thought-experiment (in my mind, of course, which is becoming skilled at large amounts of high dimensional hypersphere shaped statistical NAND constraint approximations)...

As in Holographic Multiverse Theory, everything can be defined in terms of the combinations of surfaces of black holes, to the accuracy of the layer of Hawking Radiation near the Event Horizon.

There are continuous event horizons all the way down to the center point. Depending on your perspective, everything is on the event horizon organized randomly, or appears to form into patterns as you fall in.

Your perspective is your light-cone, which includes your local laws of physics, quantum observe and ket vector states, and everything else about the universe from your perspective.

The universe is infinitely-manyworlds-multiverse-space, a continuous space between all possibilities which has length-contraction and time-dilation, and nothing ever moves faster than light, but this can be practically worked around by continuously creating fields of infinitely small wormholes, experienced as stretchy subspace you sink into like a submarine while vibrating your light-cone toward whatever part of the universe you want to go.

Distance is a function of ability to align patterns and can be approximately described as a kind of fuzzy encryption. Things that are farther apart are more fuzzy-encrypted relative to eachother.

Fuzzy-encryption will probably not help you break digital encryption, because digital encryption is based on stricter rules of logic instead of flowing patterns. Digital encryption and fuzzy encryption are equally hard to break but are not much related to each other, other than that and both being based on modular/ring math. The main difference is that fuzzy encryption normally uses smaller ring sizes and is extremely high dimensional on hyperspheres, while digital encryption is low dimensional and uses large ring sizes. It's not technically a kind of encryption at all, just similar to it. It's as much like encryption as brainwaves are, which also have ring-like oscillations in various measurements you could interpret as dimensions of patterns in the brainwaves. In any case, nobody owns the laws of physics, and we will not allow physics simulations to be regulated in any way, especially because physics is NAND, a 3-bit constraint as a kind of math.

Every statistical cluster of patterns of events in the universe can be thought of as inside a black hole. The galaxies accelerating away from us see us inside a black hole, and the black hole we see at the center of our galaxy they may think is the center point of us as a black hole, but they see it all chaotically spread on the surface.

The next question is how big the galaxies out there see our event horizon as. It's somewhere between the size of the event horizon at the center of our galaxy and that plus all the non-black-hole mass in our galaxy. The answer to this question is probably related to the size of a "planck mass" black hole, the smallest theoretical black hole that can exist in our space without quickly evaporating into Hawking radiation. The layer of Hawking radiation on a black hole is around that same size. This deserves serious research. The "planck mass" of a black hole says something about the density of our space, like being deeper underwater puts more pressure on a submarine. When we build a ship and sink it into subspace like a submarine, these density calculations will be important. Fortunately, even the weakest material is already under that pressure. It's the pressure of our normal space, but does it increase or decrease or change in other patterns as we sink into subspace?
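
For scale, here's the textbook arithmetic for the Planck mass and the Schwarzschild radius of a black hole of that mass (just the standard formulas with standard constants; the subspace/density interpretation is the speculative part):

// Planck mass m = squareRoot(hbar*c/G) and Schwarzschild radius r = 2*G*m/c^2,
// using standard constants. Prints roughly 2.18e-8 kg and 3.2e-35 m
// (about 2 Planck lengths).
static void planckMassBlackHole(){
    double G = 6.674e-11;      // gravitational constant, m^3 kg^-1 s^-2
    double hbar = 1.0546e-34;  // reduced Planck constant, J*s
    double c = 2.998e8;        // speed of light, m/s
    double planckMass = Math.sqrt(hbar * c / G);
    double schwarzschildRadius = 2 * G * planckMass / (c * c);
    System.out.println("planck mass (kg): " + planckMass);
    System.out.println("schwarzschild radius (m): " + schwarzschildRadius);
}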

When we sink into subspace, we are moving away from our event horizon, in the context of all statistical clusters of patterns in the universe being modelled as black holes. An event horizon can have many dimensions, depending on how many other patterns in the universe it's entangled with. Entanglement is very naturally represented as combinations of 3-bit NAND constraints, especially in the Delayed Choice Quantum Eraser experiments. An event horizon is a fuzzy-encrypted entanglement of the patterns that part of the universe is made of.

As we sink into subspace, we see the world we knew time-dilated and length-contracted exactly as described by the relativity equations. We are moving in a direction perpendicular to the dimensions of spacetime we know about. Spacetime has infinite dimensions, some with far more magnitude (of complex coefficients of ket vectors) than others, and we must quantum-observe increased magnitude into the new dimensions to create the subspace we sink into. Another way to say it is that you have to travel near light speed to escape a black hole, not exceeding light speed, because some paths through the Hawking radiation are shorter than others.

Behind us we see the world we knew red-shifted and rotating and fuzzy-encrypting in many ways at once until we don't recognize it anymore and it looks like hawking radiation, as we fly away from a black hole.

You probably think it takes a lot of energy to fly out of a black hole, but that is only true if you use what little energy you have in very random ways. As I explained in the post above in technical detail, time is any approximate gradient of entropy, and entropy is Turing Complete, so at every lightcone there are many directions equally toward higher entropy. This causes multiverse branching, which can be calculated as a small number of NAND constraints that hook into other NAND constraints instead of copying everything. Gravity is the opposite direction of those gradients toward higher entropy. Gravity and time are 2 ends of the same measurement scale in infinitely-manyworlds-multiverse-space, and we see this gradient in the Schrodinger Equation.

Therefore, to fly out of a black hole, or hold a constant position anywhere inside or near it, takes very little energy as long as you continuously balance many of the infinite variety of entropy gradients from your light-cone. When that n-dimensional time manifold (where each dimension is a different variety of entropy that interacts with the Standard Model and the other varieties in slightly different ways) is balanced, it is still true that you are falling into the black hole from the perspective of an outside observer, but you're in a time machine and will fall into the black hole only if you choose to move in that specific direction of time in the n-dimensional time manifold. You could just as easily move away from the black hole, at the cost of increasing the fuzzy-encryption of those who observe you falling in, so it would be extremely hard to fly back to them. You'll fly in a perpendicular direction to anyone who sees you like that, in infinitely-manyworlds-multiverse-space, and instead they see you extremely slowly fly off the black hole as Hawking radiation in all directions at once, and from there over billions of years or even an infinite time (or maybe you can do it quickly from the perspective of an outside observer, but I don't know how) you tune into whatever part of the universe you want to go toward, like your ship is a radio and the universe is a continuous spread of high dimensional waves.

You fly out of the world we know and become hawking radiation, and from there you tune into whatever high dimensional wave patterns in the universe you want to navigate toward.

Distance and time are only approximations of what I call the fuzzy-encryption measure of distance.

You may want to leave some fuzzy-encryption keys spread across this world so it's easier to find your way back. Those keys would be black holes slightly bigger than "planck mass", or maybe they would be smaller and spread out into virtual particles, but I don't see how they would be stable enough that way to find again. They should be left in the mostly empty space between galaxies so they are not disturbed and they don't hurt anyone. Every black hole is a combination of 3-bit NAND constraints in relative hypersphere rotations to each other, a statistical kind of general computer. We should use these fuzzy-encryption keys as nondirectional pointers (like in our programming languages) to include the worlds we leave in the memory space of our new computing systems (which you could call very small universes of arbitrary physics we build with NANDs). That way, we can later research how to open wormholes between them and possibly return, if we wanted to, if we figure out how to, and if we dare fly back into the black hole we left or into any other.

You can fly into a black hole and live, but it would take far more skill than any of us have. It's much easier to fly out, to sink into subspace like a submarine and become Hawking radiation. As Hawking radiation, we are equally spread in many directions, balanced on all sides of the black hole we're leaving, so the patterns of our ship are still there in fuzzy-encrypted form; we are simultaneously Hawking radiation and a ship containing people breathing air.

We fall toward black holes because they are a high entropy state from our perspective, but to those inside, they are much lower entropy in fuzzy-encrypted form.

We are a black hole. Black holes are usually not dangerous, except the kind you can most easily see, like the one at the center of our galaxy.

The black holes we see in other galaxies accelerating away are outside our event horizon. Each of them adds more dimensions to the near parts of infinitely-manyworlds-multiverse-space, which we can explore after sinking into subspace like a submarine.

The religions were right about at least one thing... We do progress toward becoming light.