Preliminary note and hint for reading
In particular, the latest considerations on the topic are included. The text is easier to understand for readers with basic mathematical knowledge. Especially for those readers there are precise formulations of the introductory arguments and, building on these, quantitative considerations, among others on the nature of proper time (TimePerception). However, it isn't necessary to understand all details at once. In addition, the many footnotes may be skipped on first reading. Independently of this, some interesting passages are marked by (***). A concise formulary is separately accessible.
(InformationDef) The word "information" resp. "information quantity" plays an important role and is sometimes interpreted in various ways. Therefore it is emphasized that here the word "information" is used in the standard sense of the technical literature (information theory) [lish1] [1].
(InfinityNotApriori) At present (at the beginning of the 21st century) it is still unusual to understand the infinite, resp. infinite diversity, as something (obvious[2]) which permanently emerges through differentiations and decisions[3], and which therefore cannot be classified as existing a priori (i.e. before the present resp. in the past, as a completed totality). The basis of mathematical physics at present are the axioms of traditional set theory[4], which start out from the (a priori) existence of infinite sets resp. infinite diversity. But if something exists in the physical sense, it is already past and thus fixed and naturally restricted (cf. a. [liro]).
To avoid misunderstandings: I don't think that this is all. Obviously there is also the potential for ever new decisions, and I think this potential isn't finite (InfinitePotential), precisely because it doesn't exist in the physical sense (i.e. as measurable information) and therefore isn't determined [InfinityConcernsFuture].
Unlike the infinite, the reality which is measurable within finite time (perceptible as information[5]), i.e. the physical reality, is characterized precisely by being finite. In particular, its information content is finite.
Concerning the physically existing reality (resp. physical reality) a scientific consensus should be possible. Even Hilbert comes to the following conclusion ([lihi] p. 165, translated from the German):
"Now we have established the finiteness of reality in two directions: towards the infinitely small and towards the infinitely large."
Every physical measurement needs a finite, non-zero measurement time and provides information (cf. a. InfoConcrete) in the form of the choice of one measurement result from all possible measurement results. If infinitely many (different) measurement results were possible, the choice[6] of a measurement result could deliver infinite information (an infinite information quantity [InformationDef]). But the results of physical measurements (of finite[7] duration) never deliver an infinite quantity of information. Therefore the set of all possible measurement results is a priori finite[8].
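The argument above can be made quantitative with a small sketch (the helper function is my own illustration, not from the text): the information delivered by choosing one of N equally likely measurement results is log2(N) bits, which is finite exactly when N is finite.

```python
from math import log2

# Illustrative sketch (hypothetical helper): information gained by selecting
# one result out of n_outcomes equally likely possibilities, in bits.
def bits_per_measurement(n_outcomes: int) -> float:
    return log2(n_outcomes)

# A finite result set always yields a finite information quantity:
assert bits_per_measurement(2) == 1.0       # one binary alternative = 1 bit
assert bits_per_measurement(1024) == 10.0   # still finite
# log2(n_outcomes) grows without bound as n_outcomes does, which is why an
# infinite set of possible results would imply infinite information.
```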
This (natural) fact was pointed out in the literature long ago (cf. e.g. [lipe] p. 195). Nevertheless, even in quantum physics analytical models and derived concepts (exponential functions, operators with continuous spectra...) are still usual. Maybe many people (scientists included) cling to the model of a continuous reality (as something which already exists) not only because of the macroscopic impression but also because they think that a "continuum" is necessary for freedom of decision, since it can be subdivided infinitely often. I think there can be a bridge: the future as the subdivisible, together with freedom of decision along proper time, as primary axiom, and an ever finer approximation of the continuum as consequence; so fine subdivision not a priori, but in the course of time as a result of decisions. The order (Order) is important.
In physical reality only a finite information quantity can be processed within a finite (proper) time[9] interval. For mathematical models whose representation requires the processing of an infinite quantity of information, for example irrational numbers, no (exact) equivalent exists in physical reality. So mathematical calculations which have an equivalent in physical reality can only be rational combinations of rational numbers. From this, conclusions arise for the foundations of mathematical physics.
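As a minimal sketch of this point (using Python's exact rational arithmetic; the concrete numbers are my choice of illustration): rational combinations of rational numbers stay exactly representable with a finite quantity of information, while an irrational number such as the square root of 2 can only be approximated.

```python
from fractions import Fraction

# Rational combinations of rational numbers remain exact rationals:
a = Fraction(3, 8)
b = Fraction(5, 16)
exact = a * b + a / b                # addition, multiplication, division
assert exact == Fraction(843, 640)   # finitely many digits suffice

# By contrast, sqrt(2) has no finite rational representation; any chain of
# elementary combinations can only approximate it (here 99/70 squared):
approx = Fraction(99, 70)
assert abs(approx**2 - 2) < Fraction(1, 1000)
```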
Complicated[10] analytic functions like
etc. are frequently used for the description of physical (natural) processes. The question arises for a fundamental explanation of the fact that those functions can be used to make approximate predictions of physical measurement results (that is, a limited forecast of perception resp. future). Such explanations should[11] be based on axioms as simple as possible.
We now start out from the assumption that those axioms permit only a finite number of elementary combinations per time unit, where elementary combinations (ElementaryCombination) are defined as addition resp. subtraction or multiplication resp. division of integer quantities resp. numbers. This assumption seems justified, because such elementary combinations are exactly conceivable (comprehensible) within finite time, at least in the potential sense, for example by counting. That's important, because something which is perceptible is also conceivable, at least in the potential sense.
There are common mathematical models, e.g. irrational numbers, which aren't conceivable within finite time and therefore aren't perceptible (at any time, in any representation, in complete exactness[12]). Therefore these models deviate from perceptible reality[13] after some time and are in principle unsuitable for an exact elementary approach to it[14], even if these models deliver, and will continue to deliver, good approximate results.
Of course mathematical partial models can also be very helpful (helpful) (especially for approximate[15] calculations) and are thus fully justified[16], for example in the case of macroscopic particle numbers greater than 10^26 and a still much larger number of combinations n per proper time unit (ProperTimeUnit)… We only should guard against over-interpretation[17] of our models of thought, because they are not equivalent to reality (AnalysisAtBestApproximative). In particular, if we forget the simplifications contained in our model, we block[18] any "thinking beyond the model". This problem of course also affects my mathematical suggestions. Here too, approximate considerations are occasionally used (as bridging), e.g. the use of the Stirling formula. I hope that it remains clear where simplification begins. Perhaps sometime there will be a possibility to speak about this.
First, the following preliminary chapter should clarify the mentioned difficulties of current models: they orient themselves too little towards our fundamental decision and perception process. In particular, the natural sequence (Order) of the combinations, which (on the large and on the small scale) are cause and result of our decisions and perceptions (resp. measurements), isn't taken into account in these models.
For example the axiom of choice (on which basic analytical concepts build) postulates a priori the existence of infinitely many decisions; from the physical point of view a contradiction in terms, because "a priori" means "before the present" resp. "in the past", but the past is finite.
In mathematical physics analytical approaches and concepts (i.e. approaches and concepts of analysis) are quite common, and with them the usage of continuous, a priori infinite sets. The most important examples of those sets are the complex and real numbers. They form a metric space equipped with the absolute value norm. Hilbert[19] spaces play a central role in quantum physics. An important property of these metric Hausdorff spaces is completeness, i.e. every Cauchy sequence converges towards a limit which is contained in the space. This is problematic if used for the description of nature, because for the exact description of the limit of a Cauchy sequence it is mostly[20] necessary to carry out (in isolation, before any interaction with the surroundings[21]) an infinite set of approximation steps, if one allows in each case only elementary combinations [ElementaryCombination]. This implicitly means (uncoupled from the natural order (Order)) an infinite number of decisions[22] (application of the axiom of choice[23] [limy]), and therefore the processing or production of an infinitely large quantity of information,[24] which isn't possible under natural conditions within finite time[25] (with finite availability of free energy (FreeEnergy)).
So in retrospect the appearance of quantum phenomena in physical reality [PhysicalReality] would have been predictable (cf. a. [NoAnticipation]): it not only confirms the fundamental limitation of the quantity of information which is perceptible within finite proper time, it also shows (among other things) that basic analytic concepts (i.e. continuous sets of numbers) are models which deviate from reality (AnalysisAtBestApproximative).
It is no miracle, therefore, that mathematical models which are based on the completeness of the underlying metric spaces can only be restrictedly valid for the description of actual natural events[26]. The problem lies quite similarly in other models which, among others, are also based on the axiom of choice (often indirectly and hidden)[27], or which start out in some other way from the infinite (from infinite diversity) as something which already exists [InfinityNotApriori].
Especially in calculation models whose validation is not guaranteed, because of missing or only indirect experimental possibilities, there is a strong[28] likelihood that the sequence of calculation steps (implicitly connected to decisions and perceptions[29] within the restricted model) differs from the natural order (Order) of the combinations connected to decisions[30] and perceptions. Nonsensical calculation results are the consequence. The difficulties lying in these results are (more or less) known to many insiders and should also be evident in publications[31].
If our considerations are not to be superficial, the mentioned problem is fundamental and severe. Therefore we have to accept that a mathematical approach which is faithful to physical reality is also finite (finite according to http://arXiv.org/abs/quant-ph/0108121 ). There are two possible methods on the way to this approach:
1. One continues to work with the usual mathematical models, which presuppose the a priori existence of infinite sets and the axiom of choice, and hopes that sometime the infinities can be reduced in such a way that the resulting approach is finally finite.
2. One starts with plausible approaches which don't presuppose a priori infinite sets (which are finite from the beginning and thus, among other things, also discrete) and looks for ways which (in the borderline case "n to infinity") join into current (quantum) physics; a start from the other side, to meet in the middle (again).
As far as I know, up to the beginning of the 21st century only the 1st possibility has been considered in the literature[32]. Due to the variety of publications it is difficult for me to estimate how far the hope for a complete reduction of all infinities is legitimate, but I get the increasing impression that unnecessarily much intellectual capacity is invested in this possibility; the method "explore and hope for reduction" allows many wanderings.
Without a guideline there are (too) many possibilities (TooManyPossibilities). Already today there are so many special subjects (and special-purpose languages) that an increasing hindrance of communication is noticeable. Due to the nature of the matter, substantial progress on the way to an exact[33] (and therefore also finite) approach is probably only possible if research is done without the axiom of choice, continuous number sets and all model concepts derived from them, even if this is difficult[34] at first.
This is one of the reasons which caused me to try the 2nd possibility. Here it is reasonable to work on the principle that finiteness is reflected in the fact that the number of combinations (mapped onto elementary arithmetic steps) leading from a decision (to measure) to a perception (of a measurement result) is finite exactly if the measuring time is finite. This is the case in the approach discussed below [RecombinationCountFinite], in which the progress of proper time is related to a meeting of an (own) pattern with a (formerly separated) counter-pattern (TimePerception).
No doubt, the way of the information from our decisions to our perceptions follows a physical law. Because of the problems shown, this law cannot be of an analytical and therefore continuous (for example geometric[35]) nature. Primarily it must be a discrete, combinatoric law, for which the name combination law is adequate. Of course, in the case of a macroscopic measurement many combinations happen, so that the borderline case of continuous, geometric appearance results.
Abbreviated prehistory: Around 1992 I noticed that we can only recognize codes for which we have the counter-code (the decoding code). So before every measurement we have to send away from us something like an "anti-pattern" or "test pattern" (later I recognized that this action lies in our decision for the measurement), and the change of the returning pattern (relative to the original) contains the information of the measurement result. I studied the probabilities for the return of the test pattern and noticed that they correspond to the coefficients of the Taylor series expansion of the function 1/√(1-x^2). It is well known that for x=v/c this function is proportional to the relativistic time dilation. So, shortly spoken, proper time is proportional to the sum of the probabilities for return, and it is plausible to assume that the progress of time is necessarily connected with return events, i.e. "central meetings" (in the middle, in the vertical symmetry axis of a symmetric binomial distribution, cf. (Q0Triangle)).
One important advance connected with this approach is that it gives first insight into the (of course finite) ways of information from decision (to measure) to perception (of the measurement result), and that it contains only finitely many arithmetic steps: from the start in the present center until the return to the center. There is no a priori necessity[36] for analytic models [which implicitly uncouple[37] physical reality from consciousness, hide the connection between our frames of reference and so can lead to a wrong (egoistic) philosophy [egoism]].
The next chapter should give insight into the nature, way and sequence of the combinations which are connected to our decision and perception process, and make first mathematical suggestions, of course only as far as I could get ideas about it. I hope you will then understand why I have named these combinations "recombinations". In my thoughts I have approached the topic in a similar way to how the next chapter is written.
It is already known that the concept of a "(smallest) particle" is only a model idea which doesn't correspond (exactly) to reality. Of course it is understandable that this model concept is frequently used: it is easy to imagine matter as a composition of smallest particles or "building blocks"[38] (with firm, absolute sizes), and this is helpful for calculations. But it isn't consistent to remain with this model (or the mathematically equivalent wave model[39]).
When we try to free ourselves from this model idea, first of all the question of the basis of metric[40] sizes arises: what is small, what is big? It is known that measurement results regarding this depend on the observer's "point" of view. In this context the functions
QV(x) = 1/√(1-x^2) and QW(x) = √(1-x^2)
play an important role both in geometry and in physics, e.g. QV(v/c)[41] as the factor of relativistic time dilation or QW(v/c) as the factor of length contraction (cf. a. [GyroscopeModel]). These functions contain only an approximate approach, but after all they are used for calculations which don't start out from absolute scales but from a non-linear[42] connection between the sizes of the observer system and the watched system.
(BridgesToRel) Relativity theory (problematically) presupposes continuous geometry, and the frequently used functions QV and QW lead to irrational results. The following discrete considerations (among others on finite series expansions of QV and QW, cf. (TaylorQV), (TimeDilation) and (TaylorQW)) avoid this from the beginning and at the same time offer approaches for a bridge (***) from relativity theory to quantum physics.
Correlation means flow of information, and this is necessary for every observation (and for finished perception also back[43] (FinishedPerception)). The conservation laws [Cons0Sum] show that the calculation is exact in the end.
(To be new, the transmitted new information component should lie in (physical) quantities which are temporarily released from this connection (correlation coefficient resp. scalar product is 0; geometrically: orthogonal). Probably therefore the individual flow of information must change direction[44] (DirectionChanges) at every new observation.)
As generally known, information (initially) is transmitted by photons[45]. These information packets move to the following target point [FollowingMeeting] (where they are absorbed) with light speed, and no information exchange is possible in between (otherwise we would regard the "in between" as the following target, [WayTimeConstantTillNextMeeting]). Especially in an elementary consideration, at the moment of the start we don't know which of two elementary directions[46], which spin the photon will choose, resp. which possibility is predestined (by an earlier decision). In abbreviated formulation[47]: we have no information about the destination of the photon or its next decision.
Now we come to the (combinatorics) (InformationPath):
The elementary information unit is the bit, which means the information about the choice of one of two possible alternatives. Let us assume that we have no information about the next decision resp. the next direction. No direction is preferred in the beginning (DecisionFreedom). Thus both alternatives have equal probability p=(1-p)=1/2. Now each of these possibilities again is the starting point for a new decision, etc. The probabilities emerging in this way for the different ramification possibilities resp. recombination points form a symmetrical binomial distribution (cf. [lifa] p. 245-281, [ligr] p. 153-256, also[48] [lied] p. 3), which shall be called the "Q0 triangle"[49] in the following.
n k-> -9 -8 -7 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 7 8 9
↓
0 1' *1/1
1 1 1 *1/2
2 1 2' 1 *1/4
3 1 3 3 1 *1/8
4 1 4 6' 4 1 *1/16
5 1 5 10 10 5 1 *1/32
6 1 6 15 20' 15 6 1 *1/64
7 1 7 21 35 35 21 7 1 *1/128
8 1 8 28 56 70' 56 28 8 1 *1/256
9 1 9 36 84 126 126 84 36 9 1 *1/512
...
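The triangle above can be generated directly; the following sketch (the function names are my own) builds a row of the Q0 triangle with exact rational arithmetic and reproduces the primed central values.

```python
from fractions import Fraction
from math import comb

def q0_row(n):
    """Probabilities of Q0 triangle row n (at positions k = -n, -n+2, ..., n)."""
    return [Fraction(comb(n, j), 2**n) for j in range(n + 1)]

def q0z(n):
    """Central meeting probability Q0Z(n); nonzero only for even n."""
    return Fraction(comb(n, n // 2), 2**n) if n % 2 == 0 else Fraction(0)

# Row 4 of the table: (1, 4, 6, 4, 1) * 1/16
assert q0_row(4) == [Fraction(j, 16) for j in (1, 4, 6, 4, 1)]
# The primed central values 1', 2', 6', 20', 70' with their row factors:
assert [q0z(n) for n in (0, 2, 4, 6, 8)] == \
       [Fraction(1), Fraction(1, 2), Fraction(3, 8), Fraction(5, 16), Fraction(35, 128)]
```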
The probabilities on the vertical symmetry axis of the triangle are indicated by " ' ". These are the probabilities of meeting events (CentralMeeting) in the center, the "central meeting probabilities", which depend on the row number n. We shall call them Q0Z(n) and obtain (for even n; for odd n, Q0Z(n) = 0)
Q0Z(n) = C(n, n/2) / 2^n ,
where C(n, k) denotes the binomial coefficient.
The resulting table for Q0Z(n) is
n       0     2     4     6     8    ...
Q0Z(n)  1    1/2   3/8   5/16  35/128 ...
On the other hand the Taylor[50] series expansion of QV(x) is
QV(x) = 1/√(1-x^2) = 1 + (1/2)x^2 + (3/8)x^4 + (5/16)x^6 + (35/128)x^8 + ...
Remember that QV(x) is the factor of relativistic time dilation (TimeDilation) (cf. e.g. [lifl] p. 26 and 27).
(CounterPattern) The (vertical) sum[51] of the "central meeting probabilities" (probabilities for return) corresponds (because[52] of 4p(1-p) = x^2) to the Taylor series expansion of QV(x), in the case x -> 1 resp. v -> c. In the case v=c (photon speed resp. speed of light) the probability of a step to the right is exactly equal to that of a step to the left, so the next step direction is completely undetermined, and in the middle resp. in the center (the vertical symmetry axis k=0) the probability is maximal (VisCinMiddle). Because it is just the speed of light resp. the flight speed of information which is assigned to the central meeting probabilities, it is natural to use the word "information" for that which arrives here and (after recombination) is sent out again. [InfoConcrete]
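This correspondence can be checked numerically. In the sketch below (my own illustration) the step probability p is arbitrary, x^2 = 4p(1-p), and the probability of returning to the center at row 2m is compared with the x^(2m) term of the Taylor series of QV(x) = 1/√(1-x^2).

```python
from fractions import Fraction
from math import comb

def return_probability(m, p):
    """Probability of being back in the center k=0 at row 2m for step probability p."""
    return comb(2 * m, m) * (p * (1 - p))**m

def qv_taylor_coefficient(m):
    """Coefficient of x^(2m) in the Taylor series of 1/sqrt(1 - x^2)."""
    return Fraction(comb(2 * m, m), 4**m)

p = Fraction(1, 4)          # arbitrary sample value for the step probability
x2 = 4 * p * (1 - p)        # the substitution 4p(1-p) = x^2
for m in range(8):
    assert return_probability(m, p) == qv_taylor_coefficient(m) * x2**m
```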
If 0 < x < 1, we have a correspondence to the case that the probability p of a step to the right differs from the probability of a step to the left, i.e. p is unequal to 1/2. This means that here we already have more or less information about the next decision there, i.e. more or less information exchange[53] has already been possible between here and there. The proper times[54] of here and there are not orthogonal, i.e. the "correlation coefficient[55] of decision resp. perception" isn't zero, but has a parallel, common component [CommonComponent]. Because of inertia this is valid along a series[56] of steps, in which proper time here seems to run QV(x) times faster than there [TimeDilation], where the factor QV(x) is equivalent to the sum of the own central meeting probabilities [PerceptionInCenter].
In an analogous way, also from our own (local, individual) point of view the central meeting probabilities correspond to the probabilities (per double step n -> n+2) that the information which had been separated[57] from us by our decisions (in the form of free energy [FreeEnergy]), resp. the information starting from us, returns to us and is perceived[58] by us again in recombined[59] form. This can be understood as indicating that with our temporal perception (TimePerception), with each proper time progress, necessarily the re-union[60] of our own (local, individual) pattern and counter-pattern[61] [CounterPattern] (which has been separated by our decisions before) is connected; that in the end exactly that pattern is perceptible for us which is descended from us[62] [OwnPerception] (separated by our own former decisions[63]), whereby (outside of present consciousness) in between more or less many recombinations happened[64] (***) (PTimePropSumQ0).
It may be that at second glance much of this conclusion is obvious without much arithmetic, simply due to the constancy of the light or information speed relative to us personally.
So the formulae indicate that all information[65] which we now receive (input) is descended from information which we formerly have sent (output), and, of course, that future input will also be completely a consequence of former output (***).
(TPropN2) If we define the proper time up to row n as the sum of the central meeting probabilities,
t(n) = Q0Z(0) + Q0Z(2) + ... + Q0Z(n) (n even),
then, by the Stirling formula, for large n holds
t(n) ≈ √(2n/π), i.e. n ≈ (π/2) t(n)^2.
So if we start out from the assumption that (in case of no reference system change, flat model) the sum of the central meeting probabilities Q0Z (return probabilities) is proportional to the proper time t, then (for large n) the step resp. row number n increases proportionally to the square of the proper time, just like the length of the distance covered during constant acceleration. Gravitation, too, conveys the impression of constant acceleration, in case of "constant" distance (and missing centrifugal force). For the judgment of the "constancy" of distance it has to be considered that with the row number n also the row length (and with that the own length scale) increases as the square of the proper time. Of course more exact considerations should also take into account the distance dependence: the probability for way there and way back between recombination points is the greater, the smaller the relative distance between them.
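A numerical sketch of this asymptotic (with t(n) taken, as above, as the sum of the return probabilities up to row n): t(n) approaches √(2n/π), so n grows like (π/2)t².

```python
from math import pi, sqrt

def t_of_n(n):
    """Sum of the central return probabilities Q0Z(0) + Q0Z(2) + ... + Q0Z(n)."""
    u, total = 1.0, 0.0
    for m in range(n // 2 + 1):
        total += u                        # u = Q0Z(2m)
        u *= (2 * m + 1) / (2 * m + 2)    # ratio Q0Z(2m+2)/Q0Z(2m)
    return total

n = 100000
t = t_of_n(n)
assert abs(t / sqrt(2 * n / pi) - 1.0) < 1e-2   # t(n) ~ sqrt(2n/pi)
assert abs((pi / 2) * t**2 / n - 1.0) < 1e-2    # hence n ~ (pi/2) t^2
```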
The presented model, which works with the Q0 triangle, is surely incomplete (and also too flat, cf. a. [NotFlat]); questions like "Where does the source at the start come from?" cannot be answered in it. A modified Q0 triangle shall be introduced now, in which the central meeting probabilities "flow out" and therefore, for the present, cannot be sources in the actual system any more (by the fact that they are set to 0).
They could then flow out into another (orthogonal) direction and come back little by little[66]. Symmetry considerations (respecting the conservation laws) can give first hints where and how this happens. If, for example, something flows out in the center (concerning both sides exactly symmetrically), then the total effect of the returning must also concern both sides in a symmetric way (e.g. output in k=0 <-> input in k=0, or symmetrically around k=0; generally the number of drains isn't necessarily equal to the number of sources. Multiple points (PerceptionOfMultiplicity) of inflow and of outflow (also "backflow"[67]) are possible per proper time unit (ProperTimeUnit).)
The following "Q1 triangle" bases on the assumption, that during measure resp. perception process all of the central meeting probabilities[68] is taken away, so that they can't be sources (for superposition, interference) in the same triangle any longer (directed flow of information).
So they become incompatible with each other. The definition of incompatibility can be formulated in various ways, e.g.:
1. Incompatibility of two events means that they cannot happen[69] simultaneously.
2. Events not appearing at the same time are incompatible with each other if the first excludes the following.
The second definition applies to our case: if a single quantum has "flown out" centrally in the previous row, it cannot flow out in the next but one row again (DistinguishableOrder).
The more central probabilities "flow out" (are set to 0) with increasing row number n, the more disconnected the left and right sides of the triangle become. Therefore a perception resp. measurement so described (QuantumPhysicalObservation) causes (quantifiable) separation (***) resp. separability (according to the perception[70]), which in turn makes a decision possible; in this model between the left and the right[71], in a multidimensional[72] approach perhaps among others also between "inside" and "outside"[73] or past and future [IOtime]. Geometrical concepts like orthogonality are transferable to [information-]theoretical concepts, cf. (orthogonal).
If one provides the probabilities in the normal Q0 triangle on the right side with a negative[74] sign (cf. [ProbabilityAmplitudes]), the following "Q1 triangle" results. It is a modified Q0 triangle with the central probabilities[75] set to 0:
n k-> -9 -8 -7 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 7 8 9
↓
0 ±1 *1/1
1 1 -1 *1/2
2 1 0 -1 *1/4
3 1 1 -1 -1 *1/8
4 1 2 0 -2 -1 *1/16
5 1 3 2 -2 -3 -1 *1/32
6 1 4 5 0 -5 -4 -1 *1/64
7 1 5 9 5 -5 -9 -5 -1 *1/128
8 1 6 14 14 0 -14 -14 -6 -1 *1/256
9 1 7 20 28 14 -14 -28 -20 -7 -1 *1/512
...
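The Q1 triangle values above can be generated by a simple recursion (a sketch, assuming the construction just described: the antisymmetric start ±1 splits into (+1, -1)/2, after which every entry is the mean of its two upper neighbours; the opposite signs cancel in the center, which plays the role of the central "outflow").

```python
from fractions import Fraction

def q1_rows(n_max):
    """Rows 1..n_max of the Q1 triangle as {k: probability} dictionaries."""
    rows = {1: {-1: Fraction(1, 2), 1: Fraction(-1, 2)}}
    for n in range(2, n_max + 1):
        prev = rows[n - 1]
        rows[n] = {k: (prev.get(k - 1, 0) + prev.get(k + 1, 0)) / 2
                   for k in range(-n, n + 1, 2)}
    return rows

rows = q1_rows(8)
# Row 8 of the table: (1, 6, 14, 14, 0, -14, -14, -6, -1) * 1/256
assert [rows[8][k] * 256 for k in range(-8, 9, 2)] == [1, 6, 14, 14, 0, -14, -14, -6, -1]
```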
It is obvious that the absolute values of the probabilities decrease (centrally they flow out[76]).
For even n the sum of the absolute values of all probabilities in row n equals the central meeting probability Q0Z(n) of the normal Q0 triangle; the sum of the squares of all probabilities in row n of the normal Q0 triangle equals Q0Z(2n). The signed sum yields 0, which matches the conservation laws well (Q1RowSumIs0).
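These identities can be verified directly (a sketch under my reading that the absolute-value sum refers to the Q1 row and the square sum to the corresponding Q0 row; the square-sum statement is a classical binomial identity).

```python
from fractions import Fraction
from math import comb

def q1_row(n):
    """Row n of the Q1 triangle, built by the averaging recursion."""
    row = {-1: Fraction(1, 2), 1: Fraction(-1, 2)}
    for m in range(2, n + 1):
        row = {k: (row.get(k - 1, 0) + row.get(k + 1, 0)) / 2
               for k in range(-m, m + 1, 2)}
    return row

def q0z(n):
    return Fraction(comb(n, n // 2), 2**n)

for n in (2, 4, 6, 8):
    q1 = q1_row(n)
    assert sum(abs(v) for v in q1.values()) == q0z(n)   # |.|-sum equals Q0Z(n)
    assert sum(q1.values()) == 0                        # signed sum is 0
    # sum of squared Q0 row probabilities equals Q0Z(2n):
    assert sum(Fraction(comb(n, j), 2**n)**2 for j in range(n + 1)) == q0z(2 * n)
```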
One can arithmetically regard this Q1 triangle as a discrete differentiation [DiscreteDiff] of the Q0 triangle along k, the horizontal direction. So the removal (perceiving) of the Q0Z results in a differentiation along the horizontal direction (difference left-right, d/dk). Perception surely also means differentiation along the time axis (difference future-past, vertical, d/dn), and the correspondence [TimeSpaceCorrInMiddle] of vertical and horizontal differentiation in the middle is remarkable.
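The discrete-derivative reading can be stated exactly (a sketch; the relation below is my formulation of the "difference left-right"): Q1(n, k) = (Q0(n-1, k+1) - Q0(n-1, k-1)) / 2 holds for every entry.

```python
from fractions import Fraction
from math import comb

def q0(n, k):
    """Q0 triangle probability at row n, position k (0 off the lattice)."""
    if (n + k) % 2 != 0 or abs(k) > n:
        return Fraction(0)
    return Fraction(comb(n, (n + k) // 2), 2**n)

def q1_row(n):
    """Row n of the Q1 triangle, built by the averaging recursion."""
    row = {-1: Fraction(1, 2), 1: Fraction(-1, 2)}
    for m in range(2, n + 1):
        row = {k: (row.get(k - 1, 0) + row.get(k + 1, 0)) / 2
               for k in range(-m, m + 1, 2)}
    return row

# Q1 row n is a central difference of Q0 row n-1 along k:
for n in (2, 3, 4, 7, 8):
    for k, v in q1_row(n).items():
        assert v == (q0(n - 1, k + 1) - q0(n - 1, k - 1)) / 2
```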
The graph of a multiply (discretely) differentiated Q0 function yields a continuous-seeming, wave-like picture after a larger number of recombinations [wavelike]. There is a far-reaching analogy between those multiply discretely differentiated functions and the solutions of the quantum mechanical harmonic oscillator: by multiple discrete differentiation the construction of orthogonal systems is possible, analogous to the Hermite polynomials [HermPolDiscrete]. Here the Hermite polynomials [HermPol] are (except for sign) special cases of the pre-factors resulting from multiple differentiation in the analytic borderline case. Further considerations concerning integration and differentiation are possible; some can be found in the download files.
Particularly demanding: how can several triangles[77] be combined in different (how many?) directions (NotFlat)? Here the system must remain (open)[78] (***). Do multiple applications of the Maxwell equations give partial hints? How can the recombination points be connected (in a symmetric way) to get broad analogies to the physical measurement, distinction and decision process (cf. [PauliMatrices][79])? Which recombination points are (dependent on observer location) distinguishable in time[80], which distinguishable in localization, which seem to be a unit (cf. [ElementaryCoordinates])?
If there is a physical interaction between two systems, there is a way between them over more or less many recombination points. The shorter the passage, the more probable it is on average (per proper time unit (ProperTimeUnit)) and the stronger it appears. For example the strong interaction probably goes over only relatively few recombination points. This also permits more symmetries. The weak interaction however probably needs more recombinations, so that this complex connection no longer has left-right symmetry. It needs more time, from which the possibility of a comparison with a past left-right definition arises.
For every even row number n>0 let |Q2Z(n)| = -Q2Z(n) be the "outflowing probability", i.e. the probability for flowing out[81] centrally. Q2Z(n) is equivalent to the 1st (discrete) derivative of Q1(n,k) in k=0 along k, i.e. Q2Z(n) = (Q1(n-1,1) - Q1(n-1,-1))/2; so Q2Z(n) is, in k=0, the 2nd derivative of Q0(n,k) along k. It holds:
Q2Z(n) = -Q0Z(n)/(n-1) = -C(n, n/2) / ((n-1) 2^n).
The resulting table for Q2Z(n) is
n       2     4     6     8    ...
Q2Z(n) -1/2  -1/8  -1/16 -5/128 ...
On the other hand the Taylor series expansion (TaylorQW) of QW(x) is
QW(x) = √(1-x^2) = 1 - (1/2)x^2 - (1/8)x^4 - (1/16)x^6 - (5/128)x^8 - ...
The coefficients of the Taylor series expansion of QW(x) correspond to the negative probabilities flowing out centrally. If a system is separated from us by a potential x^2 (if it moves for example with v/c = x relative to us), then this expression is proportional to the component (CommonComponent) resp. part of time (reality) which is common to us and the observed system, which also belongs to our own proper time[82] and present and therefore will also become our own past [ToPast]. It is the greater, the smaller the separation potential x^2 is. One can also think about a matching, more exact description of the initial situation in the Q1 triangle [StartQ1]. The formula for all probabilities in the Q1 triangle can be found in the addendum [FormulaQ1].
A discrete differentiation (DiscreteDiff), here along (proper) time is clearly dependent on the relation subject/object.
(hAsConstantProduct) We know that Planck's quantum of action can be understood as the product t*E of the proper time t (of a measurement) and the energy uncertainty E (of the measurement result). Proper time resp. measuring time is proportional to the function QV(x) resp. to the sum of the return probabilities in the Q0 triangle (PTimePropSumQ0). The partial sum of the Taylor series expansion of QV(x) (TaylorQV) up to the 2n-th power of x corresponds to the sum of the return probabilities up to row 2n in the Q0 triangle. Analogously the partial sum of the Taylor series expansion of 1/QV(x) = QW(x) up to the 2n-th power of x (TaylorQW) corresponds to the probability of reaching row 2n without return to the center (in k=0). Because the non-returning part isn't measured, it remains uncertain, so we could understand QW(x) as proportional to the uncertainty of the energy. With QW(x) * QV(x) also t*E = h is constant. Even in the borderline case v -> c resp. x -> 1 the product of the partial sums (TaylorQV) and (TaylorQW) is constant:
(Q0Z(0) + Q0Z(2) + ... + Q0Z(2n)) * Q0Z(2n) = (2n+1) * Q0Z(2n)^2 -> 2/π for n -> ∞.
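A numeric check of this constancy (a sketch; at x = 1 the QV partial sum up to row 2n is the summed return probability, while the QW partial sum equals Q0Z(2n), the probability of no return):

```python
from math import comb, pi

def q0z(n):
    """Central return probability Q0Z(n) for even n, as a float."""
    return comb(n, n // 2) / 2**n

def qv_partial(n):
    """Partial sum of the QV series at x = 1 up to the x^(2n) term."""
    return sum(q0z(2 * m) for m in range(n + 1))

# The product of the two partial sums stays near the constant 2/pi:
for n in (10, 100, 1000):
    product = qv_partial(n) * q0z(2 * n)   # QW partial sum at x=1 is q0z(2n)
    assert abs(product - 2 / pi) < 1 / n
```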
Due to the shown coherences it is of course reasonable to examine finite partial sums of the Taylor series expansions of QV(x) resp. QW(x) more exactly, also in the case of imaginary[83] x, of |x| = 1 and even of |x| > 1. The corresponding "probabilities" for steps to the right or to the left would then no longer be limited to the real interval [0,1][84], because of 4p(1-p) = x^2.
The mean vertical reach (1st order momentum) of |Q2Z(n)| from row n=1 on up to the outflow is equal to the sum of the Q0Z(n) from row n=2 on (cf. [DefQ2Z]); i.e. the mean reach from row n=0 on is equal to the sum of the Q0Z(n) from row n=0 on and therefore approximates QV(x). On closer occupation with the topic many coherences stand out (cf. [DeviationQ1Equal1], the formulary in wqm or the concise formulary). A more exact definition of concepts like "simultaneity" resp. "concomitance" would be necessary, as well as a more exact analysis of the process of new creation[85] of information and of the process of copying[86] (also parallelizing) information. The formation of scalar products (Scalarproduct) in horizontal and vertical (and even sloping) direction in the triangle could serve as a bridge to the common mathematical framework of quantum physics[87]. Because of the underlying recombination principle, and for the study of different branching resp. connection possibilities in the triangle, the consultation of specialists in [combinatorics] and graph theory [RecommendedGraphTheoreticalResearch] can be helpful (perhaps even of geneticists or of mathematicians active in the field of genetics).
(DeviationQ1Equal1) An example of arithmetical coherences:
In connection with the constancy of the Planck effect quantum,
the constancy of the mean deviation (1st order momentum) of the Q1 in horizontal direction is also interesting (but the information theoretical consideration described in [hAsConstantProduct] seems more reasonable to me). For example the following (surely abbreviated) interpretation may be a first suggestion for further thinking:
Here the summation goes over both horizontal halves; the momentum was decomposed into time * force. The time ET was interpreted as a non-divisible unit, as elementary time between the beginning and end of the summation (the integral), and the force as the current probability of flowing out, i.e. the probability of leaving, within the time ET, the relative[88] location with maximum momentum (with maximal speed v=c).
(A possible bridge to quantum physics: One could consider the Compton effect (***) (the interaction of a photon with matter; I also think of the generalized Compton effect as the interaction of long-wave photons with matter) as an outflow event (or a result of connected outflow events) within the Q1 triangle. At this the energy of the photon is reduced; a part flows out, analogous to the reduction of the horizontal sum (over k) of the probabilities Q1(n,k)[89]. Nevertheless the angular momentum remains the same due to the mentioned formula; the photon "is stretched".)
The Planck effect quantum hq was still interpreted here quite graphically (as a product of physical quantities). [A priori] An information theoretical interpretation is more consistent, though (hAsConstantProduct) [lish1]. One can also understand hq as energy * time and so as information * proper time. On average, much information can be given [give] resp. transferred only along short time intervals, little information along longer intervals. Of course it has to be considered here that the conversion factor from energy * time to information * proper time isn't a constant but a function, dependent on the extent of branching depth and on renormalization.
The exact formula of the Q1 function is:
Let n be a natural number, k an integer with absolute value smaller than or equal to n, and p a number in the interval [0,1]. If we define
and
then the following holds:
The Q1 triangle results from a superposition of two Q0 triangles with opposite sign, starting in positions n=1, k=±1, after multiplication by 1/2. Addition of both means a discrete differentiation (DiscreteDiff)[91] along k. The formula then arises from the difference quotient with minimal dk, i.e. dk=2:
(Q0 (n, k + 2) - Q0 (n, k)) / 2 = Q1 (n + 1, k + 1) .
Analogously one can perform m-th order (discrete) differentiation by using row n=m of the Q0 triangle, equipped with signs alternating along k, as the starting row[92] (BinCoeffDiffMatrix). The initial zigzag of the accompanying function graph flattens in the following rows into m+1 continuous-seeming waves (wavelike). Discrete alternating state functions can also cause wave-like phenomena (probability distributions), if the initial situation (e.g. the binomially distributed discrete analogue of the "phase angle" [PhaseAngle]) is not sharp. The superposition of many rows with even (or odd) row number n and respectively alternating sign yields a wave-like picture.
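The flattening of the alternating starting row into m+1 wave segments can be illustrated with a small sketch; the recombination rule T(n,k) = T(n-1,k-1) + T(n-1,k+1) is assumed as the propagation step:

```python
from math import comb

def propagate(row, steps):
    """Repeated recombination step T(n, k) = T(n-1, k-1) + T(n-1, k+1).
    Rows are dicts k -> value."""
    for _ in range(steps):
        new = {}
        for k, v in row.items():
            new[k - 1] = new.get(k - 1, 0) + v
            new[k + 1] = new.get(k + 1, 0) + v
        row = new
    return row

def binomial_seed(m):
    """Row m of the Pascal triangle with alternating signs: the starting
    row of an m-th order discrete differentiation (BinCoeffDiffMatrix)."""
    return {2 * j - m: (-1) ** j * comb(m, j) for j in range(m + 1)}

def sign_changes(row):
    vals = [v for _, v in sorted(row.items()) if v != 0]
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)

# the propagated rows keep exactly m sign changes,
# i.e. they consist of m+1 "wave" segments of constant sign
for m in range(4):
    print(m, sign_changes(propagate(binomial_seed(m), 8)))
```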
It should still be mentioned that a second order discrete differentiation (finite difference) along k means[93] a first order discrete differentiation (finite difference) along n:
A connection to the (***) Schrödinger equation (Schroedinger) seems reasonable.
The correspondence of vertical and horizontal differentiation in the middle is remarkable (TimeSpaceCorrInMiddle):
Q1(n+1,1) = Q0(n+2,0) - Q0(n,0) = (Q0(n,2) - Q0(n,0)) / 2
The middle (the vertical symmetry axis k=0) represents the relativistic borderline case [VisCinMiddle]; in the common theory, too, time and location coordinates attain equal status in the relativistic borderline case.
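The identity displayed above can be verified directly from the definitions; a small sketch, with Q1 taken from the difference-quotient formula (Q0(n,k+2) - Q0(n,k))/2 = Q1(n+1,k+1):

```python
from math import comb

def q0(n, k):
    """Q0(n, k): symmetric binomial probability; nonzero only if n+k is even."""
    if (n + k) % 2 or abs(k) > n:
        return 0.0
    return comb(n, (n + k) // 2) / 2 ** n

def q1(n, k):
    """Q1 via discrete differentiation of Q0 along k (DiscreteDiff):
    Q1(n+1, k+1) = (Q0(n, k+2) - Q0(n, k)) / 2."""
    return (q0(n - 1, k + 1) - q0(n - 1, k - 1)) / 2

for n in range(2, 10, 2):
    print(n, q1(n + 1, 1), q0(n + 2, 0) - q0(n, 0), (q0(n, 2) - q0(n, 0)) / 2)
    # all three values agree for every n
```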
(ProbabilityAmplitudes) [liba] [libo] [lifi] [liko] [lipa] In quantum physics one usually talks about complex valued probability amplitudes; the corresponding probabilities are calculated secondarily from the squared absolute value, i.e. from the scalar product with the corresponding complex conjugated probability amplitudes. The central meeting probabilities Q0Z(n) (resp. |Q2Z(n)|) also correspond to such a scalar product. In the presented example (cf. Scalarproduct) real probability amplitudes are used; even if the currently used probability amplitudes are complex, their scalar product has to be real, if measurable. As a bridge to current concepts one could also e.g. identify the absolute value of Q0(n,k) resp. Q1(n,k) as the absolute value of a probability amplitude whose phase angle varies with n and k (cf. RowSumAsWave). It has to be remarked here that in the context of discrete considerations the phase angle (PhaseAngle) cannot be calculated (exactly) and so doesn't have an equivalent in reality. The (continuous) trigonometric functions and of course also the (complex valued) exponential function are approximative. However, exact discrete representations[94] [DiscreteRepresetations] are possible, which don't imply infinite series expansions.
(Scalarproduct): In discrete considerations [like] integrals of course have to be replaced by finite sums[95]. Particularly important sums are scalar products. In the compendium of formulas wqm several dependencies are described for different possibilities of scalar product formation in the Q0 triangle. In the Q1 triangle there are such possibilities as well. A particularly obvious example, which allows remarkable simplifications[96] (***), is listed here. For m smaller than or equal to n the following holds:
For example one gets for m=n=3 (ScalarproductExample)
The list of the accompanying recombination points in the graph for the ways from A to B is:
n k-> -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6
¯
0 A *1/1
1 1 -1 *1/2
2 1 0 -1 *1/4
3 1 1 -1 -1 *1/8
4 1 2 0 -2 -1 *1/16
5 1 3 2 -2 -3 -1 *1/32
6 1 4 5 B -5 -4 -1 *1/64
Every way from A to B contains only a finite sequence of recombination points (FiniteRecombinationSequence). Put briefly, it can be subdivided into a (Dirac) "ket" part (e.g. from A to row n=3) and a remaining "bra" part. The recombination points on the ways from A to B are marked by underlining. A simultaneous perception of row 3 (with all way possibilities coming from the decision in A, cf. (DecisionCenter)) is possible at the earliest (earliest) in the center (k=0) of row 6 = 3+3, i.e. after new unperceived branchings[97] have already arisen again (in not underlined recombination points). The system remains [open], though: for every (also arbitrarily great) n the simultaneous perception of row n is always possible from row 2n on (an analogous argumentation is possible for the Q0 triangle). From row 2n on, for every way to a point in row n there also exists a corresponding way back, which is necessary for finished perception (FinishedPerception).
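The triangle of numerators listed above can be regenerated from the recombination rule; the seed (+1 at k=-1, -1 at k=+1) is row 1 of the listing, and row n carries the overall factor 1/2^n:

```python
def q1_rows(n_max):
    """Integer numerators of the Q1 triangle, started with the
    antisymmetric pair +1 at k=-1 and -1 at k=+1 and propagated by
    the recombination rule T(n, k) = T(n-1, k-1) + T(n-1, k+1)."""
    rows = {1: {-1: 1, 1: -1}}
    for n in range(2, n_max + 1):
        prev, cur = rows[n - 1], {}
        for k, v in prev.items():
            cur[k - 1] = cur.get(k - 1, 0) + v
            cur[k + 1] = cur.get(k + 1, 0) + v
        rows[n] = cur
    return rows

rows = q1_rows(6)
print([rows[6][k] for k in range(-6, 7, 2)])
# row 6 as in the listing: [1, 4, 5, 0, -5, -4, -1], with B at k=0
```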
Perhaps the spread of row n resp. 2n is the cause of the in-principle unsharpness of perception (of the very last past); the mean deviation in the Q1 triangle is constant like the Planck effect quantum hq (DeviationQ1Equal1), and the scalar product can correspond to the integration over one period. Sharpness, however, can exist concerning the origin (point n=0, k=0) of this distribution (completed past).
If, because of the initial decision resp. differentiation in A, a special way is predestined for a system, for example a way over point 1', a probability gradient [ProbabilityGradient] arises relative to the distribution listed here, which could result in a relative force (acting on this system, to and back again).
It will be shown shortly how Heisenberg's uncertainty relation results, if Q0(n,k) is used as state function and k is identified with the location coordinate x. At first we introduce the variances of the location x and the momentum p
in which the dash marks the quantum mechanical expectation value. We can always transform the coordinates so that we get
.
Then we may assume
.
If y is the non-normalized state function of a quantum mechanical system, one defines (cf. [liha] p. 434)
If we replace y by the discrete function Q0(n,k) (cf. [DefQ0]) and k by x, we get, using the same notation for analytical and discrete differentiation,
Because of
we get
which had to be shown.
Here it's worth mentioning that the discrete scalar product
vanishes in the case of odd l. In particular, consecutive derivatives are orthogonal resp. [uncorrelated].
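This orthogonality follows from parity: Q0(n,·) is even in k, and each differentiation flips the parity. A sketch, using a same-parity variant of the discrete derivative (my choice for illustration, so that consecutive derivatives live on the same k-lattice and the scalar products do not vanish trivially):

```python
from math import comb

def q0_row(n):
    """Q0(n, k) on the lattice k = -n, -n+2, ..., n (zero elsewhere)."""
    return {k: comb(n, (n + k) // 2) / 2 ** n for k in range(-n, n + 1, 2)}

def d_same_parity(f):
    """Same-parity discrete derivative (f(k+2) - f(k-2)) / 4, so that
    p, Dp, D^2 p, ... all live on the same k-lattice."""
    keys = set(f) | {k - 2 for k in f} | {k + 2 for k in f}
    return {k: (f.get(k + 2, 0.0) - f.get(k - 2, 0.0)) / 4 for k in keys}

def scalar(f, g):
    return sum(v * g.get(k, 0.0) for k, v in f.items())

p = q0_row(12)
derivs = [p]
for _ in range(4):
    derivs.append(d_same_parity(derivs[-1]))

for a in range(4):
    print(a, scalar(derivs[a], derivs[a + 1]))
# the scalar products of consecutive derivatives vanish (even * odd)
```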
Taking into account the discussion of the discrete scalar product, the following qualitative interpretation is possible: The operator (multiplication by x, discrete differentiation) acts in all points of row n on all ways from point k=0 of row 0 of the last perception to point k=0 in row 2n of the current perception. All way possibilities[98] have to be taken into account (AllPossibleWays): the more exact the measurement resp. the finer the differentiation, the greater is n (and the smaller is the renormalization factor in the denominator), the more way possibilities exist, and the greater is the variance of the complementary quantity in row 2n.
(InfoConcrete) The following interpretation is possible: As mentioned, the location coordinate x was identified with the horizontal coordinate k in the Q0 triangle (cf. [LocationXasK]). Remembering the discrete [Schroedinger] equation, the vertical coordinate n can be identified with the time coordinate. The distribution of the probability amplitude valid in row n is a function of the variable k and can be understood as a state function over which a discrete scalar product is formed. At perception in row 2n (cf. [FinishedPerception]) the won information resp. the measurement result (for example the location) lies in the result of the scalar product over row n.
We can represent reality-conforming information units as choices of an element from a finite set M, i.e. as choice functions on this set - after that, distinction of the elements of M is possible and so there is freedom for new information (which needs separated resp. free energy (FreeEnergy)). Suppose M has 2m+1 elements which represent the possible values of a discrete variable. We choose in our example m=3 and M={-6, -4, -2, 0, 2, 4, 6}
(The elements may represent e.g. multiples of the half effect quantum hq/2).
Now let's look at the information of the choice of one element of M. We can represent this information by a choice function f(M), whose result is a vector
f(M)=(k_-6, k_-4, k_-2, k_0, k_2, k_4, k_6)
whose components k_i are binary variables with possible values 0 or 1. Exactly one[99] of these components is equal to 1, which means that the corresponding element of M is selected resp. "true"; the other components are equal to 0. For example f(M)=(0, 0, 0, 1, 0, 0, 0) would mean: "Element 0 is selected" resp. "Element 0 is true".
Because in this case the information concerning the choice within the set M is complete (the remaining indefiniteness resp. entropy concerning this choice is zero), M cannot contain all variables of physical reality, and therefore the function f(M) can only provide a partial description of reality. There must be at least one additional free variable after the moment of perception resp. measurement of f(M) - a complementary variable. For the meaning of "after" and the number of possibilities increasing together with time see (RecommendedGraphTheoreticalResearch).
The creation of the set M has to be done step by step, beginning with a minimal initial set M0, e.g. with M0={0}. If the creation of M and the choice of an element is done very often, a probability distribution of the possible results arises. For symmetry reasons we can assume a binomial distribution of the probabilities around the initial element "0". So we get the probability distribution
p(k)="Probability, that element k of M is true"
of the result vectors f(M) of the above experiment, e.g. p(k)=Q0(6,k). We can regard p(k) as one of the first functions within a sequence of exact discrete state functions, which can be approximated by the currently usual continuous state functions in the case of very large M. It can easily be shown (ScalarproductExample) that one can understand the values p(k) also as scalar products of probability amplitudes or (later) as probability amplitudes (from which scalar products are formed after progress of proper time). Even if the probability amplitudes are complex, their scalar product is real, if measurable.
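The emergence of this distribution from repeated binary decisions can be reproduced by exhaustive enumeration; a minimal sketch:

```python
from itertools import product
from collections import Counter
from fractions import Fraction

# build M step by step: six binary decisions (step +1 or -1) starting
# from 0; the endpoint k is the chosen element of M = {-6, -4, ..., 6}
counts = Counter(sum(steps) for steps in product((-1, 1), repeat=6))
p = {k: Fraction(counts[k], 2 ** 6) for k in range(-6, 7, 2)}
print([p[k] for k in range(-6, 7, 2)])
# the binomial distribution Q0(6, k): numerators 1, 6, 15, 20, 15, 6, 1 over 64
```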
Now an example for an operator on p: (discrete) differentiation. We make a discrete resp. finite difference along k. In our case |M|=7, so there are 7 possibilities for k ∈ M:
-6, -4, -2, 0, 2, 4, 6, and
1/64, 6/64, 15/64, 20/64, 15/64, 6/64, 1/64
are the corresponding values of p(k)=Q0(6,k). Their sum is 1, so we can set p(k)=0 for k ∉ M. Now let's call the operator for discrete differentiation D; it maps the function p to Dp:
Dp(k):=(p(k+1)-p(k-1))/2 .
So we get
Dp(k)=1/128, 5/128, 9/128, 5/128, -5/128, -9/128, -5/128, -1/128 for
k = -7 , -5 , -3 , -1 , 1 , 3 , 5 , 7,
otherwise Dp(k)=0. We notice that these are the values of Q1(7,k), cf. (DiscreteDiff). They are no longer probabilities in the ordinary sense, but after normalization they can be probability amplitudes (ProbabilityAmplitudes).
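The listed values can be recomputed exactly with rational arithmetic:

```python
from fractions import Fraction
from math import comb

def q0(n, k):
    """Q0(n, k) as exact fraction; zero off the lattice."""
    if (n + k) % 2 or abs(k) > n:
        return Fraction(0)
    return Fraction(comb(n, (n + k) // 2), 2 ** n)

def D(p):
    """Discrete differentiation along k: Dp(k) = (p(k+1) - p(k-1)) / 2."""
    return lambda k: (p(k + 1) - p(k - 1)) / 2

p = lambda k: q0(6, k)
dp = D(p)
print([dp(k) for k in range(-7, 8, 2)])
# (1, 5, 9, 5, -5, -9, -5, -1) / 128, the values of Q1(7, k)
```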
In the mentioned example we can interpret f(M) as the sharp special case of the result vector of a physical experiment. Usually the result is not sharp; then the experimental result can similarly be represented by a probability-weighted result vector pf(M) with index k, e.g. pf(M)=(1, 6, 15, 20, 15, 6, 1)/64. That's equivalent to the representation as state function p(k) and can facilitate the combinatorial understanding in the case of small |M|. If we "artificially" increase the dimension of pf(M) by adding zeros, we can represent an operator as a square matrix even if it increases |M|. Example for pf(M) and the operator D as finite difference "along the index" in the form of a matrix, completed in a suitable way with zeros:
¦ 0 1 0 0 0 0 0 0 0 0 0 0 0 0 -1 ¦ ¦ 0 ¦ ¦ 1 ¦
¦ -1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 ¦ ¦ 1 ¦ ¦ 0 ¦
¦ 0 -1 0 1 0 0 0 0 0 0 0 0 0 0 0 ¦ ¦ 0 ¦ ¦ 5 ¦
¦ 0 0 -1 0 1 0 0 0 0 0 0 0 0 0 0 ¦ ¦ 6 ¦ ¦ 0 ¦
¦ 0 0 0 -1 0 1 0 0 0 0 0 0 0 0 0 ¦ ¦ 0 ¦ ¦ 9 ¦
¦ 0 0 0 0 -1 0 1 0 0 0 0 0 0 0 0 ¦ ¦ 15 ¦ ¦ 0 ¦
¦ 0 0 0 0 0 -1 0 1 0 0 0 0 0 0 0 ¦ 1 ¦ 0 ¦ 1 ¦ 5 ¦ 1
Dpf(M) = ¦ 0 0 0 0 0 0 -1 0 1 0 0 0 0 0 0 ¦·——— • ¦ 20 ¦·———— = ¦ 0 ¦·—————
¦ 0 0 0 0 0 0 0 -1 0 1 0 0 0 0 0 ¦ 2 ¦ 0 ¦ 64 ¦ -5 ¦ 128
¦ 0 0 0 0 0 0 0 0 -1 0 1 0 0 0 0 ¦ ¦ 15 ¦ ¦ 0 ¦
¦ 0 0 0 0 0 0 0 0 0 -1 0 1 0 0 0 ¦ ¦ 0 ¦ ¦ -9 ¦
¦ 0 0 0 0 0 0 0 0 0 0 -1 0 1 0 0 ¦ ¦ 6 ¦ ¦ 0 ¦
¦ 0 0 0 0 0 0 0 0 0 0 0 -1 0 1 0 ¦ ¦ 0 ¦ ¦ -5 ¦
¦ 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 1 ¦ ¦ 1 ¦ ¦ 0 ¦
¦ 1 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 ¦ ¦ 0 ¦ ¦ -1 ¦
Similarly to [BinomialCoeffMatrix], higher powers of the above matrix also exhibit binomial coefficients [BinCoeffDiffMatrix].
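A sketch of the matrix form; the cyclic corner entries (+1 and -1) are taken over from the displayed matrix:

```python
import numpy as np

N = 15                                # positions k = -7 ... 7
D = np.zeros((N, N))
for i in range(N):
    D[i, (i + 1) % N] = 1             # +1 on the upper diagonal
    D[i, (i - 1) % N] = -1            # -1 on the lower diagonal (cyclic corners)

pf = np.zeros(N)
pf[1::2] = [1, 6, 15, 20, 15, 6, 1]   # numerators of Q0(6, k) at the even k

numerators = D @ pf                   # finite difference "along the index"
print(numerators.astype(int))         # 1 0 5 0 9 0 5 0 -5 0 -9 0 -5 0 -1
# the full result Dpf is numerators / (2 * 64) = numerators / 128

# higher powers of D exhibit binomial coefficients (BinCoeffDiffMatrix):
mid = N // 2
row = np.linalg.matrix_power(D, 3)[mid]
print(row[mid - 3: mid + 4].astype(int))   # -1 0 3 0 -3 0 1
```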
Measurement is only possible when time advances. Due to our considerations on the perception of time (TimePerception) we can assign each progress of proper time to the reunion of two ways within some symmetry center, a reunion which is necessarily connected with emission and absorption of free energy (photons) on rest mass. In the simplest case the symmetry center has only one dimension s (e.g. spin)[100] less than the (discrete) global surrounding space (the "global lattice"). Then, after the start (i.e. decision resp. separation) of the ways, for each way-point (s,...) there is a corresponding (anti-)symmetric point (-s,...) "on the other side", and absorption resp. emission of energy occurs exactly in s=0.
There is a remarkable parallel to the ways of particles in an entangled state: These may be spatially separated in between times, but on the ways of both particles until measurement no energy exchange with the surroundings is possible, i.e. s<>0 from the beginning of separation, and the sign of s cannot change until measurement. If we measure that s was greater than 0 for one incoming particle, we know that s>0 has been true since separation (i.e. during a more or less long period of time), i.e. we gain information. We also know that s<0 must have been true since separation for the other particle, because an in-between crossing of s=0 is not possible for particles in an entangled state or, in another formulation, due to the conservation of s. So we can interpret the measurement results on entangled states as a consequence of
(anti-)symmetric behavior from start until first return
to a symmetry center. The word "first" signals that in between times the objects in an entangled state are separated from the symmetry center, i.e. in between times there is no information exchange (resp. absorption or emission of free energy on rest mass, resp. crossing of s=0). From this point of view entangled states are special cases which are (due to the missing energy exchange with rest mass in between times) simple enough that the symmetry becomes clear (symmetry around s=0, around a symmetry center in which absorption and emission of free energy occurs).
Of course nature is not restricted to such simple cases. If there is exchange of energy with rest mass in between times, e.g. in row n0, in this moment all ways of the exchanged energy quanta (photons) pass the symmetry center s=0 of the row. Descended states can be represented as a product [of the form Σ(amplitudes of ways to n0, s=0) * Σ(amplitudes of ways starting from n0, s=0)] and therefore aren't entangled.
If in between times, until reunion, exchange of energy (crossing of s=0) is possible with systems which are at the moment separated from the measuring device (rest mass), the (combinatorics) becomes more difficult, but the study of these more complicated ways can become necessary to gain deeper insight.
Two-dimensional rotations can be represented as multiplication by rotation matrices and as multiplication by complex numbers. For reasons of clarity we choose the latter possibility, i.e. we represent the rotation with assumed[101] angle w as multiplication by a complex number with absolute value 1 and phase angle w. To consider probability amplitudes (instead of the resulting probabilities), we define in generalization of (Q0Pvar)
from which
and the following distribution results:
n k-> -4 -3 -2 -1 0 1 2 3 4
¯
0 1
1 s c
2 ss 2sc cc
3 sss 3ssc 3scc ccc
4 ssss 4sssc 6sscc 4sccc cccc
...
The (horizontal) sum over row n just corresponds to the binomial expansion of (s+c)^n. So if we set[102]
s := i sin(w) and
c := cos(w)
this sum simply corresponds to (i sin(w)+cos(w))^n = e^(inw) = i sin(nw)+cos(nw), i.e. to the complex number with absolute value 1 and phase angle nw. The smaller |w|, the finer is the gradation of the realizable rotations. In detail we get (RowSumAsWave)
and the components can be separated by addition resp. subtraction of the corresponding row of the (for symmetry reasons existing) triangle with opposite phase angle (GetComp). One receives for instance
For the deduction of this representation only simple arguments are necessary; nevertheless it is unusual (***). Remarkable is, among others, the separation of the cases of even and odd row numbers following in a natural way from this (in analogy to the natural separation of the spin cases of bosons and fermions), and the now obvious possibility of summation also in the vertical direction of the triangle.
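The row-sum identity and the separation of components (GetComp) can be checked numerically:

```python
import cmath
from math import comb, sin, cos

def row_sum(n, w):
    """Horizontal sum over row n of the s,c triangle with s = i*sin(w),
    c = cos(w); by the binomial theorem it equals (s+c)^n = e^(i n w)."""
    s, c = 1j * sin(w), cos(w)
    return sum(comb(n, j) * s ** j * c ** (n - j) for j in range(n + 1))

w, n = 0.3, 8
print(abs(row_sum(n, w) - cmath.exp(1j * n * w)))   # numerically zero

# (GetComp): adding the row sum of the triangle with opposite phase
# angle separates the cosine component: (e^{inw} + e^{-inw})/2 = cos(nw)
comp = (row_sum(n, w) + row_sum(n, -w)) / 2
print(abs(comp - cmath.cos(n * w)))                  # numerically zero
```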
In the case |4sc|=|2sin(2w)|=1 and therefore w ∈ {mπ/2 ± π/12 | m integer} the central amounts |Q0SC(2n,0,s,c)| just correspond to the central meeting (CentralMeeting) probabilities Q0Z(2n)=Q0(2n,0); in the case |2sin(2w)|<1 and therefore w ∈ {]mπ/2-π/12, mπ/2+π/12[ | m integer} the vertical sum
converges absolutely; otherwise (for n→∞) there is no upper bound for |Q0SC(2n,0,s,c)|. So there is convergence just if w is within a 30 degrees broad interval around the coordinate axes, a third of all possible angles.
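The 30-degree criterion can be illustrated numerically; the sketch assumes the central term of row 2n to be C(2n,n)(sc)^n, in line with the binomial structure of the triangle:

```python
from math import comb, sin, cos, pi

def central_amount(n, w):
    """|Q0SC(2n, 0, s, c)| for s = i*sin(w), c = cos(w): the absolute
    value of the central term C(2n, n) * (s*c)^n of row 2n."""
    return comb(2 * n, n) * abs(sin(w) * cos(w)) ** n

w_in = pi / 16    # inside the 30-degree band:  |2 sin(2w)| < 1
w_out = pi / 6    # outside the band:           |2 sin(2w)| > 1
print(central_amount(30, w_in), central_amount(60, w_in))    # decreasing
print(central_amount(30, w_out), central_amount(60, w_out))  # growing without bound
```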
In quantum mechanics a state with determined energy E is associated with the wave function
,
with [RowSumAsWave] (and TimePerception) it is fair to assume in first approximation
and for example E ∝ w and n ∝ t.
If we choose[103] s=sinh(w) and c=cosh(w), the sum over row n ([Q0SCTriangle]) corresponds to the binomial expansion of (sinh(w)+cosh(w))^n = e^(nw) and we get
The (assumed) number w can be complex (ProbabilityAmplitudes). We can get the functions sinh and cosh analogously to [GetComp]. With that, multiple Lorentz transformations can be chained in a clear way (cf. e.g. [limi] p. 67-68 and [lifl] p. 24-27).
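A numerical sketch of the hyperbolic case and of the chaining of Lorentz transformations via addition of rapidities:

```python
from math import comb, sinh, cosh, exp, tanh

def row_sum_hyp(n, w):
    """Row sum with s = sinh(w), c = cosh(w): by the binomial theorem
    it equals (sinh(w) + cosh(w))^n = e^(n*w)."""
    s, c = sinh(w), cosh(w)
    return sum(comb(n, j) * s ** j * c ** (n - j) for j in range(n + 1))

w, n = 0.4, 6
print(row_sum_hyp(n, w) - exp(n * w))   # numerically zero

# chaining Lorentz transformations: rapidities w simply add, and the
# corresponding velocities (in units of c) compose as v = tanh(w1 + w2)
w1, w2 = 0.4, 0.7
v1, v2 = tanh(w1), tanh(w2)
print(abs(tanh(w1 + w2) - (v1 + v2) / (1 + v1 * v2)))  # numerically zero
```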
Analogously to the ordinary exponential function we can construct discrete representations of matrix exponential functions by choosing s and c as matrices (e.g. matrices from SU(n) groups with entries from Q+iQ). If the matrices commute, we get for example[104] the matrix exponential function (cf. e.g. [lime2] p. 113-115 or [liwa] p. 228). If not, a clear discrete representation can nevertheless be possible, e.g. as in the case of the Pauli matrices (PauliMatrices). So discrete considerations can facilitate a deeper understanding.
(PauliMatrices) The three Pauli matrices are frequently used in quantum physics. Together with the 2x2 identity matrix I they are defined by
When building the symmetrical binomial distribution [Q0Triangle], per step to the right or to the left a multiplication by the accompanying probability 1/2 is done. Instead of this, for
we can multiply by
per step to the right and by
per step to the left. From this results because of
a "zoomed Q0 triangle":
n k-> -8 -7 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 7 8
¯
0 1' *I
1 1 1 *s/√2
2 1 0 1 *I/2
3 1 1 1 1 *s/(2√2)
4 1 0 2' 0 1 *I/4
5 1 1 2 2 1 1 *s/(4√2)
6 1 0 3 0 3 0 1 *I/8
7 1 1 3 3 3 3 1 1 *s/(8√2)
8 1 0 4 0 6' 0 4 0 1 *I/16
...
Here s symbolizes one of the Pauli matrices, respectively.
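The zoomed triangle can be reproduced with concrete matrices. The concrete step multipliers are not reproduced in this excerpt; the sketch below assumes, for illustration, σx/√2 per step to the right and σy/√2 per step to the left. Any two different Pauli matrices anticommute, which produces exactly the cancellation pattern (zeros in every second position) of the zoomed triangle:

```python
import numpy as np

# Assumption for illustration: step to the right multiplies by sigma_x/sqrt(2),
# step to the left by sigma_y/sqrt(2); sigma_x and sigma_y anticommute.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
R, L = sx / np.sqrt(2), sy / np.sqrt(2)

I2 = np.eye(2, dtype=complex)
rows = {0: {0: I2}}
for n in range(1, 9):
    cur = {}
    for k, V in rows[n - 1].items():
        cur[k + 1] = cur.get(k + 1, 0) + V @ R   # step to the right
        cur[k - 1] = cur.get(k - 1, 0) + V @ L   # step to the left
    rows[n] = cur

print(np.allclose(rows[2][0], 0))            # True: row 2 reads 1 0 1
print(np.allclose(rows[4][0], 2 * I2 / 4))   # True: central entry 2' of row 4
print(np.allclose(rows[8][0], 6 * I2 / 16))  # True: central entry 6' of row 8
```

The ordered path products cancel pairwise in the center of rows 2, 6, ... (e.g. RL + LR = 0), while rows 4, 8, ... keep the central entries 2', 6' of the listing.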
In general one can also multiply these by the sine and cosine of an angle different from π/4, and an asymmetrical "zoomed" binomial distribution results. Multiplication by pre-factors whose square sum is smaller resp. greater than 1 yields exponential decrease resp. increase of the row sum with increasing row number n. Further modifications are possible if, instead of the Hermitian Pauli matrices, pseudo-Hermitian or real matrices are used, from which different types of so-called "quantum bits" (qubits) could easily be derived ([ligre] p. 17). The interpretation (***) of this might succeed best for those readers who have profound familiarity with the application and meaning of these matrices and who remember the remarks on the Q0 triangle (e.g. those on the central meeting events and the proper time unit, cf. CentralMeeting, ProperTimeUnit).
Note: In the above "zoomed Q0 triangle" the occurrence of irrational numbers like 1/√2 is problematic. But we can avoid these numbers by a further modification with the introduction of alternating pairs of multipliers, e.g. 1/2, -1/2 followed by 1, 1, according to the natural sequence of decision and perception.
The rows (columns) of the m-th powers of some important Cartan matrices correspond to the 2m-th rows of the binomial coefficients. For better clarity, first an excerpt from Hazewinkel's Encyclopaedia of Mathematics ([liha2], article on "Lie algebra, semi-simple") and then an example:
... Simple Lie algebras that correspond to root systems of types A-D are said to be classical and have the following form.
Type An, n>=1. g=sl(n+1,k), the algebra of linear transformations of the space k^(n+1) with trace 0; dim g=n(n+2).
Type Bn, n>=2. g=so(2n+1,k), the algebra of linear transformations of the space k^(2n+1) that are skew-symmetric with respect to a given non-singular symmetric bilinear form...
...
The Cartan matrix of a semi-simple Lie algebra over an algebraically closed field also determines this algebra uniquely up to an isomorphism. The Cartan matrices of the simple Lie algebras have the following form:
| 2 -1 0 ... 0 |
| -1 2 -1 ... 0 |
An:= | 0 -1 2 ... 0 |
| . . . ... . |
| 0 0 0 ... -1 |
| 0 0 0 ... 2 |
Bn:= ...
...
For example, in the case of the exponent m=3 and the matrix
| 2 -1 0 0 0 0 0 |
| -1 2 -1 0 0 0 0 |
| 0 -1 2 -1 0 0 0 |
A6 := | 0 0 -1 2 -1 0 0 |
| 0 0 0 -1 2 -1 0 |
| 0 0 0 0 -1 2 -1 |
| 0 0 0 0 0 -1 2 |
we get (BinomialCoeffMatrix)
              | 14 -14   6  -1   0   0   0 |
              | -14 20 -15   6  -1   0   0 |
              |  6 -15  20 -15   6  -1   0 |
A6^m = A6^3 = | -1   6 -15  20 -15   6  -1 |
              |  0  -1   6 -15  20 -15   6 |
              |  0   0  -1   6 -15  20 -14 |
              |  0   0   0  -1   6 -14  14 |
and we recognize the absolute values of these numbers in the rows resp. columns again in row 6 = 2m = 2*3 of the Pascal triangle (enumeration of the rows beginning with 0, as in the Q0Triangle).
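The statement can be checked directly:

```python
import numpy as np
from math import comb

# Cartan matrix of type A6: 2 on the diagonal, -1 on both neighboring diagonals
A6 = (2 * np.eye(7, dtype=int)
      - np.eye(7, k=1, dtype=int)
      - np.eye(7, k=-1, dtype=int))

P = np.linalg.matrix_power(A6, 3)      # m-th power with m = 3
print(P[3])                            # middle row: -1 6 -15 20 -15 6 -1
print([comb(6, j) for j in range(7)])  # row 2m = 6 of the Pascal triangle
```

At the edges the binomial pattern is truncated (e.g. the corner entry 14 = 15 - 1), matching the displayed matrix.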
The connections to special relativity theory are clear [BridgesToRel].
Because experimental checking is difficult here, much is speculative; moreover, I am even less competent in this topic. However, the following thoughts are worth an outline:
In quantum mechanics, probabilities result from scalar products (ScalarProduct). We can transfer this concept to gravitation. Here the first step is outlined in a simplified way: Let n be very great and
proportional to the probability of interaction, with positive resp. negative sign depending on attractive or repulsive interaction. Due to symmetry we can assume (in first approximation, no charge) that the a_2k and b_2k are positive and negative with equal frequency, so that the sum on the right side of the above equation vanishes and only ng^2/r^2 remains.
We could understand g as the starting point of the present reference system, which can be outside the center as a result of an initial decision, e.g. on one side (OneSide) with g > 0, and n as proportional to the gravitating mass. If we furthermore assume that the frequency of this scalar product is proportional to the second interacting mass N, we receive the desired result, namely a total interaction proportional to Nng^2/r^2.
Let dp be a change of momentum. We can understand the horizontal coordinate k also as a coordinate in momentum space and dk ∝ dp as the number of steps to one side corresponding to dp. With that the accompanying speed change dv results from
and dv is proportional to 1/n, i.e. n is proportional to the inertial mass.
Preliminary note: In the end this approach seemed to me less relevant than that of (small) correlation.
The own length and time scale is apparently constant; the probabilities of the recombination points, however, aren't. Already simple model concepts (cf. [TPropN2]) lead to a quadratic increase of the row number (and length scale) in comparison with proper time. In a more detailed treatment of the topic, the probability of interaction between recombination points, which increases with falling relative distance, also has to be taken into account.
(RecommendedGraphTheoreticalResearch) Of course every moment of perception (of information) is connected with progress of proper time. Only an approach in which the number of possibilities of experimental results grows together with the duration of the observed experiment, i.e. together with time, can be adequate (ETmGreatEnough). Our considerations [InformationPath] (InfinityNotApriori) suggest a graph theoretical description of the ways of information, in which the progress of proper time is connected physically with pairs of emission/absorption of photons on rest mass [PhotonEmissionAbsorptionAsTimeUnit], and mathematically with subsequent "central meeting events" (CentralMeeting), events in the symmetry center (the vertical symmetry axis) of a binomial distribution or some (e.g. differentiated, multidimensional) modification of it. Such an extending, directed graph (DirectedGraph) automatically contains more possible ways and free variables the more steps are done within it.
It is necessary to deepen important things with high priority, to tidy them up and systematize them. Important considerations require more rigor[105]. Perhaps we can also help each other in doing this.
(Maxwell) Since in recombination points changes of direction [DirectionChanges] happen, which in the case of p=1/2 are [orthogonal] from the local point of view, while on the other hand magnetic and electric fields pass into each other[106] locally[107] orthogonally, a combinatoric, multiple application of the Maxwell equations should be very interesting (***). Admittedly it is difficult to keep track after multiple discrete differentiation (DiscreteDiff) under consideration of the numbers of ordering possibilities, also of the vector potential of the magnetic field (combinatoric computer simulation? [lipo1], cf. also wq2). But this chapter is expandable.
There are electric charges, i.e. outflows and inflows of an electric field can also be perceived for themselves, one after another resp. temporally separated. That's not so in the case of a magnetic field. Magnetic sources resp. charges (monopoles) haven't been found till now[108], so let's now[109] start out from the assumption that there aren't any magnetic charges, i.e. that the poles of a magnet are locally, but not temporally, separable. Then the combinatoric ways which go through the (locally separate) poles of a magnet should together (locally simultaneously) and anti-symmetrically flow into resp. out of the observer's point of view, relative to his local time direction. Due to a possible change of the local time direction no contradiction arises with the assignment of matter (Matter) and antimatter (Antimatter). The measurement result is namely dependent on the observer's point of view, which is connected narrowly with the way of measuring. Also remember that the common concept "point of view" of the observer is clear but not precise, since the point concept doesn't suffice for the description of the observer's "point" of view [ObserverViewPoint]. Subsequently a possibility of the change of the combination sequence in the case of magnetic field measuring is mentioned.
The perception simultaneous in a point P1 can contain several events which are separated locally. Seen from another point P2, the information ways starting out from these events may arrive one after another, i.e. the same thing can be distinguishable (discretely differentiable) once in time[110], once in location, although the physics is equivalent. Such equivalences are expressed among others[111] in physical equations. There are many such equations which assign (multiple) locally differentiated perception (measuring) to temporally differentiated perception. Due to their combinatoric information content the Maxwell equations are particularly interesting. They show among others that a temporally varying charge (electric current) plus a temporally varying electric field is equivalent to a magnetic rotational field. Since there aren't any magnetic sources, all magnetic fields are rotational fields, i.e. every (simultaneous) magnetic field is equivalent to the temporal change of something. In this case therefore the combinatoric ways must start out from two subsequent events and flow together (locally simultaneously) and anti-symmetrically[112] into the observer's point of view (resp. symmetrically flow in and out). So it's reasonable to assume the time direction of the magnetic field measuring as orthogonal to the time direction of the electric field measuring (***). The following outline with the first three rows of a triangle should illustrate the consideration:
n0k0
n1k-1 n1k1
n2k-2 n2k0 n2k2
The points n0k0 and n2k0 are subsequent central meetings, i.e. temporally separable resp. discretely differentiable (regarding the vertical as time axis). They are anti-symmetric to n1k-1 and n1k1. We can for example assign n1k1 to an observer's point of view of the magnetic field measuring (DecentralMeetings) and understand the way n0k0 to n1k1 as an inflow and the way n1k1 to n2k0 as an outflow. If n0k0 and n2k0 should seem locally simultaneous, we have to assume a time direction orthogonal to the vertical axis, for example horizontal to the right for the one who measures the magnetic field in n1k1 (with e.g. n0k0 as north pole, n2k0 as south pole). Together with this time direction one can understand n1k1 locally as a central meeting, in which the magnetic field is perceived.
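For readers who like to check such way counts concretely, here is a small sketch (not part of the original argument) that enumerates all sequences of two elementary decisions ("step right" / "step left") starting in n0k0:

```python
from itertools import product

# enumerate all ways of 2 elementary decisions, "step right" = +1,
# "step left" = -1, starting in n0k0 and ending in row n=2
ways = {}
for steps in product((-1, +1), repeat=2):
    k = sum(steps)                       # final column k in row n=2
    ways.setdefault(k, []).append(steps)

for k in sorted(ways):
    print(f"n2k{k}: {len(ways[k])} way(s)")   # 1, 2, 1 ways
```

The two ways into the central point n2k0 run through n1k-1 and n1k1 respectively, i.e. through the inflow/outflow pair discussed above.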
The advantage of this consideration is that it explains the absence of magnetic monopoles and in addition is compatible with the assignment of matter (Matter) and antimatter (Antimatter) to the right and left side of a primary triangle. However, it's still speculative and uncertain. Once there is more certainty, it's of course reasonable to carry out further combinations with the help of the Maxwell equations (***), also under consideration of the [Poynting] vector.
(NotFlat) From the changes of direction which are connected with the Maxwell equations we can gain clues for supplementing the flat Q0 triangle with further dimensions. We know that the electric field corresponds to the differentiation of an electric potential. Of course here we make discrete considerations. In the Q0 triangle or the Q1 triangle every decision is the choice between two alternatives, which at first we called "step to the right" resp. "step to the left".
Now every decision is the choice between two orthogonal directions of discrete differentiation of the potential, which are furthermore orthogonal to the incoming direction (the direction between the last and the actual recombination point). If for example the z-axis is the incoming direction, a decision now is the choice between a derivative of the potential either along the x-axis or along the y-axis. A little sketch of this:
⊙⊗
n0k0
/ \
y x
/ \
n2k-2 <- ⊗= -z- n1k-1 n1k1 -z- =⊙ -> n2k2
\ /
x y
\ /
n2k0
Here the symbols ⊙ resp. ⊗ represent the positive resp. negative z-direction, orthogonal to the drawing plane. The direction from n0k0 to n1k1 represents d/dx, as does the direction from n1k-1 to n2k0; the direction from n0k0 to n1k-1 represents d/dy, as does the direction from n1k1 to n2k0. Due to the necessary orthogonality the direction from n1k-1 to n2k-2 resp. from n1k1 to n2k2 now is anti-parallel or parallel to the z-axis; to take this into account by way of a hint, in the mentioned sketch the points n2k-2 resp. n2k2 are moved a little up, compared with the usual triangle.
If for example in n0k0 we decide to make a step to n1k1, we decide in favor of discrete differentiation of the potential along x, and such differentiation corresponds to the electric field E(x) along x. If we now step from n1k1 to n2k0, we differentiate along y, i.e. the electric field E(x) is differentiated along y: d/dy E(x). The stepping sequence from n0k0 over n1k-1 to n2k0 analogously yields the derivative of the electric field E(y) along x: d/dx E(y). If we now assume a sign inversion left and right of the vertical center (like in the Q1 triangle), then in n2k0 we have
d/dx E(y) - d/dy E(x) ,
which just corresponds to the z-coordinate of the rotation of E. That which starts out of n0k0 and doesn't arrive in n2k0 arrives exactly in n2k-2 and n2k2; it just contains the difference of n0k0 and n2k0 [113]. It is the discrete resp. finite difference along the row number n, i.e. a differentiation along the time direction of something (whose physical phenomenon depends on the observer's point of view), if we identify the direction of increasing row number with the time direction.
The Maxwell equations also say
rot E = -d/dt B .
A differentiation along the time direction is found here, too. For example
d/dx E(y) - d/dy E(x) = -d/dt B(z)
is valid.
The mentioned sketch shows that the respectively last step to n2k-2 resp. n2k2 also means a differentiation along the z-axis[114]. If we further consider that the quantities arising in n2k-2 and n2k2 mean a discrete differentiation (of the vertical center) along n, in which we have identified increasing n with the time direction, the correspondence with the Maxwell equations is conspicuous (***).
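The combination d/dx E(y) - d/dy E(x) can be checked numerically with ordinary finite differences; a minimal sketch, illustrating only the standard discrete curl (not the triangle construction itself):

```python
def curl_z(Ex, Ey, x, y, h=1e-5):
    """z-component of rot E via central finite differences:
    d/dx E(y) - d/dy E(x), the combination discussed above."""
    dEy_dx = (Ey(x + h, y) - Ey(x - h, y)) / (2 * h)
    dEx_dy = (Ex(x, y + h) - Ex(x, y - h)) / (2 * h)
    return dEy_dx - dEx_dy

# rotational field E = (-y, x): rot E has z-component 2 everywhere
print(round(curl_z(lambda x, y: -y, lambda x, y: x, 0.3, 0.7), 6))  # → 2.0
```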
These considerations surely must be made more precise, but then it would also be interesting to look at further branchings, using the Maxwell equations repeatedly.
No (a priori) existence of (isolated) model[115] concepts is presumed. (***)
When speaking of renormalization (AxiomP1), we already mentioned the (essential) axiom, which means "we are here" or:
The probability of our presence and of our present perception is 1 (ProbabilityOnePerPresence)
or
The probability of our consciousness is 1 .
This perhaps seems trivial at first sight; however, it isn't trivial due to the properties of consciousness resp. life. The following chapter deals with one essential property:
(NewInfo) Due to the lot of information which has already arisen from life, we know that life can create information. Since any information transfer is connected to a transfer of free energy, this means that life resp. our consciousness can temporarily send out (separate from itself) positive (free) energy within the energy-time-unsharpness (give information[116]), if it decides so. Due to conservation of energy this is connected to the emergence of negative (bound) energy somewhere else, for example in form of some negative potential, which among others can be noticeable as electric field[117] or gravitational field and at last as force[118] resp. acceleration. The sequence order is important: Because we can only perceive free (resp. positive) energy, we as conscious units must (cf. a. [consciousness])
(***)
1. at first give[119] information resp. send out free (positive) energy (FreeEnergy) (which means for us decision, self-insertion, effort, work (Work), lack of information), thereby necessarily going to a lower potential level[120] due to conservation of energy, so that somewhere else free energy is available relative to us, which is
2. then perceptible by us.
So essential steps to get new information are:
1a. at first subdivision [Subdivision] of a unit[121] (of presence resp. us) into at first two[122] parts, differentiation, and taking a
1b. primary decision (PrimaryDecision) for one part [OneSide]. In this part we then are temporarily localized, in the other the free energy which has been sent out by us at first.[123]
2. Perception of this free energy[124] (as present) from the other part in recombined shape and sequence (InfoBack).
The primary decision yields information coming from us: the information about the temporal order (Order) of the parts or (equivalently) the own localization within the subdivision. We can now receive information from the surroundings (since we are localized on the lower potential level). Its quantity is increased because of diversification (Diversification) due to coincidental recombination, more exactly said because of lively recombination of the free energy on the way back[125]. The multiplied information might be represented in division and permutation (along proper time) of the returning free energy.[126]
Due to the conservation laws we must (or want to) go into the other part later (so to speak we shall be decided into this part, having freedom of decision on the way at a determined aim [FunnelOfDecision] – at last we cannot contradict ourselves). Therefore the information about the initial order might be lost[127] or difficult to find (UncertaintyOfOrder) within the much greater quantity of information received in between.
In the following chapter it is tried to outline the initial sequence of the recombinations which are connected to our decisions:
(StartQ1) Let k be the column number, that is the integral index of the horizontal position within a row, with k = 0 in the row center, k < 0 to the left of it, k > 0 to the right of it (k makes double-steps, there is no k = 0 in rows with odd number n, 2 is a divisor of n+k). A (surely still correction-requiring) outline of a description of the possible initial situation in the Q1 triangle follows:
In row n=0, k=0 a primary decision (PrimaryDecision) is done[128]: an experiment to get new information, a simple decision "out of the belly", a decision within us personally. "After"[129] that, one side (OneSide), right or left of column k=0 in the triangle (Matter) (Antimatter), beginning in n=1, k=1 or n=1, k=-1, is our new localization. Due to the initial symmetry it would have come to the same, had we decided in favor of the other side[130]. It is questionable whether this primary decision is perceivable in the usual way. Here the word subdivision[131] (Subdivision) (distinction, differentiation[132]) probably fits better. Only due to this initial subdivision a complete decision in the ordinary sense, which selects one of two (different) possibilities, can be done:
In row n=1, k=1 (n1k1) (without restriction of generality we chose k=1) we take a definitive decision, i.e. we complete the initial subdivision to a completed, perfect[133] decision, for which there are 2 possibilities:
In row n=2 exists a distinguishable (differentiable) situation:
k=2 means no further perception - a situation which in this 2D model also exists in the following rows at the border (k=±n) respectively. Somewhere this is always necessary so that new information can emerge from "decisions out of the belly" - in reality however probably at alternating places: On one hand the probability of remaining at the border quickly becomes very small, on the other hand the 2D model is too flat; multi-dimensional approaches could allow e.g. an inversion of the border, which would lead to inflow of information (perception) at increasing row number n there, too.
k=0 means progress of proper time and (conscious) perception of a part of the alternative not chosen in n=0 as new present (1/2 of n=1, k=-1), and a part (1/2) of the chosen alternative (n=1, k=1) becomes past[134] - this part is proportional to the past proper time, proportional to the sum of the negative central outflow probabilities Q2Z(n) resp. QW(x) (***).
So in row n=2 a part of the present not chosen in n=0 becomes own present (the part becomes more and more complete with increasing n). Therefore that which we would have measured, had we had the other alternative (the alternative of our environment) as present in n=0, is not perceptible resp. present, but potential future, perceptible in following rows after return to the middle (the vertical column of the central outflow probabilities, marked with ´) with increasing probability. The sum of the |Q2Z(2n)| over n, starting from row 2n=2, converges to 1.
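If the central outflow probabilities |Q2Z(2n)| are identified with the first-return probabilities of a symmetric ±1 walk — an assumption made here only for illustration — the convergence to 1 can be checked numerically; f(2n) = C(2n,n)/((2n-1)·4^n) is the standard first-return distribution:

```python
from math import comb

def first_return(n):
    """Probability that a symmetric +-1 walk returns to the center
    column for the first time in row 2n (standard result)."""
    return comb(2 * n, n) / ((2 * n - 1) * 4 ** n)

partial = sum(first_return(n) for n in range(1, 2000))
print(partial)  # approaches 1 from below (slowly, like 1 - 1/sqrt(pi*n))
```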
Since for the present (in n=0) always only a part[135] of the not chosen, free alternative is perceived (and therefore flowing out, therefore becoming fixated past with delay), in between enough decision liberty remains for new information, without contradictions to us (to the information existing in us, to our past and present). So by deciding we simultaneously create separated areas unknown to our present and therefore liberty for further decisions (DecisionFreedom) and new way possibilities. Later we can perceive (represent) these areas again after such magnification and diversification (Diversification), at least step-by-step in small portions. This is quantifiable by the number of way possibilities which lead back to the (common) presence (center column). It is interesting that this number of remaining way possibilities grows with the row number n (IncreasingWayCount) also in the Q1 triangle (the Q0 triangle differentiated in horizontal direction), in which the center column k=0 is taken away [the number of way possibilities to row 2n of the Q1 triangle is for great n approximately 4^n/√(πn)]. So decision liberty resp. (increasingly growing) possibility for gaining information remains despite perception, if the perception is done after delay (after n=2) (DelayedPerception); even multiplication is possible, the more, the bigger the delay is[136].
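The asymptotic 4^n/√(πn) is the familiar Stirling estimate of the central binomial coefficient C(2n,n); a short numerical comparison (only of this standard estimate, not of the Q1 path count itself):

```python
from math import comb, log, pi, exp

# compare C(2n, n) with the Stirling estimate 4^n / sqrt(pi*n);
# the ratio is computed via logarithms to avoid float overflow
for n in (10, 100, 1000):
    exact = comb(2 * n, n)
    ratio = exp(n * log(4) - 0.5 * log(pi * n) - log(exact))
    print(n, round(ratio, 6))  # tends to 1 (from above, roughly 1 + 1/(8n))
```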
(Probably a similar argumentation can also serve as objective proof that it's nearsighted or even stupid to pull everything (information, safety) back to oneself, and that voluntary renunciation automatically is worth it and even necessary for a (more extensive) future. Such acceptance of temporary information gaps resp. delayed feedback also requires confidence (ConfidenceNecessary), because it means the renunciation of unnecessary (hindering) control. So confidence is part of every decision - and lack of confidence means a lack of decisiveness in form of a missing ability to release. The magnitude of information which we already can perceive shows that confidence is not only necessary but also well-founded [ConfidenceWellFounded].)
The liberty connected to information gaps[137] is a chance for new information (for more abundance of details between start and aim). If there is enough confidence, the period "in between" without full information doesn't need to be a hard slog. Understood correctly (if easily possible for all together), it of course also can (should) mean fun[138] - the possibility of a fast-forward key would contradict life.
By implication, (sure) past is defined as information which is no longer changeable. Then we can copy it and measure it repeatedly in the physical reality. But by perception of information we change the physical reality. So something new (the information created by a new decision) is only then (sure) past, if its perception is carried out after delay [DelayedPerception], late enough that the number of the way possibilities starting out from the decision increases more quickly [IncreasingWayCount] than the number of the ways which lead back to a (consuming) measurement resp. perception.
The close connection between (creation of) free energy (FreeEnergy), proper time and (creation of) information can also be shown by quantum physical argumentation:
(ETmGreatEnough) Of course free energy is necessary for information transfer. Every information transmission means the transfer of free energy from a transmitter to a receiver. A short quantum physical consideration can give more quantitative details. Let's denote by E the available free energy, by T the total available proper time, by m the maximal number of parallelly available information channels (systems with rest mass like atoms, which can emit and absorb free energy). We show a minimum of the product ETm necessary for transmission of a given quantity of information:
At the elementary level information is transferred by photons which are emitted and absorbed, simultaneously at most one photon per information channel. We assume the best case: that the probabilities of absorption and non-absorption are the same (maximal information capacity of the code; cf. [lifa] p. 61), that all information channels are distinguishable and entirely used, and that all photons have the same minimal energy, so that their absorption is possible just within time T. Let's denote by j the number of photons which can be absorbed by every information channel during this time. The maximal time for emission resp. absorption of a photon is t:=T/j, i.e. the minimal energy of the photon is (hq)/t = j(hq)/T, in which (hq) is the effect quantum. With that maximally l:=jm photons can be transferred, in which jm·j(hq)/T ≤ E. In the best case jm·j(hq)/T = E, from which follows (jm)^2 = l^2 = ETm/(hq). Since there isn't an a priori preferential treatment of absorption or non-absorption and the distinction of smaller energy differences than u:=(hq)/t isn't possible during time t, we can gain at most 1 bit of information in the receiver per photon with energy u. This means that even in case of usage of photons with minimal energy u at most √(ETm/(hq)) bits can be transferred, and the transfer of n bits is only possible in case of ETm ≥ (hq)n^2.
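Inserting numbers into the bound ETm ≥ (hq)·n^2 gives a feeling for the magnitudes; a small sketch, with (hq) taken as Planck's constant h (the input values below are purely illustrative):

```python
from math import sqrt, floor

HQ = 6.62607015e-34  # effect quantum (hq), here Planck's constant h in J*s

def max_bits(E, T, m):
    """Largest n satisfying E*T*m >= (hq)*n^2,
    i.e. floor(sqrt(E*T*m/(hq)))."""
    return floor(sqrt(E * T * m / HQ))

# e.g. 1e-20 J of free energy, 1 s of proper time, 1 information channel
print(max_bits(1e-20, 1.0, 1))
```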
The square of n is interesting. Is ETm/(hq) a square number? E and T may be proportional because of simultaneous determination by our decision (PtimePdecision), but I haven't deepened this further.
(consciousness) Only after I had written about information creation by (our) decisions did I notice an interesting "definition"[139] of consciousness in [ligre] on p. 203: There consciousness is "defined" (described) as a synthesis of information acquisition (hence perception resp. measurement) and information creation (hence decision with accompanying expression of will) (of course a complete description isn't possible). Literally it says:
· ”Consciousness is a synthesis of awareness and volition”
· ”Awareness is the acquisition of information”
· ”Volition is the creation of new information”
So the approach there is similar to the one here. Here in addition further details are given, among others on the primary order (decision resp. information giving before perception resp. information acquisition), on proper time (connection of proper time progress with central meeting events (TimePerception) (ProperTimeUnit)), and on concrete physically measurable equivalents (assignment of decision resp. information giving to allocation of free energy (FreeEnergy)).
With this approach it is also suggested that the first possibility for the complete perception of all results of a decision again lies in the location resp. center of gravity of the decision [DecisionCenter]. That's quite natural, because everybody is the first who becomes aware of the own (at first mental) decisions resp. thoughts. Everybody is the fastest resp. first in the individual local system (MaxLocalFrequency).
Let's use the term computersystem for any fault-free working computer with software. A computersystem can be (exactly) copied because it works completely determined, without place for own free decisions or chance in it. But that's also the reason why it cannot create new information. Information quantity is well defined. Assume a computersystem S1 with given input data creates x bit of information during the time interval dt. There might be an exact copy S2 of S1 at another location, starting with the same input data and working at the same time (parallelly). So S1 and S2 are equal computersystems, and because of the assumption S2 also creates x bit of information during the time interval dt. We know that S1 and S2 are doing exactly the same calculations, so they come to the same result and therefore together produce as much information as one of them, namely x bit during the time interval dt. So x+x=x, from which follows x=0. No new information has been created.
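The x+x=x argument can be made concrete with any determined computation; a toy sketch (the function below is arbitrary, chosen only for illustration):

```python
def computersystem(input_data):
    """A fully determined computation: no chance, no free decision."""
    return sum(b * b for b in input_data) % 257

result_s1 = computersystem([3, 1, 4, 1, 5])   # system S1
result_s2 = computersystem([3, 1, 4, 1, 5])   # exact copy S2, same input

# S2's output is fully predictable from S1's: together they carry
# no more information than one of them alone
print(result_s1, result_s2, result_s1 == result_s2)  # → 52 52 True
```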
There is common agreement [ConApproach] that conscious systems can create new information. As shown, computersystems cannot do that[140].
Shortly: Because computersystems work in a determined way, they can be copied; but for the same reason they cannot make own free decisions and so cannot create new information. So there is no place for consciousness in computersystems.
Also without usage of definition (ConApproach) this is immediately plausible: Consciousness has the ability to think, i.e. the ability to choose thoughts, i.e. the ability to make decisions. Because computer systems cannot do this, there is no place in them for (free) thoughts and therefore also no place for consciousness.
(We know, that every computersystem can only exist for a finite time. No computer can work fault-free infinitely long.)
To avoid many misunderstandings connected with the concept "determinism", I suggest the following definition:
(determinism) A development Q in System B is determined (deterministic) relatively to System A,
(a) if Q has a result (which means that Q is completed after finite time) and
(b) if the full (1:1) information about the result of Q exists in System A already before completion of Q in System B, i.e. if there is early enough a stable pattern in System A which can be mapped 1:1 to the full pattern of the result of Q in B (perfect correlation) after completion of the development Q (roughly said: "if A is faster and/or earlier").
If the development Q has no end, or if there is no system A which fulfills (b), the development Q is not determined.
We cannot neglect the fact that we need time and energy for information transfer (ETmGreatEnough) and therefore for constructing equivalents of observable results. It is worth mentioning that according to this definition the process of calculating an exact (1:1) representation of an irrational number (starting from 1) is not determined (deterministic), because this needs infinite time and energy. The calculation is never completed; the result never exists.
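The point about irrational numbers can be illustrated with √2: any finite computation delivers only a finite prefix of its digits, never the complete (1:1) representation. A sketch using exact integer arithmetic:

```python
from math import isqrt

def sqrt2_prefix(k):
    """First k+1 decimal digits of sqrt(2) (leading '1' included),
    via the integer square root of 2 * 10^(2k). Any finite k yields
    only a prefix; the full representation is never completed."""
    return str(isqrt(2 * 10 ** (2 * k)))

print(sqrt2_prefix(10))  # → 14142135623
```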
The measuring proper time is defined, depending on the respective arrangement, as the proper time interval from the beginning of a measurement up to the perception of the measurement result, that is from the time of a decision (to measure) up to the accompanying perception. The concept "probability" is implicitly connected to this, because the total probability of all possible measurement results is 1 only after the elapse of the variable measuring proper time.
We mentioned [TimePerception], that proper time progress is connected to central meeting events [CentralMeeting]. Now we can define the proper time unit (ProperTimeUnit) as the shortest possible proper time interval, as the time interval mathematically lying between two subsequent own central meetings[141] (physically lying e.g. between emission and absorption of a photon (PhotonEmissionAbsorptionAsTimeUnit)).
Longer time intervals arise from the joining of such central meeting events resp. proper time units. A finer subdivision of proper time isn't possible[142]. The experimental results also confirm this. For example in the double slit experiment all way possibilities are equal not only with regard to location but also with regard to time. So it turns out that we cannot say that the passage of the slits is done "before" absorption at the detector (possibility of the destruction and reconstruction of interference) - after emission the way of the photon together with the event of absorption is one unified moment of simultaneity. So causality violation is impossible. For example think of astronomical observations of the light of far away objects, e.g. quasars, which passes a gravitational lens (e.g. a big galaxy). By the manner in which the astronomers register the photons of the quasar, they are able to determine whether the photon has taken both ways round the gravitational lens or only one way "billions of years ago". The photon just hasn't taken this or that way "billions of years ago" and then been absorbed; instead, after emission the complete way of the photon together with the moment of absorption is equal in time, since no other central meetings (e.g. absorption) occur in between (WayTimeConstantTillNextMeeting) - quantum phenomena aren't determined until the moment at which they are measured.
The mentioned axiom [ProbabilityOnePerPresence] then shortly means "one (central) meeting per proper time unit". Hereby the central meeting probability Q0Z(2n), which is valid for a current row number 2n, becomes renormalized to 1. Here it has to be noticed that for the probabilities within the Q1 triangle (FormulaQ1) and for the horizontal scalar product of the Q0(n,2k) the following is valid:
Σk Q0(n,k)·Q0(n,k) = Q0Z(2n) .
The formula permits several interpretation possibilities. One is mentioned:
With the renormalization of Q0Z(2n) also the total probability of perception from a multitude of outer (not central, but by us caused) meetings totals 1; in the gyroscope model (GyrocospeModel) one would say "the probability of the reception of one (recombined) light pulse per emission totals 1" [OneOutOneIn].
So the frequency concept can be a reality-near extension of the probability concept, which is related more clearly to proper time and also permits results greater than 1.
Unity and uniqueness of consciousness (the conscious presence) are connected to the unity and uniqueness of every proper time unit resp. of every recombination point. The singular position relative to the origin indicates the uniqueness of every recombination point; the indivisibility of this point indicates its unity. Freedom from contradiction means that the unity of consciousness holds. This requires a more exact explanation:
The concept of information stands before the concept of contradiction; by contradiction we mean "contradictory information", in which the simultaneity is important. An important quality of information is
(1) temporary constancy.
Since with every central meeting, i.e. with every even row number, a growth of proper time is connected, (sure, exact) information has to be assigned to the rows with odd row number n.
A further essential quality of information is
(2) assignment to the past
because we have (sure) information only about the past, i.e. sure information comes from recombination points in rows with smaller row number than the current row number[143]. Concretely this information can lie in the value of k which was true in row n.
If we localize ourselves for example in row n=2, k=0, we can assign sure information only to row n=1, for example the statement "we were in row n=1 in point k=1" (and not in k=-1), i.e. "k=1 is true". We cannot say we were in point k=1 and k=-1 simultaneously; the separation of the two points would contradict the unity of consciousness. With the previous decision in n=0, k=0 we had to decide for exactly one value, k=1 or k=-1, and both possibilities are respectively unique.
Possible analogy: Only a single fermion can have one and the same state. This also fits the assignment of fermions [Fermions] to rows with odd row number.
Here still a remark on the principle of the excluded third and on possible unnecessary misunderstandings which are connected with it:
We look again at (exactly) one (simultaneous) row n which contains information (therefore n odd, n smaller than the current row number); then either "we were in k=1" resp. "k=1 is true" or "we weren't in k=1" resp. "k=1 isn't true" is valid. So the principle of the excluded third is valid in case of simultaneity in the past; the past up to the fixed row n is finite, its construction is already completed and so proved also from the intuitionistic point of view (IntuitN3). Without simultaneity in the past this principle isn't assured. For example from the view of row n=2 we can say "k=-1 isn't true" (in row n=1) and "k=1 isn't true" (in row n=3, due to our decision just taken). Essential is the time coordinate, which grows because of our decisions. This coordinate isn't negligible.
Physical laws are either not valid or they are valid in all coordinate systems, i.e. everywhere and at every time. So we have to assume a total sum 0 and therefore symmetry with regard to physical quantities[144] to which conservation laws apply (Cons0Sum) [145] [Q1RowSumIs0]. Now, 0 = -0, i.e. 0 is symmetric in itself; there is an initial, primary symmetry[146].
On the way from one central meeting (CentralMeeting) to the next, decentral meetings [DecentralMeetings] also always happen, which can cause asymmetric perception. But our asymmetric perception is also a result of our own last start point outside[147] the primary (initial) symmetry center. This makes it so difficult for us human beings to recognize the primary symmetry (FullPrimarySymmetry). Our starting point again results from prior decisions. So these cause, with temporarily asymmetric perception, also (physical) facts, which are the more generally valid (e.g. parity violation in β-decay, CP-violation), the greater the priority of the causing decision is.
Besides the special role of the central meeting probabilities, the experimental results of quantum physics also indicate the fundamental symmetry of nature. For example, in the case of symmetric "annihilation" of matter and antimatter into two photons, merely by measuring or perceiving the state of one photon we also know the simultaneous state of the other, even if it seems to be separated in location. So if we measure one side, we measure also the other side, which information-theoretically means unification of both sides (e.g. of row 3 in the [ScalarproductExample]) because of a perception in the symmetry center (point B[148] in the center of row 6 in the ScalarproductExample).
The ScalarproductExample also shows the nature of the symmetry. Point B forms the symmetry center between the right and left half of row 6, but only row 3 is completely accessible there as presence. The horizontal symmetry between left and right side besides B most likely has to be interpreted purely information-theoretically, i.e. |k| quantifies (as the minimal number of required elementary decisions) the length of the (not necessarily straight) information way to the center. The information of the two sides is equivalent, even synchronized due to the symmetry (however only perceptible from row 12 on).
Row 3, present in B (ScalarproductExample), forms the center of gravity of rows 0 to 6 only with regard to the sum of the probabilities. There isn't vertical symmetry [DirectedGraph]; the future (n > 3) is of course different from the past (n < 3). The greater n is, the greater the path length is (with regard to the number of the decisions) and the more way possibilities there and back exist. Therefore every way contains the more information, the greater n is. There isn't a conservation of information quantity; an increase of the information quantity (perceptible per proper time unit (ProperTimeUnit)) is possible.
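Assuming the ScalarproductExample refers to the scalar product of a binomial row with itself, the underlying identity is the classical Vandermonde convolution Σk C(n,k)^2 = C(2n,n); row 3 then indeed points to the center of row 6:

```python
from math import comb

def row_scalar_product(n):
    """Scalar product of row n of the binomial triangle with itself."""
    return sum(comb(n, k) ** 2 for k in range(n + 1))

# row 3: 1 + 9 + 9 + 1 = 20 = C(6,3), the central coefficient of row 6
print(row_scalar_product(3), comb(6, 3))  # → 20 20
```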
In the ScalarproductExample one can, in the case of a fixed sum s:=m+n, sum vertically (cf. [skahove]). Summation before scalar product formation also has interesting results. Regarding possible physical interpretation I had only vague, too speculative ideas till now.
(ElementaryCoordinates) A hierarchical constellation of (decision making) systems resp. "triangles" may be a reasonable assumption (hierarchical). One may identify elementary "particles" with defined (coordinate sets of) recombination points within defined triangles. There are a lot of free variables. One could for example assign matter (Matter) to branching possibilities[149] going out from the right side of the primary triangle, antimatter (Antimatter) analogously to those from the left; for photons one could assume the (vertical) center column k=0 (PhotonColumn) (***), as the (only asymmetrically, from one side visible) "location" of photon absorption (PhotonAbsorption). Within subordinate triangles, which start out of the superordinated triangle respectively orthogonally[150], one could assume for bosons (horizontal) rows with even row number, for fermions[151] (Fermions) rows with odd row number (n/2 as complete angular momentum, k/2 as angular momentum component orthogonal to the triangle). It has to be considered that single "particles" can have several degrees of freedom (e.g. polarization), so that a point can certainly play a special role (e.g. as center) in the description, but the relation of several points (especially start and destination point[152]) must be taken into account. A couple of initial considerations to this are also found in wq3. Special states of particles, e.g. quantum numbers of atom electrons, might correspond to the branching possibilities within further subordinate triangles. Among others the periodic system of the elements could give hints.
(RemarkToHeadline) Initially one can say that recombinations are caused by a primary decision (PrimaryDecision). But subsequently decisions are also a consequence of recombinations (of the recognition resp. perception connected with them). Because of the separation which is caused by decisions (differentiations), a definition of location then is possible. As human beings we are already separated resp. located. So every moment there is exactly one local (individual, inner) reality resp. presence for each of us, depending on our localized (individual) point of view[153]. That's quite usual, it's an everyday occurrence. Our inner realities are separated according to our decisions[154]. Because of the vast speed of light the macroscopic outer reality seems to permit a location independent definition of outer time direction[155] and therefore a location independent definition[156] of "before" and "afterwards". But the fact that the speed of light is finite is enough that, as long as[157] the observed systems are separated (by a potential barrier, locally), there is no location independent possibility to define "before" and "afterwards" exactly. In the end that's enough to avoid contradictions (because we know that small causes/results can mean great results/causes at another time)[158].
(ExtrAstroPhys) It is quite difficult to break free from the familiar local geometric view and, for example, to turn the way of consideration inside out. In astrophysics, for instance, one speaks of the (implicitly absolute, "rigid, frozen") diameter[159] of the border (horizon) of black holes. On the other hand one speaks about gravitational lenses: in the proximity of black holes, light beams are turned into the hole. What is (our impression of) the diameter of the hole, particularly if it is very heavy (visible universe) and its border is close to us, or if we are even in the border (to another hole)? What is real in the decisive (***) situation in the border between two holes? Here we make a decision on which hole forms the future of our proper time coordinate (starting from the present, from the current recombination point). Isn't an exact calculation necessary again precisely here[160]?
If we want to use common formulas: Are masses still almost constant (radiation losses), if there is a great[161] potential difference? Do higher-order derivatives along special directions gain great relevance? There are many open questions. So the probability of great mistakes grows towards 1 if one calculates too far, especially if one uses analytical and therefore approximative models [AnalysisAtBestApproximative].
We should not make (as in currently usual cosmological theories) an isolated extreme extrapolation of a single aspect into areas far away from the experimentally checkable, particularly if this extrapolation leads to a dead end[162]. I think that, similar to the big bang model, the model of a point-like, isolated black hole with a spherical[163] gravitational field is also so simplified that the danger at least of misunderstandings is great. Remember that all experimental tests of general relativity theory refer exclusively to very weak gravitational fields ([lifl] p. 178). I have frequently gotten the impression that models which describe a (restricted) part of reality are confused with reality, particularly by laymen (and everybody is a layman almost everywhere). This is problematic because the consequences can be unnecessarily restricted views of the world, restricted philosophies of life and, connected with this, insufficient realization of the painful consequences of egoistical behavior (egoism).
Evidently the usual geometric approach leads to errors in the end. Of course relativistic calculations are also carried out in astrophysics, but these (4D) geometric-analytic approaches are approximative too[164]. Particularly in observations of far-away objects, whose measuring results often permit only very indirect interpretation, there is naturally a large probability that the measurement result is also influenced in a relevant way by factors which remained unconsidered in interpretations that are hard to check and based on the current state of knowledge. This can lead to extremely distorted results. Caution is appropriate especially with extreme results whose physics is hardly known because there is no experimental possibility for verification. Perhaps it would be helpful if the common publications also exhibited the difficulties joined with the models more clearly and plainly: then it would be easier for a broad readership to find targeted suggestions for improvements, and altogether we could advance better. Of course many astrophysical considerations are valuable, even if they contain gaps. The approved four-dimensional approaches give interesting hints. It is not necessary to extrapolate them extremely.
(PerceptionOfMultiplicity) Here is a crude outline of the topic of multiplicity and consistency:
As before, QW(x) and QV(x) are defined by
We now study a symmetric arrangement (a simplified, flat "gyroscope model" (GyroscopeModel)): Two relatively small transmitter-receivers spin around a very heavy black hole between them, located in the center of gravity (point concept simplified). The centrifugal force compensates the gravity, therefore each of them forms roughly an inertial system (since each is relatively small). Each moves with speed v/c := x (about as fast as in the border of a nucleon) relative to the other and (after one has begun[165]) sends out a light pulse to everywhere (he "expresses himself") as soon as he has received a light pulse[166] (from the "other").
Due to the symmetry of the arrangement, each says about the other: "the other sends to me only QW(x) times as often as I send out to everywhere, and I receive only QW(x) times as often as the other sends out to everywhere[167]" (because of time dilation). So each would have the impression that from the other only (QW(x))^2 times as many spectrally shifted[168] light pulses come back as he sends out to everywhere. How can this be compatible with the fact that by definition each sends once per reception?
The contradiction dissolves if one imagines that each sees 1/(QW(x))^2 = (QV(x))^2 copies of the other simultaneously (per proper time unit (ProperTimeUnit)) or successively[169]. So he receives altogether just as many light pulses as he sends out (OneOutOneIn) and can nevertheless say that each copy sends only (QW(x))^2 times as often.
with
(The last limit value results if we insert the Stirling formula into [DefQ0Z].)
Therefore each sees 1/(QW(x))^2 = (QV(x))^2 resp. approximately πn/2 copies of the "other", which is as many as have room on a quadrant with radius n (or on a circle with radius n/4). The circle is the two-dimensional[170] approximative area of the simultaneously perceptible (***). It is also interesting that the black hole can have the effect of a gravitational lens and so can cause (for n → ∞) the impression of a circular distribution of the light pulses coming back, because the light beams from the transmitter to the receiver don't run "straight" (the hole stands in the way) but "bend around"[171] the hole and therefore lie rotationally symmetric around the "straight" connecting line through the hole between the two.[172]
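The approximation (QV(x))^2 ≈ πn/2 can be checked numerically. A minimal sketch, under the assumption (suggested by the Stirling remark, but not stated explicitly in this excerpt) that the quantity behind [DefQ0Z] is the central binomial probability Q0(n) = C(n, n/2)/2^n of Pascal's triangle, whose inverse square tends to πn/2:

```python
from math import comb, pi

def q0(n):
    # Central binomial probability: the chance of landing in the middle
    # column of a Pascal-triangle random walk after n steps (n even).
    # Assumed here as the definition behind [DefQ0Z].
    return comb(n, n // 2) / 2 ** n

# By Stirling's formula, 1/q0(n)^2 approaches pi*n/2 for large even n.
for n in (10, 100, 1000, 10000):
    ratio = (1 / q0(n) ** 2) / (pi * n / 2)
    print(n, ratio)
```

For n = 1000 the ratio already deviates from 1 by less than a tenth of a percent, consistent with the limit claimed in the text.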
(It can also be interesting to think about the minimum number of (independent) decisions necessary to realize the mentioned two-dimensional arrangement from a unit or a "point". Is it two in the 2D model, and therefore one per model dimension?)
Because n is an integer, the mentioned analogy of (QV(x))^2 = 1/(QW(x))^2 and nπ/2 implicitly raises the question of what shall be valid in the case of intermediate results on the left side. Conceivable would be a bridge to quantum physics, e.g. in the form that in the case of intermediate results the left side is unstable and transmission/reception of energy results up to the next integer level.
Similar, more consistent considerations could lead further. Of course the geometrical model can give hints, but it is secondary; in the end one must leave it behind.
In the end all statements should be based solely on primary axioms of our decision and perception process. It would already be remarkable if we succeed ever better at this.
To demand more may be too much, because it is probably in principle impossible for us human beings to recognize[173] these bases exactly and completely (again).
Recognition (perception) in everyday life is incomplete. It refers to patterns from parts of our local past or to associated (partially copied) patterns. A complete perception would also include ourselves as perceiving human beings and is therefore impossible for us.
(OwnPerception) I am occasionally surprised at how far-reaching a way this rule applies, particularly on longer-term consideration. Experiences from everyday life seem to confirm that it concerns all decisions and so already starts with thinking (inside us).
From quantum physics we have learned (cf. e.g. [AllPossibleWays]) that the possible, too, influences the ways of nature. Just the possible is that which is quickly detected by parallel foresight within many small units
(i.e. by quick extrapolation of the local[174] inner (partial) models or maps[175] of reality[176], for preselection),
faster than could be done by a "global experiment" alone. By combining the "global large-scale experiment" with many parallel-running (approximative) local experiments in thought, optimization can be done very fast.
The concept "only the strongest will survive" may be correct in the short term within a restricted frame (in case of a restricted definition of "survival"). In the long run thoughts are decisive, because they initiate all decisions. They are dominated by that which we freely like to remember. And we don't like to remember contradictory things; we like the truth:
(ConfidenceWellFounded) Perceiving the great quantity of information already created, we see that in the long run the altogether non-contradictory (the truth) dominates over the contradictory.
(NoAnticipation) The following argumentation is not new; it is repeated here only because of some references to the other text:
The classical physical models (before the introduction of quantum physics) theoretically would have permitted the exact calculation of future measuring results. But (theoretical) models which permit an exact anticipation of the future are from the beginning unsuitable for elementary (exact) descriptions of natural occurrences, because this anticipation doesn't happen in nature. If it were possible, no liberty would exist. One can also say conversely: since we can predict the approximative (short-term) future, restraints (basic conditions) exist for our freedom.
Inertia (and therefore even gravitation) plays an important role in the mediation of elementary decisions, for instance decisions between "spin positive" and "spin negative" (at the moment we don't need more exact definitions). One should not be disturbed by different orders of magnitude - the elapsed time between decision and perception implies a (depending on the duration, more exactly said on the number of elementary times or recombinations in between, more or less large) lot of (way) possibilities and therefore small probabilities per possibility, so that a great renormalization factor [Renormalization] results. (Some considerations on this topic (the special role of the third power) can also be found in the download files. If you are interested, please search for (***) in particular, as usual.)
By the fact that we (as the most future observer in our own individual, local system) determine definite destination points by localization of our counter-pattern [LocalizationOfCounterpattern] due to our decisions (FunnelOfDecision), allowing in between only free choice of way [FreeChoiceOfWay], restraints of freedom exist elsewhere, which a moment later also affect us personally - as soon as our decision becomes irreversible also for us because of our interaction with the surroundings.
It is clear that our own decisions are the more directive for the surroundings, the more contradiction-free they are. The non-contradictory part magnifies itself in the long run (Diversification), and many things, including symmetry considerations (cf. [OwnDecisiveContribution]), signal in particular that we must not contradict ourselves if we want to cause something directive.
Faced with the huge size differences of different reference systems already within the visible universe, the short-term size of one's own sphere of influence is probably less important; more important, however, is a durable connection with the right guideline (give), so that it leads our decisions consistently and permanently, so that on average the number of new possibilities and the quantity of new non-contradictory information per common proper time (ProperTimeUnit) can grow permanently [InfinitePotential].
We know that every perception is connected with a finite sequence of decisions (FiniteRecombinationSequence). We can influence them the more, the greater the natural room for interpretation of the truth is. Often, in case of doubt, there is the possibility to choose the prettier alternative without fooling oneself, for example when estimating the thoughts of other human beings. If we use this with best knowledge and conscience, we create a new, even prettier (and also more detailed) truth.
There are good reasons to be grateful. One of them:
We have reason to be grateful for the great variety which we perceive, and for the large effort (Work) which has been necessary to create this variety.
L. E. Ballentine,
The statistical interpretation of quantum mechanics,
Rev. Mod. Phys. 42, (1970) 358-381.
M. Born,
Zur statistischen Deutung der Quantentheorie,
Stuttgart: Ernst Battenberg, 1962.
D.S. Bridges,
Constructive functional analysis - Research notes in mathematics 28;
London, San Francisco, Melbourne: Pitman, 1979.
L. Brillouin,
Science and Information theory,
New York: Academic Press Inc., 1967.
L. E. J. Brouwer,
Over de grondslagen der wiskunde,
Thesis, Amsterdam & Leipzig. 183 pp.
(cf. N. Archief v. Wiskunde (2) 8 (1908), 326-328).
L. E. J. Brouwer,
Zur Begründung der intuitionistischen Mathematik. I-III,
Math. Annalen
93 (1925), 244-257;
95 (1926), 453-472;
96 (1927), 451-488.
D.W. Cohen,
An Introduction to Hilbert Space and Quantum Logic,
New York, Berlin, Heidelberg: Springer 1989.
A.S. Eddington,
Fundamental Theory,
Cambridge: Cambridge University Press 1946.
R.M. Fano,
Informationsübertragung: Eine statistische Theorie der Nachrichtenübertragung,
München, Wien: R. Oldenbourg Verlag, 1966.
A. Fine,
Theories of probabilities, an examination of foundations,
New York: Academic Press, 1973.
T. Fließbach,
Allgemeine Relativitätstheorie,
Mannheim, Wien, Zürich: BI-Wiss.-Verlag 1990.
A. A. Fraenkel, Y. Bar-Hillel, A. Levy,
Foundations of Set Theory,
Amsterdam, New York, Oxford: North-Holland, 1958.
R.L. Graham, D.E. Knuth, O. Patashnik,
Concrete mathematics: a foundation for computer science, Second Edition,
Reading, Massachusetts: Addison-Wesley, 1994.
H.S. Green,
Information Theory and Quantum Physics,
Physical Foundations for Understanding the Conscious Process,
Berlin, Heidelberg: Springer, 2000.
H. Haken, H.C. Wolf,
Atom- und Quantenphysik,
Einführung in die experimentellen und theoretischen Grundlagen (5. Auflage),
Berlin, Heidelberg, New York: Springer 1995.
M. Hazewinkel,
Encyclopaedia of Mathematics,
Dordrecht, Netherlands: Kluwer Academic Publishers 1997.
A. Heyting,
Intuitionism: an introduction,
North-Holland, 1970.
D. Hilbert,
Über das Unendliche,
Math. Annalen 95, (1926) 161-190.
A. M. Jaglom, I. M. Jaglom,
Wahrscheinlichkeit und Information, vierte Auflage,
Thun; Frankfurt am Main: Harri Deutsch, 1984.
G. Joos,
Lehrbuch der theoretischen Physik, 15. Auflage,
Wiesbaden: Aula-Verlag 1989.
J. G. Kemeny, J. L. Snell, G. L. Thompson,
Introduction to Finite Mathematics,
Englewood Cliffs (N.J.): Prentice Hall, 1959.
A. Khrennikov,
Non-Archimedean Analysis: Quantum Paradoxes, Dynamical Systems and Biological Models,
Dordrecht, Boston, London: Kluwer Academic Publishers, 1997.
A. N. Kolmogorov,
Probability Theory, in
Mathematics, its Contents, Methods and Meaning, Vol. 2,
Cambridge: MIT Press, 1964.
U. Krengel,
Einführung in die Wahrscheinlichkeitstheorie und Statistik, 3. Auflage
Braunschweig: Vieweg, 1991.
W. Kuhn,
Quantenphysik, Band III E,
Braunschweig: Westermann, 1978.
A. Messiah,
Quantenmechanik, Band 1, 2. Auflage,
Berlin, New York: Walter de Gruyter, 1991.
K. Meyberg, P. Vachenauer,
Höhere Mathematik 2,
Berlin, Heidelberg: Springer, 1991.
C. W. Misner, K. S. Thorne, J. A. Wheeler,
Gravitation,
New York: W. H. Freeman and Company, 1973.
J. Mycielski, H. Steinhaus,
A mathematical axiom contradicting the axiom of choice,
Bull. Acad. Polon. Sci. 10, (1962) 1-3.
J.R. Myhill,
Towards a consistent set theory,
Journal of Symbolic Logic 16, (1951) 130-136.
K.R. Parthasarathy,
An introduction to quantum stochastic calculus,
Basel: Birkhäuser, 1992.
J. Peters,
Einführung in die allgemeine Informationstheorie
Berlin, Heidelberg, New York: Springer 1967.
I. A. Poletajew,
Kybernetik, 3. Aufl.,
Berlin: VEB Deutscher Verlag der Wissenschaften, 1964.
E.L. Post,
Formal reductions of the general combinatorial decision problem,
Amer. J. of Math. 65, (1943) 197-215.
L. Rieger,
On the consistency of the generalized continuum hypothesis,
Rozprawy Mat. 31, Warszawa. 45 pp., 1963.
J.B. Rosser,
Constructibility as a criterion for existence,
J. of Symb. Logic 1, (1936) 36-39.
I. R. Shafarevich,
Mathematical reasoning versus nature,
Comm. Math. Univ. Sancti Pauli (Rikkyo Univ.) 43, No. 1, (1994) 109-116.
C.E. Shannon,
A mathematical theory of communication,
Bell System Techn. J. 27, (1948) 379-423, 623-656,
Reprint in The Mathematical Theory of Communication, Univ. of Illinois Press, 1949.
G. Takeuti,
Axioms of infinity of set theory,
J. Math. Soc. Japan 13, (1961) 220-233.
A.S. Troelstra,
Choice Sequences; A Chapter of Intuitionistic Mathematics,
Oxford: Clarendon Press, 1977.
A.S. Troelstra, D. van Dalen,
Constructivism in mathematics, an introduction, Vol. 1-2,
North-Holland, 1989.
V.A. Uspenskii,
Pascal’s Triangle,
Chicago, London: The University of Chicago Press, 1974.
M. Wagner,
Gruppentheoretische Methoden in der Physik,
Braunschweig, Wiesbaden: Vieweg, 1998.
H. Weyl,
Das Kontinuum. Kritische Untersuchungen über die Grundlagen der Analysis,
Leipzig. 83 pp. Reprinted 1932.
H. Weyl,
Über die neue Grundlagenkrise der Mathematik,
Math. Zeitschr. 10, (1921) 39-79,
Reprinted, with Nachtrag Juni 1955,
in Selecta Hermann Weyl (Basel & Stuttgart 1956), 211-248.
H. Weyl,
Randbemerkungen zu Hauptproblemen der Mathematik,
Math. Ztschr. 20, (1924) 131-150.
[1] This allows a well-defined quantification of information in the known unit "bit". Therefore one can roughly understand the information quantity as the "necessary number of bits for coding the information". It is a relative quantity (cf. e.g. [lija] p. 86). An exact definition can also be found in http://arXiv.org/abs/quant-ph/0108121 . It uses an entropy concept which is closely connected to that of thermodynamics [lipo] [libri].
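To make the bit count concrete, here is a small illustration (my own, not from the cited literature) of the standard Shannon measure of information per symbol:

```python
from math import log2

def information_bits(probabilities):
    # Shannon entropy: average number of bits needed to code one
    # symbol drawn from the given probability distribution
    return sum(-p * log2(p) for p in probabilities if p > 0)

print(information_bits([0.5, 0.5]))   # a fair coin: 1.0 bit per toss
print(information_bits([1.0]))        # a certain event: 0.0 bits
print(information_bits([0.25] * 4))   # 4 equally likely symbols: 2.0 bits
```

A certain event carries no information, matching footnote [5]: once the probability reaches 1, nothing new is coded.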
[2] Justification of skips in argumentation by words like "obvious", "clear", "easy", "simple" can sometimes be appropriate for abbreviation purposes, but we shouldn't forget: these words aren't generally valid. Instead, they are relative, dependent on our point of view (e.g. our previous knowledge: a posteriori "easy" things can be a priori difficult; afterwards one always knows everything better...).
[3] Every measurement (resp. perception) contains differentiations and decisions. It is also a central statement of the Copenhagen interpretation that every measurement result comes into existence only by doing the measurement (cf. e.g. [lijo] p. 543 or [liku] p. 123).
[4] These axioms permit the a priori existence of infinite sets and of choice functions on those sets [lita] [limy1]. They were formulated around 1900 and implied several paradoxes (antinomies) from the beginning, which led to an intensive discussion on the concept of existence and on the foundations of mathematics [lihe] [lihi] [lifr] [liwe] [liwe1] [liwe2] [liri]. There were several suggestions for attempts to moderate the difficulties [litr1] [librid]. But a limitation of mathematical liberty was always connected with these, so that the majority of mathematicians keeps to axioms which demand the a priori existence of infinite sets - surely also because of the noteworthy successes of analytical approaches in the approximative description of natural processes. So it is explainable that in mathematical physics the analytical work with infinite continuous number sets became a usually unscrutinized, self-evident fact (for exceptions cf. [likh]), despite the mentioned open discussion on the foundations and despite the discovery of the quantization of physical measurement results (especially of the half effect quantum hq/2 ([lime] p. 47)) at the beginning of the 20th century. That would have been a good opportunity for drawing conclusions with regard to the foundations of mathematical physics, but "the moment was lost" ([likh] p. 15).
[5] After measurement we perceive it as present (for the moment fixated) information. So it becomes conscious and with that certain truth, i.e. its probability reaches 1.
[6] Also the mental choice of an element out of a set S is something physical. Due to the finite time which we have for doing this, S is always finite. If we, for example, mentally choose an integer within one minute, the information content of its mentally chosen representation will always be smaller than a predefined upper limit m, for example below m = 10000 bits. Therefore S contains only those integers whose representation (seen from our point of view -- information quantity is relative) requires at most m bits, i.e. S contains not more than 2^m elements and therefore is a finite subset of all integers.
[7] Our physical measurements are always implicitly done within a finite proper time interval, which starts with the definite decision for the measurement going out from us and ends with the perception of the measurement result returning to us [DecisionToPerception]. Every measurement contains an only finite sequence of recombinations [FiniteRecombinationSequence].
[8] According to the subsequently explained approach [NewInfo], information and diversity first must be created by decisions, whose number at any given time is always finite. From this, discreteness resp. quantization of measurement results follows easily.
[9] The condition "within a finite (proper) time interval" is important here. I do not at all hold the (pessimistic) opinion that "everything is finite". On the contrary, concerning the (very) long term I am optimistic. Due to its arbitrarily great diversity, the infinite is related to the future (and not to the past, if one doesn't want to pervert the natural meaning of the words) (InfinityConcernsFuture). With my texts I want to show, among other things, that a complete uncoupling (of the ego resp. proper time coordinate) isn't possible, and that infinity (infinite coding depth resp. information) of something potentially existing is necessarily connected with infinity of potential (proper) time. Here the word "proper" is in brackets because the information barriers (which define the "proper" reference system) change in the course of time.
[10] Difficult to overview without previous experience (from the past), without analytic concepts, e.g. differentiation of functions, which proceed on the assumption of the a priori existence of continuous (uncountable) sets of numbers. The exact calculation of many analytic functions needs an infinite number of elementary combinations (ElementaryCombination), which is problematic for an exact approach to perceptible reality.
[11] We could say at once that there is the (quite complex) world because it is here. But then we have given reasons for something complicated by using something complicated, and we haven't got further.
[12] Perhaps you wonder why I attach importance to exactness. It is simply because "exact" is the primary concept which is suitable for elementary (primary) considerations. Something is (exactly) equivalent or not. The notion "approximate" arises only secondarily (e.g. from analyses of probability distributions), and a definition of this notion is difficult and arbitrary; therefore it is unsuitable for elementary considerations; cf. a. (ExactnessNecessary).
[13] The perceptible reality contains everything which can sometime, somehow be represented (fixed) as information (potential past). Here the notion reality means "perceptible reality" (PhysicalReality). The whole also permits decision liberty resp. life resp. the potential of an (unknown, nonexistent) future [Future]. By definition this potential cannot be fixed as information.
[14] For a description of the perceptible reality, at most an exact start is possible. Already the reality currently perceived by us (ToPast) cannot be described exactly, because there is no further equivalent for it; it is complex and as unique as we personally are. Therefore it is fair to assume that everything which we perceive (as reality) in the end comes from ourselves. Whether this conclusion is correct also in the objective sense, you will perhaps be able to judge better after you have read the further text.
[15] Within "outer" systems, self-combinations (recombinations) run much faster than we as observers can perceive (per proper time, by combination with us personally). We perceive only a part of those recombinations (which is proportional to the sum of the probabilities that transmissions from us return to us again [PTimePropSumQ0]). Usually this part is very small at first, which may be one of the reasons why analytic models are quite suitable for approximative calculations.
[16] The noteworthy successes of different model concepts of quantum physics, for example the Hilbert space concept, shall not be put into question in any way. Of course useful models have their sense, e.g. as bridging. Also from those models we can derive (among others) valuable hints to get ideas for exact approaches (see below).
[17] E.g. the (complex) exponential function is often presupposed, among others for the representation of a wave function. The exponential function is used extremely frequently because it is proportional to its own derivative. Mathematically, the differentiation which leads to this derivative uses the borderline case of infinitesimal differences of the exponent. But in the physical literature the exponent usually is proportional to the time and/or location coordinate. With this, the preconditions for infinitesimal differentiation aren't fulfilled, because there are no infinitesimal differences of time and place due to the uncertainty relation. The problem already lies in the differential equations, which presuppose infinitesimal differences of location and time and whose solution leads to the exponential function. Correction is possible e.g. by use of analogous equations with discrete resp. finite differences (Schroedinger) or discrete representations of the function (Q0SCTriangle).
Here it is reasonable to adapt the quantity of the finite differences to natural quantization. Look for example at f(n,E) := (1+iE)^n. The finite difference f(n+1,E) - f(n,E) = ((1+iE)-1)(1+iE)^n = iE f(n,E) is proportional to the original function. Here we can choose n = Et/hq (E = energy, t = time, hq = effect quantum).
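A minimal numeric check of this discrete ansatz (the concrete numbers are my own choice): f(n,E) = (1+iE)^n satisfies the stated difference relation exactly, and for small E it stays close to the continuous exponential exp(iEn):

```python
import cmath

def f(n, E):
    # discrete analogue of the exponential factor: f(n,E) = (1+iE)^n
    return (1 + 1j * E) ** n

E, n = 0.001, 100  # illustrative values, not from the text

# the finite difference is proportional to the function itself:
# f(n+1,E) - f(n,E) = iE * f(n,E)
print(abs((f(n + 1, E) - f(n, E)) - 1j * E * f(n, E)))  # ~0 (rounding only)

# for small E the discrete function approximates exp(iEn)
print(abs(f(n, E) - cmath.exp(1j * E * n)))
```

The difference relation holds for every E, while the agreement with exp(iEn) is only the borderline case of small steps, which is exactly the point of the footnote.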
[18] If we follow up this train of thought, the idea appears that even the past known to us presents only an imperfect model of the whole - there is (new) future only if we don't get stuck in such models.
[19] In no way should Hilbert's merits be questioned. His engagement for objectivity is laudable. Thus he wrote in his essay "Über das Unendliche", after a chapter concerning quantum physics ([lihi] p. 164):
"Und das Fazit ist jedenfalls, daß ein homogenes Kontinuum, welches die fortgesetzte Teilbarkeit zuließe, und somit das Unendliche im Kleinen realisieren würde, in der Wirklichkeit nirgends angetroffen wird."
My translation: "And the result is in any case that a homogeneous continuum, which would admit continued divisibility and would thus realize the infinite in the small, is nowhere to be found in reality."
I don't know how much he remembered this when his concepts of continuous spaces (e.g. the Hilbert space concept, [lico] p. 14-20) found broad application precisely in quantum physics.
[20] E.g. for R/Q or for a Cauchy sequence converging towards the irrational number π - the "imagination" of a circle of course doesn't mean an exact description of this number. As described further below, we can work on the principle that the entire geometric appearance (of us personally and of our surroundings, including the visible surfaces resp. information barriers) is only a secondary consequence of a combinatoric law which is directed by (our more or less old) decisions (within recombination points), and that it represents only the borderline case of the composition of a large number of recombinations.
[21] In the model the surroundings are described by further variables.
[22] It is sometimes distinguished whether a (non-periodical) sequence of decisions is lawless or lawlike. To this it has to be noted from the physical point of view that, even if there is a law or an algorithm (which can be coded with finite information) for calculating the sequence step by step, the actual calculation of the sequence (for example with the help of a computer) leads with every calculation step to an increase of entropy and therefore, in another place, to excessive information loss. So it's not enough to code a law or an algorithm; an exact result must also come into being after a finite number of steps, so that an equivalent of the result can exist in reality. We can see from this that an exact way of consideration requires taking the time component into account - concepts like "maximum fast recombination sequence in the local (informed) system" are relevant [MaxLocalFrequency].
[23] E.g. every non-periodical real decimal number between 0 and 1 means a choice mapping from the infinite cross product of the set {0,1,...,9}, in which each of the infinitely many fractional digits means a new (newly to be decided) choice of an element of this set.
[24] The infinitely large quantity of information is the fundamental problem (infinite information never belongs to the past). Almost all real numbers are irrational numbers whose exact representation would contain an infinitely large quantity of information. If the limit of an (infinite) Cauchy sequence, however, doesn't contain an infinite quantity of information, for example in case of a rational limit value, an (exact) equivalent of the limit can also exist in nature (within finite time), e.g. in form of digital information in a computer memory (this word clarifies that "existence" always reaches back to the past). But the infinite Cauchy sequence is an artificial product (an endless sequence never belongs to the past; we know that time isn't arbitrarily sub-divisible). For example the limit of the series
can exist in nature (in the form of an exact equivalent), but not the series itself. Concerning this topic there have already been many misunderstandings (e.g. Zeno's paradox). In special cases the limits of infinite series can even be exact. But unfortunately, since there is no equivalent to these series in nature (within finite time), the model concepts connected to these series can also obstruct our cognition.
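The series itself is not reproduced in this copy of the text. As a stand-in illustration (the series 1/2 + 1/4 + 1/8 + ... is my own choice), every partial sum is an exact rational number, i.e. finite information that can exist, while the rational limit 1 is only approached, never reached, in finitely many steps:

```python
from fractions import Fraction

# each partial sum is an exact rational number, i.e. finite information
s = Fraction(0)
for k in range(1, 21):
    s += Fraction(1, 2 ** k)

print(s)      # 1048575/1048576, still short of the limit 1
print(1 - s)  # 1/1048576, the gap remaining after 20 steps
```

The exact rational limit 1 can also exist as finite information; only the endless sequence of steps towards it has no equivalent within finite time.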
[25] Physical quantities cannot be defined independently of others. This has to be taken into account if, in mathematical models, physical quantities which don't fully compensate each other tend to zero or infinity. At this, other quantities also change which are connected in reality (often indirectly, outside the simplified model). In particular, the proper time coordinate is also concerned after finite time. So if it's necessary "to let variables tend to infinity", this must happen in the right combination (among others with the proper time coordinate, cf. a. [DtOnBothSides]) and order (Order), otherwise the calculation becomes wrong.
[26] Mathematical proofs resp. foundations are exemplary with regard to consequence and exactness (within the dealt topic). It would be a pity [lish] if suddenly a strong break in consequence occurred, in that the results and/or models (e.g. from geometry or analysis) are transferred to reality in a too far-reaching and therefore inadmissible way, particularly if an extrapolation is done into ranges which are far away from any possibility of experimental verification (e.g. in the context of cosmology). The great probability of errors becomes clear if one thinks of the obligatory, hardly scrutinized application of analysis in mathematical physics and simultaneously of the cunning constructions in many analytical proofs (e.g. the counting of the union of countably many countable sets, sequences, diagonal sequences, the usage of infinitely small, uncountable neighborhoods...). In these constructions, (n → ∞) choice decisions are often (implicitly) necessary, and the liberties granted by the definitions are exhausted to a very large extent - the definitions, however, don't describe reality (in which decisions require space and also time) but a model, which (sometime, t → ∞) deviates from reality.
I would like to emphasize again that of course there is no doubt that analysis (among others in mathematical physics) can be very helpful. It is just important to remain within the experimentally checkable range resp., in the case of extrapolations, not to forget the difficulties of the models used.
For many approximative calculations analytical considerations will remain necessary. Moreover, working with uncountable, continuous sets (e.g. real numbers) can give valuable hints: such models permit extended liberties to test different combination orders and to check results against experimental findings. Perhaps in this way we can find the correct combination order, too. But there is the danger of time-consuming wanderings, and systematics can be lost [TooManyPossibilities].
For a definite exact calculation uncountable (continuous) number sets aren't suitable in principle, so we must confine ourselves to numbers which (resp. whose conceivable and thus realistic equivalents) exist exactly within finite time and early enough as quotients of integers (as results of our mental decisions resp. choices). Here the words "early enough" also mean that even countable sets, e.g. the natural or rational numbers, cannot simply be assumed as existing from the start; here too the order of counting is important.
Since in the physically measurable reality counting obviously is done very fast (short elementary time, parallelizing), analytic concepts probably will keep their justification for approximative calculations. However, great progress might be expected if we can explain the order of the combinations more exactly by orientation on our natural decision and perception process. Of course this process at first has to be described more exactly, cf. (Order).
[27] In many mathematical proofs, implicitly or explicitly, extensive use is made of the possibility of choosing a subset from a larger set. Under natural conditions this is joined to decisions, which require time (and free energy)...
[28]the greater, the more one calculates "at random"
[29] At first glance one could perhaps relate decision to division or subdivision, and perception on the other hand to union (of Something). The Something resp. the formations combined in this way are probably not only simple scalar quantities but sets with two or more dimensions, for at least differentiated perception always implies a decision within the perceived thing, more exactly said a distinction, i.e. a subdivision of the perceived set and an assignment of the parts to separated sets within the own past. For the construction of an axiomatic system it's therefore reasonable to introduce the concept of the (primary) decision before the concept of perception. Possibly it is necessary to distinguish between primary (1/2:1/2) decisions, which are independent of prior perceptions, and secondary decisions, which are dependent on prior perceptions.
The close connection of decision and perception, particularly in later times, is remarkable; a sure (completed, perfect) decision would possibly be equivalent to a sure (completed, perfect) perception.
[30]their number becomes larger and larger also in the reality, but just not necessarily in the same order as in the (arbitrary, extrapolated) mathematical model
[31] However, it is often suppressed, which is also a consequence of an environment that demands quick and as clear as possible statements (without these, often no money comes; but seen long-term it is unproductive if wrong information is sold and bought). Particularly in popular scientific magazines such statements are frequently represented as "truth" (besides other actually correct statements). Such magazines belong to the mass media, and there exists the trend to believe the statements of the mass media, because they try to penetrate us in frequent repetition. But an untruth doesn't become right by the fact that it is frequently repeated (mixed within true statements).
[32] If I'm wrong or there is a change concerning this, I would be very much (***) interested in references relevant to this.
[33] In case of approximative calculations often it may be impracticable (and also not necessary) to work completely without the axiom of choice and continuous sets of numbers. Nevertheless an exact approach also is necessary for basic knowledge, which e.g. is needed to be able to estimate the applicability of approximative calculations.
In this context one of many examples which shows the necessity of exactness [ExactnessNecessary]: We look at the function sin(x·2π), which is frequently used for describing real waves. It has zero points for all (arbitrarily large) integer x. If however the number π (and the function sin) has no (exact) equivalent in reality (because it can never be exactly conceived), the function values and zero points in reality clearly deviate for sufficiently large x.
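As a small numerical illustration of this point (my own sketch, not from the text): if π is replaced by any finitely conceived approximation, the "zero points" of sin(2πx) drift away for large integer x.

```python
import math

# Sketch (my own illustration): with a truncated "equivalent" of pi,
# sin(2*pi*x) no longer vanishes at large integer x.
pi_approx = 3.14159  # finite, inexact stand-in for pi

for x in (1, 10_000, 10_000_000):
    print(x, math.sin(2 * pi_approx * x))  # deviation from 0 grows with x
```

For x = 1 the deviation is of the order 10^-6, but for x = 10^7 the accumulated phase error is tens of radians and the "zero point" has disappeared entirely.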
[34] I noticed afterwards that my previous argumentation is in parts similar to that of intuitionism [libr1] [libr2] [litr] [litr1] (cf. a. [IntuitN3]). Independently of this the further considerations should be essentially new.
[35] The geometric appearance influences our thinking so much that there is great danger of wrong conclusions. Even the mathematical approaches of quantum physics are concerned. For example one speaks about a complete angular momentum lges of a particle and calculates it from the (in reality not simultaneously measurable) angular momentum components lx, ly and lz according to the formula lges = sqrt(lx^2 + ly^2 + lz^2), although it is well known that in reality every angular momentum is an integer multiple of the half effect quantum hq/2 = h/(4π).
[36] I also see no possibility to justify the a priori usage of analytical models. Already the basic analytical concept "continuity" proves inappropriate for the (exact) description of physical reality.
[37] Any decision which we take is, strictly speaking, the decision for a most often very complex (physical) experiment whose initial parameter is the complete information at the beginning of the experiment, including the information about our decision. Depending on time, our perception then contains a more or less large portion of the information about the experimental result. In the case of a simple quantum physical experiment with a simple result (e.g. "spin +1/2" or "spin -1/2") our perception can be complete quite soon, i.e. it can contain the complete information about the experimental result. If the mathematical model which describes an experiment requires an infinite number of arithmetical operations for calculating the probability distribution of the result from the initial parameters, this strictly speaking (ExactnessNecessary) means an uncoupling of the experimental result from the initial parameters, which also include the information about our decision. But actually (in case of finite experiment duration) the way of the information from every decision to every perception (from us to the surroundings and back) isn't infinite (we aren't uncoupled), but finite. It contains only a finite chain of recombinations, whose mathematical equivalent includes an only finite (RecombinationCountFinite) chain of elementary arithmetical operations (ElementaryCombination). Of course we are not uncoupled from our surroundings but connected (by finite ways of information).
[38] The phrasing "building block" clarifies the common idea of elementary particles and also shows the error in the train of thought connected to the particle model. Namely, a "building block" means something "stiff" or "frozen", therefore unchanging from the point of view of time, which exists completely isolated (implicitly: "always independent") from the observer. It then wouldn't be observable at all.
Observation is always connected to the exchange of information. However, the exchange of something means that both in the subject and in the object something happens. This is only possible if time passes on both sides, cf. (DtOnBothSides), if there is an [overlapping].
So the experimental results also show that e.g. the length is connected with the impulse (unsharpness resp. indefiniteness of location). The isolated definition e.g. of a smallest length greater than 0 isn't meaningfully possible, probably because it isn't compatible with our decision and perception process: The "length ends" would be distinguishable (a step further into the future), which however would only be possible if there were still smaller lengths. Primarily the discreteness (discontinuity) of the perceptible reality probably is caused by (information-)theoretical quantities (bit, proper time unit [ProperTimeUnit]), which relate to our decisions resp. distinctions, not e.g. by (isolatedly defined, absolute) metric quantities.
[39] The wave model also works with absolute scales. In addition, at present one uses approximative (analytic) functions for the description of the waves. The reference to probability distributions joined with the concept "wave" however provides interesting hints. The concept "wave" corresponds more to a probability distribution, i.e. the pre-state of a decision, the concept "particle" more to the result of a decision (to information causing in turn a new probability distribution).
[40] These surely cannot be viewed in isolation from other physical quantities. We have already learned that even "small" dependences have important consequences, e.g. dependences of lengths on impulse or potential. Of course further dependences exist, already because of the frequent usage of (analytic) formulae, which are approximative for the mentioned reasons (AnalysisAtBestApproximative).
If a dependence isn't known up to the present, it unfortunately is often concluded (unconsciously) that there isn't any dependence. This becomes particularly clear when single formulae are extrapolated in isolation to the extreme (e.g. big bang theory, spherical black holes [ExtrAstroPhys]). This leads to nonsensical, partly contradictory results, because the basic conditions for which the isolated (approximative) formula was appropriate don't exist any more in such extreme ranges.
(e.g. it was asked whether the universe is open or closed: With a fixed, absolute Schwarzschild radius the universe "expanding since the big bang" would have been closed in early times; why should it be open now? So the suspicion arises that the concepts used are unsuitable in principle.)
Other basic conditions make other combination orders more probable than the one for which the formula was written. In the solution of non-linear equations it would be necessary to take the natural probability of different solution directions (combination orders) into account correctly.
[41]v = relative speed, c = speed of light
[42] As mentioned, we cannot proceed on the assumption that any endless (one- and of course also multidimensional) number sets (more exactly: their equivalents in reality) exist from the beginning. Linear dependences either don't exist (for elements of a kernel unequal to {0}, or in case of missing surjectivity for elements outside the image of the linear function describing the dependence) or (in the case of a bijective function) don't allow any freedom for new information, which could be represented e.g. as a set of new number vectors (in an independent direction). However, the elementary combinations which correspond to non-linear dependences give hints for an objective explanation of the growing simultaneously perceptible information set (of recombinations) resp. its equivalent in the form of a growing set of different number vectors.
[43] Outflow and inflow (flow and backflow) from different (orthogonal) pairs of directions?
In this context I thought of the Maxwell equations and the orthogonal directions of electric and corresponding magnetic fields, whose vector product (the (Poynting) vector) represents the flow of free energy, and that this free energy is directly perceptible by us (and also can be emitted by us (FreeEnergy)).
These might correspond to the transitions future-presence resp. presence-past.
[44] Here further (combinatory) considerations may continue, e.g. under consideration of the [Maxwell] equations. But change of direction implies that the observer's point of view (ObserverViewPoint) cannot be a simple point; otherwise there wouldn't be any distinguishable directions. Indirectly the former observer point of view is already taken into account with the impulse, which evidently does not yet suffice. In the following we nevertheless talk of points, otherwise the considerations get quite complicated too early.
[45] More general: By "determined" units which contain enough free ("non contradictory") energy to overcome a separating potential (information) barrier [PotentialBarrier].
[46] As a model one could imagine that the photon comes from a single dipole radiator. But already the rotationally symmetric form of the dipole radiator with surroundings isn't elementary but a geometrical borderline case. Therefore we subsequently assume that at first only 2 directions are eligible.
[47] This formulation may even be quite good if no decision about the direction has been made yet.
[48] The Gaussian distribution plays a central role in Eddington's "Fundamental Theory" and led him to interesting conclusions. In case of an a priori finite (and therefore also discrete) approach the Gaussian distribution has to be replaced by a symmetric binomial distribution. The binomial distribution offers additional starting points for elementary combinatorial considerations.
[49] The original Pascal triangle [lius] shows the numbers of way possibilities. Here (in the Q0 triangle) these numbers are respectively multiplied by 1/2^n (pl=pr=1/2; n is the number of the row resp. the count of steps) to get the corresponding probabilities. The numbers of the Q0 triangle are quotients of the regular Pascal triangle numbers and 2^n.
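A minimal sketch of this construction (assuming the straightforward reading: row n of the Q0 triangle is row n of Pascal's triangle divided by 2^n):

```python
from fractions import Fraction
from math import comb

# Q0 triangle as described: Pascal's triangle numbers divided by 2**n,
# so that each row is a probability distribution (pl = pr = 1/2).
def q0_row(n):
    return [Fraction(comb(n, k), 2**n) for k in range(n + 1)]

for n in range(5):
    row = q0_row(n)
    assert sum(row) == 1  # each row sums to total probability 1
    print(n, [str(q) for q in row])
```

Exact rational arithmetic (Fraction) is used deliberately, in keeping with the text's restriction to quotients of integers.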
[50] For the exact calculation of finite partial sums of the Taylor series expansion we need only a finite number of elementary combinations (ElementaryCombination). This is nearer to reality than the function itself, which cannot be exactly calculated (in any way) within finite time.
[51] One has to start out from the assumption that (at first, elementarily) only simple combinations (arithmetic operations) are carried out in nature (which need time). If in a series summation is done only over a finite number of summands, infinities can be avoided, even in the usually quite difficult case x = 1 resp. v = c. Relatively complicated functions like QV(x) or QW(x) are the borderline case, the result of a large number of simple elementary combinations.
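As an illustration (hedged: I assume here, consistent with the later footnotes on proper time and v/c, that QV(x) = 1/sqrt(1 - x^2), whose Taylor coefficients are exactly the central probabilities C(2n, n)/4^n of the Q0 triangle), a finite partial sum needs only finitely many elementary operations:

```python
from fractions import Fraction
from math import comb

# Assumption (my reading of the surrounding footnotes):
# QV(x) = 1/sqrt(1 - x^2), with Taylor coefficients C(2n, n)/4**n,
# i.e. the central probabilities of the Q0 triangle.  A finite partial
# sum uses only finitely many elementary combinations.
def qv_partial_sum(x, terms):
    return sum(Fraction(comb(2*n, n), 4**n) * x**(2*n) for n in range(terms))

x = Fraction(1, 2)  # e.g. x = v/c = 1/2
print(float(qv_partial_sum(x, 10)))  # approaches 1/sqrt(1 - 1/4) = 2/sqrt(3)
```

Every intermediate value here is an exact quotient of integers; only the final comparison with the limit function requires the (never exactly reachable) irrational value.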
[52] For p ≠ 1/2 changed probabilities result in accordance with [Q0Pvar]; especially a correction factor (4p(1-p))^n, which corresponds to x^(2n) in the Taylor series expansion [TaylorQV] of QV(x), must be used for every row 2n.
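The stated correction factor can be checked exactly (a sketch under the assumption that "row 2n" refers to the central probability C(2n, n)·p^n·(1-p)^n of the biased walk):

```python
from fractions import Fraction
from math import comb

# Check: the central probability in row 2n for general p equals the
# p = 1/2 value C(2n, n)/4**n times the correction factor (4p(1-p))**n.
def central_prob(p, n):
    return comb(2*n, n) * p**n * (1 - p)**n

p = Fraction(1, 3)
for n in range(1, 8):
    assert central_prob(p, n) == central_prob(Fraction(1, 2), n) * (4*p*(1 - p))**n
print("factor (4p(1-p))^n verified exactly for p = 1/3")
```

The identity is purely algebraic: C(2n, n)·p^n·(1-p)^n = [C(2n, n)/4^n]·(4p(1-p))^n.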
[53] For v<c it is matter with rest mass. Seen this way, information interchange (with the observer) therefore means (partial) transfer of the photon impulse to rest mass.
[54] One could argue analogously with metric quantities (and physical quantities derived from them).
[55] Orthogonality of vectors means a scalar product of 0. The correlation coefficient also can be understood as a scalar product, and thus (orthogonal) vectors as (uncorrelated). The correlation coefficient is the smaller, the greater the separating potential barrier (x^2) is (PotentialBarrier). It has to be considered that all potential barriers which necessarily must be overcome have to be taken into account. If e.g. two rest mass reference systems can exchange information only by photons, then even in case of low relative speed of both the separation of perception may be great, because information has to overcome at least 2 subsequent transitions with great potential difference, with flight speed v -> c, i.e. x = v/c -> 1, and the correlation coefficient tends to 0.
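A minimal sketch of the standard statistical fact alluded to here (not specific to this text): the correlation coefficient is the scalar product of the centered data vectors divided by the product of their lengths, so orthogonal centered vectors are uncorrelated.

```python
import math

# Correlation as a normalized scalar product of centered vectors.
def correlation(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    ax = [x - mx for x in xs]
    ay = [y - my for y in ys]
    dot = sum(a * b for a, b in zip(ax, ay))
    return dot / math.sqrt(sum(a * a for a in ax) * sum(b * b for b in ay))

print(correlation([1, 2, 3, 4], [2, 4, 6, 8]))      # collinear: 1.0
print(correlation([1, -1, 1, -1], [1, 1, -1, -1]))  # orthogonal: 0.0
```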
It is still worth mentioning that the proceeding of the outer time coordinate at first isn't influenced by our decisions, i.e. it is plausible to assume outside a decision direction orthogonal to the time direction. This is different inside (IOtime). In our thoughts we have relatively far-reaching decision liberty regarding the time coordinate. We can remember different times and also control this, but there isn't such a liberty in location relative to our body. Till now I haven't deepened these considerations.
[56] The reasoning with information flow over single points is simplified. Interactions of several points must be taken into account at the same time (deductively), to consider e.g. inertia. A quite demanding but interesting (***) task; suggestions would be very welcome. A more precise definition of "at the same time" and "after each other" is necessary.
[57] This means temporary lack of information for us, because at first we are missing the information (the perception of the reality) in the alternative not chosen. So decisions imply a temporary information deficit for the one who decides, and therefore need confidence [ConfidenceNecessary] (like falling asleep, like temporarily leaving control, like "giving oneself to decisions"). Since the system separated by decision is temporarily free, it also can decide itself. The recombinations thus possible in between quickly permit a gigantic set of way possibilities (for return) and multiplication of information [Diversification].
[58]Perhaps the formulation "added to our local reality resp. presence" also would be adequate here.
[59] After renormalization (Renormalization) in multiplied, diversified form (Diversification).
[60] Since here pattern and counter-pattern extinguish each other (become 0) in one direction component, this component probably isn't immediately perceptible by us, but only its projection onto our (hyper)plane t=0. For example it would be conceivable that the reunification (and the deletion of pattern and counter-pattern in the individual resp. local time direction joined with it) is connected with permanently setting the present time to 0, from which the impression of the permanent new beginning of the present could emerge. The probability of the new presence is permanently (after every decision) set to 1, renormalized. [AxiomP1] (Renormalization)
This (probably an axiom: we are here) would have to be taken into account in calculations. From this the subjective impression of the surrounding multiplicity of (also identical) particles could follow, so to speak as a necessary "compensation" of renormalization to avoid contradictions (the number π as approximative divisor of the proportionality constant) [PerceptionOfMultiplicity].
[61] I chose the word "counter-pattern" to clarify that the key of perception lies in an exact fit of the counter-pattern to the corresponding part of the own pattern.
[62] Strictly speaking, every elementary decision is also the decision for a definite measurement, by the fact that a counter-pattern exactly describing the (coordinates of the) decision is separated. Due to the conservation laws the perception of the counter-pattern coming back (in recombined shape, also little by little) always happens sometime and is the measurement result (DecisionToPerception). Therefore one could also grasp each of our macroscopically visible decisions as the transmission of a large connected sequence of counter-patterns, which we receive again after more or less many recombinations (in changed shape, together with other counter-patterns which have been sent out by us) as truth resp. reality in the extensive sense. In this respect our life also is a science, a science not quite in itself, but in the beginning a very individual (special) one.
[63] By decisions between the separated and the remaining; after every decision the remaining is our immediately following (present) frame of reference. Most often the change of the reference system is minimal (proper time increases), however it also can become quite distinct [FullReferenceFrameChange].
Additional remark: The function QV(x) appears as a factor and x is the root of a potential difference. Work is necessary to overcome a potential difference. One can interpret this on the one hand information-theoretically, then it means output of information (release of free energy (FreeEnergy) which results from own decisions), on the other hand simply physically as work = force * way.
If the factor QV(x) is large, so is the separation of the reference systems, i.e. the directions of the most probable ways within the reference systems are unmistakably different. The separation arose from earlier (locally primary) decisions; these also determined the individually locally most probable way. New decisions (which also may be objectively better than some old local decisions, which may be able to come nearer to the objective truth), which lead away from this local way, initially require work and effort.
(otherwise the previous decisions would be insignificant, not existing, there wouldn't be any past. A contradiction would arise, if we define a decision as something effective, as the beginning of a new (con)sequence).
We already notice this in small dimensions in form of inertia.
Such considerations are locally still easily comprehensible. A more global consideration is fundamentally more complex. Consider that a globally most probable way (as the result of a primary decision, which determines the truth) can "look" e.g. circular or even completely crumpled in small dimensions (if it were visible in advance).
[64] Clearly the number of recombinations (and with it the number of equivalent arithmetical steps) from the starting point to the current destination point is always finite (RecombinationCountFinite).
[65] QV(x) is proportional to the increase of proper time. The greater n resp. the longer the partial sum of the Taylor series expansion of QV(x), the more recombinations are done, and the greater is the probability that something which was separated before returns. In the case of x=1 resp. v=c this probability tends to 1 (cf. a. the Taylor series expansion of -1/QV(x) = -QW(x) [TaylorQW], summation beginning with the second power of x). This is the common case for photons, which transfer information. So we can assume that all information which we send surely comes back, in recombined form.
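A sketch of this limit (hedged: I assume QW(x) = sqrt(1 - x^2) = 1/QV(x); then its Taylor coefficients beyond the constant term are exactly the differences of consecutive central Q0 probabilities, so their absolute values telescope):

```python
from fractions import Fraction
from math import comb

# Assumption: QW(x) = sqrt(1 - x^2).  Its Taylor coefficient at x**(2n)
# (n >= 1) equals central(n) - central(n-1), where central(n) = C(2n, n)/4**n
# is the central Q0 probability.  The absolute values therefore telescope
# to 1 - central(N), tending (slowly, like 1 - 1/sqrt(pi*N)) to 1: the
# "return probability" at x = v/c = 1.
def central(n):
    return Fraction(comb(2*n, n), 4**n)

for n in range(1, 6):  # check against the closed-form coefficient
    assert central(n) - central(n - 1) == -central(n) / (2*n - 1)

N = 50
total = sum(central(n - 1) - central(n) for n in range(1, N + 1))
assert total == 1 - central(N)  # exact telescoping
print(float(total))             # already above 0.9 for N = 50
```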
This clarifies the stupidity of egoistical thinking resp. behavior (egoism). Such thinking restricts itself unnecessarily (e.g. to the current reference system and its short-term future), doesn't recognize the consequences of current behavior (current output) for the long-term future (input) and therefore makes mistakes. The unnecessary, often very painful consequences are problematic outside and at last also inside (in long-term consideration every egoistical concept proves senseless). The fatal consequences of egoism also apply to greater units (couples, small and great groups, lobbies, companies, nations, mankind), if these units behave inconsiderately towards their environment.
[66] Our current point of view (our localization) determines the order and the separability of the perceptions. That which from one point of view is simultaneous generally isn't simultaneous from another point of view (and conversely). However, the (symmetric and antisymmetric) exceptions are also interesting.
[67] There is no causality violation within (***) a single proper time unit, because an order is only defined by chaining multiple proper time units.
[68] With this are meant, more strictly speaking, the events (the meetings in the middle of the Q0 triangle with all ways and accompanying amplitudes which flow into there) which correspond to the respective central probabilities; the corresponding places are the "outflow holes". From the observer's point of view, after removal of the corresponding probabilities the events don't exist any more in the present. One surely can describe this also in another manner, for example as differentiation (DiscreteDiff), i.e. presence as difference between future and past, flowing out into memory.
[69] They are not simultaneously perceptible by any observer. Also our own reference system is unique. By the fact that we are aware of something, e.g. a localization, we exclude another localization for this moment.
Possibly one can associate with every sort of particle a specific position, more exactly said a specific combinatoric constellation (relative to a common frame of reference), cf. (ElementaryCoordinates).
[70] Bridge to the double slit experiment: The destruction of interference by perception of a passage through the right or left slit could be caused just by this separation: The interfering (center-crossing) parts were set to 0 (due to absorption of photons [PhotonAbsorption]); only those way possibilities which lie completely on the right side (resp. passed through the right slit) or those which lie completely on the left side (resp. passed through the left slit) remain.
Due to renormalization (Renormalization) the sum of their probabilities becomes 1, because no other ways are possible and, after registration of the particle on the screen behind the slits, surely one of the possible ways has been used.
[71] We can decide in favor of a side and one moment later (along proper time) we can explore it more exactly (during this perception we can make further subdivisions) (SubDivisionWithinChoice). Decision and perception lie close together.
[72] At first it is remarked that the perception of an orthogonal change of direction as a "geometric bend of 90 degrees" is dependent on the observer's point of view (synchrotron radiation). If one starts out from the assumption that with every recombination the direction is changed, the set of all (within the triangle vertically subsequent) recombinations whose probabilities |Q2Z(n)| flow out becomes a (multidimensional, not dot-like) structure, e.g. a surface (from the view of almost all points of view), which in an analogous way also applies to the points of (horizontal) rows. The sum of all |Q2Z(2n)| (without row 2n=0) tends for v -> c (flight speed) to 1, according to the probability that a transmitted photon meets the next surface surrounding the transmitter (therefore the reverse approach: surrounding as "outflow hole"); in the case v<c we look at rest mass particles, whose "outflow probability" is smaller than 1, but greater than 0 (tunnel effect).
[73] More exact information about the area which was separated at the start (row 0) is missing. The subjectively nearer area of one's own recent origin however is better known. Due to the conservation laws both ranges must be represented somehow (in our world); their asymmetric perception (surplus of matter [SurplusOfMatter], further below in the hierarchy also the distinction inside-outside) is a result of one's own asymmetric information (of the individual last start point outside the center) due to our decisions [FullPrimarySymmetry]. Our asymmetric perception of reality shows us human beings how much we are subordinated in the hierarchy. Information exchange can help us to recognize the objectively existing symmetry more clearly. Then we can recognize the symmetry center from our current point of view and decide in the direction of the objective (central) point of view (again).
[74] With this one can take into account the opposite direction from the view of the (unique, singular) row center.
[75] So they are not possible sources during the next moment (step), just like a photon whose impulse is directed not towards the observer but to the outside.
[76] An interpretation possibility as a bridge to physiology: the more decisions we take consciously (as a connected sequence within ourselves), the less is left, and the greater becomes the probability that the kernel of consciousness finally "flows away" resp. "flows out" (FullReferenceFrameChange). So a connected chain of consciously controlled decisions is the more improbable, the longer it is. We know that e.g. the depth of our concentration is bounded, as is the duration of connected conscious control (need for sleep…). It is only a question of time until we change the reference system.
[77] Also other modifications are conceivable, for instance other outflow areas, "inflow" areas; for me alone it is impossible to explore all this.
[78] One has to take into account that the system must remain open and doesn't become completely deterministic, i.e. that back-connections arise more slowly than recombination points with new open channels are created (for this there must be sufficient time, and space).
[79] Perhaps also the approach "rational quaternions as probability amplitudes (Pl, Pr)" might be helpful. Here the probability amplitudes Pl resp. Pr for steps to the left resp. to the right correspond to two quaternions, and consecutive axes of rotation (which result from the multiplication of the quaternions) should be orthogonal to each other.
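As a minimal sketch of the algebra involved (standard quaternion multiplication; the probability-amplitude interpretation itself remains the footnote's speculation):

```python
# Quaternions as 4-tuples (w, x, y, z) = w + x*i + y*j + z*k,
# multiplied by the standard Hamilton product.
def qmul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

i, j = (0, 1, 0, 0), (0, 0, 1, 0)
print(qmul(i, j))  # i*j = k -> (0, 0, 0, 1)
print(qmul(j, i))  # j*i = -k: the order of combination matters
```

The non-commutativity shown in the last two lines is exactly the kind of order dependence footnote [80] insists on.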
[80] In combinatoric approaches it must be taken into account that temporal distinction doesn't allow any commutation of the order (***) [Order].
[81] If for n greater than 0 one calls the values |Q2Z(2n)| = -Q2Z(2n) "outflow probabilities", then Q2Z(0)=1 is an "inflow probability" because of the sign inversion. The probability to flow out in some row corresponds to the sum of the values -Q2Z(2n) from n=2 on, the probability to flow in (and not to flow out again in the opposite direction) corresponds to the sum of the values Q2Z(2n) from n=0 on. Here I am a little more generous, also because the meaning of "out" and "in" depends on the observer's point of view, which wasn't fixed till now. If however this point is defined more exactly, such finenesses become important and have to be taken into account.
[82] Now the proper time is clearly seen in relation to the observed object. In the case of information exchange there is always a common part of the proper times of subject and object. So an attempt at quantification of the (overlapping) of consciousness contents seems to be given, but without further precise definition this statement is of little help for the time being. Different details must be taken into account; even the definition of subject and object resp. the definition of the direction of information flow is dependent on a primary (initial, arbitrary) decision [PrimaryDecision]. Perhaps we can come back to this later.
A short remark: In the case v/c = x = 1 the separating potential barrier is maximal, the liberty there-to-here is maximal [DecisionFreedom] (decisions uncorrelated, orthogonal), i.e. there is a free choice possibility left and right of the local center resp. probability maximum with p=1/2, and the (immediate) overlapping of proper time there and proper time here (regarding this local choice) is minimal. So proper time there resp. perception (meeting) there resp. an event located there (e.g. the way of a photon from point A to B) contributes only a minimal share to our own perception resp. proper time (if only the immediate perception is taken into account). Therefore the event there needs minimal time, from our point of view. This could be the reason for the fact that photons move from one (differentiable) point to the other in the shortest possible way, i.e. maximally fast on the shortest way (v=c and p=1/2 for way possibilities on the left and on the right of the local way minimum respectively; Fermat's principle of the locally shortest time).
[83] Apart from other conceptions (e.g. as a 2 x 2 matrix) one can conceive an imaginary number, simplified, as a number which changes the sign with every multiplication (then it's naturally orthogonal to a real number). For instance the sequence of partial sums of the Taylor series expansion of QW(x) converges to a real number |y|>1 for imaginary x in the case |x|<1. On the other hand for real y with |y|>1 the Taylor series expansion of QW(y) has a partial sum sequence which alternates for sufficiently great n, i.e. the finite partial sums multiply so to speak like imaginary numbers, if one increases n by 1 with every multiplication ("like time").
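The 2 x 2 matrix conception mentioned above can be sketched directly (standard construction: i corresponds to the rotation matrix [[0, -1], [1, 0]]):

```python
# The imaginary unit as the 2x2 rotation matrix [[0, -1], [1, 0]]:
# its square is -1 times the identity, as required of i.
def matmul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

I = [[0, -1], [1, 0]]  # plays the role of i
print(matmul(I, I))    # [[-1, 0], [0, -1]] = -identity
```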
[84] On the other hand they surely wouldn't cover a continuous interval due to the necessary discrete consideration.
[85] This always means a decision which isn't completely deducible from the perceived past (lack of information) and therefore always means a more or less great risk (the alternative not chosen might have been the better one). This is clearest if we have no pre-information (a 1/2 to 1/2 decision), but the everyday digitalization (a 1-p to p decision for small p: 1>p>0, p->0) also belongs to it:
If we cross the street in the city, for example, the probability that we do it is exactly 1 (because this process belongs to the conscious present). This is greater than the probability (1-p) that the street is actually free, for we cannot exclude that we have failed to see something, or that somebody suddenly races along at 100 mph; decay then proceeds faster than it would through the decisions of the natural ageing process.
The decision for life, particularly for very conscious human life, means a considerable risk and effort of one's own to create new and more abundance, in the end for the whole. For this reason too, life, and particularly every human being, deserves respect.
[86] For this it's necessary that something is left after copying, that with the perception not all way possibilities from the decision (which should make new information) up to the perception have flowed out and thus been consumed. Of course this renunciation of complete safety is not quite without risk and so again a partial new creation - it is digitizing information again: If from our view a probability is e.g. 0.9, we (have to) proceed on the assumption that it is 1 if error risk and profit value are the same (of course this is a special case; in traffic, for example, the error risk is extremely greater than the profit value, as mentioned...).
By the way, our ideas too are based, among other things, on digitized information copies (from different time directions, from experiences of inside and outside). Perhaps they have gained more detail and clarity again because of (risky) digitalization; one can regard this as the new (not always fault-free) part, but in essence they contain more old things than appears at first sight. Of course this also applies to my texts.
[87] By making discrete differentiation (DiscreteDiff) along k and/or n (and renormalization) we can form orthonormal systems. In addition, there are many possibilities to superimpose the functions Q0 or Q1 (with sign) so that for great n wave pictures (***) of the graphs result. A couple of considerations on this can be found in the download files wq2 and wq3 (in case of special interest enter a corresponding string search, as usual).
[88] Perceptible (existing) are only the differences of the forces, more exactly said the resulting acceleration differences. If the same constant force per mass works everywhere, e.g. within an inertial system falling freely, then this "force" doesn't change physics.
[89] resp. the reduction of the sum of the squares, i.e. the [Scalarproduct] within row n in case of measuring or perception in row 2n; then the word amplitude would fit better for the number Q1(n,k) (ProbabilityAmplitudes).
[91] Of course differentiation normally means that dk tends to 0. Since we make discrete (not continuous) considerations here, however, this isn't possible. We choose dk = 2, the smallest possible (horizontal) distance for k. In case of great n the common differentiation approximates the (exact) discrete differentiation.
[92] For great n the resulting function near k=0 is approximately proportional to k^m. Q1(n,k), for example, is for great n and small |k| approximately proportional to k (linear), which offers hints for bridges to classical physical models.
[93] Therefore, and because the Q2Z(2n) also represent the discrete resp. finite differences of the Q0Z(2n) (we have Q0Z(2n)-Q0Z(2n-2) = Q2Z(2n)), the central numbers of the triangle of the second discrete derivatives of the Q0 triangle along k (starting with 2n=2 and the row [1/4, 0, -1/2, 0, 1/4]) are also equivalent to Q2Z(2n).
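The stated relations can be illustrated numerically. A minimal sketch in Python, assuming the reading Q0(n,k) = C(n,(n+k)/2)/2^n for the Q0 triangle (so Q0Z(2n) = C(2n,n)/4^n for the central numbers); the function names are mine:

```python
from fractions import Fraction
from math import comb

# Assumed definition of the Q0 triangle: every step multiplies by 1/2,
# so Q0(n,k) = C(n,(n+k)/2) / 2^n (entries exist only where n+k is even).
def Q0(n, k):
    if (n + k) % 2 != 0 or abs(k) > n:
        return Fraction(0)
    return Fraction(comb(n, (n + k) // 2), 2 ** n)

def Q0Z(two_n):
    # central numbers of the Q0 triangle
    return Q0(two_n, 0)

def d2_along_k(n, k):
    # second discrete difference along k with dk = 2 (undivided)
    return Q0(n, k + 2) - 2 * Q0(n, k) + Q0(n, k - 2)

# starting row (2n = 2) of the second-difference triangle
row = [d2_along_k(2, k) for k in (-4, -2, 0, 2, 4)]
print(row)                # equals [1/4, 0, -1/2, 0, 1/4], as stated
print(Q0Z(2) - Q0Z(0))    # -1/2, the central number of that row
```

For greater n this check finds that the central second differences and the central differences Q0Z(2n)-Q0Z(2n-2) agree in sign and order of magnitude but differ by a row-dependent factor, which presumably is absorbed by the renormalization mentioned in footnote [87].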
[94]Discrete considerations permit realistic functions with similar behavior, also nearly continuous functions with arbitrarily great (finite) number of waves (wavelike).
[95]So constructed functions like the Dirac delta-function [DiracDeltaFu] (or alternatively the introduction of distributions which one can understand as mappings of infinite-dimensional vectors into a continuous, not countable set) become superfluous.
[96]Consider that m and n can be very large and that in the analytic borderline case the sums are represented by integrals. Due to the nature of the function Q0 and its derivatives (e.g. Q1) automatically all way possibilities are taken into account. The development goes from the comprehensible to the complicated and back again to the comprehensible.
[97] These contain, from the view of point B, partial information about the future although they already exist (within the whole). One can call this localization-dependent part of the future the "existing future". Further rows don't exist yet. So from the local point of view there is a (finite) part of the future which already exists. Because further rows are possible (after new decisions), there is always an (infinite) part of the future which does not yet exist. One can call the latter part the "nonexistent future". (Future)
The concept of row n as boundary between existing and nonexistent future is simplified. At first this row probably is a finite [multidimensional] discrete set of points; in addition one can assume a hierarchical constellation of systems ("triangles"), each with its own definition of the horizontal direction (and so its own definition of the discrete set of points corresponding to row n). With complete perception in the center k=0 of row 2n, however, row n of the corresponding system surely exists.
The conical shape of the area of information ways converging to point B suggests a connection to the light cone model. But the light cone is a four-dimensional, geometrical model with limited validity (vacuum, flat spacetime, borderline case of large numbers). Here, however, we talk generally of information ways within a discrete space, which is finite-dimensional (InfinityNotApriori), but not necessarily only four-dimensional (further dimensions can make seemingly long-distance interactions possible). We know that the ways of information can seem very curved in the presence of rest mass. The definition of "straight" depends on the observer's point of view, and with it also the appearance of the border of the area of the information ways to B.
[98] Connected with this is a remarkably simple possible interpretation of unsharpness of perception as a necessity for ensuring liberty: One could regard one set of ways (e.g. those over the left half of row n, over points with k<0) as ways there and the others as ways back. Because the exact way there isn't fixed, the exact way back isn't fixed either (by symmetry conditions), i.e. there is liberty in the choice of the way.
[99] Discrete considerations need no constructions like the Dirac delta-function (DiracDeltaFu).
[100] It is worth mentioning that in case of s=±1/2 the relationship remains local (more exactly said: neighboring) along this dimension.
[101] Due to the mentioned difficulties (InfinityNotApriori) of the analytical construction of the rotation angle w, we have to start out from the assumption that exact equivalents exist in nature only for the components of the rotation operators, i.e. for sin(w) and cos(w), but not for the rotation angle w itself.
[102] If at this point we allow only numbers of the form s=i(a^2-b^2)/(a^2+b^2), c=2ab/(a^2+b^2), in which i is the imaginary unit and a, b are integers (not both equal to 0), we remain in the set Q+iQ.
If s^2+c^2=1 is to be exactly guaranteed and the quotient s/c is to be close to a real number q, we can choose a rational number v arbitrarily close to q±√(q^2+1) and set s:=(v^2-1)/(v^2+1) and c:=(2v)/(v^2+1). Then s^2+c^2=1 and the approximation s/c≈q is arbitrarily exact.
[Also i can be replaced (e.g. by 2x2 matrices) so that we need only rational numbers. The accompanying angle w is most often irrational and therefore has no (exact) equivalent in physical reality.]
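The construction in footnote [102] can be checked with exact rational arithmetic; a minimal sketch (the example value q=2 and the particular rational approximation of q+√(q^2+1) are mine):

```python
from fractions import Fraction

q = Fraction(2)                        # target slope s/c (example value)
# rational v arbitrarily close to q + sqrt(q^2+1) = 2 + sqrt(5) = 4.2360679...
v = Fraction(42360679, 10000000)

s = (v**2 - 1) / (v**2 + 1)
c = (2 * v) / (v**2 + 1)

print(s**2 + c**2)       # exactly 1, by construction
print(float(s / c - q))  # close to 0: s/c approximates q
```

Taking more digits of v makes the approximation s/c≈q arbitrarily exact while s^2+c^2=1 remains exact.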
[103] Here we need only rational numbers if we allow only numbers of the form s=2ab/(a^2-b^2), c=(a^2+b^2)/(a^2-b^2), in which a and b are integers (with |a|≠|b|).
[104] We can use the limit relation (1+M/n)^n -> exp(M), in which M is a square matrix and 1 is the unit matrix of the same dimension. If we (formally) define s:=1 and c:=M/n, row n corresponds to the binomial series of (s+c)^n = (1+M/n)^n and its sum approximates the matrix exponential function exp(M) arbitrarily exactly, since n can be arbitrarily great.
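A small numerical sketch of this limit (the concrete matrix M is my example): for a 2x2 rotation generator, (1+M/n)^n approaches exp(M), which here is a rotation by one radian.

```python
import math

def matmul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matpow(A, n):
    # A^n by repeated squaring
    R = [[1.0, 0.0], [0.0, 1.0]]          # unit matrix
    while n:
        if n & 1:
            R = matmul(R, A)
        A = matmul(A, A)
        n >>= 1
    return R

M = [[0.0, 1.0], [-1.0, 0.0]]             # generator of rotations
n = 1 << 20                               # "arbitrarily great" n
base = [[1.0 + M[0][0] / n, M[0][1] / n],
        [M[1][0] / n, 1.0 + M[1][1] / n]]  # the matrix 1 + M/n

E = matpow(base, n)                       # ~ exp(M) = rotation by 1 radian
print(E[0][0], math.cos(1.0))             # both ~ 0.5403
print(E[0][1], math.sin(1.0))             # both ~ 0.8415
```

The deviation shrinks like 1/n, which matches the remark that the approximation becomes arbitrarily exact for arbitrarily great n.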
[105]This applies not only to thinking (concentration) but also to life. Having recognized the right direction we proceed better with enduring determination.
[106] It is remarkable that temporally changing electric resp. magnetic fields induce "circular" magnetic resp. electric secondary fields. Such "circles" always contain a way there and a way back, analogous to the way there and way back from one central meeting of temporal perception to the next [TimePerception]. Exact consideration suggests that the "circles" in reality are discrete polygons, which branch.
[107] Due to the analytic formulation of the Maxwell equations, the word "local" there means "within an infinitesimal surrounding". This isn't valid in the natural measuring process and is unsatisfactory. Only discrete considerations would allow an exact definition of the concept "local", equivalent to "belonging to the current present". This current present isn't dot-like; it contains a more or less extensive area. This area can look so extensive in some directions from the view of other reference systems that the combinations produced locally here result in long-distance effects there (and vice versa). This of course, here and there, always only under preservation of the conservation laws from one perceptible constellation to the next.
[108] Because magnetic dipoles exist, but magnetic charges resp. monopoles couldn't be found, the assumption seems reasonable that the subdivision of a magnetic dipole into two (temporally separated) monopoles isn't measurable (possible) because it would also interrupt the union of consciousness resp. the connection of subject to object resp. the connection of the measuring instrument with the quantity to be measured. Necessary for this union is the connection of the information way there and way back, i.e. the union of the sequence from one central meeting (CentralMeeting, e.g. in row n) to the next (e.g. in row n+2).
[109] Some of my earlier considerations (in the download files) dealt with the supposition "observer as magnetic monopole". For the time being I haven't deleted them yet. Even if there actually aren't any such monopoles, parts of the considerations could still have a little value. On average the newer considerations are more relevant.
[110] Temporally differentiated perception always contains a local order and therefore more information than differentiation between places of equal kind. But if local differentiation is carried out along a gradient, for example a local potential difference, then the information quantity is the same and the notion of equivalence is permitted.
[111] They are also valid in the perception process, in the transition from present to past: After perception of a temporal difference, i.e. temporal differentiation, the perception becomes a unit which we can remember simultaneously (within one proper time unit), and we have decision liberty in remembering, i.e. there is also a free elementary choice possibility with p=1/2 (uncorrelated, orthogonal).
That physical equations, e.g. the Maxwell equations, are connected with the bases of memory in such an immediate way is surely a rather unusual view. Nevertheless this view seems only consequent, and more detailed considerations could be worth the effort (***).
[112]"anti ", since there aren't any magnetic sources
[113] An exact equivalent of the result of the (discrete) differentiation also exists: That which branches off perpendicularly to the own way just corresponds to the difference between before and afterwards. So, strictly speaking, the x-axis resp. y-axis are perpendicular to the branching differences d/dx resp. d/dy.
[114]By cyclic exchange of the axes the considerations analogously are possible for the remaining direction components B(x) and B(y) of B.
[115]That seems to be so in current physical theories. But generalized criticism would be out of place, because many of those models surely are also very helpful [helpful], interpreted correctly.
[116] This is possible in parallel for all conceivable places, i.e. extremely much is tried out (per proper time), so that we as human beings can't capture it any more by looking. Hence the enormous amount of information.
[117] The action potential of nerve cells is only a special case which shows a particularly clear correlation to a decision. It becomes still clearer if it leads to actions which we can interpret, whose language we understand (due to the fit of our memory resp. counter-pattern (CounterPattern) to the perceived pattern).
[118] Due to our decision we now place ourselves beside the vertical center column k = 0, i.e. we are outside the most probable area. In general the probability gradient (ProbabilityGradient) (differentiation (DiscreteDiff) along k) towards the center now isn't zero, i.e. the probability of a step to the right differs from the probability of a step to the left, which implies a force. So decisions cause forces, at first away from the current center, then (secondarily) back again. This, for the time being qualitative, consideration also matches the conservation of momentum. Quantitative considerations depend on the observer's point of view, renormalization (Renormalization) has to be taken into account, and the orders of magnitude can be very various.
[119] Initially the deeper potential (present) means a separation of positive and negative energy, the sum of energy remains 0 [Cons0Sum]. However, information lies in the order, in the pattern of the separation according to our decisions.
[120] Initially the deeper potential level can lie in the "Dirac sea of negative energy". Later, hierarchically subordinate, it can lie inside, i.e. near a local rest mass center. If within the Q1 triangle we e.g. initially decide ourselves (our inside resp. inner world) for the side k<0 [Antimatter], then we shall perceive a surplus on the other side (SurplusOfMatter). In addition we have determined to return from there to k=0, whereby in between we probably can determine certain row numbers n of different meeting points of our parts in k=0 [PhotonColumn] by localization of our counter-pattern (LocalizationOfCounterpattern); the exact way to those points on the other side [Matter] is free, however (FreeChoiceOfWay).
[121] Perhaps it would be more graphic to describe the unity as "uniform area" or (even more problematic) as "homogeneous space", but this encourages wrong analytical conclusions (violating the natural order (Order)), because we cannot start out from the a priori existence of any areas (or spaces). The "area" (space for decisions) is first created by the subdivision. Seen information-theoretically, what is simply meant is the creation of a choice possibility resp. the giving of decision liberty.
[122] The stepping sequence 1a and 1b can repeat before step 2 and a subdivision into more than two parts results (more than 2 distinguishable energy states).
[123] The sequence 1a, 1b, ... (repeated before step 2) means creation and determination of the measurement resp. experiment, i.e. creation of the set of possible experimental results.
[124] It isn't unrealistic to assume that the finiteness of consciously uninterrupted perception (awake-sleep-rhythm, cf. (FullReferenceFrameChange)) is the result of the finiteness of this free energy.
[125]We probably touch thermodynamic topics here.
[126] In this context one should also think about possibilities how the won information can be copied and stored. That is a prerequisite for the construction of the complex hierarchical systems which exist in reality. We then also describe these systems as "clearly alive". They are controlled by decisions of the primary consciousness, which may primarily contain only one bit but regularly cause an avalanche of information flow. In this manner we as human beings also live from moment to moment. The hierarchy is more far-reaching, for we also live from day to day. A day contains a connected sequence of conscious decisions and differentiation (work), until we run out of energy and have to perceive it again in recombined form (dreams).
[127]Perhaps information can remain in the same (connected) system only temporarily and therefore must be copied resp. transmitted (for inter-storage) to be permanent.
[128] Usually there is pre-knowledge from the past which is decisive for us and has to be taken into account. But (also) in everyday life there frequently are situations without any (local) pre-knowledge, where we cannot recognize, to the best of our knowledge and conscience, the right choice. Then we simply have to decide resp. determine primarily (as first): "it is right so", "that's the right way". After this our determination will be decisive (also) for ourselves.
[129] (PtimePdecision) Initially the proper time direction (Order) isn't defined. The primary decision can determine it by separating positive (free) from negative (bound) energy (***): Proper time is defined such that with every (time-)perception positive (free) energy decreases and negative (bound) energy increases. Because "positive" and "negative" are relations to one's own relative potential level, increase of proper time looks like increase of the own relative potential level, whereby "potential" is best interpreted information-theoretically (cf. [lija] p. 86).
[130]It simply would be another decision prior to any perception possibility, not "better" or "worse". Quite generally we cannot judge decisions which lie outside of our perception range.
[131] That means separation from the center and effort (work), to try, to inspect.
[132] The most important example for this probably is the differentiation d/dt between past and present and d/dt between present and future; more clearly discrete are the differentiation between charges with opposite sign and the subtraction of the fermion amplitudes.
[133] The word "perfect" because of local irreversibility: In the double action resp. necessity of pairing way-there and way-back for (temporal) perception lies the reason for the appearance of non-linear square quantities in the Taylor series expansion and in the probability-weighted quantities -Q2Z(n), if one sets x^2=4*Pl*Pr, in which Pl resp. Pr are the probability amplitudes for decisions to the left resp. to the right (or decisions forward resp. backward depending on the point of view). Thereby the function loses injectivity within the definition range: locally, in the possible destination points of the following row (in n=2, k=-2,0,2, see below), the information for inversion is missing; a locally irreversible process due to branching of information.
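The loss of injectivity mentioned here is elementary to illustrate: with x^2 = 4*Pl*Pr and Pl+Pr = 1, mirrored decision pairs yield the same square quantity, so x^2 alone cannot be inverted back to the pair (Pl, Pr). A minimal sketch (the example values are mine):

```python
from fractions import Fraction

def x_squared(pl):
    # x^2 = 4 * Pl * Pr with Pr = 1 - Pl
    pr = 1 - pl
    return 4 * pl * pr

# mirrored probability pairs are indistinguishable in x^2:
# the information needed for inversion is missing
print(x_squared(Fraction(1, 4)))  # 3/4
print(x_squared(Fraction(3, 4)))  # 3/4, the same value

# the maximum x^2 = 1 (v/c = 1) occurs only for the free choice Pl = Pr = 1/2
print(x_squared(Fraction(1, 2)))  # 1
```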
[134]Perception means realization of information (in k=0). The chosen order (Order) past before presence before future contains information per double step. The greater n, the longer is the order which is determined by chaining double steps, the greater is the information quantity. So in case of great n (in k=0) the simultaneous perception of a great information quantity per proper time unit (ProperTimeUnit) is possible.
[135] The part appears as a difference: our perception is characterized by the registration of differences, e.g. between before and afterwards, between row 2n and row 2n+2. The Q1 triangle fits mathematically better to this differentiating, e.g. along time. For example, form the difference Q0Z(2n)-Q0Z(2n-2) and compare it with Q2Z(2n).
[136] The greater the product of the initially given energy (subjectively: work resp. effort) and the proper time up to the return, the greater the effect seems to be. This product is correlated with the information quantity (EtmGreatEnough).
From this arise even mathematical (!) possibilities for a deeper foundation of ethical and also health-related rules which more or less directly state that temporary renunciation resp. control of mental and bodily basic instincts is necessary. Mental basic instincts are: striving for freely available energy resp. information resp. safety resp. calculability of the future, i.e. striving for things like money, power etc.; bodily basic instincts are: striving for freely available energy resp. food, avoidance of pain e.g. due to oxygen deficiency.
Due to the fast increase of the number of branchings I meanwhile regard it as probable that complete forgetting of information is impossible after it has become conscious (somewhere); after the accompanying central meeting it's irreversible. Unfortunately this also concerns bad memories and errors. But unpleasant things, or things which are forgiven, are only seldom remembered (because cleared, no more questions), so that in comparison with the other things (which rise more quickly to higher powers) they become negligible in the course of time and are no longer an impairment of the whole.
[137]Knowledge gaps are normal and it would be an error to forget or not to admit them.
[138] The good memory of the way belongs to the aim. Here too the restrictedness of egoistical behavior becomes clear. Egoistical behavior doesn't leave any (durable) good memory. What remains (even magnifies) resp. counts in the long run is the good memory, and it is both one's own and the common interest.
[139] I use parentheses because a complete definition isn't possible, at most an approach. Consciousness is unique, unequalled.
[140] Beyond the simple argumentation shown, basic prerequisites should also be mentioned: Computer systems are examples of deterministic systems, but before we can speak of [determinism] at all there must be a time coordinate (defined together with free (positive) energy by a primary decision [PtimePdecision], and with [ETmGreatEnough] the product ETm must be great enough). Only after this can a deterministic system exist at all, and even then only during finite time, in which it at most can convert (decode) some information into a language we understand. This decoding resp. leading (adapting) to our counter-code (CounterPattern) consumes free energy, which results in an overall increase of entropy resp. loss of information.
[141] This means exchange of energy (PhotonColumn). So proper time is (locally) constant, if there is no energy exchange. From outer point of view this can last "very long" (***), e.g. in case of stable particles, in case of long ways of photons (WayTimeConstantTillNextMeeting).
[142] There may be a connection with the impossibility of the separation of the poles of a magnetic dipole. Till now I couldn't deepen this further.
[143] One can understand the central meetings (CentralMeeting) as the irreversible events (the measurements resp. perceptions, cf. [QuantumPhysicalObservation]) whose sequence defines the time direction. Causality is maintained since only information from above (previous) rows is available for every central meeting.
[144]The affected physical units anyway cannot be defined initially.
[145] This also means that the total sum of energy is 0 (under consideration of the negative field potentials). In analogous way this applies to all directional physical quantities. So the "own" (contradiction-free) contribution is decisive in the end (OwnDecisiveContribution).
[146] One could assign the symmetry center of a dimension respectively to the vertical center column k=0 of the Q1 triangle [Q1Triangle] (or of an along k in higher uneven order discretely differentiated Q0 triangle).
[147] For example in k=±1 [n1k1]due to a primary decision (PrimaryDecision)
[148] From view point A in row 0 the two halves of row 3 might represent matter and antimatter.
[149] They spread out within subordinate triangles. From them, it (among others) indirectly always goes back again sometime. So the subordinate (local) triangles are also parts of the primary triangle. Probably, due to the start points (in k not equal to 0) of the subordinate triangle beside the center (in k=0), the global symmetry is initially hardly or only with difficulty recognizable there.
[150] How large is the number of different dimensions? Hints may be won also from computer simulations of different combination possibilities.
[151]Consider also, that the -Q2Z(n), which are incompatible with each other, first appear in row n-1 in k=±1 (before the flow out in n, k=0) [DistinguishableOrder].
In connection with this a further variant of the Q0 triangle, the "Q0M triangle" is mentioned. It can be derived from the Q0 triangle, if for every step to the right a multiplication by -1/2 (instead of 1/2) is done. The "Q0M triangle" is defined by
The signs of the numbers Q0M(n,k) alternate; especially Q0M(2n,2k) = Q0M(2n,-2k) and Q0M(2n+1,2k+1) = -Q0M(2n+1,-(2k+1)), i.e. the Q0M(n,k) add like the amplitudes of fermions in case of uneven row number and like the amplitudes of bosons in case of even row number. The numbers Q0M(2n,0) are the Taylor coefficients of QV(ix)=1/√(1+x^2). Of course the function Q0M(n,k) also permits further-reaching considerations with regard to discrete differentiation and sum formation. Some are mentioned in the compendium of formulas wqm. It is difficult for me to estimate the relevance; at present the Q0M triangle seems to me quite constructed, therefore this is printed small within a footnote.
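The claimed connection between the central numbers Q0M(2n,0) and the Taylor coefficients of 1/√(1+x^2) can be checked with exact arithmetic; a minimal sketch, assuming the construction described above (a step to the left multiplies by 1/2, a step to the right by -1/2; the function names are mine):

```python
from fractions import Fraction

def q0m_rows(n_max):
    # build the Q0M triangle row by row; rows[n] maps k -> Q0M(n,k)
    rows = [{0: Fraction(1)}]
    for n in range(1, n_max + 1):
        prev, row = rows[-1], {}
        for k in range(-n, n + 1, 2):
            row[k] = (prev.get(k + 1, Fraction(0)) / 2      # step to the left
                      - prev.get(k - 1, Fraction(0)) / 2)   # step to the right
        rows.append(row)
    return rows

rows = q0m_rows(8)
centrals = [rows[2 * m][0] for m in range(5)]   # the numbers Q0M(2m, 0)

# Taylor coefficients a_m of 1/sqrt(1+t) (t = x^2): a_m = a_{m-1} * (-(2m-1))/(2m)
a, coeffs = Fraction(1), []
for m in range(5):
    coeffs.append(a)
    a *= Fraction(-(2 * m + 1), 2 * m + 2)

print(centrals == coeffs)   # True: the central numbers match the coefficients
print(coeffs)               # 1, -1/2, 3/8, -5/16, 35/128
```

The same triangle also reproduces the stated sign symmetries: the odd rows come out antisymmetric (fermion-like), the even rows symmetric (boson-like).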
[152] If one defines start and destination point so that no further interaction (recombination) of the particle takes place in between (FollowingMeeting) (even if proper time passes in the observer system), then (because of the necessary discrete consideration) it's only consequent to regard a linear (continuous) connecting line between these points merely as an artificial model, which doesn't have an equivalent in reality and can (must) be ignored in the description of the particle. The relevance of this consideration becomes particularly clear in the case of minimally interacting particles (photons in vacuum, neutrinos).
[153] Strictly speaking our (individual) point of view is again the consequence of our (individual) decisions (which determine the constellation of information barriers between us). It would contradict our (individual) freedom of choice to demand the same perception of (resp. the same access to) reality for all (separated) individuals. Nevertheless a general (global) reality is possible (but it's accessible only after exchange of information, at another time, at the earliest [earliest] in the center (PerceptionInCenter)). It includes not only the individual realities but also the individual decisions (together with lines of thought, past) which lead to individual (local) realities.
[154] From moment to moment we assign ourselves to selected areas (with information), e.g. to inspect more details within those areas. This process also is accessible by elementary considerations [SubDivisionWithinChoice].
[155] Direction of out- or ingoing photons after a subjectively short moment: If I hold two torches and beam to the moon, for example, then there (after a subjectively short moment) the ingoing photons have nearly the same direction.
An analogous "inner time direction" (as direction of the information flow from consciousness to skin and back) is apparently very individual, dependent on our localization and surely not straight in the geometrical sense. Of course one could also define "straight" as direction of the quickest possible information propagation in the respective system.
[156] In the start this definition is determined by a primary decision together with our location (our point of view), cf. [UncertaintyOfOrder].
This definition of the time direction is indeed decisive. The usual everyday macroscopic interactions are examples. So the following example is nothing exceptional either, but I mention it among other things because it shows a little the perceptible connection between elementary consideration and (ordered) macroscopic phenomenon and perhaps, in some fitting localization, activates one to follow up the thought:
Let's look at an ergometer fan who owns a bicycle ergometer with an eddy current brake having permanent magnets. These braking magnets are however still unmagnetized, i.e. the electrons in them don't spin in a preferential direction. He pedals for one year at 60 rpm, daily 1/2 hour, which of course is no problem and rather useless, since the brake is deactivated. Nothing changes about this, of course, if after this year the magnets are magnetized strongly, e.g. by an electromagnet. However, if they had been magnetized before, i.e. if the electrons had already spun preferentially in one direction, he would have absolved a possibly exhausting training, and much sweat and energy might have flowed.
In this connection it is remarkable that in ferromagnetism the local energy minimum (the target of proper time) is reached when the electron spins point in the same direction. That's greater order. Unlike the thermodynamically usual case, the time direction goes towards greater order here. Our decisions along this time direction also have an ordering effect. Maybe a possible physical target of our decisions lies here (in the orientation of spins of charged units). Under suitable basic conditions minimal own energy (Work) could be enough to distinguish a direction along which the spins then (one behind the other) could gradually line up, which causes reinforcement.
[157] This is valid only during finite time intervals up to the next central meeting in a superordinated triangle, because the central meetings are points of more or less large ranges of simultaneity and so define an order. If one starts out from a hierarchical constellation of "triangles" [hierarchical], then there absolutely can be a global time direction which results from the sequence of the central meetings in the primary triangle.
[158] Meant are not only buttons resp. measuring results of quantum physics, which can cause everything possible; meant are also (the energy equivalents of) our thoughts, which can cause actions. Even rough measurements of those energy equivalents can be done (EEG). They are very small, but of course they cannot be neglected. Even from our (subjective) point of view it means (brain)work [Work] resp. effort of energy for us to make (initial mental) decisions.
[159] One calculates it from the gravitational potential, which actually is definable only relative to us. The diameter therefore is also dependent on our own point of view. Already for symmetry reasons we cannot exclude that we are inside a (large) black hole. Remembering this, it becomes clear that the concept "black hole" is very simplified.
[160] a possibly quite everyday situation
[161] Locally measured masses might appear nearly constant, locally the situation could look ordinary.
[162]Formerly one used to think that the world is a disk. The blind end of this model was the disk rim. The blind end of the big bang model is the situation "diameter of universe=0".
[163] spherical means geometrical (so analytical) and therefore approximative, not exact consideration (AnalysisAtBestApproximative).
[164] The calculations use approximative (analytic) formulae in essential parts, e.g. of QW(x) or QV(x).
[165] It's important that one begins (to express oneself, to decide for a light pulse, to release free energy (FreeEnergy), information), otherwise it remains dark. If however this happens, then for symmetry reasons one can start out from the assumption that the other also does this. Everybody then has the same perception, namely having sent first and then received, i.e. the same reality. These realities of the two (or more) are compatible; they therefore can be or become the same with the memory of these realities, without contradictions.
[166] The model is only meant as a guide and is surely simplified. In the borderline case v/c -> 1 it contains extreme conditions for which so far at most theoretical approaches exist, and these are not experimentally checkable. Some approaches of the general theory of relativity would (among other things) come to the conclusion that the arrangement is unstable and would collapse in a spiral. These approaches involve many uncertainties; they start from prerequisites which do not necessarily correspond to reality or are simply undefined. For example, one could assume the arrangement to be maximally large in the sense that there is no outside, so that one cannot speak of a "spiral", because there is no standard of comparison, which would be necessary to measure a radius.
The light propagation from transmitter to receiver is not straight in the conventional sense; at the least, the black hole between transmitter and receiver acts as a gravitational lens. It should also be mentioned that the direction of the light which was transmitted perpendicularly to v, as seen by the transmitter-receiver, may turn forward as seen from the center of gravity (from π/2 to arcsin(QW(x))); the "everywhere" gets a preferred direction: an effect analogous to synchrotron radiation.
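QW(x) is defined elsewhere in the text; assuming it denotes the reciprocal Lorentz factor √(1−x²) (an assumption made here purely for illustration), the forward angle arcsin(QW(x)) coincides with the standard relativistic aberration angle arccos(v/c) for light emitted perpendicularly to v in the transmitter's frame:

```python
import math

def qw(x: float) -> float:
    # Assumption for this sketch: QW(x) = sqrt(1 - x^2).
    return math.sqrt(1.0 - x * x)

def forward_angle(x: float) -> float:
    """Angle (from the velocity axis) into which light emitted
    perpendicularly in the moving frame appears turned: arcsin(QW(x))."""
    return math.asin(qw(x))

for beta in (0.0, 0.5, 0.9, 0.99):
    theta = forward_angle(beta)
    # Standard aberration: cos(theta) = beta for emission at 90 degrees.
    assert math.isclose(theta, math.acos(beta))
    print(f"v/c = {beta:4.2f}: angle = {math.degrees(theta):5.1f} degrees")
```

As v/c approaches 1 the angle shrinks from 90° toward 0°, i.e. the radiation is beamed forward, as with synchrotron radiation.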
Since, however, light is sent out in all directions (also backwards), it arrives at the other.
So the (speculative) details are not essential; the principle is: everybody transmits exactly once, as soon as he receives, and there is a sufficiently long symmetric time dilation.
[167]A discrete, exact way of consideration would allow only a finite number of directions.
[168]The light arriving from "the front" (from the future) has a shorter wavelength and is shorter (e.g. a very short X-ray pulse) than his own light pulse; the light arriving from behind has a longer wavelength and is longer (microwaves). However, the initial source is the same.
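The asymmetry described here is the ordinary relativistic Doppler effect. A minimal numeric sketch (the chosen v/c and the 500 nm source wavelength are illustrative assumptions, not values from the text):

```python
import math

def doppler_factor(beta: float) -> float:
    """Relativistic Doppler factor sqrt((1+beta)/(1-beta)) for an
    approaching source; its reciprocal applies to a receding one."""
    return math.sqrt((1.0 + beta) / (1.0 - beta))

beta = 0.99999999          # illustrative, extremely relativistic case
lam0 = 500e-9              # assumed source wavelength: green light, 500 nm

D = doppler_factor(beta)
lam_front = lam0 / D       # light "from the front": strongly blueshifted
lam_back = lam0 * D        # light "from behind": strongly redshifted
print(f"front: {lam_front:.2e} m, back: {lam_back:.2e} m")
```

For this β the same green pulse arrives from the front in the hard X-ray range (on the order of 10⁻¹¹ m) and from behind as microwaves (on the order of millimetres), matching the footnote's description.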
[169] The direction of rotation may determine an order. The origin of the lightning just received corresponds to a choice decision (from the set of copies) and so contains information [InfoBack]. The information quantity is the larger, the greater the number of copies (resp. the resolution or sharpness) is.
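In the information-theoretic sense fixed in the preliminary note (InformationDef), a choice among N equally likely copies carries log₂ N bits, so the information quantity indeed grows with the number of copies. A one-line check of this standard relation:

```python
import math

def choice_information_bits(n_copies: int) -> float:
    """Information content of one choice among n equally likely copies."""
    return math.log2(n_copies)

for n in (2, 4, 8, 16):
    # Doubling the number of copies adds exactly one bit.
    print(n, choice_information_bits(n))
```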
[170] The three-dimensional area of the simultaneously perceptible is the spherical surface, which grows quadratically with the spherical radius. It is worth mentioning that in the next-to-last expression of
the number of summands increases quadratically with n. The number of electrons per period arises in a similar way, if one identifies n/2 with the angular momentum quantum number and m/2 with the magnetic quantum number.
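The standard quantum-mechanical count behind this comparison: for principal quantum number n there are n² combinations of the angular-momentum quantum number l = 0..n−1 and the magnetic quantum number m = −l..+l, hence 2n² electrons per shell once spin is included; the count grows quadratically, in line with the remark above. (The identification with n/2 and m/2 is the author's own and is not reproduced here.)

```python
def electrons_per_shell(n: int) -> int:
    """Count states (l, m, spin) for principal quantum number n:
    l runs from 0 to n-1, m from -l to +l, spin has 2 values."""
    count = 0
    for l in range(n):
        count += 2 * (2 * l + 1)   # 2l+1 values of m, times 2 spins
    return count                    # equals 2*n*n

print([electrons_per_shell(n) for n in (1, 2, 3, 4)])  # [2, 8, 18, 32]
```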
[171]Locally one could also define "straight" according to the path of the light.
[172]Could a bridge be built here to the interference of waves? If e.g. a double slit is illuminated by a monochromatic source, the interference picture behind the double slit likewise shows more or less many minima and maxima at the same time, although the light descends from only one source. The temporal sequence of the wave amplitude on the way from the source (its order is not perceptible in the case of interference) became simultaneous multiplicity in the interference picture. In this context one could therefore formulate: geometric appearance (here especially the surrounding multiplicity) as a necessary consequence (to guarantee freedom from contradiction).
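The simultaneous multiplicity of maxima can be made explicit with the standard double-slit condition d·sin θ = m·λ. The slit spacing and wavelength below are illustrative assumptions:

```python
import math

def maxima_angles(d_nm: float, lam_nm: float):
    """Angles (degrees) of double-slit maxima d*sin(theta) = m*lambda
    within |sin(theta)| <= 1; lengths given in nanometres."""
    m_max = int(d_nm // lam_nm)
    return [math.degrees(math.asin(m * lam_nm / d_nm))
            for m in range(-m_max, m_max + 1)]

# Illustrative numbers: slit spacing 2000 nm, green light 500 nm.
angles = maxima_angles(2000, 500)
print(len(angles), "simultaneous maxima from a single source")
```

With these numbers, nine maxima (orders m = −4..+4) exist side by side at the same time, all fed by the one temporal sequence of the source amplitude.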
Here some further speculation on the gyroscope model in the case x = v/c << 1: then 1 > (QW(x))² > 1/2 holds, but nobody can (regularly) see 1/(QW(x))² copies of the "other" at the same time, i.e. more than one and fewer than two copies, because this would not be an integer number. Here the contradiction is possibly avoided by unsharpness of the perception, which perhaps is connected with a more or less strongly noticeable tractive force within the observer as a result of the differential gravitation now at work (the observer is no longer an inertial system; arriving photons have an unsharp wavelength, depending on the unsharp place of the measurement).
[173]Perhaps a description can be given which for us human beings is at most understandable as an approach (derivable step by step), but which is not exactly comprehensible at one moment in the way one of our models is. Models are at best copies; they never agree exactly with the whole, if only for the reason that the copier belongs to the whole.
[174] Potential barriers (PotentialBarrier) may be the physical boundaries between the local units.
[175] These maps are only more or less good approximations. Units (or individuals) whose inner map (or model) agrees relatively well with reality (i.e. units which have a relatively good overview) have a relatively good chance of making the right choice, and thereby an advantage.
[176] Of course our human conception of reality, too, can only be a rough approximation of the truth.