Common CTMU objections and replies

From CTMU Wiki

The universe can be explained with an infinite causal regress.

  • Some might argue that an infinite regress passes as a causal explanation of nature. But it doesn't, because an infinite regress never reaches the causal origin or "first cause" of nature, which is also the causal identity of nature. In contrast, the CTMU identifies nature as its own causal identity, recognizing it as its own cause, effect and causal agency. That's called "closure", and without it, one doesn't have a causal explanation (since a causal explanation is really what is being "enclosed" by the identity). This is deduced rather than "predicted".

Source: 1

Things don't have to be identical to their linguistic (cognitive) descriptions.

  • Suppose that there is some degree of noncorrespondence between cognitive syntax and perceptual content (observed phenomena). Then there exist items of perceptual content which do not correspond to or coincide with cognitive syntax. But if these items do not coincide with cognitive syntax, then they are unrecognizable, i.e. unobservable (since cognitive syntax is by definition the basis of recognition). But then these items are not included in perceptual reality (the set of observable phenomena), and we have a contradiction. Therefore, perceptual reality must coincide with cognitive syntax.
  • Suppose that cognition is not the only model for self-organizing systems, i.e. that such systems can be essentially non-homomorphic to cognitive processing. If so, then they lack meaningful cognitive representations, defying characterization in terms of mental categories like space, time and object. But then they fall outside reality itself, being indistinguishable as causes, effects, or any kind of phenomena whatsoever. In a word, they are irrelevant with respect to that part of reality isomorphic to human cognition. It follows that by any reasonable definition, reality is "cognitive" up to isomorphism with our own mental structures.

Nothingness is a contradictory concept.

  • Although UBT bears a disquieting resemblance to paradox, better that the paradox reside in UBT than in logic itself. If there were no medium in which logic could be negated - if there were no UBT - then logic itself would be indistinguishable from paradox, and in that case both it and our world would fall apart.

What exactly is UBT?

  • UBT is a primordial realm of infocognitive potential, free of informational constraint. In CTMU cosmogony, "nothingness" is informationally defined as zero constraint or pure freedom (unbound telesis or UBT), and the apparent construction of the universe is explained as a self-restriction of this potential.
  • If UBT is essential to logic, then why does UBT appear paradoxical and therefore antilogical? UBT is not merely paradoxical, but “meta-paradoxical” by definition. What does this mean? Paradox is what results from self-referentially applying the negation functor of logic to logic itself within logical bounds, and avoiding paradox is precisely what gives logic its discernibility and utility. But if avoiding paradox gives logic its utility, then logic needs paradox in order to have utility (where the utility of logic tautologically resides in its power to exclude the negation of logic, i.e. paradox). This means that both logic and paradox exist in a mutually supportive capacity. But if so, then there is necessarily a medium of existence - a kind of “existential protomedium” or ontological groundstate - accommodating both logic and paradox. UBT is simply the name given to this protomedium, and it is why the CTMU refers to reality as a “self-resolving paradox”.
  • UBT and SCSPL are not defined as "two separate things", any more than a chunk of ice floating in a pond is a "separate thing" from the water in the pond. The water in the pond is where the chunk of ice came from and what it is essentially composed of, but the crystalline lattice structure of the ice does not distribute over the pond, and the water in the pond is not distributively bound by this structure. The molecules in the liquid-phase H2O have more degrees of freedom than those in the ice; they are less constrained, and less bound. All that you need do in order to apply this analogy is to take it to its logical conclusion while generalizing your usual idea of containment, replacing ice with SCSPL, the pond and its molecules of liquid water with UBT, and the crystalline molecular lattice of the ice with an SCSPL logic lattice, and to relax your grip on the tidy little picture of a chunk of ice with extrinsic measure bobbing around "in" the water. Any metric imputed to UBT must be intrinsically derivable within SCSPL domains (e.g., by intrinsic mutual exclusion).
  • Part of the problem here is that Russell does not accept the notion of null constraint; for him, there must always be some kind of constraint, and despite his inability to define it in juxtaposition to its complement, he takes the position that this initial constraint “just exists”. This, of course, disqualifies Russell from discussing (much less explaining) the genesis of the initial constraint, and in fact, it disqualifies him from even distinguishing the initial constraint from its complement. Sadly, failing to distinguish the initial constraint from its complement is to fail to define or distinguish the initial constraint, and this implies that Russell really has no initial constraint in mind after all. That is, while Russell seems to believe that there must be some sort of initial constraint, he cannot define this constraint by distinguishing it from its negation (if he could, then this would compel him to admit the existence of the complementary relationship between logic and nonlogic, and thus that both logic and nonlogic are superposed manifestations of something very much like UBT).
  • It follows that although negation cannot be applied in the usual manner to UBT, which implies that UBT is not what Gödel would call a "positive property", it nevertheless “contains” other positive properties in the sense that it represents the suspension of their definitive constraints, and the superposition of the things thereby distinguished.
  • Jacob asks "From the above quote alone - what is the evidence of the existence of the UBT?" The evidence is logical; it's one of the language-theoretic criteria we've been talking about. To put it as simply as possible, unbound telesis is simply the logical complement of constraint with respect to syntax-state relationships, and is a requisite of any attempt to meaningfully define or quantify constraints such as physical states and laws of nature. Jacob asks "from whence arose the UBT?" Since UBT is nil constraint, it doesn't need to have "arisen"; causes are necessary only in the presence of informational content (that's really the point). Jacob also asks "has the UBT been exhausted?" How can something that is unbound be exhausted, given that exhaustion is a function that would have to bind its argument? Jacob wonders "Why is the UBT free of informational constraint and how do we know this?" We know this by definition...specifically, by a definition logically required in order to form any self-contained description of nature and causality. Jacob goes on to opine that "the CTMU raises more questions than offer any explanations." If this is true, then it is true only within the minds of people who fail to understand it.

Note: Our reality (SCSPL) is not equivalent to the extrapolated level of reality ascribed to UBT.

The CTMU makes no predictions. We can explain the universe without it.

  • In theoretically (cognitively) connecting perceptual reality in an explanatory causal network, one can't always progress by short obvious steps; sometimes one must plunge into an ocean of non-testability in order to come up with a superior testable description on the far shore (think of this kind of insight as analogous to irreducible complexity, but often followed by a simplificative "refolding stage"). In other words, it is not always easy to distinguish (empirically fruitful) science from nonscience as science progresses; one must rely on logic and mathematics in the "blind spots" between islands of perceptibility.
  • The scientific value of the CTMU resides largely in the fact that within its framework, certain logical truths can be regarded as scientific truths (as opposed to tentatively-confirmed scientific hypotheses).

Source: 1

Something can't come from nothing.

If you really start out with nothing, then there is no preexisting information. You must first tell cancellation how to occur, and tell zero how to "fission" into two or more mutually cancelling entities. Since these "instructions" on how things should cancel consist of constraint - of restrictions that limit the nature of "nothing" - constraint has logical priority over cancellation. So we have to explain how the constraint got there in the first place, and we have to do it without appealing to any preexisting cancellation operation supposedly inherent in "nothing". By default, the entities comprising the system that you call "the sum of components within the space-time manifold" must themselves combine to form the constraint, and since neither the constraint nor its constituents equal "nothing", we're forced to introduce "something" at the very outset. This tells us that we're not really starting out with "nothing", but with unbound potential or UBT... something which, by its nature, contains everything that can possibly exist. This amounts to interpreting 0 as "0 information" and replacing it with a combination of unity and infinity.

I.e., by interpreting 0 as the absence of something called "information", which implies that it equates to superficially unrestricted homogeneous (and therefore unitary) informational potential, which in turn equates to existential infinity, we're making it possible for 0 to represent the sort of cancellation point you envision. Since this implies that existence is a self-defined attribute - after all, there is no preexisting informational framework capable of defining it - only possibilities with the intrinsic ability to define and sustain their own existences can be actualized. This is what it takes to get "something" from "nothing".

Source: 1

Reality doesn't need an explanation. It exists as a brute fact.

  • ..."Explanation" is identical to "structure". In order to fully specify the structure of a system, one must explain why its aspects and components are related in certain ways (as opposed to other possible ways). If one cannot explain this, then one is unable to determine the truth values of certain higher-order relations without which structure cannot be fully specified. On the other hand, if one claims that some of these higher-order structural components are "absolutely inexplicable", then one is saying that they do not exist, and thus that the systemic structure is absolutely incomplete. Since this would destroy the system's identity, its stability, and its ability to function, it is belied by the system's very existence.

Source: 1

  • One might at first be tempted to object that there is no reason to believe that the universe does not simply "exist", and thus that self-selection is unnecessary. However, this is not a valid position. First, it involves a more or less subtle appeal to something external to the universe, namely a prior/external informational medium or "syntax" of existence; if such a syntax were sufficiently relevant to this reality, i.e. sufficiently real, to support its existence, then it would be analytically included in reality (as defined up to perceptual relevance). Second, active self-selection is indeed necessary, for existence is not merely a state but a process; the universe must internally distinguish that which it is from that which it is not, and passivity is ruled out because it would again imply the involvement of a complementary active principle of external origin.
  • Reality comprises a "closed descriptive manifold" from which no essential predicate is omitted, and which thus contains no critical gap that leaves any essential aspect of structure unexplained. Any such gap would imply non-closure...MAP requires a closed-form explanation on the grounds that distinguishability is impossible without it. Again this comes down to the issue of syntactic stability. To state it in as simple a way as possible, reality must ultimately possess a stable 2-valued object-level distinction between that which it is and that which it is not, maintaining the necessary informational boundaries between objects, attributes and events. The existence of closed informational boundaries within a system is ultimately possible only by virtue of systemic closure under dualistic (explanans-explanandum) composition, which is just how it is effected in sentential logic.
  • A syndiffeonic regress ultimately leads to a stable closed syntax in which all terms are mutually defined; mutual definition is what stabilizes and lends internal determinacy (internal identifiability of events) to the system through syntactic symmetry.

You can't define or ascribe attributes to "nothingness".

  • As far as concerns 0 being a defined quantity, of course it's defined...in the syntax of arithmetic. Without an underlying conceptual syntax, it would be undefined...and the absence of syntax is what UBT is all about.
  • Where something is undefined due to its lack of expressive syntax, it is totally unmeasurable even in principle. Therefore, there are no extensional or durational distinctions to be made, and this means that for practical and theoretical purposes, extension and duration are zero.
  • In fairness to Parallel, he seems to attempt to specify a paradox when he opines that "the undefinable" cannot have well-defined properties such as “unbound”, “without restraint”, and “zero extension and duration”. But this attempt is a bit hard to figure, since if the property "undefinable" is well-enough defined to be contradicted by the properties “unbound”, “without restraint”, and “zero extension and duration” as Parallel maintains, then it is well-enough defined to be described by them as well, particularly with respect to a model involving syntactic and presyntactic stages. Because Parallel does not take account of such a model, he can't be talking about the CTMU. What Parallel is talking about, only he knows for sure.

The universe is only material.

  • Reality consists of more than material objects; it also possesses aspects that cannot be reduced to matter alone. For example, space and time are relations of material objects that cannot be localized to the individual objects themselves, at least as locality is generally understood; they are greater than the objects they relate, permitting those objects to interact in ways that cannot be expressed solely in terms of their individual identities. Equivalently, we cannot reduce the universe to a set of material objects without providing a medium (spacetime) in which to define and distribute the set. It follows that spacetime possesses a level of structure defying materialistic explanation. Indeed, insofar as they possess spatiotemporal extension, so do material objects!

Reality is reducible to its parts.

  • ...If the universe were pluralistic or reducible to its parts, this would make God, Who coincides with the universe itself, a pluralistic entity with no internal cohesion. But because the mutual syntactic consistency of parts is enforced by a unitary holistic manifold with logical ascendancy over the parts themselves - because the universe is a dual-aspected monic entity consisting of essentially homogeneous, self-consistent infocognition - God retains monotheistic unity despite being distributed over reality at large.
  • It comes down to coherence; the wave function of the universe must be coherent in order for the universe to be self-consistent. Otherwise, it pathologically decoheres into independent and mutually irrelevant subrealities. Because the universe is both cognitive and coherent, it is a Mind in every sense of the word. "God" is simply a name that we give in answer to the unavoidable question "Whose Mind?"

According to CTMU, why are humans and other intelligent lifeforms of value to reality?

  • The CTMU says that God, as embodied by the universe, Self-configures.

To do this, He needs two things: (1) active sensors (agents, internal proxies) who can recognize and affect the state of the universe from local internal vantages; (2) a stratified utility function allowing Him and His agents to prefer one possible future over another. Human beings and other intelligent life forms are useful to God on both of these counts. Thus, the first criterion of His development is the possibility, and in fact the inevitability, of their existence.

To understand this, consider an extraordinarily wise child responsible for the development and maintenance of its own body and physiology (because the universe is in the process of self-configuration, we can liken it to a child). To meet this responsibility, the child requires internal sensors that provide information on exactly what is happening deep inside its growing body, preferably at the intracellular level, and that permit feedback. The child further requires that these sensors be able to register the utility of what they detect... whether it is "good" or "bad" from their own local perspectives. That way, the child can weigh the perceptions and utilities of all of its internal sensors to form overall developmental goals.

In order to meet the Self-configurative goals that it sets (as aggregates of the goals of its sensors), the child has the power to establish internal self-optimizative tendencies that affect the behavior of its internal agents, influencing them to perform such local operations and make such repairs as are necessary for the good of the whole child. To this end, they are equipped with global utility functions, "consciences", that combine with intelligence to make them responsive to the welfare of the whole organism (as opposed to their own individual welfares).

For want of a better name, we can use the term "soul" to describe the channel through which individual and global utility functions are put in consistent mutual contact. This channel permits the sensors to make more informed, more global, and more valid judgments about what is "good" and what is "bad", and gives them the internal strength to do what is good even if it means sacrificing individual utility (because global utility is an aggregate function of individual utility, serving global utility ultimately makes individuals happier).

  • Now let’s look at some convincing cybernetic evidence that we participate in the self-creation of reality.

Because intentional self-creation entails an internal stimulus-response dynamic consisting of feedback, any self-configuring system needs internal sensors (agents, internal self-proxies) capable of not only recognizing and affecting its state from local internal vantages, but of responding to higher-level instructions tending to enforce global structural criteria. Moreover, the system must possess a stratified utility function allowing it and its agents to prefer one possible future over another. Human beings and other intelligent life forms are useful to reality on both of these counts. So the first criterion of reality is the possibility, and in fact the inevitability, of the existence of “sensors” just like us…sensors with an advanced capacity to recognize, evaluate and respond to internal states of the system. How, in general, would the universe self-configure? It would select itself from a set of internally-generated, internally-refined structural possibilities in order to maximize its self-defined value. In the (somewhat inadequate) terminology of quantum mechanics, this set of possibilities is called its quantum wave function or QWF, and the utility-maximizing self-selection principle is traditionally called teleology. In exploiting this self-actualization mechanism, human beings would select their specific goals from the global QWF according to their own specific self-selection principles or “teleses”. In the course of being realized, these individual teleses would interfere with teleology (and each other) in a constructive or destructive way, depending on whether they and their specific methods of implementation (modes of interference) are teleologically consistent or inconsistent. In this way, the “good”, or teleologically constructive, may be distinguished from the “bad”, or teleologically destructive. I.e., free will would give human beings a real choice between good and evil…a choice like that which we already seem to possess.

  • The Anthropic Principle, a controversial (but in the final analysis, necessary) ingredient of modern cosmology, is already about as anthropocentric as it gets. The CTMU replaces it with something called the Telic Principle, which avoids reference to particular agents such as human beings. The Telic Principle says merely that the universe, in addition to being self-configuring and self-processing, is self-justifying. Only the real universe can formulate and provide a real answer for the question of why it exists, and it does so through internal self-projections (distributed syntactic endomorphisms) called sentient agents.
  • Intelligence is inseparable from purpose, and since the CTMU distributes intelligence over the universe, it does the same for purpose. Voila - a new brand of teleology that prefers increasing control and knowledge to a dysgenic deterioration of cognitive ability. You're right that humanity can "screw everything up". But if it does, it won't enjoy the luxury of a valid philosophical justification for its crime.

"Self-containment" is an oxymoron.

  • In short, the set-theoretic and cosmological embodiments of the self-inclusion paradox are resolved by properly relating the self-inclusive object to the descriptive syntax in terms of which it is necessarily expressed, thus effecting true self-containment: "the universe (set of all sets) is that which topologically contains that which descriptively contains the universe (set of all sets)."

The universe isn't a set.

  • “Being a set” is in fact a property of the universe. That’s because “set” is defined as “a collection of distinct objects”, and the universe is in fact a collection of distinct objects (and more). The definition of “set” correctly, if only partially, describes the structure of the universe, and nothing can be separated from its structure. Remove it from its structure, and it becomes indistinguishable as an object and inaccessible to coherent reference.
  • The universe fulfills the general definition of "set" in numerous ways, and this indeed makes it a set (among other things with additional structure). Otherwise, its objects could not be discerned, or distinguished from other objects, or counted, or ordered, or acquired and acted on by any function of any kind, including the functions that give them properties through which they can be identified, discussed, and scientifically investigated. If something is “not a set”, then it can’t even be represented by a theoretical variable or constant (which is itself a set), in which case Mark has no business theorizing about it or even waving his arms and mindlessly perseverating about it.
  • Because the universe fulfills the definitive criteria of the “set” concept (and more), it is at least in part a (structured) set. One may object that a “set”, being a concept or formal entity, cannot possibly describe the universe; after all, the universe is not a mere concept, but something objective to which concepts are attached as descriptive “tools”. But to the extent that concepts truly describe their arguments, they are properties thereof. The entire function of the formal entities used in science and mathematics is to describe, i.e. serve as descriptive properties of, the universe.
  • Everything discernable (directly perceptible) within the physical universe, including the universe itself (as a coherent singleton), can be directly mapped into the set concept; only thusly are secondary concepts endowed with physical content. One ends up with sets, and elements of sets, to which various otherwise-empty concepts are attached.
  • In search of counterexamples, one may be tempted to point to such things as time and process, “empty space”, various kinds of potential, forces, fields, waves, energy, causality, the spacetime manifold, quantum wave functions, “laws of nature”, “the mathematical structure of physical reality,” and so on as “non-material components of the universe”, but these are predicates whose physical relevance utterly depends on observation of the material content of the universe. To cut them loose from the elements of observational sets would be to deprive them of observational content and empty them of all physical meaning.
  • Can a containment principle for the real universe be formulated by analogy with that just given for the physical universe? Let's try it: "The real universe contains all and only that which is real." Again, we have a tautology, or more accurately an autology, which defines the real on inclusion in the real universe, which is itself defined on the predicate real. This reflects semantic duality, a logical equation of predication and inclusion whereby perceiving or semantically predicating an attribute of an object amounts to perceiving or predicating the object's topological inclusion in the set or space dualistically corresponding to the predicate. According to semantic duality, the predication of the attribute real on the real universe from within the real universe makes reality a self-defining predicate, which is analogous to a self-including set. An all-inclusive set, which is by definition self-inclusive as well, is called "the set of all sets". Because it is all-descriptive as well as self-descriptive, the reality predicate corresponds to the set of all sets. And because the self-definition of reality involves both descriptive and topological containment, it is a two-stage hybrid of universal autology and the set of all sets.
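Semantic duality, as invoked above, equates predicating an attribute of an object with the object's inclusion in the set dualistically corresponding to that predicate. A minimal illustrative sketch in Python (the predicate is_even and the small universe below are our hypothetical stand-ins, not CTMU constructs):

```python
# Predicative form: ascribe the attribute "even" to an object.
def is_even(n: int) -> bool:
    return n % 2 == 0

# Set-theoretic form: the extension of that predicate over a small universe.
universe = range(10)
evens = {n for n in universe if is_even(n)}

# Semantic duality in miniature: predication and set inclusion agree
# on every object of the universe.
for n in universe:
    assert is_even(n) == (n in evens)
```

The point of the sketch is only that the predicate and its extension carry the same information: asserting the attribute of an object and asserting the object's membership in the corresponding set are two forms of one judgment.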

According to CTMU, is reality subjective or objective?

  • Cartesian dualism leads to a problem associated with the connectivity problem we have just discussed: if reality consists of two different "substances", then what connects these substances in one unified "reality"? What is the medium which sustains their respective existences and the putative difference relationship between them? One possible (wrong) answer is that their relationship is merely abstract, and therefore irrelevant to material reality and devoid of material influence; another is that like the physical epiphenomenon of mind itself, it is essentially physical. But these positions, which are seen in association with a slew of related philosophical doctrines including physicalism, materialism, naturalism, objectivism, epiphenomenalism and eliminativism, merely beg the question that Cartesian dualism was intended to answer, namely the problem of mental causation.

Conveniently, modern logic affords a new level of analytical precision with respect to the Cartesian and Kantian dichotomies. Specifically, the branch of logic called model theory distinguishes theories from their universes, and considers the intervening semantic and interpretative mappings. Calling a theory an object language and its universe of discourse an object universe, it combines them in a metaobject domain consisting of the correspondences among their respective components and systems of components, and calls the theory or language in which this metaobject domain is analyzed a metalanguage. In like manner, the relationship between the metalanguage and the metaobject domain can be analyzed in a higher-level metalanguage, and so on. Because this situation can be recursively extended, level by level and metalanguage by metalanguage, in such a way that languages and their universes are conflated to an arbitrary degree, reality can with unlimited precision be characterized as a "metalinguistic metaobject".

In this setting, the philosophical dichotomies in question take on a distinctly mathematical hue. Because theories are abstract, subjectively-formed mental constructs, the mental, subjective side of reality can now be associated with the object language and metalanguage(s), while the physical, objective side of reality can be associated with the object universe and metauniverse(s), i.e. the metaobject domain(s). It takes very little effort to see that the mental/subjective and physical/objective sides of reality are now combined in the metaobjects, and that Cartesian and Kantian "substance dualism" have now been transformed to "property dualism" or dual-aspect monism. That is, we are now talking, in mathematically precise terms, about a "universal substance" of which mind and matter, the abstract and the concrete, the cognitive-perceptual and the physical, are mere properties or aspects.

  • In the conventional model, percepts are objective observables that actively imprint themselves on, and thereby shape and determine, the passive mind and its internal processes. But a more general (and therefore more scientific, less tautological) description of percepts portrays them also as having a subjective perceptual aspect that is identified, at a high level of generality, with the Creation Event itself. This is just an extension of Wheeler's Observer Participation thesis ("only intelligent entities are observers") to physical reality in general ("everything is an observer", with the caveat that it's still possible, and indeed necessary, to reserve a special place for intelligence in the scheme of things).
  • The surprising part, in my opinion, is this. This reduction of all reality to simultaneously active and passive "infocognition" amounts to defining reality as did Hume...as pure experience ("infocognition" is just a technical synonym of "experience" that opens the concept up to analysis from the dual standpoints of information theory and cognition or computation theory). Thus, the Kantian mind-matter distinction, as embodied in the term "infocognition", is simply distributed over Hume's experiential reality by synonymy, bringing the metaphysics of Hume and Kant into perfect coincidence.
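The model-theoretic hierarchy described above can be illustrated in miniature. In the following Python sketch (all names are our own illustrative choices, not CTMU or standard model-theoretic notation), nested tuples serve as the object language, Boolean truth values as the object universe, and the program itself as the metalanguage housing the interpretation map between the two:

```python
def evaluate(formula, valuation):
    """Interpretation map from object-language formulas to object-universe values."""
    op = formula[0]
    if op == "var":   # atomic sentence letter, looked up in the valuation
        return valuation[formula[1]]
    if op == "not":
        return not evaluate(formula[1], valuation)
    if op == "and":
        return evaluate(formula[1], valuation) and evaluate(formula[2], valuation)
    if op == "or":
        return evaluate(formula[1], valuation) or evaluate(formula[2], valuation)
    raise ValueError(f"unknown operator: {op}")

# An object-language sentence: A and not B.
sentence = ("and", ("var", "A"), ("not", ("var", "B")))

# The metalanguage can survey every interpretation of the sentence at once,
# something the object language cannot do for itself.
models = [v for v in ({"A": a, "B": b} for a in (True, False) for b in (True, False))
          if evaluate(sentence, v)]
assert models == [{"A": True, "B": False}]
```

Analyzing evaluate itself (its correctness, its totality over formulas) would require a further metalanguage, and so on, which is the recursive extension the passage describes.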

The universe is expanding.

  • Nothing that leads to logical inconsistency is "confirmed by evidence". Expansion leads to logical inconsistency analytically. To wit, if there were something outside reality that were sufficiently real to contain the "expansion" of reality, it would be contained in reality. That's a contradiction; ergo, the hypothesis is false.
  • The overall size of the universe is externally undefined and can only be defined intrinsically (as curvature); the sizes of objects change with respect to this curvature.
  • The cosmos can’t be expanding in any absolute sense, because there’s nothing for it to be expanding into. Therefore, we must invert the model in a way that “conserves spacetime”; the total “amount” of spacetime must remain constant. When we do so, the cosmos ceases to resemble a balloon inflating (extending outward) over time, and instead becomes an inward superposition of sequentially related states. The best way to think of it is in terms of a cumulative embedment of Venn diagrams (of state) on the inside surface of a sphere of extrinsically indeterminate size.
  • "Intrinsic expansion" is a contradiction in terms. If something is expanding, then it has to be expanding *with respect to* a fixed referent, and if it is, then it has to be extending into an external medium with respect to which the fixity of the referent has been established. On the other hand, saying that something is shrinking relative to that which contains it presents no such problem, for in that case, nothing is really "expanding". An inclusive relationship, like that whereby the universe includes its contents, can change intrinsically only if its total extent does not change; where its total extent is just that of the inclusive entity, this means that the extent of the *inclusive entity* cannot change. Ergo, no expansion; it's logically analytic. Reason in any other fashion, and the term "expansion" becomes meaningless.

Conspansive duality is false. Objects move through a background called "space".

  • If there were something outside reality that were real enough to topologically contain it, it would be intrinsic to reality (and therefore contained within it). We can make a similar statement regarding matter: if there were something outside matter that were material enough to contain it, it would to exactly that extent be intrinsic to matter. In order to accommodate matter, space must be potentially identical to it... must "share its syntax". In other words, matter doesn't "displace" space, but occupies it in perfect superposition...intersects with it. So space consists of material potential and is thus a "potential phase of matter". Denying this leads to a contradiction.

Mind does not equal reality, since I can conceive of fictional things like fairies that don't exist.

  • Our minds, and every thought they contain, do indeed exist in reality...What is “irreal” is the assumed veridical mapping of personalized fantasies onto general (perceptual) reality.
  • This principle is really just a statement of empiricism. Empiricism says that reality consists of perceptions or perceptual events. Now, the only way that perception can occur is through a processor which is capable of it...i.e., a percipient. Therefore, empiricism is really the statement that reality consists of an irreducible combination of percipient and perception; because perceptions include not only percipients but percepts (perceived objects, elements of "objective reality" supposedly independent of perception), it follows that reality also consists of an irreducible combination of percipient and percept.

But while this happens to be a very respectable philosophical thesis, that alone does not validate it. The British empiricists and their followers never got around to explicating their thesis, which encompasses many of the roots of modern science, to the required extent. However, they could easily have done so simply by exploiting its tautological nature. Again, in order to avoid your incessant complaints and ridiculous definitional hairsplitting, we will for the nonce replace this contested term "tautology" with generic logical necessity. Let's elaborate on that a little. A logical tautology like not(A and not-A) is characterized by the fact that it remains true no matter what the variable A happens to signify. In other words, its truth is logically necessary. Now take a look at "Mind = Reality". Both M and R are variables in the sense that their contents can vary, but they also have distinct constant aspects - one is always "mental" (like a percipient), while the other is always real (like a perception, e.g., an act of scientific observation). The question is thus, are percipients necessarily conflated with perceptions? The answer is both intuitively and analytically obvious, and it is unequivocally yes. You can't have a perception without a percipient. Therefore, we have what I've chosen to call an "analytic" or "semantic tautology" (you can call it whatever you like, but the CTMU is Langan's theory, and I believe that my choice of terminology closely parallels Langan's own). So that's it, Q.E.D.

The universe isn't cognitive.

  • This processing conforms to a state transition function included in a set of such functions known as "the laws of physics". Calling these functions "cognitive" is hardly a stretch, since it is largely in terms of such functions that cognition itself is understood. Moreover, since "the characteristics, the features, ... the behaviors of the many facets and elements of this reality" are generally attributed to the laws of physics, it is not clear why cognition should not generally apply. After all, spacetime consists of events and separations, and events can be described as the mutual processing of interacting objects. So where physical interaction is just mutual input-to-output behavioral transduction by physical objects, and cognition is mutual input-to-output behavioral transduction by neurons and their inclusive brain structures, physical interaction is just a generalization of human cognition. If this seems like a tautology, indeed it is; self-contained self-referential systems are tautological by definition.
  • Suppose you're wearing blue-tinted glasses. At first, you think that the world you see through them is blue. Then it occurs to you that this need not be true; maybe it's the glasses. Given this possibility, you realize that you really have no business thinking that the world is blue at all; indeed, due to Occam's razor, you must assume that the world is chromatically neutral (i.e., not blue) until proven otherwise! Finally, managing to remove your glasses, you see that you were right; the world is not blue. This, you conclude, proves that you can't assume that what is true on your end of perception (the blue tint of your lenses) is really true of reality.

Fresh from this victory of reason, you turn to the controversial hypothesis that mind is the essence of reality...that reality is not only material, but mental in character. An obvious argument for this hypothesis is that since reality is known to us strictly in the form of ideas and sensations - these, after all, are all that can be directly "known" - reality must be ideic. But then it naturally occurs to you that the predicate "mental" is like the predicate "blue"; it may be something that exists solely on your end of the process of perception. And so it does, you reflect, for the predicate "mental" indeed refers to the mind! Therefore, by Occam's razor, it must be assumed that reality is not mental until proven otherwise.

However, there is a difference between these two situations. You can remove a pair of blue sunglasses. But you cannot remove your mind, at least when you're using it to consider reality. This means that it can never be proven that the world isn't mental. And if this can never be proven, then you can't make an assumption either way. Indeed, the distinction itself is meaningless; there is no reason to even consider a distinction between that which is mental and that which is not, since nature has conspired to ensure that such a distinction will never, ever be perceived. But without this distinction, the term "mental" can no longer be restrictively defined. "Mental" might as well mean "real" and vice versa. And for all practical purposes, so it does.

A theory T of physical reality exists as a neural and conceptual pattern in your brain (and/or mind); it's related by isomorphism to its universe U (physical reality). T<--(isomorphism)-->U. T consists of abstract ideas; U consists of supposedly concrete objects like photons (perhaps not the best examples of "concrete objects"). But the above argument shows that we have to drop the abstract-concrete distinction (which is just a different way of expressing the mental-real distinction). Sure, we can use these terms to distinguish the domain and range of the perceptual isomorphism, but that's as far as it goes. For all practical purposes, what is mental is real, and vice versa. The T-U isomorphism seamlessly carries one predicate into the other.

Russell's paradox and Godel's incompleteness theorem prove that the CTMU is invalid.

  • In order to be consistent, mathematics must possess a kind of algebraic closure, and to this extent must be globally self-referential. Concisely, closure equals self-containment with respect to a relation or predicate, and this equates to self-reference. E.g., the self-consistency of a system ultimately equates to the closure of that system with respect to consistency, and this describes a scenario in which every part of the system refers consistently to other parts of the system (and only thereto). At every internal point (mathematical datum) of the system mathematics, the following circularity applies: "mathematics refers consistently to mathematics". So mathematics is distributively self-referential, and if this makes it globally vulnerable to some kind of implacable "meta-mathematical" paradox, all we can do in response is learn to live with the danger. Fortunately, it turns out that we can reason our way out of such doubts...but only by admitting that self-reference is the name of the game.

Source: 1

  • To demonstrate the existence of undecidability, Gödel used a simple trick called self-reference. Consider the statement “this sentence is false.” It is easy to dress this statement up as a logical formula. Aside from being true or false, what else could such a formula say about itself? Could it pronounce itself, say, unprovable? Let’s try it: "This formula is unprovable". If the given formula is in fact unprovable, then it is true and therefore a theorem. Unfortunately, the axiomatic method cannot recognize it as such without a proof. On the other hand, suppose it is provable. Then it is self-apparently false (because its provability belies what it says of itself) and yet true (because provable without respect to content)! It seems that we still have the makings of a paradox…a statement that is "unprovably provable" and therefore absurd.
  • What if we now introduce a distinction between levels of proof, calling one level the basic or "language" level and the other (higher) level the "metalanguage" level? Then we would have either a statement that can be metalinguistically proven to be linguistically unprovable, and thus recognizable as a theorem conveying valuable information about the limitations of the basic language, or a statement that cannot be metalinguistically proven to be linguistically unprovable, which, though uninformative, is at least not a paradox. Presto: self-reference without the possibility of paradox!
  • Such paradoxes are properly viewed not as static objects, but as dynamic alternations associated with a metalinguistic stratification that is constructively open-ended but transfinitely closed (note that this is also how we view the universe). Otherwise, the paradox corrupts the informational boundary between true and false and thus between all logical predicates and their negations, which of course destroys all possibility of not only its cognitive resolution, but cognition and perception themselves. Yet cognition and perception exist, implying that nature contrives to resolve such paradoxes wherever they might occur. In fact, the value of such a paradox is that it demonstrates the fundamental necessity for reality to incorporate a ubiquitous relativization mechanism for its resolution, namely the aforementioned metalinguistic stratification of levels of reference (including levels of self-reference, i.e. cognition). A paradox whose definition seems to preclude such stratification is merely a self-annihilating construct that violates the "syntax" (structural and inferential rules) of reality and therefore lacks a real model.
  • In other words, to describe reality as cognitive, we must stratify cognition and organize the resulting levels of self-reference in a self-contained mathematical structure called Self-Configuring Self-Processing Language or SCSPL. SCSPL, which consists of a monic, recursive melding of information and cognition called infocognition, incorporates a metalogical axiom, Multiplex Unity or MU, that characterizes the universe as a syndiffeonic relation or "self-resolving paradox" (the paradox is "self-resolving" by virtue of SCSPL stratification). A syndiffeonic relation is just a universal quantum of inductive and deductive processing, i.e. cognition, whereby "different" objects are acknowledged to be "the same" with respect to their mutual relatedness.

Source: 1

  • One can’t even conceive of logic without applying a distributed "power-set template" to its symbols and expressions, and such templates clearly perform a syntactic function.
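The stratified resolution described above can be sketched in code. The following is a toy Tarski-style model of metalinguistic levels, not Langan's own formalism; the class and function names (`Sentence`, `truth_predicate`) are illustrative assumptions. The point it shows is the one made in the bullets: a truth predicate confined to a higher level than its argument can never be applied to itself, so the liar construction is blocked rather than resolved case by case.

```python
# Toy sketch of metalinguistic stratification (Tarski-style levels).
# Names and structure are illustrative, not part of the CTMU formalism.

class Sentence:
    """A sentence tagged with the language level at which it lives."""
    def __init__(self, level, value):
        self.level = level        # 0 = base language, 1+ = metalanguages
        self.value = value        # its truth value, for the demo

def truth_predicate(meta_level, sentence):
    """'True' as asserted at meta_level about a lower-level sentence.

    Stratification rule: a level-n truth predicate applies only to
    sentences of level < n. This blocks the liar sentence, which would
    need to apply a level-n predicate to a level-n sentence (itself).
    """
    if sentence.level >= meta_level:
        raise ValueError("predicate level must exceed sentence level")
    return sentence.value

base = Sentence(level=0, value=True)
print(truth_predicate(1, base))        # legal: metalanguage about language

liar = Sentence(level=1, value=None)   # "this level-1 sentence is false"
try:
    truth_predicate(1, liar)           # illegal self-application
except ValueError as err:
    print("blocked:", err)
```

Under this rule, "self-reference" survives only in the benign, stratified form: each level may describe the levels below it, never itself.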

Multiverse theory explains the existence of our universe.

  • According to the standard model, the limit of nature is a cosmic singularity, and the limit of causality is whatever generated and sustains this singularity (standard cosmology merely purports to explain how this singularity, once its existence and underlying ontology were already given, blew up into the universe we see, beginning at 10^-43 seconds after the initial big bang). Many Worlds theory purports to account for these things by placing the universe - that is, the thing to be explained - inside a multiverse, and then fails to explain the multiverse. Yet, in order to explain a process that runs as part of an underlying process, the way local physical evolution (causality) runs as a part of global cosmic evolution, one must explain the underlying process right back to the beginning.
  • I can't help but be a bit taken aback when people who talk authoritatively about quantum mechanics, uncertainty, infinite-dimensional Hilbert spaces, infinitely-nested spacetimes and other clear manifestations of absolute freedom rail against UBT. Where do they think all of that freedom comes from? From constraint? Constraint is deterministic. Even if we posit the existence of a primal constraint which drives a deterministic many-worlds (relative-state) multiplexing of state and thus "creates freedom", what accounts for the nondeterministic (telic or aleatory) mapping of our consciousness into one particular history among the infinite possible histories thereby "created"? Again, we are forced to confront an inevitable fact: freedom is built into the microscopic structure of reality, and because the wave function is a stratified affair ultimately embracing the entire cosmos (regardless of the number of ulterior spaces in which our immediate reality is nested), it is built into the macroscopic structure of reality as well. Quantum mechanics, syndiffeonesis and UBT go together; remove any one of them, and all that finally remains is a teetering pile of paradoxes waiting to collapse.

Structure can be expressed without two valued logic (2VL).

  • For the expression of structure, 2-valued logic (2VL) is a necessary and sufficient criterion. In any structured (syndiffeonic) system, everything finally comes down to 2VL. We can come at this fact from below and from above. From below, we merely observe that because 0VL and 1VL do not permit nontrivial distinctions to be made among syntactic components, they do not admit of nontrivial, nonunary expressive syntax and have no power to differentially express structure. From above, on the other hand, we note that any many-valued logic, including infinite-valued logic, is a 2-valued theory - it must be for its formal ingredients and their referents to be distinguished from their complements and from each other - and thus boils down to 2VL. So 2VL is a necessary and sufficient element of logical syntax for systems with distributed internal structure. Infinite-valued logics can add nothing in the way of scope, but can only increase statistical resolution within the range of 2VL itself (and not individual resolution except in a probabilistically inductive sense).
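The "from above" half of the argument can be illustrated concretely. Below is a minimal sketch using a Lukasiewicz-style 3-valued conjunction (a standard choice, assumed here for illustration; the framing is not Langan's notation). The 3-valued connective itself is defined and checked entirely by 2-valued metatheoretic judgments: each output either is or is not a member of the value set, and either does or does not equal the expected entry.

```python
# Sketch of the "many-valued logic rests on 2VL" point.
# Lukasiewicz-style 3-valued conjunction: min of the truth degrees.

VALUES = (0.0, 0.5, 1.0)   # false, indeterminate, true

def and3(a, b):
    """3-valued conjunction: the minimum of the two truth degrees."""
    return min(a, b)

for a in VALUES:
    for b in VALUES:
        # Every check below is strictly 2-valued: an output either IS
        # a member of VALUES or it is not; it either equals the table
        # entry or it does not. The 3-valued system is distinguished
        # from its complement only by such binary judgments.
        assert and3(a, b) in VALUES
        assert and3(a, b) == min(a, b)

print(and3(0.5, 1.0))   # indeterminate AND true -> indeterminate
```

However many values the object logic carries, the metatheory that individuates its symbols, values and tables remains two-valued, which is the sense in which the bullet says every many-valued logic "boils down to 2VL".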

Tautologies are meaningless.

  • That which has no complement is indistinguishable from its complement and therefore contains zero information. But if logic has no informational value, then neither does logical consistency. And if logical consistency has no informational value, then consistent and inconsistent theories are of equal validity.
  • Rex professes a lack of understanding as to why not having a complement is the same as being indistinguishable. The standard answer, of course, is that since information always restricts (or constrains) a potential by eliminating its alternatives therein, nothing to which informational value can be attached lacks a complement (in some probability space). For example, since observing that something exists is to rule out its nonexistence - existence and nonexistence are complementary states, provided that we conveniently classify nonexistence as a "state" - such observations distinguish existence from nonexistence and thus have positive informational value. On the other hand, that to which no information at all can be attached cannot be said to exist, and is thus indistinguishable. Because this applies to consistency and inconsistency, it also applies to logic and nonlogic.
  • Rex then states that "'logic' and 'not-logic' is a contradiction, under logic, so if logic admits both 'logic' and 'not-logic' then logic is self-contradictory." Not if it treats nonlogic as something which is excluded by logic in any given model, for example a nondistributive lattice. He then observes that “there's no problem with having non-logical statements, just allowing the entire theory of logic and the theory of not-logic to simultaneously exist in the same model.” Although I see where Rex is coming from, logic and nonlogic can in fact exist in the same model, e.g. a nondistributive lattice, provided that nonlogic does not interfere with logic in that part of the model over which logical syntax in fact distributes, e.g. the Boolean parts of the lattice. That the non-Boolean parts of the lattice approximate poorly-understood relationships among Boolean domains is irrelevant to the value of such "non-logical" models, as we see from the fact that nondistributive lattices permit the representation of real noncommutative relationships in quantum mechanics.

The CTMU does not imply the existence of God.

  • In keeping with its clear teleological import, the Telic Principle is not without what might be described as theological ramifications. For example, certain properties of the reflexive, self-contained language of reality - that it is syntactically self-distributed, self-reading, and coherently self-configuring and self-processing - respectively correspond to the traditional theological properties omnipresence, omniscience and omnipotence. While the kind of theology that this entails neither requires nor supports the intercession of any "supernatural" being external to the real universe itself, it does support the existence of a supraphysical being (the SCSPL global operator-designer) capable of bringing more to bear on localized physical contexts than meets the casual eye. And because the physical (directly observable) part of reality is logically inadequate to explain its own genesis, maintenance, evolution or consistency, it alone is incapable of properly containing the being in question.

The CTMU relies on naive set theory, which is invalid.

  • [The CTMU] nowhere relies on naïve set theory, and in fact can be construed as a condemnation of naïve set theory for philosophical purposes.
  • As it happens (and not by accident), consistent versions of set theory can be interpreted in SCSPL. The problem is, SCSPL can’t be mapped into any standard version of set theory without omitting essential ingredients, and that’s unacceptable. This is why the CTMU cannot endorse any standard set theory as a foundational language. But does this stop the universe from being a set? Not if it is either perceptible or intelligible in the sense of Cantor’s definition.
  • The CTMU does not rely on set-theoretic self-inclusion to effect explanatory closure (as it would have to do if the universe were merely “the largest set”).
  • No axiomatic version of set theory is included in Cantor’s definition of "set"; the definition is meaningful without it. Rather, the definition of "set" is itself a "minimal set theory" which can be viewed as a general subtheory in which multiple set theories intersect; it differs from standard set theory mainly in omitting the self-inclusion operation, and the further operation of compensating for that operation.
  • The power set is a distributed *aspect* of the universe by virtue of which objects and sets of objects are relationally connected to each other in the assignment and discrimination of attributes (the intensions of sets). Without it, the universe would not be identifiable, even to itself; its own functions could not acquire and distinguish their arguments. In fact, considered as an attributive component of identification taking a set as input and yielding a higher-order relational potential as output, it is reflexive and “inductively idempotent”; the power set is itself a set, and applied to itself, yields another (higher-order) power set, which is again a set, and so on up the ladder.
  • Of course, even the perceptual stratum of the universe is not totally perceptible from any local vantage. The universe, its subsets, and the perceptible connections among those subsets can be perceived only out to the cosmic horizon, and even then, our observations fail to resolve most of its smaller subsets (parts, aggregates, power-set constituents). But a distributed logical structure including the power set can still be inferred as an abstract but necessary extension of the perceptual universe which is essential to identification operations including that of perception itself.
  • The scientific import is obvious. Where the universe is defined, for scientific purposes, to contain the entire set of past and future observational and experimental data, plus all that may be inferred as requirements of perception, its power set is integral to it as a condition of its perception and scientific analysis, not to mention its intrinsic self-differentiation and coherence. Without its power set, its parts or subsets would be intrinsically indiscernible and indistinguishable, which would of course amount to an oxymoron; “parts” are distinguishable by definition, and therefore constitute a set with the discrete topology construed by relevance (any reference to which naturally invokes the power set) and the indiscrete topology construed by veracity (inclusion-exclusion). Without the power set function and its stratified relational potential, one not only can’t say what the parts and their mutual relationships are, one can’t even say what they’re *not* … and as any parts not relevant to the others are not “parts” as advertised, even referring to them generates contradictions and must therefore be avoided.
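The "inductively idempotent" behavior of the power set described above is ordinary finite mathematics and can be sketched directly. The helper name `power_set` is illustrative; `frozenset` stands in for "set" so that sets can contain sets.

```python
# Sketch of the power set's "inductively idempotent" behavior: applied
# to a set it yields another set, to which it applies again, and so on
# up the ladder. Illustration with finite sets only.

from itertools import chain, combinations

def power_set(s):
    """Return the power set of s as a frozenset of frozensets."""
    items = list(s)
    subsets = chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))
    return frozenset(frozenset(sub) for sub in subsets)

s = frozenset({1, 2})
p1 = power_set(s)        # 2^2 = 4 subsets of s
p2 = power_set(p1)       # 2^4 = 16 subsets: the output is again a set
print(len(p1), len(p2))  # 4 16
```

Each application yields a higher-order relational potential (2^n elements from n), and the output is always itself a legitimate input, which is the reflexivity the bullet appeals to.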

Reality is not timeless.

  • Not long after Einstein formulated his General Theory, it was discovered that the universe, AKA spacetime, was expanding. Because cosmic expansion seems to imply that the universe began as a dimensionless point, the universe must have been created, and the creation event must have occurred on a higher level of time: cosmic time. Whereas ordinary time accommodates changes occurring within the spacetime manifold, this is obviously not so for the kind of time in which the manifold itself changes.
  • Now for the fly in the cosmological ointment. As we have seen, it is the nature of the cognitive self to formulate models incorporating ever-higher levels of change (or time). Obviously, the highest level of change is that characterizing the creation of reality. Prior to the moment of creation, the universe was not there; afterwards, the universe was there. This represents a sizable change indeed! Unfortunately, it also constitutes a sizable paradox. If the creation of reality was a real event, and if this event occurred in cosmic time, then cosmic time itself is real. But then cosmic time is an aspect of reality and can only have been created with reality. This implies that cosmic time, and in fact reality, must have created themselves!
  • The idea that the universe created itself brings a whole new meaning to bidirectional time, and thus to the idea that cognition may play a role in the creation of reality. As a self-creative mechanism for the universe is sought, it becomes apparent that cognition is the only process lending itself to plausible interpretation as a means of temporal feedback from present to past. Were cognition to play such a role, then in a literal sense, its most universal models of temporal reality would become identical to the reality being modeled. Time would become cognition, and space would become a system of geometric relations that evolves by distributed cognitive processing.

Truth is not absolute.

  • Uncertainty or non-absoluteness of truth value always involves some kind of confusion or ambiguity regarding the distinction between the sentential predicates true and false. Where these predicates are applied to a more specific predicate and its negation - e.g., "it is true that the earth is round and false that the earth is not-round" - the confusion devolves to the contextual distinction between these lesser predicates, in this case round and not-round within the context of the earth. Because all of the ambiguity can be localized to a specific distinction in a particular context, it presents no general problem for reality at large; we can be uncertain about whether or not the earth is round without disrupting the logic of reality in general. However, where a statement is directly about reality in general, any disruption of or ambiguity regarding the T/F distinction disrupts the distinction between reality and not-reality. Were such a disruption to occur at the level of basic cognition or perception, reality would become impossible to perceive, recognize, or acknowledge as something that "exists".
  • Since a tautology is a necessary and universal element of this syntax, tautologies can under no circumstances be violated within reality. Thus, they are "absolute knowledge". We may not be able to specify every element of absolute knowledge, but we can be sure of two things about it: that it exists in reality to the full extent necessary to guarantee its non-violation, and that no part of it yet to be determined can violate absolute knowledge already in hand. Whether or not we can write up an exhaustive itemized list of absolute truths, we can be sure that such a list exists, and that its contents are sufficiently "recognizable" by reality at large to ensure their functionality. Absolute truth, being essential to the integrity of reality, must exist on the level of reference associated with the preservation of global consistency, and may thus be duly incorporated in a theory of reality.
  • On the other hand, the fact that any reasonable definition of "absolute truth" amounts to tautology can be shown by reversing this reasoning. Since absolute truth must be universal, it is always true regardless of the truth values of its variables (where the variables actually represent objects and systems for which specific state-descriptions vary in space and time with respect to truth value). Moreover, it falls within its own scope and is thus self-referential. By virtue of its universality and self-reference, it is a universal element of reality syntax, the set of structural and functional rules governing the spatial structure and temporal evolution of reality. As such, it must be unfalsifiable, any supposition of its falsehood leading directly to a reductio ad absurdum. And to ice the cake, it is unavoidably implicated in its own justification; were it ever to be violated, the T/F boundary would be disrupted, and this would prevent it (or anything else) from being proven. Therefore, it is an active constraint in its own proof, and thus possesses all the characteristics of a tautology.
  • To perceive one and the same reality, human beings need a kind of "absolute knowledge" wired into their minds and nervous systems. The structure and physiology of their brains, nerves and sense organs provide them, at least in part, with elementary cognitive and perceptual categories and relationships in terms of which to apprehend the world. This "absolute" kind of knowledge is what compels the perceptions and logical inferences of any number of percipients to be mutually consistent, and to remain consistent over time and space. Without the absoluteness of such knowledge - without its universality and invariance - we could not share a common reality; our minds and senses would lie and bicker without respite, precipitating us into mental and sensory chaos. Time and space, mind and matter, would melt back into the haze of undifferentiated potential from which the universe is born.
  • Because the truth-Truth distinction is just one of certainty, i.e. probability, everybody who claims "truth" (t) implicitly claims some measure of "Truth" (T), or the attribute denoting inclusion in a formal system or recognizable class of facts or perceptions mutually related by an inferential schema or "scientific theory" (which is required to exhibit logical consistency and thus to tacitly incorporate the formal system of logic). That is, scientific truth t is just the assignment of a subunary, usually subjective probability to logical truth T; if t does not come down to a probabilistic stab at T and thus devolve to generative logical inference, then it is meaningless.

There is no empirical evidence for God, so there is no reason to believe one exists.

  • Basically, the Scientific Method says that science should be concerned with objective phenomena meeting at least two criteria: distinguishability, which means that they produce distinctive effects, and replicability, which means that they can be experimentally recreated and studied by multiple observers who compare their data and confirm each other’s findings. Unfortunately, God nowhere fits into this scheme. First, God is considered to be omnipresent even in monotheistic schemata, which means “distributed over reality as a whole” and therefore lacking any specific location at which to be “distinguished”. Second, there is such a thing as being too replicable. If something is distributed over reality, then it is present no matter where or when it is tested, and one cannot distinguish what is being “replicated”. And then, of course, we have the “Creator” aspect of God; if God is indeed the Creator of reality, then He need not make His works replicable by mere scientists. Thus, the God concept is unavoidably ambiguous in both spatial and temporal location, and no amount of scientific experimentation can overcome this logical difficulty.
  • In short, while the God concept may be amenable to empirical confirmation, e.g. through the discovery of vanishingly improbable leaps of biological evolution exceeding available genetic information, it is by definition resistant to scientific verification. God, like consciousness, is a predicate whose extended logical structure, including a supporting conceptual framework, exceeds what science is presently equipped to analyze. This, of course, means that arguments for or against God cannot be decided on empirical grounds, all but precluding a working relationship between the scientific and religious communities. Even the sincerest attempts to foster dialogue between the two camps are obstructed by the unrealistic expectations of each regarding the ability of the other to meet it on its own ground; whereas the obvious first step towards meaningful communication is a basis for mutual understanding, no amount of encouragement or monetary incentive can provide it for those whose languages stubbornly resist translation. Since this describes the relationship between science and religion, the first step toward reconciliation must be to provide a logical bridge between their internal languages…a master language in which both languages are embedded. The CTMU, conceived as the most general and comprehensive of logical languages, is designed to serve as that bridge.

What are the CTMU's implications?

  • To summarize, the CTMU is a theory of reality-as-mind, in principle spanning all of science while permitting a logical analysis of consciousness and other subjective predicates (this does not mean that it has “solved all of the problems” of science and the philosophy of mind, but only that it has laid the preliminary groundwork). It provides the logical framework of a TOE, yielding an enhanced model of spacetime affording preliminary explanations of cosmogony, accelerating cosmic expansion, quantum nonlocality, the arrow of time, and other physical and cosmological riddles that cannot be satisfactorily explained by other means. The CTMU penetrates the foundations of mathematics, describing the syntactic relationships among various problematic mathematical concepts in a reality-theoretic context. It is the culmination of the modern logico-linguistic philosophical tradition, reuniting the estranged couple consisting of (rationalistic) philosophy and (empirical) science. It provides an indispensable logical setting for Intelligent Design. And perhaps most importantly, the CTMU enables a logical description of God and an unprecedented logical treatment of theology, comprising a metaphysical framework in which to unite people of differing faiths and resolve religious conflicts.

Scientific paradoxes can only be scientifically resolved.

  • Any scientific paradox can be reduced to a logical paradox of the generic form "X = ~X" (or "X and not(X)"), and thus requires a logical resolution which restores consistency by restoring the tautology "not(X and not(X))" to uniform applicability on all scales of inference. This logical resolution may or may not be "scientific" in the sense of falsifiability.
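As an illustrative aside (not part of the original argument), the generic paradox form and its resolving tautology can be checked mechanically by enumerating truth values; a minimal sketch in Python:

```python
# Toy illustration: "X and not(X)" is unsatisfiable on every valuation,
# while its negation "not(X and not(X))" holds on every valuation,
# i.e. is a tautology of classical propositional logic.

def paradox(x: bool) -> bool:
    """The generic paradox form "X and not(X)"."""
    return x and not x

def resolution(x: bool) -> bool:
    """The resolving tautology "not(X and not(X))"."""
    return not (x and not x)

# No truth value of X satisfies the paradox ...
assert not any(paradox(x) for x in (True, False))
# ... and every truth value of X satisfies the resolution.
assert all(resolution(x) for x in (True, False))
print("paradox unsatisfiable; resolution is a tautology")
```

The exhaustive check over `(True, False)` is exactly a two-row truth table, which is all that classical consistency of the generic form requires.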

Reality does not need a closed-form explanation.

  • The MAP is implied by SCSPL closure. For if reality is not ultimately closed, then entities outside reality can be incorporated in real structures and processes; but in that case they are real, and thus inside reality. This contradiction implies that reality is ultimately closed with respect to all real relations and operations, including the definition operation as applied to reality itself. Hence, reality is ultimately semantically closed with respect to its own definition, and the MAP must hold. (Q.E.D.)

Does the CTMU rely on deduction or induction?

  • There are several kinds of "theory". The CTMU is certainly a theory in the general sense that it is a descriptive or explanatory function T which takes the universe U as an argument: T = T(U). However, instead of employing logical deduction to derive theorems from axioms, it employs logical induction to derive the overall structure of reality from certain necessary properties thereof (which are themselves deduced from the facts of existence and perception). That is, it derives the unique structure capable of manifesting all of the required properties.
  • Logical induction does not have to assume the uniformity of nature; it can be taken for granted that nature is uniformly logical. For if nature were anywhere illogical, then it would be inconsistent, and could not be coherently perceived or conceived. But if something cannot be coherently perceived or conceived, then it cannot be recognized as reality, and has no place in a theory of reality. So for theoretical purposes, reality exhibits logical homogeneity, and logical induction thus escapes Hume's problem of empirical induction. (Q.E.D.)
  • The CTMU elevates empirical induction to the model-theoretic level of reasoning, thus circumventing the problem of induction.

The "fine-tuning" of our universe is explained by the Weak Anthropic Principle.

  • The Weak Anthropic Principle goes halfway toward an answer by applying a probabilistic selection function: “the relationship has been logically selected by the fact that only certain kinds of universe can accommodate acts of observation and an observer like the questioner.” This is right as far as it goes, but omits the generative, pre-selective phase of the explanation … the part that accounts for the selection function and the domain to which it is applied. In this respect, the WAP is a bit like natural selection; it weeds the garden well enough, but is helpless to grow a single flower.
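The selection-only character of the WAP can be caricatured with a toy model (the ensemble, the parameter range, and the "observer-friendly" criterion below are all invented for illustration): given a pre-existing ensemble of candidate universes, anthropic selection merely filters it, saying nothing about where the ensemble or its probability measure comes from.

```python
import random

# Toy caricature of the Weak Anthropic Principle as a selection function.
random.seed(0)  # fixed seed so the sketch is reproducible

# Generative phase (simply taken for granted by the WAP): an ensemble of
# candidate universes, each summarized by a single made-up constant.
ensemble = [random.uniform(0.0, 1.0) for _ in range(10_000)]

# Selection phase (what the WAP actually supplies): keep only universes
# whose constant falls in a narrow, observer-friendly band.
def permits_observers(constant: float) -> bool:
    return 0.49 < constant < 0.51

observed = [u for u in ensemble if permits_observers(u)]

# The filter explains why observers find themselves inside the narrow
# band, but explains nothing about why the ensemble exists or how its
# measure was fixed -- the "generative, pre-selective phase" above.
print(f"{len(observed)} of {len(ensemble)} candidates permit observers")
```

Note that `permits_observers` does no generative work at all: delete the line that builds `ensemble` and the selection function has nothing to select from, which is the "weeds but cannot grow" point of the analogy.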