Talk:Generalized utility

From CTMU Wiki
Jump to: navigation, search

I deleted this page because it was lacking in comprehension and execution. I then typed up one which is more informative and intelligible, which I then deleted because I am unsure whether I should have independent work on this wiki. I also share a similar back story to Langan, have been tested as immeasurably high on standard IQ tests, was stomped in the face by academia, and have been generally faced with far more problems than a young adult should face. Since a young age, it has been my goal to hold the universe within my mind and to help induce global human self-actualization. Through independent study and mentation, I arrived at the T.O.E., which is of course necessarily CTMU isomorphic. I made my run through and independently derived physics, mathematics, philosophical theories and the like as they were necessary to fulfill my goals. I independently derived the CTMU and all of Langan's other published works. However, as I am only 18 years old, my independent derivation of the CTMU makes me more impressive in terms of time constraint. I have come to this wiki because Langan's terminology is so effortlessly and seamlessly produced that I thought of the same exact words to express my T.O.E. as Langan did for the CTMU; so aside from a few differing terms, I merely translated my terminology into Langan's and voilà! An understanding of SCSPL which mirrors Chris's own. I am currently studying and developing SCSPL and working to unite the CTMU with real world problems so that they may be resolved from within its tautological framework. My question is: If I understand Langan's material by total isomorphism with my T.O.E., and my T.O.E. is indistinguishable from it, would you accept my independent work on SCSPL to be featured as content on this wiki? (Whether it be explanatory or original extensions thereof.) SCSPLmetaphysician (talk) 04:13, 29 June 2014‎ (UTC)

Hello, SCSPLmetaphysician. I think we should be conservative in what we post to the articles. The focus should be on explaining Langan's concepts as he has described them, rather than on presenting new insights that go beyond what he has said. Our audience will include people new to the CTMU who are trying to understand it for the first time, so we should try to explain things at a very basic level for them. Actually the previous version of the article, contributed by User:Eire, was a nice beginner's introduction, so it might be good to restore it, perhaps with more advanced material at the end. If you like, you can put your independent work in your userspace (at User:SCSPLmetaphysician), where there is more leeway to take things in your own direction. Tim Smith (talk) 09:07, 29 June 2014 (UTC)

Major changes to articles shouldn't be conducted without a discussion and without notifying the relevant authors. This is a collaborative process where everyone works together to make something of use to the interested reader. There's a place for everyone's contributions, whether they be complex or simple, and Tim is a great editor overall for smoothing everything over. It's important that we be polite and communicative and take each other's views into consideration in the final product, no matter how much or little expertise we individually have. Eire (talk) 02:50, 30 June 2014‎ (UTC)

Interesting, I'll keep that in mind. However, you might want to correct the spelling mistakes first. In the previous page it says "generalised" instead of generalized. SCSPLmetaphysician (talk) 05:02, 4 July 2014‎ (UTC)
Thanks, I'll fix it. ("-ised" is an accepted variant spelling, but Langan uses "-ized", so I guess we should try to stay consistent with him.) Tim Smith (talk) 06:44, 6 July 2014 (UTC)

Constraint Satisfaction

“Commutative, idempotent groupoids and the constraint satisfaction problem ... The goal in a Constraint Satisfaction Problem (CSP) is to determine if there is a suitable assignment of values to variables subject to constraints on their allowed simultaneous values. The CSP provides a common framework in which many important combinatorial problems may be formulated—for example, graph colorability or propositional satisfiability. It is also of great importance in theoretical computer science, where it is applied to problems as varied as database theory and natural language processing.” http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.831.2240&rep=rep1&type=pdf
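To make the quoted CSP definition concrete, here is a minimal brute-force sketch in Python; the graph, the color domains, and the solve_csp helper are all illustrative choices of mine, not anything from the cited paper.

```python
from itertools import product

# Minimal brute-force CSP sketch: enumerate every assignment of domain
# values to variables and return the first one satisfying all constraints.
def solve_csp(variables, domains, constraints):
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        if all(check(assignment) for check in constraints):
            return assignment
    return None

# Graph 3-coloring as a CSP: adjacent vertices must get different colors.
edges = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]
variables = ["A", "B", "C", "D"]
domains = {v: ["red", "green", "blue"] for v in variables}
constraints = [lambda a, e=e: a[e[0]] != a[e[1]] for e in edges]

coloring = solve_csp(variables, domains, constraints)
# coloring satisfies every edge constraint: the triangle A-B-C forces
# three distinct colors, and D only needs to differ from C.
```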

“Idempotent Simple Algebras ... An operation f(x1,...,xn) on a set U is said to be idempotent if f(u,u,...,u) = u is true of any u ∈ U.” https://math.colorado.edu/~kearnes/Papers/simfix.pdf
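The quoted definition is easy to test directly; this small check (the is_idempotent helper is just an illustrative name) verifies it for binary operations:

```python
# Direct check of the quoted definition: f is idempotent when
# f(u, u, ..., u) == u holds for every u in the underlying set U.
def is_idempotent(f, universe):
    return all(f(u, u) == u for u in universe)

U = range(-5, 6)
assert is_idempotent(min, U)                     # min(u, u) == u
assert is_idempotent(max, U)                     # max(u, u) == u
assert not is_idempotent(lambda x, y: x + y, U)  # u + u != u in general
```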

“Subalgebra Systems of Idempotent Entropic Algebras

This paper deals with some of the theory of a class of algebras, the idempotent entropic algebras, that enjoy a number of remarkable properties. Predominant among these is the structure of their subalgebra systems, described here by the concept of an IEO-semilattice. Examples of idempotent entropic algebras, detailed in the next section, include sets, semilattices, normal bands, vector spaces under the formation of centroids, and convex sets under the filling in of convex hulls. Corresponding examples of IEO-semilattices, detailed in Section 3, include semilattices, meet-distributive bisemilattices, semilattice-normal semirings, certain semilattice-ordered groupoids, and algebras modelling the semilattice-ordered sets of utilities of game theory and mathematical economics.” https://jdhsmith.math.iastate.edu/math/SSIEA.pdf

“The Rényi Entropies operate in Positive Semifields ... Consequently, the transformed product has an inverse whence the structure is actually that of a positive semifield. Instances of this construction lead to idempotent analysis and tropical algebra as well as to less exotic structures. We conjecture that this is one of the reasons why tropical algebra procedures, like the Viterbi algorithm of dynamic programming, morphological processing, or neural networks are so successful in computational intelligence applications. But also, why there seem to exist so many procedures to deal with “information” at large.” https://arxiv.org/pdf/1710.04728.pdf
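As a concrete taste of the tropical algebra mentioned above, here is a sketch of min-plus matrix multiplication, the operation underlying Viterbi-style dynamic programming; the graph and its weights are made up for illustration:

```python
# Tropical (min-plus) sketch: "addition" is min and "multiplication" is +.
# One min-plus matrix product extends cheapest-path lengths by one edge,
# the mechanism behind Viterbi-style dynamic programming.
INF = float("inf")

def min_plus(A, B):
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Edge-weight matrix of a small directed graph (INF = no direct edge).
W = [[0, 4, INF],
     [INF, 0, 1],
     [2, INF, 0]]

D = min_plus(W, W)  # cheapest paths using at most two edges
# D[0][2] == 5: the route 0 -> 1 -> 2 (4 + 1) beats the missing direct edge.
```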

“The Rényi entropy is important in ecology and statistics as index of diversity. The Rényi entropy is also important in quantum information, where it can be used as a measure of entanglement. In the Heisenberg XY spin chain model, the Rényi entropy as a function of α can be calculated explicitly by virtue of the fact that it is an automorphic function with respect to a particular subgroup of the modular group. In theoretical computer science, the min-entropy is used in the context of randomness extractors.” https://en.wikipedia.org/wiki/Rényi_entropy

“Renyi Entropy and Free Energy ... The Renyi entropy is a generalization of the usual concept of entropy which depends on a parameter q. In fact, Renyi entropy is closely related to free energy. Suppose we start with a system in thermal equilibrium and then suddenly divide the temperature by q. Then the maximum amount of work the system can do as it moves to equilibrium at the new temperature, divided by the change in temperature, equals the system's Renyi entropy in its original state. This result applies to both classical and quantum systems. Mathematically, we can express this result as follows: the Renyi entropy of a system in thermal equilibrium is minus the "1/q-derivative" of its free energy with respect to temperature. This shows that Renyi entropy is a q-deformation of the usual concept of entropy.” https://arxiv.org/abs/1102.2098
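For readers who want to compute with it, here is a minimal sketch of the Rényi entropy, assuming natural logarithms and a finite distribution; it illustrates the quoted claim that the usual (Shannon) entropy is recovered in the limit as the parameter goes to 1:

```python
import math

# Sketch of the Renyi entropy (in nats) of a finite distribution p:
#   H_a(p) = log(sum_i p_i ** a) / (1 - a),
# with the Shannon entropy recovered in the limit a -> 1.
def renyi_entropy(p, alpha):
    if abs(alpha - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return math.log(sum(pi ** alpha for pi in p)) / (1 - alpha)

p = [0.5, 0.25, 0.25]
shannon = renyi_entropy(p, 1.0)       # about 1.0397 nats
near_one = renyi_entropy(p, 1.0001)   # approaches the Shannon value
# H_a is non-increasing in a: H_0.5(p) > H_2(p) for this p.
```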

Entropy as a functor https://ncatlab.org/johnbaez/show/Entropy+as+a+functor

Category-Theoretic Characterizations of Entropy https://golem.ph.utexas.edu/category/2011/05/categorytheoretic_characteriza.html

“Several interesting physical systems abide by entropic functionals that are more general than the standard Tsallis entropy. Therefore, several physically meaningful generalizations have been introduced. The two most generals of those are notably: Superstatistics, introduced by C. Beck and E. G. D. Cohen in 2003 and Spectral Statistics, introduced by G. A. Tsekouras and Constantino Tsallis in 2005.” https://en.wikipedia.org/wiki/Tsallis_entropy
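The Tsallis entropy mentioned here admits an equally short sketch, again in nats and with an illustrative distribution; as q approaches 1 it too recovers the Shannon entropy:

```python
import math

# Sketch of the Tsallis entropy S_q(p) = (1 - sum_i p_i ** q) / (q - 1);
# the limit q -> 1 gives the Shannon entropy (in nats).
def tsallis_entropy(p, q):
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

p = [0.5, 0.25, 0.25]
s1 = tsallis_entropy(p, 1.0)   # Shannon limit, about 1.0397 nats
s2 = tsallis_entropy(p, 2.0)   # (1 - 0.375) / 1 = 0.625
```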

“Thermodynamic semirings ... In the current work, we examine this surprising occurrence, defining a Witt operad for an arbitrary information measure and a corresponding algebra we call a thermodynamic semiring. This object exhibits algebraically many of the familiar properties of information measures, and we examine in particular the Tsallis and Renyi entropy functions and applications to non- extensive thermodynamics and multifractals. We find that the arithmetic of the thermodynamic semiring is exactly that of a certain guessing game played using the given information measure.” http://www.its.caltech.edu/~matilde/ThermoSemirings.pdf
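The deformed arithmetic behind a thermodynamic semiring can be sketched numerically. Assuming the binary Shannon entropy as the information measure (one special case treated in the paper), the deformed addition below is a crude grid-search approximation with names of my own choosing; it matches the closed-form log-sum-exp and tends to min(x, y), i.e. tropical addition, as the inverse temperature beta grows:

```python
import math

# Entropy-deformed addition, assuming the binary Shannon entropy S:
#   x (+)_beta y = min over 0 < p < 1 of [ p*x + (1-p)*y - S(p)/beta ]
# deformed_add approximates the minimum by a grid search.
def deformed_add(x, y, beta, n=2000):
    best = float("inf")
    for i in range(1, n):
        p = i / n
        s = -(p * math.log(p) + (1 - p) * math.log(1 - p))
        best = min(best, p * x + (1 - p) * y - s / beta)
    return best

# For Shannon entropy the minimum has a closed "log-sum-exp" form:
def closed(x, y, beta):
    return -math.log(math.exp(-beta * x) + math.exp(-beta * y)) / beta

# As beta grows, both tend to min(x, y): ordinary tropical addition.
```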

Entropy, holography, and p-adic geometry http://www.its.caltech.edu/~matilde/CMS75HolographySlides.pdf

Nonarchimedean Holographic Entropy from Networks of Perfect Tensors https://arxiv.org/abs/1812.04057

Entropy algebras and Birkhoff factorization https://arxiv.org/abs/1412.0247

“Syntactic Parameters and a Coding Theory Perspective on Entropy and Complexity of Language Families

We present a simple computational approach to assigning a measure of complexity and information/entropy to families of natural languages, based on syntactic parameters and the theory of error correcting codes.” https://www.mdpi.com/1099-4300/18/4/110

“The free energy principle is a formal statement that explains how living and non-living systems remain in non-equilibrium steady-states by restricting themselves to a limited number of states. It establishes that systems minimise a free energy function of their internal states (not to be confused with thermodynamic free energy), which entail beliefs about hidden states in their environment. The implicit minimisation of free energy is formally related to variational Bayesian methods and was originally introduced by Karl Friston as an explanation for embodied perception in neuroscience, where it is also known as active inference.

The free energy principle explains the existence of a given system by modeling it through a Markov blanket that tries to minimize the difference between their model of the world and their sense and associated perception. This difference can be described as "surprise" and is minimized by continuous correction of the world model of the system. As such, the principle is based on the Bayesian idea of the brain as an “inference engine.” Friston added a second route to minimization: action. By actively changing the world into the expected state, systems can also minimize the free energy of the system. Friston assumes this to be the principle of all biological reaction. Friston also believes his principle applies to mental disorders as well as to artificial intelligence. AI implementations based on the active inference principle have shown advantages over other methods.

The free energy principle has been criticized for being very difficult to understand, even for experts. Discussions of the principle have also been criticized as invoking metaphysical assumptions far removed from a testable scientific prediction, making the principle unfalsifiable. In a 2018 interview, Friston acknowledged that the free energy principle is not properly falsifiable: "the free energy principle is what it is — a principle. Like Hamilton's principle of stationary action, it cannot be falsified. It cannot be disproven. In fact, there’s not much you can do with it, unless you ask whether measurable systems conform to the principle.”” https://en.wikipedia.org/wiki/Free_energy_principle
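A minimal discrete sketch of the variational free energy behind this principle, assuming a two-state hidden variable and a single observation (all numbers illustrative, not from any Friston model):

```python
import math

# Variational free energy for a distribution q over hidden states:
#   F(q) = sum_s q(s) * (log q(s) - log p(o, s))
#        = KL(q || p(s|o)) - log p(o)  >=  -log p(o)   ("surprise"),
# with equality exactly when q equals the true posterior p(s|o).
prior = [0.5, 0.5]          # p(s) over two hidden states
likelihood = [0.9, 0.2]     # p(o|s) for the single observed outcome o

joint = [prior[s] * likelihood[s] for s in range(2)]  # p(o, s)
evidence = sum(joint)                                 # p(o)
posterior = [j / evidence for j in joint]             # p(s|o)

def free_energy(q):
    return sum(q[s] * (math.log(q[s]) - math.log(joint[s])) for s in range(2))

surprise = -math.log(evidence)
# free_energy(posterior) equals surprise; any other q scores higher.
```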

Deleted Content on Maximization of Entropy and Utility (provided by Mars T.)

http://lawofmaximumentropyproduction.com/

"(E)volution on planet Earth can be seen as an epistemic process by which the global system as a whole learns to degrade the cosmic gradient at the fastest possible rate given the constraints"

- Rod Swenson, (1989), Emergent Evolution and the Global Attractor: The Evolutionary Epistemology of Entropy Production Maximization

THE LAW OF MAXIMUM ENTROPY PRODUCTION (LMEP, MEP)

The Law of Maximum Entropy Production (LMEP or MEP) was first recognized by American scientist Rod Swenson in 1988, and articulated by him in its current form (below) in 1989. The principal circumstance that led Swenson to the discovery and specification of the law was the recognition by him and others of the failure of the then popular view of the second law, or the entropy principle, as a 'law of disorder'. In contrast to this view, where transformations from disorder to order were taken to be 'infinitely improbable', such transformations are seen to characterize planetary evolution as a whole and happen regularly in the real world, predictably and ordinarily, with a 'probability of one' (6). The Law of Maximum Entropy Production thus has deep implications for evolutionary theory, culture theory, macroeconomics, human globalization, and more generally the time-dependent development of the Earth as an ecological planetary system as a whole.

It is given as follows:

THE LAW OF MAXIMUM ENTROPY PRODUCTION

A system will select the path or assemblage of paths out of available paths that minimizes the potential or maximizes the entropy at the fastest rate given the constraints (2, 3, 4, 5, 6, 7, 8, 9, 11, 12, 13).

Discussion:

1. ENERGY, ENTROPY, GRADIENTS AND

THE FIRST TWO LAWS OF THERMODYNAMICS

"(T)he laws of thermodynamics", as Swenson (12) has stated, "are special laws that sit above the other laws of physics as laws about laws or laws on which the other laws depend."

a) THE FIRST LAW (the 'law of energy conservation') says that all real world processes involve transformations of energy and that the total amount of energy is always conserved. It expresses time-translation symmetry, without which there could be no other laws at all (12).

b) THE SECOND LAW ('the entropy principle'), as understood classically by Clausius and Thomson, captures the idea that the world is inherently active and that whenever an energy distribution is out of equilibrium a gradient of a potential (or thermodynamic force) exists that the world acts to dissipate or minimize. Whereas the first law expresses that which remains the same, or time-symmetric, the second law expresses the fundamental broken symmetry, or time-asymmetry, of the world. Clausius coined the term "entropy" to refer to the dissipated potential, and the second law in its most general form thus states that the world acts spontaneously to minimize potentials (or equivalently maximize the entropy) (12).

The active nature of the Second Law is intuitively easy to grasp and demonstrate. If a cup of hot liquid, for example, is placed in a colder room, a gradient of a potential exists, and a flow of heat is spontaneously produced from the cup to the room until the potential is minimized or dissipated (the entropy is maximized), at which point the temperatures are the same and all flows stop (the cup/room system is 'in thermal equilibrium').
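The cup-and-room example can be simulated in a few lines. This is a hedged numerical sketch with arbitrary heat capacities and conductance, showing the heat flow stopping at equilibrium while the total entropy produced stays positive:

```python
# Numerical sketch of the cup-in-a-room example (all values illustrative):
# heat flows from hot to cold, each step produces entropy
# dS = dQ * (1/T_room - 1/T_cup) > 0, and the flow stops once the
# temperatures equalize.
T_cup, T_room = 350.0, 290.0   # kelvin, illustrative
C_cup, C_room = 1.0, 10.0      # heat capacities, arbitrary units
k = 0.05                       # thermal conductance, arbitrary units

entropy_produced = 0.0
for _ in range(5000):
    dQ = k * (T_cup - T_room)                       # heat leaving the cup
    entropy_produced += dQ * (1 / T_room - 1 / T_cup)
    T_cup -= dQ / C_cup
    T_room += dQ / C_room

# Both temperatures converge to the common equilibrium value
# (C_cup*350 + C_room*290) / (C_cup + C_room) = 3250/11 K,
# and entropy_produced ends strictly positive.
```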

2. WHAT THE LAW OF MAXIMUM ENTROPY PRODUCTION SAYS THAT THE SECOND LAW DOES NOT

Whereas the Second Law says that the world acts to minimize potentials, it does not say which of the available paths the world will take to do this. This is the question the Law of Maximum Entropy Production answers. (Note: the Law of Maximum Entropy Production does not contradict or replace the second law; it is another law in addition to it.) What Swenson pointed out as the answer to this question, as above, was that it will "select the path or assemblage of paths out of available paths that minimizes the potential or maximizes the entropy at the fastest rate given the constraints" (12). Like the classical view of the Second Law, although LMEP has profound and remarkable consequences, it is actually simple to grasp and empirically demonstrate.

Swenson & Turvey (6) provided the example of a warm mountain cabin in cold, snow-covered woods, with the fire that provided the heat having burned out. Under these circumstances there is a temperature gradient between the warm cabin and the cold woods. The second law tells us that over time the gradient or potential will be dissipated through the walls or the cracks around the windows and door until the cabin is as cold as the outside and the system is in equilibrium. We know empirically, though, that if we open a window or a door, a portion of the heat will now rush out the door or window and not just through the walls or cracks. In short, whenever we remove a constraint to the flow (such as a closed window), the cabin/environment system will exploit the new and faster pathway, thereby increasing the rate at which the potential is minimized. Wherever it has the opportunity to minimize or 'destroy' the gradient of the potential (maximize the entropy) at a faster rate, it will do so, exactly as the Law of Maximum Entropy Production says. Namely, it will "select the pathway or assembly of pathways that minimizes the potential or maximizes the entropy at the fastest rate given the constraints". Once this principle is grasped, examples are easy to recognize and show in everyday life.
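The cabin example reduces to a one-line rate comparison. With illustrative temperatures and conductances of my own choosing, opening the window (adding a parallel pathway) raises the instantaneous entropy production rate:

```python
# Sketch of the cabin example with illustrative numbers: the entropy
# production rate is sigma = Q_dot * (1/T_out - 1/T_in), where
# Q_dot = k * (T_in - T_out) and k is the total conductance of all
# open pathways. Opening the window adds a parallel pathway.
T_in, T_out = 295.0, 265.0     # kelvin, illustrative
k_walls, k_window = 0.8, 4.0   # conductances, arbitrary units

def entropy_rate(k):
    q_dot = k * (T_in - T_out)
    return q_dot * (1 / T_out - 1 / T_in)

closed_rate = entropy_rate(k_walls)            # window shut
open_rate = entropy_rate(k_walls + k_window)   # window open
# open_rate > closed_rate: the gradient is destroyed faster once the
# constraint (the closed window) is removed.
```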

The profound implications of this deceptively simple law are discussed in numerous other places (5, 6, 7, 9, 13). (For additional discussion also see: (i) The problem with Boltzmann's view of the Second Law, (ii) Why the world is in the order production business, (iii) Entropy production and order from disorder, (iv) Planetary evolution and entropy production, (v) Ecological relations and entropy production.)

Feedback regarding the relationship between Utility and Entropy

Utility and Entropy https://www.jstor.org/stable/25055372

Disutility Entropy in Multi-attribute Utility Analysis https://www.sciencedirect.com/science/article/abs/pii/S0360835222002595

Is Utility Theory so Different from Thermodynamics? https://www.santafe.edu/research/results/working-papers/is-utility-theory-so-different-from-thermodynamics

Utility function estimation: The entropy approach https://www.sciencedirect.com/science/article/abs/pii/S0378437108002550

Number theory and entropy https://empslocal.ex.ac.uk/people/staff/mrwatkin/zeta/NTentropy.htm

“Lattice duality: The origin of probability and entropy

This paper shows how a straightforward generalization of the zeta function of a distributive lattice gives rise to bi-valuations that represent degrees of belief in Boolean lattices of assertions and degrees of relevance in the distributive lattice of questions. The distributive lattice of questions originates from Richard T. Cox's definition of a question as the set of all possible answers, which I show is equivalent to the ordered set of down-sets of assertions. Thus the Boolean lattice of assertions is shown to be dual to the distributive lattice of questions in the sense of Birkhoff's Representation Theorem. A straightforward correspondence between bi-valuations generalized from the zeta functions of each lattice gives rise to bi-valuations that represent probabilities in the lattice of assertions and bi-valuations that represent entropies and higher-order informations in the lattice of questions.” https://empslocal.ex.ac.uk/people/staff/mrwatkin/zeta/knuth2.pdf
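The zeta function of a lattice that Knuth generalizes can be written down directly for a small example. This sketch (code and names mine, not from the paper) builds the zeta matrix of the Boolean lattice of subsets of a two-element set and checks that its matrix inverse is the classical Möbius function:

```python
from itertools import combinations

# The zeta function of a lattice is zeta(x, y) = 1 if x <= y else 0.
# For the Boolean lattice of subsets of {a, b} ordered by inclusion, its
# matrix inverse is the Mobius function mu(A, B) = (-1) ** |B \ A|,
# the engine of inclusion-exclusion.
ground = ("a", "b")
subsets = [frozenset(c) for r in range(3) for c in combinations(ground, r)]

zeta = [[1 if A <= B else 0 for B in subsets] for A in subsets]
mobius = [[(-1) ** len(B - A) if A <= B else 0 for B in subsets]
          for A in subsets]

n = len(subsets)
identity_check = [[sum(zeta[i][k] * mobius[k][j] for k in range(n))
                   for j in range(n)] for i in range(n)]
# identity_check is the 4x4 identity matrix: zeta * mobius == I.
```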