Talk:Generalized utility

From CTMU Wiki

I deleted this page because it was lacking in comprehensibility and execution. I then typed up one which is more informative and intelligible, which I then deleted because I am unsure whether I should post independent work on this wiki. I also share a similar backstory to Langan: I have been tested as immeasurably high on standard IQ tests, was stomped in the face by academia, and have generally faced far more problems than a young adult should face. Since a young age, it has been my goal to hold the universe within my mind and to help induce global human self-actualization. Through independent study and mentation, I arrived at the T.O.E., which is of course necessarily CTMU-isomorphic. I worked through and independently derived physics, mathematics, philosophical theories and the like as they were necessary to fulfill my goals. I independently derived the CTMU and all of Langan's other published works. However, as I am only 18 years old, my independent derivation of the CTMU makes me more impressive in terms of time constraint. I have come to this wiki because Langan's terminology is so effortlessly and seamlessly produced that I thought of the same exact words to express my T.O.E. as Langan did for the CTMU; so aside from a few differing terms, I merely translated my terminology into Langan's and voilà! An understanding of SCSPL which mirrors Chris's own. I am currently studying and developing SCSPL and working to unite the CTMU with real-world problems so that they may be resolved from within its tautological framework. My question is: if I understand Langan's material by total isomorphism with my T.O.E., and my T.O.E. is indistinguishable from it, would you accept my independent work on SCSPL (whether it be explanatory or original extensions thereof) to be featured as content on this wiki? SCSPLmetaphysician (talk) 04:13, 29 June 2014‎ (UTC)

Hello, SCSPLmetaphysician. I think we should be conservative in what we post to the articles. The focus should be on explaining Langan's concepts as he has described them, rather than on presenting new insights that go beyond what he has said. Our audience will include people new to the CTMU who are trying to understand it for the first time, so we should try to explain things at a very basic level for them. Actually the previous version of the article, contributed by User:Eire, was a nice beginner's introduction, so it might be good to restore it, perhaps with more advanced material at the end. If you like, you can put your independent work in your userspace (at User:SCSPLmetaphysician), where there is more leeway to take things in your own direction. Tim Smith (talk) 09:07, 29 June 2014 (UTC)

Major changes to articles shouldn't be conducted without a discussion and without notifying the relevant authors. This is a collaborative process where everyone works together to make something of use to the interested reader. There's a place for everyone's contributions, whether they be complex or simple, and Tim is a great editor overall for smoothing everything over. It's important that we be polite and communicative and take each other's views into consideration in the final product, no matter how much or how little expertise we individually have. Eire (talk) 02:50, 30 June 2014‎ (UTC)

Interesting, I'll keep that in mind. However, you might want to correct the spelling mistakes first. On the previous page it says "generalised" instead of "generalized". SCSPLmetaphysician (talk) 05:02, 4 July 2014‎ (UTC)
Thanks, I'll fix it. ("-ised" is an accepted variant spelling, but Langan uses "-ized", so I guess we should try to stay consistent with him.) Tim Smith (talk) 06:44, 6 July 2014 (UTC)

Constraint Satisfaction

“Commutative, idempotent groupoids and the constraint satisfaction problem ... The goal in a Constraint Satisfaction Problem (CSP) is to determine if there is a suitable assignment of values to variables subject to constraints on their allowed simultaneous values. The CSP provides a common framework in which many important combinatorial problems may be formulated—for example, graph colorability or propositional satisfiability. It is also of great importance in theoretical computer science, where it is applied to problems as varied as database theory and natural language processing.” http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.831.2240&rep=rep1&type=pdf
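As a concrete illustration of the quoted definition (my own toy example, not taken from the paper), here is a minimal backtracking solver for graph coloring posed as a CSP: the variables are nodes, the domains are colors, and the constraints forbid adjacent nodes from sharing a color.

```python
def color_graph(nodes, edges, colors):
    """Backtracking CSP solver for graph coloring: adjacent nodes
    must receive different colors."""
    assignment = {}

    def consistent(node, color):
        # A value is consistent if no already-assigned neighbor has it.
        return all(assignment.get(nbr) != color
                   for a, b in edges
                   for nbr in ((b,) if a == node else (a,) if b == node else ()))

    def backtrack(i):
        if i == len(nodes):
            return True
        node = nodes[i]
        for color in colors:
            if consistent(node, color):
                assignment[node] = color
                if backtrack(i + 1):
                    return True
                del assignment[node]   # undo and try the next value
        return False

    return assignment if backtrack(0) else None

# A 4-cycle is 2-colorable:
coloring = color_graph(["a", "b", "c", "d"],
                       [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")],
                       ["red", "blue"])
print(coloring)
```

Graph colorability and propositional satisfiability, the two examples the quote names, are both special cases of this assign-and-check scheme.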

“Idempotent Simple Algebras ... An operation f(x1,...,xn) on a set U is said to be idempotent if f(u,u,...,u) = u is true of any u ∈ U.” https://math.colorado.edu/~kearnes/Papers/simfix.pdf
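The idempotent law from the quote, f(u, ..., u) = u, can be checked directly on familiar binary operations; max and min satisfy it, while ordinary addition does not:

```python
# Checking the idempotent law f(u, u) = u on a small carrier set.
U = range(-5, 6)

assert all(max(u, u) == u for u in U)   # max is idempotent
assert all(min(u, u) == u for u in U)   # min is idempotent
assert not all(u + u == u for u in U)   # + fails (only u = 0 satisfies it)
```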

“Subalgebra Systems of Idempotent Entropic Algebras

This paper deals with some of the theory of a class of algebras, the idempotent entropic algebras, that enjoy a number of remarkable properties. Predominant among these is the structure of their subalgebra systems, described here by the concept of an IEO-semilattice. Examples of idempotent entropic algebras, detailed in the next section, include sets, semilattices, normal bands, vector spaces under the formation of centroids, and convex sets under the filling in of convex hulls. Corresponding examples of IEO-semilattices, detailed in Section 3, include semilattices, meet-distributive bisemilattices, semilattice-normal semirings, certain semilattice-ordered groupoids, and algebras modelling the semilattice-ordered sets of utilities of game theory and mathematical economics.” https://jdhsmith.math.iastate.edu/math/SSIEA.pdf
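A quick sketch of the two defining laws behind the quote, using the midpoint operation on the rationals (convex sets under the filling in of convex hulls is one of the abstract's examples; the midpoint is the simplest convex combination). The operation is idempotent and satisfies the entropic (medial) identity:

```python
from fractions import Fraction
from itertools import product

def mid(x, y):
    """Midpoint operation; exact rational arithmetic avoids float error."""
    return Fraction(x + y) / 2

vals = [Fraction(n) for n in range(4)]

# Idempotent law: mid(u, u) == u.
assert all(mid(u, u) == u for u in vals)

# Entropic (medial) law: mid(mid(a,b), mid(c,d)) == mid(mid(a,c), mid(b,d)).
assert all(mid(mid(a, b), mid(c, d)) == mid(mid(a, c), mid(b, d))
           for a, b, c, d in product(vals, repeat=4))
```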

“The Rényi Entropies operate in Positive Semifields ... Consequently, the transformed product has an inverse whence the structure is actually that of a positive semifield. Instances of this construction lead to idempotent analysis and tropical algebra as well as to less exotic structures. We conjecture that this is one of the reasons why tropical algebra procedures, like the Viterbi algorithm of dynamic programming, morphological processing, or neural networks are so successful in computational intelligence applications. But also, why there seem to exist so many procedures to deal with “information” at large.” https://arxiv.org/pdf/1710.04728.pdf
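The tropical algebra and Viterbi-style dynamic programming the quote mentions can be sketched with the (min, +) semiring: "addition" is min (note it is idempotent, min(x, x) = x) and "multiplication" is +. Shortest-path computation then becomes matrix multiplication over this semiring; the graph below is my own toy example.

```python
INF = float("inf")

def trop_matmul(A, B):
    """Matrix product over the (min, +) tropical semiring."""
    n, m, p = len(A), len(B), len(B[0])
    return [[min(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# Edge-weight matrix of a 3-node graph (INF = no edge, 0 on the diagonal).
W = [[0, 2, 5],
     [INF, 0, 1],
     [INF, INF, 0]]

# The tropical square of W gives cheapest path costs of length <= 2.
W2 = trop_matmul(W, W)
print(W2[0][2])  # → 3 (path 0→1→2 beats the direct edge of cost 5)
```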

“The Rényi entropy is important in ecology and statistics as index of diversity. The Rényi entropy is also important in quantum information, where it can be used as a measure of entanglement. In the Heisenberg XY spin chain model, the Rényi entropy as a function of α can be calculated explicitly by virtue of the fact that it is an automorphic function with respect to a particular subgroup of the modular group. In theoretical computer science, the min-entropy is used in the context of randomness extractors.” https://en.wikipedia.org/wiki/Rényi_entropy
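A small numerical illustration of the quoted facts, using the standard formula H_α(p) = log(Σ p_iᵅ)/(1 − α): as α → 1 it recovers the Shannon entropy, and as α → ∞ it tends to the min-entropy −log(max p_i) used in randomness extraction.

```python
import math

def renyi(p, alpha):
    """Rényi entropy of a probability vector p (natural log)."""
    if alpha == 1:                      # limiting case: Shannon entropy
        return -sum(x * math.log(x) for x in p if x > 0)
    return math.log(sum(x ** alpha for x in p)) / (1 - alpha)

p = [0.5, 0.25, 0.25]
min_entropy = -math.log(max(p))
assert abs(renyi(p, 1000) - min_entropy) < 1e-3   # large α ≈ min-entropy
assert renyi(p, 0.5) >= renyi(p, 2)               # H_α is nonincreasing in α
```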

“Renyi Entropy and Free Energy ... The Renyi entropy is a generalization of the usual concept of entropy which depends on a parameter q. In fact, Renyi entropy is closely related to free energy. Suppose we start with a system in thermal equilibrium and then suddenly divide the temperature by q. Then the maximum amount of work the system can do as it moves to equilibrium at the new temperature, divided by the change in temperature, equals the system's Renyi entropy in its original state. This result applies to both classical and quantum systems. Mathematically, we can express this result as follows: the Renyi entropy of a system in thermal equilibrium is minus the "1/q-derivative" of its free energy with respect to temperature. This shows that Renyi entropy is a q-deformation of the usual concept of entropy.” https://arxiv.org/abs/1102.2098
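The quoted relation between Rényi entropy and free energy can be checked numerically. This sketch uses a two-level system with my own choice of energies and temperature (units with k_B = 1); the identity follows from F(T) = −T log Z(T).

```python
import math

# Two-level system with energies E = [0, 1].
E = [0.0, 1.0]

def Z(T):
    """Partition function at temperature T."""
    return sum(math.exp(-e / T) for e in E)

def F(T):
    """Free energy F(T) = -T log Z(T)."""
    return -T * math.log(Z(T))

T, q = 1.0, 3.0
p = [math.exp(-e / T) / Z(T) for e in E]            # Gibbs state at T
renyi = math.log(sum(x ** q for x in p)) / (1 - q)  # Rényi entropy S_q

# Maximum work per unit temperature change after dividing T by q:
work_ratio = -(F(T / q) - F(T)) / (T / q - T)
assert abs(renyi - work_ratio) < 1e-12
```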

Entropy as a functor https://ncatlab.org/johnbaez/show/Entropy+as+a+functor

Category-Theoretic Characterizations of Entropy https://golem.ph.utexas.edu/category/2011/05/categorytheoretic_characteriza.html

“Several interesting physical systems abide by entropic functionals that are more general than the standard Tsallis entropy. Therefore, several physically meaningful generalizations have been introduced. The two most general of those are notably: Superstatistics, introduced by C. Beck and E. G. D. Cohen in 2003, and Spectral Statistics, introduced by G. A. Tsekouras and Constantino Tsallis in 2005.” https://en.wikipedia.org/wiki/Tsallis_entropy
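For reference, the standard Tsallis entropy that the quote generalizes is S_q(p) = (1 − Σ p_i^q)/(q − 1), which recovers the Shannon (Boltzmann-Gibbs) entropy in the limit q → 1; a quick numerical check:

```python
import math

def tsallis(p, q):
    """Tsallis entropy of a probability vector p (q != 1)."""
    return (1 - sum(x ** q for x in p)) / (q - 1)

p = [0.5, 0.3, 0.2]
shannon = -sum(x * math.log(x) for x in p)

# Taking q very close to 1 approximates the Shannon entropy.
assert abs(tsallis(p, 1.000001) - shannon) < 1e-4
```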

“Thermodynamic semirings ... In the current work, we examine this surprising occurrence, defining a Witt operad for an arbitrary information measure and a corresponding algebra we call a thermodynamic semiring. This object exhibits algebraically many of the familiar properties of information measures, and we examine in particular the Tsallis and Renyi entropy functions and applications to non-extensive thermodynamics and multifractals. We find that the arithmetic of the thermodynamic semiring is exactly that of a certain guessing game played using the given information measure.” http://www.its.caltech.edu/~matilde/ThermoSemirings.pdf
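My reading of the quoted construction, sketched numerically (an illustration, not the paper's code): deform the idempotent addition "min" by an entropy term. With the binary Shannon entropy S, the deformed sum x ⊕_β y = min_p [p·x + (1−p)·y − S(p)/β] has the closed form −(1/β)·log(e^{−βx} + e^{−βy}), the "soft min" of statistical mechanics, and recovers ordinary min(x, y) as β → ∞.

```python
import math

def S(p):
    """Binary Shannon entropy (natural log)."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log(p) - (1 - p) * math.log(1 - p)

def deformed_add(x, y, beta, steps=100000):
    """Entropy-deformed addition, minimized over a grid of mixing weights p."""
    return min(p / steps * x + (1 - p / steps) * y - S(p / steps) / beta
               for p in range(steps + 1))

beta = 2.0
closed_form = -math.log(math.exp(-beta * 1.0) + math.exp(-beta * 3.0)) / beta
assert abs(deformed_add(1.0, 3.0, beta) - closed_form) < 1e-4
```

The closed form is the standard log-sum-exp (Legendre) duality; the grid search just confirms it without any calculus.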

Entropy, holography, and p-adic geometry http://www.its.caltech.edu/~matilde/CMS75HolographySlides.pdf

Nonarchimedean Holographic Entropy from Networks of Perfect Tensors https://arxiv.org/abs/1812.04057

Entropy algebras and Birkhoff factorization https://arxiv.org/abs/1412.0247

“Syntactic Parameters and a Coding Theory Perspective on Entropy and Complexity of Language Families

We present a simple computational approach to assigning a measure of complexity and information/entropy to families of natural languages, based on syntactic parameters and the theory of error correcting codes.” https://www.mdpi.com/1099-4300/18/4/110
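A toy rendering of the quoted idea, with entirely hypothetical parameter vectors: treat each language as a binary vector of syntactic parameters and a family of languages as a code, then measure the family by its code rate and relative minimum Hamming distance.

```python
from itertools import combinations
import math

# Hypothetical binary syntactic-parameter vectors (illustrative data,
# not taken from the paper).
family = [
    (0, 1, 1, 0, 1),
    (1, 1, 0, 0, 1),
    (0, 0, 1, 1, 1),
]

n = len(family[0])               # code length = number of parameters
M = len(family)                  # number of codewords = languages
rate = math.log2(M) / n          # code rate R = log2(M)/n
d_min = min(sum(a != b for a, b in zip(u, v))
            for u, v in combinations(family, 2))

print(round(rate, 3), d_min / n)  # rate ≈ 0.317, relative distance 0.4
```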

“The free energy principle is a formal statement that explains how living and non-living systems remain in non-equilibrium steady-states by restricting themselves to a limited number of states. It establishes that systems minimise a free energy function of their internal states (not to be confused with thermodynamic free energy), which entail beliefs about hidden states in their environment. The implicit minimisation of free energy is formally related to variational Bayesian methods and was originally introduced by Karl Friston as an explanation for embodied perception in neuroscience, where it is also known as active inference.

The free energy principle explains the existence of a given system by modeling it through a Markov blanket that tries to minimize the difference between their model of the world and their sense and associated perception. This difference can be described as "surprise" and is minimized by continuous correction of the world model of the system. As such, the principle is based on the Bayesian idea of the brain as an “inference engine.” Friston added a second route to minimization: action. By actively changing the world into the expected state, systems can also minimize the free energy of the system. Friston assumes this to be the principle of all biological reaction. Friston also believes his principle applies to mental disorders as well as to artificial intelligence. AI implementations based on the active inference principle have shown advantages over other methods.

The free energy principle has been criticized for being very difficult to understand, even for experts. Discussions of the principle have also been criticized as invoking metaphysical assumptions far removed from a testable scientific prediction, making the principle unfalsifiable. In a 2018 interview, Friston acknowledged that the free energy principle is not properly falsifiable: "the free energy principle is what it is — a principle. Like Hamilton's principle of stationary action, it cannot be falsified. It cannot be disproven. In fact, there’s not much you can do with it, unless you ask whether measurable systems conform to the principle.”” https://en.wikipedia.org/wiki/Free_energy_principle
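The free-energy minimization described in the quote can be sketched for a minimal discrete model (my own toy example, not Friston's formulation): one binary hidden state s, one observation o, and a variational posterior q(s). The variational free energy F(q) = E_q[log q(s) − log p(o, s)] is minimized exactly when q(s) equals the true Bayesian posterior p(s|o), at which point F equals the "surprise" −log p(o).

```python
import math

p_s = [0.5, 0.5]                       # prior over hidden states
p_o_given_s = [[0.9, 0.1],             # likelihood p(o | s)
               [0.2, 0.8]]
o = 0                                  # the observed outcome

def free_energy(q):
    """Variational free energy for posterior q(s=0) = q, q(s=1) = 1-q."""
    F = 0.0
    for s in (0, 1):
        qs = q if s == 0 else 1 - q
        if qs > 0:
            F += qs * (math.log(qs) - math.log(p_s[s] * p_o_given_s[s][o]))
    return F

# Grid search for the free-energy-minimizing belief q(s=0).
best_q = min((i / 1000 for i in range(1, 1000)), key=free_energy)

# Exact Bayesian posterior p(s=0 | o=0) for comparison.
joint = [p_s[s] * p_o_given_s[s][o] for s in (0, 1)]
posterior = joint[0] / sum(joint)
assert abs(best_q - posterior) < 1e-2
```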