Well, here is the description of Ti that I posted in another thread. I hope it helps:
[Ti]
The most accurate way I can describe my Ti dom in simple words is something like this:
Fractalized cosmovision.
To explain this in a few more words:
Find the simplest uniform algorithm that could generate the whole of reality as it is known at any given time.
And now I'll develop this idea a bit more.
At instant T₀, you have in your mind a concrete amount of information that constitutes "reality". Ti at work analyzes every property of that "reality" and tries to find the "simplest universal rule" that could explain it. This rule must be uniform in the same sense that a continuous and differentiable function is in maths: the algorithm must be able to justify everything, from the smallest portion of reality to the whole system, with the same rule. No contradictions are allowed (one rule for one portion of reality and a different rule for a different portion, etc.).
At instant T₁, an extra amount of information arrives, so reality expands. The algorithm is tested: if it still works, fine. If not, it must be revised and changed so that it can handle the expanded reality. Like a Bayesian inference.
The algorithm is built from what is perceived as certain (this idea is malleable, not immutable) and used to evaluate ideas whose degree of certainty is unknown.
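As a loose analogy (mine, not part of the original description), the T₀/T₁ loop above can be sketched in code: keep an ordered list of candidate rules, simplest first, pick the first one consistent with all observations so far, and re-run the selection whenever new observations break the current rule. The candidate rules and data here are made-up toy examples.

```python
def consistent(rule, observations):
    """A rule survives only if it explains every observation, no exceptions."""
    return all(rule(x) == y for x, y in observations)

def simplest_rule(observations, candidates):
    """Candidates are ordered simplest-first, so the first consistent
    rule is the simplest one that covers the whole of 'reality' so far."""
    for rule in candidates:
        if consistent(rule, observations):
            return rule
    return None

# Hypothetical candidate rules, ordered by increasing complexity.
candidates = [
    lambda x: x,        # identity
    lambda x: x + 1,    # successor
    lambda x: 2 * x,    # doubling
    lambda x: x * x,    # squaring
]

# Instant T0: reality is a small set of (input, output) observations.
reality = [(2, 4)]
rule = simplest_rule(reality, candidates)   # doubling fits; x+1 does not

# Instant T1: reality expands, the current rule is tested and revised.
reality.append((3, 9))                      # doubling predicts 6, so it breaks
rule = simplest_rule(reality, candidates)   # squaring now fits both points
```

Once a rule is selected, it can also play the second role described above: evaluating new claims of unknown certainty, by checking whether they agree with the rule's prediction.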