A mind can't be static. It must have a changing set of beliefs. At the least, a belief in an unreached end. This set is its universe. An idea that a mind can believe is a form: a pattern of sensation, not a thing or object, which is more complicated than a single form.
A thing exists to a mind no more or less than believed forms describe. If two things have the same form to a mind, then the mind sees only one thing.
Forms of forms. In the case of a thermostat, a mechanism in one position or another. Higher classes of mind require trees of distinctions within distinctions.
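The thermostat case can be sketched as a minimal mind with a single binary form; the names (`thermostat_step`, `set_point`) are illustrative, not from the text.

```python
# A thermostat as a minimal mind: one distinction (too cold or not),
# one end (the set point), one means (the heater).
def thermostat_step(temperature, set_point):
    too_cold = temperature < set_point   # the single form it can believe
    return "heater_on" if too_cold else "heater_off"

print(thermostat_step(17.0, 20.0))  # heater_on
```

A higher mind would replace the single comparison with a tree of such distinctions, each refining the one above it.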
A mind bothers to keep a distinction because its state—true or false, up or down, light or dark—coincides with success of an act. Perception as biography: a finite mind tends only to see what serves it.
Essential beliefs: ends and means. In higher minds, inferences. Each must fit into itself and every other, free to form the endless loops and spirals of deeply intelligent behavior.
A case of the value of recursion, of applying powerful ideas to themselves:
Find the patterns. Find the patterns in the patterns. Learn how to find patterns. Learn how to learn. Find the patterns of learning. Search for patterns. Search for search methods.
Write code that writes code.
Judge the value of your values.
Define the process of defining processes.
A replicator that can replicate itself.
Invent a machine that invents.
The evolution of evolvability.
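One of the items above, "write code that writes code," can be made concrete in a few lines; this is a minimal sketch using Python's `exec`, and `write_adder` is a hypothetical name.

```python
# A function that emits Python source for another function, then
# executes that source and returns the new function.
def write_adder(n):
    src = f"def add_{n}(x):\n    return x + {n}\n"
    namespace = {}
    exec(src, namespace)          # run the generated code
    return namespace[f"add_{n}"]  # hand back the function it defined

add_5 = write_adder(5)
print(add_5(10))  # 15
```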
An idle proof:
Assumed fact: Every thing is unique. (Not necessarily true for a trivial mind or for any mind at low levels of sensation.)
Inference: If two things aren't the same, then they aren't equal, at least not for all uses.
Inferred fact: Every thing is unequal. Nothing is equal to anything else, or even to itself past an instant.
Inference: Two sets or groups of things are only equal if every element in one equals an element in the other.
Inferred fact: No groups are equal. Every thing or set of things is unequal. Nothing is the same but so far as the differences seem not worth knowing. Belief in equality is at best a useful provisional lie.
In principle, any thing or set can be equalized, can be turned into another, but then each thing has an unequal cost to become equal.
Kinds of realities for minds to sense and control: discrete vs. continuous, finite vs. infinite, opaque vs. transparent, regular vs. random. The simplest assumption is that all minds ultimately live in one Universe of infinite dimensions each infinitely divisible.
Beliefs vs. engine. An engine moves a mind towards its goals. Beliefs define the goals, means, and state. Examples: DNA vs. a cell that translates DNA to protein, a belief database vs. a computer program that reads and updates the database.
In a brain, beliefs are inseparable from the engine. In a computer, an engine can be distinct and applied as easily to one standardized set of beliefs as another.
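The split can be sketched in a few lines: the beliefs are plain data, the engine a loop that reads and updates them. The belief keys (`end`, `state`, `means`) are illustrative assumptions.

```python
# Beliefs as data, engine as code that reads and updates the data.
def engine_step(beliefs):
    """One tick: if the end is unreached, apply the means to the state."""
    if beliefs["state"] != beliefs["end"]:
        beliefs["state"] = beliefs["means"](beliefs["state"])
    return beliefs

beliefs = {"end": 3, "state": 0, "means": lambda s: s + 1}
while beliefs["state"] != beliefs["end"]:
    engine_step(beliefs)
print(beliefs["state"])  # 3
```

The same engine can be handed a different belief set unchanged, which is the point of the separation; a brain affords no such swap.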
Everything is everything. Every one, however briefly, is at times wise, foolish, bold, shy, evil and virtuous. You distinguish things by their proportions. Sometimes a liar will tell the truth and the honest lie. Are the ideas "liar" and "honest" useless because they mistake the men for an instant?
Time. As a practical matter, a mind designed by a human must presume time, but a simple mind's beliefs needn't include the distinctions: past, present, future. It can live in an eternal now.
Senses can lie about time. A sense may conceal gaps, failures or the delay it adds. A mind may allow it to lie so well that you would even remember believing an idea before you really believed it.
Sensing senses. A mind can have beliefs it finds to be conditions of sets of beliefs. You can see without eyes. A philosophical distinction: a posteriori, knowledge gained through the senses, vs. a priori, knowledge gained without the senses. Not that knowledge is really known to be received through the senses; we merely find it useful to imagine so. The idea of a sense—eye, ear—is an invention. How does a mind discover a sense beyond what, if anything, the mind did to make it?
How to prevent, detect and resolve the inevitable corruption of beliefs? DNA examples: mutation, copy errors.
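One hedged sketch of detection and resolution, borrowing a checksum and majority-vote repair over redundant copies, loosely analogous to DNA repair; the scheme is illustrative, not the text's own mechanism.

```python
import hashlib

def digest(belief: str) -> str:
    """A fingerprint of a belief; any corruption changes it."""
    return hashlib.sha256(belief.encode()).hexdigest()

def resolve(copies):
    """Repair by majority vote over redundant copies."""
    return max(set(copies), key=copies.count)

stored = "fire burns"
check = digest(stored)
corrupted = "fire burps"
print(digest(corrupted) == check)  # False: corruption detected
print(resolve(["fire burns", "fire burns", "fire burps"]))  # fire burns
```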
Ends, inferences to ends. A mind can remake anything but the knowledge of what it should make.
Means to means, then more specific means.
Unawareness of x vs. the untruth of x. Not-hot does not equal cold. A mind can merely be not hot because it feels no temperature. The exclusivity of hot and cold is learned: a negative inference by which belief in one suppresses the other.
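The distinction can be sketched as a three-valued negation, with `None` standing for "no temperature felt"; `not_` is a hypothetical helper, not a standard function.

```python
from typing import Optional

def not_(x: Optional[bool]) -> Optional[bool]:
    """Negation that preserves unawareness: None (nothing felt)
    stays None rather than becoming a belief in the opposite."""
    return None if x is None else not x

print(not_(True))   # False: hot denied is not-hot
print(not_(None))   # None: unawareness of hot is not coldness
```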
Inferences to inferences. An inference from x to y causes a mind to believe y when it believes x. Inferences from and to beliefs may be beliefs themselves. This preferable form gives a mind some self-awareness of its thoughts.
Trees of binary inferences. With inferences to inferences, a mind can infer from “x and y” to z using a nested pair of simple inferences—an inference from x to an inference from y to z—instead of complicating the engine to support inferences from more than one belief.
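The nesting described above can be sketched as a propagation loop in which a rule's conclusion may itself be a rule; the representation (strings for beliefs, tuples for rules) is an assumption for the sketch.

```python
def infer(beliefs, inferences):
    """Fire single-premise rules to a fixed point. A rule is a pair
    (premise, conclusion); a conclusion that is itself a pair counts
    as a new rule, which is how "x and y imply z" nests as
    x -> (y -> z) without a multi-premise engine."""
    beliefs = set(beliefs)
    rules = set(inferences)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in list(rules):
            if premise not in beliefs:
                continue
            if isinstance(conclusion, tuple):      # conclusion is a rule
                if conclusion not in rules:
                    rules.add(conclusion)
                    changed = True
            elif conclusion not in beliefs:        # conclusion is a belief
                beliefs.add(conclusion)
                changed = True
    return beliefs

rules = [("x", ("y", "z"))]
print(sorted(infer({"x", "y"}, rules)))  # ['x', 'y', 'z']
print(sorted(infer({"x"}, rules)))       # ['x']: without y, z never follows
```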
Exhaust the basic permutations of the special forms.
Means to means: Allow a mind to expand its powers.
Goal to means: Allow a mind to know the need to expand its means.
Goal to goal: What use?
Means to goal: What use?
Goal to inference: The desire to know what, of some form, a mind can infer from a belief.
Means to inference: Allow an unconscious mechanism to produce complex inferences.
Inference to goal: If from a goal, captures a condition of an act, regardless of means. If not from a goal, captures a conditional end.
Inference to means: Allow a mind to perceive a conditional means. Differs from a means having conditions.
Inference to inference: Form complex inferences by combining simple ones. Equivalent to the logical AND operator.
Inference from goal: What use?
Inference from means: What use?
Human minds separate short term memories from long term. Is this distinction an inescapable feature of any mind or does it only reflect a technical weakness of brain minds? Long term memories may require formation of expensive physical connections or investment in another optimization. Might any mind benefit from an investment in lasting memories?
Forgetfulness. Most finite minds sense more beliefs than they can hold. How to choose what to keep? One method: a long-term bias that holds beliefs with consistent but sparse use and a short-term bias that gives recent beliefs a chance.
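The two-bias method can be sketched as an eviction policy; the class name and scoring formula are illustrative, not a tuned design.

```python
class Forgetful:
    """Bounded memory with the two biases described above: a long-term
    bias (uses per unit of age rewards consistent but sparse use) and a
    short-term bias (a bonus decaying with recency gives new beliefs a
    chance)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.clock = 0
        self.first = {}   # belief -> tick first sensed
        self.uses = {}    # belief -> times sensed
        self.last = {}    # belief -> tick last sensed

    def _score(self, belief):
        age = self.clock - self.first[belief] + 1
        long_term = self.uses[belief] / age
        short_term = 1.0 / (self.clock - self.last[belief] + 1)
        return long_term + short_term

    def sense(self, belief):
        self.clock += 1
        if belief not in self.uses:
            if len(self.uses) >= self.capacity:
                victim = min(self.uses, key=self._score)  # forget the least valued
                for table in (self.first, self.uses, self.last):
                    del table[victim]
            self.first[belief] = self.clock
            self.uses[belief] = 0
        self.uses[belief] += 1
        self.last[belief] = self.clock

m = Forgetful(2)
for b in ["a", "b", "a", "c"]:
    m.sense(b)
print(sorted(m.uses))  # ['a', 'c']: the once-used "b" was forgotten
```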
Bandwidth. How many sensations can a mind handle per second? How deeply? Can it reliably ignore more?
A pawn: "Your x isn't real because it has fuzzy edges." The speaker, parroting a malicious script, presumes a level of philosophical strictness applied only to ideas that she dislikes. What next? Does dawn disprove the day? For a mind in a non-trivial universe, almost everything has unclear limits. The real question: how best to draw lines and when to redraw?
Why doesn't a mind just delude itself into thinking that it reached its ends? Whence a desire for truth towards yourself? Especially when at bottom a non-trivial mind constantly presents simplifications, lies. Why not accept a faulty sensor or false beliefs? How to organize such resistance? A partial answer: redundant senses.
On average, x is y. What use is this hedge: on average? Every statement about things in the world has exceptions. Even the exceptions have exceptions. Every statement is an obligatory average, a claim that the exceptions aren't worth keeping in mind.
How a mind groups forms, how it generalizes or categorizes, is unlimited. How to choose? In terms of the mind's interests and what coincides with those. Is there an objective categorization? One true for every mind? No, categories, abstractions exist to simplify each unique mind's predictions.
I recall criticisms of the Dewey decimal system's Eurocentric allocation of the higher numbers. A mind's starting point: nothing exists, everything is the same. If a difference seems to change the effects of our acts, then we admit a distinction, a pair of categories and assign sensations to them. We shouldn't be surprised by the unrealism of politically compelled assumptions. Nor surprised by the impracticality of any idea inferred from them. If one took them seriously, the only correct categorization of reality would be x categories for x many infinite objects, without hierarchy.
You could average the interests of present humans, but most humans read too little to deserve inclusion. Then average the most literate humans, updated as demographics change? An engineering solution that dodges the politics: cluster objects according to a machine-made model of each human's interests. One downside: this may muffle discussion, because the categories the machine discovers may correspond to no word or expression, though the machine could confine itself to such categories. The top of your taxonomy would summarize your interests.
Demolish dumb ideas by taking them seriously, not that their proposers meant to help us with them.
The idea of a problem, like any idea, is a simplification. An unsolvable problem may only seem so. Study the input more closely. Find an overlooked distinction that allows a solution. Much of mind is the twin work of adding distinctions, seeing again how two things differed, or removing distinctions, seeing how two things are the same for your use.
Appearance vs. reality. Only appearance is real. Reality is a mind's useful fiction.
A mind is no better than its senses.