Instrumental Calculus

Disclaimer:

This is soooo speculative. Believe at your own peril (protective goggles don't help).

At present this document is even more disjointed than my usual stream of semi-consciousness. I'm juggling several concepts while frantically attempting to fit them together. The major sections below may change places (or vanish completely) if I ever discover the "One True Direction" in which I need to wander.

At present we are not actually making sausage here; we are just dismantling the precursors and trying to reassemble the resulting components into something that still can't fly.


Introduction

Instrumentation can be used mathematically if the terms used can be quantified.

This isn't actually different from any other language, but I am exploring ways to make math and logic more a part of the ethos of the language.

What follows are explorations of areas that I believe can impact the use of formal semantics.


Something Theory-like

The universe has two components, things and Predicates. We divide things into actors (Yang, Subject, Entity) and targets (Yin, Object). Predicates can be active (Verb, Change) or inactive (Relationship, Description).

Usually we think of the subject changing (or defining) the object, but it is possible for the subject to change itself, and for the object to be changed by an unknown subject or by the environment (or by the universe, if fate exists). If the subject changes itself, the object may be present and changed, NOT present and NOT changed, or either of the other two combinations. If the object is NOT present and still changed, it is the result of telepathy, coincidence, or an error in estimating the impact of the change.

The result of this wool gathering is the following rules for Instrumentation sentence construction.

Form                                                  Connotation
----                                                  -----------
Subject, Predicate and Object all present             Subject changes or defines the Object
Subject part of Predicate ('person' from the verb)    Same as above
Subject missing (because predicate isn't a verb)      Object defined by unknown subject or environment
    "Because dog."                                    Dog is the reason for the current situation.
    "Transcending pig."                               Pig became something greater (like bacon).
Predicate missing                                     Subject defines or equals the Object: zero copula(?)
    "Hamburger dog."                                  Dog is made of (or full of) Hamburger.
    "Dog hamburger."                                  Hamburger is made of (or for) a dog.
Object missing                                        Subject changes or defines itself
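The table above can be sketched as a small decision function. This is my own modeling, not part of any Instrumentation specification; for simplicity it treats the "Subject part of Predicate" row the same as the fully explicit form.

```python
def connotation(subject_present, predicate_present, object_present):
    """Map which sentence parts are present to the connoted reading."""
    if subject_present and predicate_present and object_present:
        return "subject changes or defines the object"
    if not subject_present and object_present:
        # e.g. "Because dog." / "Transcending pig."
        return "object defined by unknown subject or environment"
    if subject_present and not predicate_present and object_present:
        # e.g. "Hamburger dog." / "Dog hamburger."
        return "subject defines or equals the object (zero copula)"
    if subject_present and not object_present:
        return "subject changes or defines itself"
    return "ill-formed"

# "Because dog." has no subject, so the environment is implied:
assert connotation(False, True, True) == (
    "object defined by unknown subject or environment")
```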



A Survey of the Art

Below are several frameworks and techniques that model, manipulate and communicate information, and thus control the real-world events, entities and objects described by that information.

Methodology                   Actor                         Operation              Target
-----------                   -----                         ---------              ------
Instrumentation               Subject                       Predicate              Object
Linguistics                   Subject                       Verb                   Object
RDF                           Subject                       Predicate              Object
Object/Entity Relationship    Object/Entity                 Relationship           Object/Entity
Mathematics (practical)       Quantification                Operator               Quantification
   Harmonic motion            Velocity                      Equals                 Amplitude * Cos(theta)
   Pythagorean Theorem        A^2 + B^2                     Equals, if theta = 90  C^2
   Law of Cosines             A^2 + B^2 - 2AB*Cos(theta)    Equals                 C^2
Newtonian Physics             Force                         Equals                 Mass * Acceleration
I Ching                       Yang                          Change                 Yin
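The survey can be tabulated as uniform (actor, operation, target) triples. This is my own restatement of the table above, not an established mapping; the row names are taken directly from it.

```python
# Each methodology's statement form rendered as an (actor, operation, target) triple.
SURVEY = {
    "Instrumentation":            ("Subject", "Predicate", "Object"),
    "Linguistics":                ("Subject", "Verb", "Object"),
    "RDF":                        ("Subject", "Predicate", "Object"),
    "Object/Entity Relationship": ("Object/Entity", "Relationship", "Object/Entity"),
    "Mathematics (practical)":    ("Quantification", "Operator", "Quantification"),
    "Newtonian Physics":          ("Force", "Equals", "Mass * Acceleration"),
    "I Ching":                    ("Yang", "Change", "Yin"),
}

def as_sentence(methodology):
    """Render one row of the survey as a schematic sentence."""
    actor, operation, target = SURVEY[methodology]
    return f"{actor} --{operation}--> {target}"

assert as_sentence("I Ching") == "Yang --Change--> Yin"
```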

The mathematical formulas shown above tend to have multiple 'Operators'. I picked the operator that I believe to be the most important as the major predicate (#7700). Algebraically, any operator can be considered the major predicate.
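As a quick numeric sanity check (my own, not from the document): the standard Law of Cosines, c^2 = a^2 + b^2 - 2ab*cos(theta) with theta the angle between sides a and b, reduces to the Pythagorean Theorem when theta is 90 degrees, since cos(90°) = 0.

```python
import math

def c_squared(a, b, theta_degrees):
    """Law of Cosines: square of the side opposite the included angle theta."""
    return a**2 + b**2 - 2 * a * b * math.cos(math.radians(theta_degrees))

# A 3-4-5 right triangle: at theta = 90 the cosine term vanishes, c^2 = 25.
assert math.isclose(c_squared(3, 4, 90), 25.0)
```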


Adventures in Translation

The subject and the object can be difficult to identify in a natural language. In Instrumentation, the subject is the thing that acts, causes changes, describes, or has a relationship with the object. If a relationship is mutual or bidirectional, the subject and object can be reversed or combined. For example:

Dog likes pig.    Pig likes dog.    Dog and pig are friends.

The third sentence is most efficient. The first two sentences would both be needed to describe the complete relationship while the third sentence can stand alone.

If these were formal axioms in some domain axiomatization (for a system of inter-operating applications, an expert system, or a knowledge base), all three might be required, or the relationship between 'like' and 'friend' might be defined elsewhere.
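One way such an axiomatization might look, as a minimal sketch (the predicate names 'likes' and 'friends' are my own hypothetical choices): the symmetric 'friends' relation is derived from two directed 'likes' facts.

```python
# A toy knowledge base of directed facts.
likes = {("dog", "pig"), ("pig", "dog")}

def friends(a, b):
    """One possible axiom: a and b are friends iff each likes the other."""
    return (a, b) in likes and (b, a) in likes

assert friends("dog", "pig")
assert friends("pig", "dog")  # symmetric: argument order does not matter
```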

English                   Instrumentation text                              Instrumentation glyphs
-------                   --------------------                              ----------------------
The dog is blue           blue .| dog is feature of                         #648D #ED20A6
The dog is sad            sadness .| dog describes                          #EDD #C320A6
the sad dog barks         dog .| sad .| dog noise it just did (once)        #20A6 #FDD #B52000
                          (Hypodescription block not installed yet)
dog and pig are friends   dog .| pig and .| friendly they are still (many)  #20A6 #7108A6 #BF01AF
A^2 + B^2 = C^2           side exponentiate .| 2 .| most small .| plus .|   #6C1264 #401 #B607 #620000
                          side exponentiate .| 2 .| average .|              #6C1264 #401 #46
                          equals .|                                         #600000
                          side exponentiate .| 2 .| most large .            #6C1264 #401 #6907
                          (no 'Hypotenuse' term, yet)                       (Comment: "bla, bla, bla")



To the Depths and Back again

The standard binary counting pattern may be an inefficient way to traverse the collection of terms found in any block. I've been practicing the Butterfly mnemonic while running through the associated finger positions (1, 2, 3, 4, 5, etc.) and it's easy to remember, but I think there is a better pattern.

The reason that this is important is because Instrumentation can be viewed as a collection of banks of sixteen and blocks of 256 (which is 16 X 16) things. If you can run through a group of sixteen contiguous indexes efficiently with one hand, it should be easier to view all of the terms in a sub-table or all of the links in a level of the index when searching for the perfect word.
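Under my reading of the layout above, a block of 256 terms splits into 16 banks of 16, so a term's index decomposes into a bank digit (high nibble) and a position digit (low nibble), each of which can be walked with one hand. A minimal sketch:

```python
def split_index(index):
    """Split a 0-255 block index into (bank, position), each 0-15."""
    if not 0 <= index <= 0xFF:
        raise ValueError("index must fit in one block (0-255)")
    return index >> 4, index & 0xF

# The low byte of a glyph such as #20A6 splits into bank 0xA, position 0x6.
assert split_index(0xA6) == (0xA, 0x6)
```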

The chart below shows a series of binary numbers, arranged so that only one bit changes from each state to the next. The "8421" column shows which keys are depressed (either the ones or the zeros, it doesn't really matter to me). This means that only one finger moves on each transition. I expect that this will speed things up, although I may need a new mnemonic before I can test my hypothesis.

row #   8 4 2 1   Hex   reflection   mnemonic
 1      0 0 0 1    1    unique       Multiplex
 2      0 0 1 1    3    (-2 = 1)     Pretentious
 3      0 0 1 0    2    (-2 = 0)     Butterflies
 4      0 1 1 0    6    (-4 = 2)     Dizzily
 5      0 1 1 1    7    (-4 = 3)     Turning
 6      0 1 0 1    5    (-4 = 1)     Neatly
 7      0 1 0 0    4    (-4 = 0)     Wings
 8      1 1 0 0    C    (-8 = 4)     Kings
 9      1 1 0 1    D    (-8 = 5)     Zooming
10      1 1 1 1    F    (-8 = 7)     Cheerful
11      1 1 1 0    E    (-8 = 6)     Joyous
12      1 0 1 0    A    (-8 = 2)     Flutter-byes
13      1 0 1 1    B    (-8 = 3)     Glimmering
14      1 0 0 1    9    (-8 = 1)     Vertiginous
15      1 0 0 0    8    (-8 = 0)     Right
16      0 0 0 0    0    unique       Sight

Binary Coverage by Single Bit Replacement
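The table's hex column is the standard reflected binary (Gray) code, rotated to start at row 1. A short sketch (my own check, not part of the design) generating the sequence and verifying that only one bit changes per step, including the wrap-around:

```python
def gray(n):
    """Standard reflected binary (Gray) code for n."""
    return n ^ (n >> 1)

sequence = [gray(n) for n in range(16)]  # 0,1,3,2,6,7,5,4,C,D,F,E,A,B,9,8

# Check the "only move one finger" rule, wrap-around step included.
for current, following in zip(sequence, sequence[1:] + sequence[:1]):
    changed_bits = bin(current ^ following).count("1")
    assert changed_bits == 1
```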

The "8421" column does not represent the only possible sequence that fulfills the "only move one finger" rule. Since the four sub-columns of bits can be swapped around, there are 24 (that is, 4 factorial) possible column orders (such as 1248, 8241, etc.). Also, the pattern is circular, so you can start at any point and move in either direction and still cover all of the combinations in the same number of steps.

This means that there are (24 X 16 X 2 =) 768 possible progressions. I haven't looked at all the progressions to see if any of them are duplicates, but I imagine a few might be.

I chose the sequence above for expositional reasons. The "1" sub-column changes twice as often as the "2" sub-column, which changes twice as often as the "4" sub-column, which changes twice as often as the "8" sub-column. This makes the pattern much easier to discern.

None of this, however, makes the pattern above the best pattern for an efficient and mnemonic human chord progression. Much testing is needed.


The limits of human understanding

I could increase the address space available for casual conversation by treating the first three levels as a single number instead of two numbers. The current space available is 64 kibi-terms (Creation+Description) plus 256 terms (Syntax); it could be increased to 16 mebi-terms.
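The arithmetic behind those figures, under my reading of the level sizes (each level being one block of 256):

```python
KIBI, MEBI = 2**10, 2**20

# Two separate spaces: a 16-bit Creation+Description space plus an 8-bit Syntax space.
two_space_terms = 2**16 + 2**8        # 64 Ki + 256 terms

# Fusing the first three 8-bit levels into a single 24-bit number.
one_space_terms = 2**24               # 16 Mi terms

assert two_space_terms == 64 * KIBI + 256
assert one_space_terms == 16 * MEBI
```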

I decided to use two spaces in the design because it limits the size of the casual vocabulary to something roughly equivalent to the vocabulary of the average educated human. It also limits the amount of memory needed on the user's phone.

This 'articulated' interaction can also create a conceptual structure in the user's mind where the 256 terms of the articulation layer form pivots around which the basic vocabulary rotates. The Syntactic block thus provides a simulated "degree of freedom" while it actually holds the rest of the sentence together. (or perhaps that is just my fantasy)

I also decided to use the visual structure to partition the glyph (instead of using consecutive indexes for morphologically related terms) because I believe that it will help memorization. I believe it is easier to remember that the Intention spoke creates verbs than it is to remember that "#800000" creates verbs.

Having set the limits of my space, I need to partition it among different grammatical types of terms. I believe that there are two methods.

The first (and still unused) method would partition the memory space according to the breakdown of most used, most needed linguistic components in the nine target languages. I haven't used this method because I don't have that data available (possibly because I haven't actually looked for it).

I have attempted to model the most efficient distribution of "needed linguistic components" within the limits of human understanding. I have attempted to put as many of the most common terms used within mathematics, predicate logic, Set theory and social interaction as will fit in the available space of the Syntactic Block. I am sure something is still missing, but adjustments are not a problem, so I'll fix the valid needs when I know what they are (and what to discard).

"Needed components" are still being identified, but hopefully the noun, verb, adjective, adverb, preposition, etc. address spaces are large enough and in proper proportions to accommodate the (inevitable?) rise in the numbers of users ('inevitable', because those numbers can't get much smaller).

A future improvement to the vocabulary placement within the address space could be the re-distribution of the commonest terms to the chords (key combinations) that are easiest to type.

I do believe that my design should be compared with the vocabulary component distribution of the nine target languages and that any apparent optimizations should be attempted and tested (note to self: design communication optimization tests).
