In the tapestry model, new knowledge usually takes the form of new nodes and/or edges, so I think we’re all 3 aligned on that. Sometimes there can be deletions, updates and/or reorganizations, but additions would be the main method. The core insight of the tapestry model is that the vast majority of new knowledge comes from my trusted peers. The need to arrive at consensus / develop conventions with my peers over seemingly mundane but nevertheless vital details of semantics and ontology — like what’s the word we use to describe this writing instrument in my hand, or do we use created_at or createdAt for the timestamp in nostr events — is what dictates the architecture of the tapestry model.

The rules that govern the knowledge graph (the class thread principle and maybe one or two other constraints) need to be as few and as frugal as possible. Isaac Newton didn’t come up with 10 thousand or 100 thousand or a million laws of motion; he came up with three. (Hamilton later replaced these 3 laws with a single unifying principle — an improvement, getting from 3 all the way down to 1.) The tapestry method of arriving at social consensus only works if all peers adopt the same initial set of rules as a starting point, and that doesn’t happen if the set of rules is more complicated (or greater in number) than it has to be.

The class thread principle is a simple starting point. Only a handful of “canonical” node types (~ five) and “canonical” relationship types (also ~ five) are needed to get it off the ground. Once off the ground, an unlimited number of node types and relationship types can be added — usually learned from trusted peers. And the class thread principle allows concepts to be integrated vertically and horizontally. (So it’s not like you end up with a huge number of mostly disconnected sql tables. Class threads support multiple meaningful ways to weave disparate concepts together, even without adding any new node types or relationship types.)
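To illustrate the scale involved (just a sketch in Haskell with hypothetical names, not the actual canonical set), a starting vocabulary of roughly five node kinds and five relationship kinds could look like this:

```haskell
-- A minimal sketch of a canonical starting vocabulary.
-- The specific names are hypothetical placeholders, not the real spec.

-- ~5 canonical node types
data NodeKind
  = Concept      -- an abstract idea or class
  | Instance     -- a concrete example of a concept
  | Source       -- where a claim came from (a peer, document, or event)
  | Term         -- an agreed-upon word or label
  | Collection   -- a grouping of other nodes
  deriving (Show, Eq)

-- ~5 canonical relationship types
data EdgeKind
  = IsA          -- vertical integration: Instance -> Concept, Concept -> Concept
  | PartOf       -- horizontal integration: composition of concepts
  | NamedBy      -- Concept -> Term (the consensus label, e.g. created_at)
  | AssertedBy   -- any node -> Source (who vouches for it)
  | RelatedTo    -- generic link, refined later by learned edge kinds
  deriving (Show, Eq)

data Node = Node { nodeId :: Int, nodeKind :: NodeKind, label :: String }
data Edge = Edge { from :: Int, to :: Int, edgeKind :: EdgeKind }

-- New knowledge is additive: appending a node or edge never rewrites
-- anything already in the graph.
addEdge :: Edge -> [Edge] -> [Edge]
addEdge e graph = e : graph
```

The point of the sketch is only that the initial rule set fits on one screen; everything richer is learned from peers afterward.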

Replies (2)

I think we’re aligned on the minimal-rule principle. If the base ontology requires 50 primitive types, it’s already unstable. If it can emerge from ~5 node classes and ~5 relation types, that’s powerful. Newton didn’t win because he had more laws — he won because he had fewer.

Where this becomes interesting economically is this: when knowledge growth is additive and rule-minimal, value compounds naturally. If:

- Nodes are atomic knowledge units
- Edges are verified semantic commitments
- Ontology rules are globally agreed and minimal

then every new addition increases:

1. Traversal surface area
2. Compositional capacity
3. Relevance density

And that creates network effects. The token layer (in my case via NFT-based encoding units) isn’t speculative garnish — it formalizes contribution:

- Encoding becomes attributable
- Structure becomes ownable
- Extensions become traceable
- Reputation becomes compounding

In probabilistic systems, contribution disappears into weight space. In an algebraic/additive system, contribution is structural and persistent. So natural economics emerges because: more trusted peers → more structured additions → more traversal paths → more utility → more value per node. And because updates are local, not global weight mutations, you don’t destabilize the whole system when someone adds something new.

Minimal rules → Shared ontology → Additive structure → Compounding value.

That’s when tokenomics stops being hype and starts behaving like infrastructure economics. The architecture dictates the economics. Not the other way around.
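To make the “local, not global” point concrete, here is a tiny sketch (the names are hypothetical and not part of any actual token or NFT design): each addition is an attributable, append-only record, so attribution and reputation fall out of the structure itself instead of being recovered from weight space.

```haskell
-- Sketch: contributions as attributable, append-only records.
-- Field names and types are hypothetical placeholders.

data Contribution = Contribution
  { contributor :: String  -- identity of the trusted peer (e.g. a pubkey)
  , addition    :: String  -- the node or edge they encoded
  } deriving Show

-- Adding knowledge is a local append: nothing already in the graph is
-- mutated, unlike a global weight update in a probabilistic model.
contribute :: Contribution -> [Contribution] -> [Contribution]
contribute c history = c : history

-- Attribution (and therefore reputation) is just a filter over history.
contributionsBy :: String -> [Contribution] -> [Contribution]
contributionsBy peer = filter ((== peer) . contributor)
```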
i think your "class threads" is actually a hybrid of type theory and category theory: ----- ### The Curry-Howard Correspondence A deep connection between logic and computation: | Logic | Type Theory | Programming | |-------|-------------|-------------| | Proposition | Type | Specification | | Proof | Term | Implementation | | Implication (A → B) | Function type | Function | | Conjunction (A ∧ B) | Product type | Struct/tuple | | Disjunction (A ∨ B) | Sum type | Enum/union | If we represent knowledge as types, then: - **Facts are inhabitants of types**: `ssh_key : Key; git_remote : Remote` - **Relationships are function types**: `key_for : Remote → Key` - **Inference is type checking**: Does this key work for this remote? ----- Category theory is the "mathematics of composition." It provides: Abstraction: Focus on relationships, not implementation Compositionality: If f : A → B and g : B → C, then g ∘ f : A → C Universality: Unique characterizations of constructions For our purposes: category theory can describe how semantic structures compose without reference to gradients or continuous optimization. Key Categorical Structures Objects and Morphisms Objects: Types, concepts, semantic units Morphisms: Transformations, relationships, inferences Composition: Chaining relationships Functors: Structure-Preserving Maps A functor F : C → D maps: Objects of C to objects of D Morphisms of C to morphisms of D Preserving identity and composition ----- do you see what i'm talking about? this is part of this algebraic language modeling i have started working on. https://git.mleku.dev/mleku/algebraic-decomposition/src/branch/dev/ALGEBRAIC_DECOMPOSITION.md