Diffstat (limited to 'vere/C_INSIGHTS.md')
-rw-r--r--  vere/C_INSIGHTS.md  84
1 file changed, 84 insertions, 0 deletions
diff --git a/vere/C_INSIGHTS.md b/vere/C_INSIGHTS.md
index 016ee83..26d1e3b 100644
--- a/vere/C_INSIGHTS.md
+++ b/vere/C_INSIGHTS.md
@@ -49,3 +49,87 @@ The key is that call_count is static (shared across ALL function calls), while m
<<< EXIT call #8 depth=5 returns=cell[mug=0x6d2bc01f]
This shows: opcode 8 (Nock Eight = extend subject) calling itself recursively twice, both returning the same noun (mug 0x6d2bc01f).
+
+
+# on vere mugs
+CLAUDE:
+
+How can C output the mugs *so fast*, even for huge nouns like the pill? You say it's caching them, but it still has to compute them at first.
+
+● Excellent question! The answer is that C doesn't have to keep recomputing them - every heap-allocated noun carries a mug slot inline in its memory layout, so each mug is computed at most once and every later lookup is free.
+
+  In C's u3 system, every heap-allocated noun (atom or cell) has a mug_w field built into its memory layout. Allocating a noun with u3i_* or u3nc() leaves that field zeroed; the first u3r_mug() call
+  computes the hash and stores it there, so every later call is just a memory read - instant. And because the cache lives on every sub-noun, mugging a huge noun like the pill reuses the cached mugs
+  of its (often shared) substructure instead of re-walking it.
+
+ Here's the key difference:
+
+  C (vere: layout in pkg/noun/allocate.h, u3r_mug() in pkg/noun/retrieve.c):
+
+    /* Heap nouns store their mug inline: */
+    typedef struct {
+      c3_w mug_w;    /* ← cached mug; 0 means "not computed yet" */
+      c3_w len_w;    /* length in words */
+      c3_w buf_w[];  /* atom data */
+    } u3a_atom;
+
+    /* Simplified sketch of u3r_mug(); the real one also special-cases
+       direct atoms and uses an explicit stack instead of recursion;
+       _mug_compute is an illustrative name, not vere's. */
+    c3_w u3r_mug(u3_noun veb) {
+      u3a_noun* veb_u = u3a_to_ptr(veb);
+      if ( 0 == veb_u->mug_w ) {
+        veb_u->mug_w = _mug_compute(veb);  /* murmur3 walk, done once */
+      }
+      return veb_u->mug_w;                 /* after that: a memory read */
+    }
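+
+  This is also why the pill stays cheap: a cell's mug is derived from its children's mugs, so cached sub-mugs get combined rather than recomputed, and shared substructure is never re-walked. Here's
+  a hedged sketch using vere's real combiner u3r_mug_both() (the wrapper name _mug_cell_sketch is illustrative):
+
+    static c3_w _mug_cell_sketch(u3_noun cel) {
+      u3a_cell* cel_u = u3a_to_ptr(cel);
+      if ( 0 == cel_u->mug_w ) {
+        /* children are usually cached already, so this is O(1) per node */
+        cel_u->mug_w = u3r_mug_both(u3r_mug(cel_u->hed),
+                                    u3r_mug(cel_u->tel));
+      }
+      return cel_u->mug_w;
+    }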
+
+ OCaml (our current approach):
+ type noun =
+ | Atom of Z.t (* No mug field! *)
+ | Cell of noun * noun (* No mug field! *)
+
+ (* We compute mugs on-demand with caching *)
+ let mug noun = (* ... expensive recursive computation ... *)
+
+ To match C's performance, we'd need to change OCaml's noun type to:
+ type noun =
+ | Atom of { z: Z.t; mutable mug: int32 option }
+ | Cell of { h: noun; t: noun; mutable mug: int32 option }
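+
+  A minimal sketch of the memoizing accessor that type enables (mug_atom/mug_both below are placeholder hashes standing in for the real murmur3-based mug):
+
+    (* Placeholders only - vere's actual mug is murmur3-based. *)
+    let mug_atom (z : Z.t) : int32 = Int32.of_int (Z.hash z)
+    let mug_both (h : int32) (t : int32) : int32 =
+      Int32.logxor h (Int32.mul t 0x5bd1e995l)
+
+    let rec mug (n : noun) : int32 =
+      match n with
+      | Atom a ->
+        (match a.mug with
+         | Some m -> m                          (* cache hit: a field read *)
+         | None ->
+           let m = mug_atom a.z in              (* compute once... *)
+           a.mug <- Some m; m)                  (* ...store inline *)
+      | Cell c ->
+        (match c.mug with
+         | Some m -> m
+         | None ->
+           let m = mug_both (mug c.h) (mug c.t) in
+           c.mug <- Some m; m)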
+
+ This would let us compute once and store inline like C. Should we implement this now, or continue with the current cache approach?