Show HN: Lucidity – an interactive program-state visualizer

In 2014, a year or so after wrapping up my Tiled Text project and moving to Berkeley (from Colorado where I grew up), I was working part-time for a small startup, and producing new and impractical ideas at a breakneck pace. Probably the most developed of those ideas, prior to Lucidity, was for a programming language I was impudently naming IA, for IntellAgent.

Lucidity came out of my determination that I would need it in order to build IA.

There’s a lot that could be said about IA, but I’ll sum it up as a project that came out of:

1. My Tiled Text project being well-received, reinforcing the idea that I might do something important after all and should try.

2. My spotty, eclectic knowledge of subjects in CS, with conspicuous gaps in both AI and programming languages.

3. The fact that as a teenager I’d read Society of Mind and a couple of others and tried e.g. writing a (Java 🙂) program for AGI, and had since developed an ingrained habit of observing my own thinking for feedback on a handful of topics in cognition I’d been refining models for. I was very interested in how thinking works, and thought I might have some unique insight into it.

4. The fact I’d developed an RSI when I was 19 that prevented me from viewing computer programming as something I would reliably be able to do as a career, forcing me to explore a wider range of subjects. At this particular time I’d been absorbed in topics within philosophy and general linguistics, in part because of new friends I’d been making around Berkeley.

The result of all this was, unsurprisingly, a very ambitious concept, almost certainly doomed to fail, based on a heady mixture of ideas that could be individually categorized as a) interesting, b) mistaken, or c) already well-established (my own ignorance of them notwithstanding).

‘Analogy’ was to be the first of a number of built-in types used to bootstrap cognition using a set of innate relations (including ‘relation’ itself, of course), partially inspired by questionable interpretations of Kant and Lakoff & Johnson.

The idea behind that definition of analogy: Generalize and re-Particularize the original in such a way that its structure stays the same, though the things related through the structure are allowed to change.

The two blocks are prefixed with `+` and `-`, because the one with a list of parameters adds things into the Type, whereas the second block’s purpose is to remove things via constraints and other means. The first block provides the stone, the second carves it up.
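As a loose illustration (mine, in Python, not IA’s actual syntax), the generalize/re-particularize move might look like this: abstract the entities out of a relational structure, then bind new ones in, leaving the relations themselves untouched.

```python
# Hypothetical sketch of "generalize, then re-particularize":
# the relations (the structure) stay fixed while the things
# related through them are swapped out.

def generalize(structure, entities):
    """Replace concrete entities with placeholder variables,
    keeping the relations intact."""
    vars_ = {e: f"?x{i}" for i, e in enumerate(entities)}
    return [(rel, vars_[a], vars_[b]) for rel, a, b in structure]

def particularize(template, binding):
    """Substitute new entities for the variables: same structure,
    different particulars."""
    return [(rel, binding[a], binding[b]) for rel, a, b in template]

# "The sun is to its planets as the nucleus is to its electrons."
solar = [("orbits", "planet", "sun"), ("attracts", "sun", "planet")]
template = generalize(solar, ["sun", "planet"])
atom = particularize(template, {"?x0": "nucleus", "?x1": "electron"})
# atom has the same relational structure as solar, with the
# entities re-particularized
```

This is only a toy: the real trick IA was chasing is deciding *which* parts of a structure to hold fixed, which the sketch sidesteps by taking the entity list as an argument.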

I produced 44 pages of handwritten notes full of modeling trials, syntax experiments, general musing—and a fairly detailed design for the language’s runtime, expressed primarily in terms of algorithms on a pile of… interesting data structures:

“Parse Operation” —IA was going to parse (and generate) things that weren’t necessarily sequences, since that’s just one type of relation.

“Per-Structure node FSM” —IIRC this was related to allowing more complex constraints on parameters to Types (e.g. quantification: a Chair must have >= 1 leg)
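For what it’s worth, a per-node FSM enforcing a quantified constraint could be sketched like this (my reconstruction, assuming a simple “count parts as they arrive” model, not the actual design from the notes):

```python
# Hypothetical sketch: a per-node state machine that tracks whether
# a quantified constraint on a Type's parameters is satisfied as the
# structure's parts stream in. "A Chair must have >= 1 leg" becomes
# a minimum-count acceptor.

def make_min_count_fsm(part, minimum):
    count = 0
    def step(p):
        nonlocal count
        if p == part:
            count += 1
        return count >= minimum  # True once the constraint is met
    return step

chair_needs_leg = make_min_count_fsm("leg", 1)
accepted = [chair_needs_leg(p) for p in ["seat", "back", "leg"]]
# the machine becomes accepting only after the first "leg" arrives
```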

From prior experience and my own methodological preferences in writing programs (another example), the main problem that stood out, once I’d settled on a design for the runtime, was that I’d need a means of visualizing the dynamic state of a wide range of data structures.

It would be a monumental task to build each of these visualization utilities separately—but if I could create a general system that captured all their requirements, the problem might become manageable. Lucidity was supposed to be that general system.

The image I had in mind for it bears a strong resemblance to the story from the previous section, though I’d be using it to get easy, rich feedback on my in-development algorithms rather than to learn a new system from scratch.

As for the IA project: an important realization for any ambitious intellectual working outside academic institutions is that the set of apparently beautiful ideas is much larger than the set of beautiful ideas of real value. It’s easy to judge an idea as valuable because it has the marks and feel of something ‘important,’ before the critical step of practical verification has been completed.

I realized a couple of years later (while reading “The Art of Prolog”) that some of the ideas I liked best in my language were probably just poor approximations of Prolog. There’s still plenty unique about it, but I’m no longer optimistic about the odds of it having the significance I originally imagined.
