Experimental tree-based writing interface for GPT-3

2023-11-23 22:49 · github.com

socketteer/loom: Multiversal tree writing interface for human-AI collaboration

This is an experimental tree-based writing interface for GPT-3. The code is actively being developed and thus unstable and poorly documented.

Features

  • Read mode

    • Linear story view
    • Tree nav bar
    • Edit mode
  • Tree view

    • Explore tree visually with mouse
    • Expand and collapse nodes
    • Change tree topology
    • Edit nodes in place
  • Navigation

    • Hotkeys
    • Bookmarks
    • Chapters
    • 'Visited' state
  • Generation

    • Generate N children with GPT-3
    • Modify generation settings
    • Change hidden memory on a node-by-node basis
  • File I/O

    • Open/save trees as JSON files
    • Work with trees in multiple tabs
    • Combine trees
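The file operations above are easy to sketch if you picture the tree as nested JSON. Below is a minimal illustration of opening, saving, and combining trees; the field names (`text`, `children`) are assumptions for illustration, not the actual Loom schema.

```python
import json

# Hypothetical tree shape -- field names are illustrative,
# not the actual Loom schema.
tree = {
    "text": "Once upon a time",
    "children": [
        {"text": " there was a dragon.", "children": []},
        {"text": " a ship set sail.", "children": []},
    ],
}

def save_tree(tree, path):
    """Save a tree to a JSON file."""
    with open(path, "w") as f:
        json.dump(tree, f, indent=2)

def load_tree(path):
    """Load a tree from a JSON file."""
    with open(path) as f:
        return json.load(f)

def graft(parent, subtree):
    """Combine trees by attaching one tree as a new child of a node."""
    parent["children"].append(subtree)

def count_nodes(node):
    """Count all nodes in the (sub)tree rooted at this node."""
    return 1 + sum(count_nodes(c) for c in node["children"])

assert count_nodes(tree) == 3
graft(tree, {"text": " it was a dark night.", "children": []})
assert count_nodes(tree) == 4
```

Grafting a subtree under an existing node is one simple way to "combine trees"; Loom's actual merge behavior may differ.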


Block multiverse mode

Read https://generative.ink/meta/block-multiverse/ for a conceptual explanation of the block multiverse interface and a demo video.

How to use in Loom

  1. Click Wavefunction button on bottom bar. This will open the block multiverse interface in the right sidebar (drag to resize).
  2. Write initial prompt in the main textbox.
  3. [Optional] Write ground truth continuation in the gray entry box at the bottom of the block multiverse interface. Blocks in ground truth trajectory will be colored black.
  4. Set model and params in top bar.
  5. Click Propagate to plot the block multiverse.
  6. Click on any block to zoom ("renormalize") to that block.
  7. Click Propagate again to plot the future block multiverse starting from the renormalized frame.
  8. Click Reset zoom to reset the zoom level to the initial position.
  9. Click Clear to clear the block multiverse plot. Do this before generating a new block multiverse.
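Conceptually, the block multiverse plot partitions an interval at each step in proportion to the model's token probabilities, and "renormalizing" zooms so a chosen block spans the full interval again. A rough numeric sketch of the layout math (the probabilities here are made up, and none of this is Loom's actual code):

```python
def partition(interval, probs):
    """Split (lo, hi) into sub-intervals proportional to token probabilities."""
    lo, hi = interval
    span = hi - lo
    blocks = {}
    for token, p in probs.items():
        blocks[token] = (lo, lo + span * p)
        lo += span * p
    return blocks

# Step 1: the full interval is divided among the first token's options.
step1 = partition((0.0, 1.0), {" the": 0.5, " a": 0.3, " an": 0.2})
assert step1[" the"] == (0.0, 0.5)

# Step 2: propagate within the " the" block; widths multiply, which is
# why deep continuations become thin slivers until you renormalize.
step2 = partition(step1[" the"], {" cat": 0.6, " dog": 0.4})
assert abs(step2[" cat"][1] - 0.3) < 1e-9  # 0.5 * 0.6
```

Zooming into a block and calling `partition` on `(0.0, 1.0)` again is the "renormalize" step: conditional probabilities are rescaled to fill the frame.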

Hotkeys

Alt hotkeys correspond to Command on Mac

File

Open: o, Control-o

Import JSON as subtree: Control-Shift-O

Save: s, Control-s

Dialogs

Change chapter: Control-y

Preferences: Control-p

Generation Settings: Control-Shift-P

Visualization Settings: Control-u

Multimedia dialog: u

Tree Info: Control-i

Node Metadata: Control-Shift-N

Run Code: Control-Shift-B

Mode / display

Toggle edit / save edits: e, Control-e

Toggle story textbox editable: Control-Shift-e

Toggle visualize: j, Control-j

Toggle bottom pane: Tab

Toggle side pane: Alt-p

Toggle show children: Alt-c

Hoist: Alt-h

Unhoist: Alt-Shift-h

Navigate

Click to go to node: Control-Shift-click

Next: period, Return, Control-period

Prev: comma, Control-comma

Go to child: Right, Control-Right

Go to next sibling: Down, Control-Down

Go to parent: Left, Control-Left

Go to previous sibling: Up, Control-Up

Return to root: r, Control-r

Walk: w, Control-w

Go to checkpoint: t

Save checkpoint: Control-t

Go to next bookmark: d, Control-d

Go to prev bookmark: a, Control-a

Search ancestry: Control-f

Search tree: Control-Shift-f

Click to split node: Control-Alt-click

Go to node by ID: Control-Shift-g

Organization

Toggle bookmark: b, Control-b

Toggle archive node: !

Generation and memory

Generate: g, Control-g

Inline generate: Alt-i

Add memory: Control-m

View current AI memory: Control-Shift-m

View node memory: Alt-m

Edit topology

Delete: BackSpace, Control-BackSpace

Merge with Parent: Shift-Left

Merge with children: Shift-Right

Move node up: Shift-Up

Move node down: Shift-Down

Change parent: Shift-P

New root child: Control-Shift-h

New Child: h, Control-h, Alt-Right

New Parent: Alt-Left

New Sibling: Alt-Down

Edit text

Toggle edit / save edits: Control-e

Save edits as new sibling: Alt-e

Click to edit history: Control-click

Click to select token: Alt-click

Next counterfactual token: Alt-period

Previous counterfactual token: Alt-comma

Apply counterfactual changes: Alt-return

Enter text: Control-bar

Escape textbox: Escape

Prepend newline: n, Control-n

Prepend space: Control-Space

Collapse / expand

Collapse all except subtree: Control-colon

Collapse node: Control-question

Collapse subtree: Control-minus

Expand children: Control-quotedbl

Expand subtree: Control-plus

View

Center view: l, Control-l

Reset zoom: Control-0

Instructions

  1. Make sure you have tkinter installed

    sudo apt-get install python3-tk

  2. Set up your Python environment (Python >= 3.9.13)

    python3 -m venv env
    source env/bin/activate

  3. Install requirements

    pip install -r requirements.txt

  4. [Optional] Set environment variables for OPENAI_API_KEY, GOOSEAI_API_KEY, AI21_API_KEY (you can also use the settings options)

    export OPENAI_API_KEY={your api key}

  5. Run main.py

  6. Load a json tree

  7. Read :)

(Only tested on Linux.)

Alternatively, using the Makefile:

  1. [Optional] Edit the Makefile with your API keys (you can also use the settings options)

  2. Run the make targets

    make build
    make run

  3. Load a json tree

  4. Read :)



Comments

  • By photoGrant 2023-11-23 23:13 (2 replies)

    The multiverse visualisation was really cool! This whole tool feels like it's designed for a writer from the future.

    link for curious: https://generative.ink/meta/block-multiverse/

    • By mathiasgredal 2023-11-24 01:36 (5 replies)

      I have always been bothered by the multinomial sampling layer that picks the next token. It seems like you should be able to stick a classifier in there to detect whether the probabilities align with a satisfactory answer and whether the model is unsure of the response. If that is the case, it should branch out to a bigger model, which would compute the single token the small model is unsure about; from there, the rest of the response is obvious to the smaller model.

      This way you could combine e.g. a 7B parameter model with a 70B parameter model, and get the quality of the larger model while most of the time you are only running the small model.

      Edit: You could also store the full probabilities for each token, and then the classifier could detect if it had gone down a bad path, and then unwind the tokens and pick a different path.
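This proposal resembles model cascading or speculative decoding: sample from the small model unless its next-token distribution looks uncertain, then defer that single token to the larger model. A toy sketch of the escalation rule (the entropy threshold, token names, and the stand-in `large_model` are all illustrative assumptions, not any real system's API):

```python
import math

def entropy(probs):
    """Shannon entropy (bits) of a next-token distribution."""
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

def pick_token(small_probs, large_model, threshold=1.5):
    """Use the small model's top token unless it looks unsure,
    then defer that single token to the larger model."""
    if entropy(small_probs) > threshold:
        return large_model()          # expensive fallback
    return max(small_probs, key=small_probs.get)

# Confident distribution: the small model's top token is used.
assert pick_token({"yes": 0.9, "no": 0.1}, lambda: "LARGE") == "yes"

# Near-uniform distribution (entropy 2 bits): defer to the large model.
assert pick_token({"a": 0.25, "b": 0.25, "c": 0.25, "d": 0.25},
                  lambda: "LARGE") == "LARGE"
```

In practice the "classifier" could be this kind of simple uncertainty statistic, or a trained model over the full probability vector as the comment suggests.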

      • By jacquesm 2023-11-24 01:40

        That's reminiscent of a caching scheme, you could probably layer that down to even smaller and up to even larger models for increased performance and accuracy.

        Better yet: have many smaller models that can when confused call upon larger models and the larger models could then pick the most appropriate smaller 'expert' model or, alternatively themselves escalate. Sort of a supervisor tree for language models.

      • By soulofmischief 2023-11-24 05:52

        Contextual loading of a bigger model would incur a lot of latency. In this case it might be a good idea to farm out the request to a non-local model. Could be a good avenue for a hybrid local & remote completion system.

      • By photoGrant 2023-11-24 01:55

        I want to say this is where progress is currently being made. Not only having 'experts per domain' but also 'experts per pairing'.

      • By pyinstallwoes 2023-11-24 05:52 (1 reply)

        Is there anything experimenting with that?

        • By Roark66 2023-11-24 06:53 (1 reply)

          I would be very surprised if (not at all) OpenAI didn't. I've noticed with both GPT-3.5 and GPT-4 that if you ask a difficult question, the latency increases a lot, especially at the beginning of the answer. I wouldn't be surprised if they did exactly what was described.

          • By pyinstallwoes 2023-11-24 07:13

            Right I’ve noticed that too. It’s like it takes a pause and reevals. Definitely feels like a cache or memoization of token paths. Interesting.

    • By DonHopkins 2023-11-23 23:28

      ...from the futures!

  • By parafactual 2023-11-24 05:47 (2 replies)

    I maintain a less featureful but easier to use and less janky Loom implementation as an Obsidian plugin: https://github.com/cosmicoptima/loom

    (Why Obsidian? Because then I don't have to write a text editor from scratch, and because one can then combine it with other plugins. Also because I intended for this implementation to work on mobile, but getting the UX right for that is annoying so it isn't supported right now.)

    davinci-002 is a good publicly available model to start with. Weaving takes practice if you want something very specific.

    • By pyinstallwoes 2023-11-24 05:53 (1 reply)

      How do you use it in your workflow?

      • By parafactual 2023-11-24 05:59 (1 reply)

        I create new notes, put interesting and shiny texts in them, and weave from them for fun and intellectual or aesthetic inspiration.

        • By pyinstallwoes 2023-11-24 06:27

          How do you handle the branches, variations? Like the leaf nodes? Do you use it to compare possible paths then when you decide you prune them?

    • By 3abiton 2023-11-24 10:09

      What is the difference?

  • By refulgentis 2023-11-24 01:09

    This is the work of @repligate, I can't recommend them highly enough. A true creative in every sense of the word, a rare spirit with a fundamental grounding and understanding of the spirit of LLMs. I highly, highly recommended pursuing every inch of their writing that interests you.

    https://twitter.com/repligate

HackerNews