
LLL demand-matcher suffers from the same problems the HLL during/spawn used to #25

tonyg opened this issue Aug 14, 2017 · 1 comment

tonyg commented Aug 14, 2017

For example, in #21, the port 6000 listener sticks around even when it is no longer wanted, because the change in demand happened before the listener could boot fully. Once the listener starts running it monitors demand correctly, but demand for port 6000 has already vanished, so it never receives any events at all.
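
A minimal sketch of that failure mode in Syndicate-style Racket (illustrative only: `listen` and `tcp-listener` stand in for the real protocol structs, and the exact forms are approximate):

(struct listen (port) #:prefab)        ; demand assertion (stand-in)
(struct tcp-listener (port) #:prefab)  ; supply assertion (stand-in)

(spawn (assert (tcp-listener 6000))
       ;; Intended shutdown path: exit when demand disappears. But this
       ;; clause only fires for retractions the actor actually observes;
       ;; if (listen 6000) was withdrawn before this subscription was
       ;; installed, no event is ever delivered and the listener lingers.
       (stop-when (retracted (listen 6000))))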

The HLL during/spawn fixes this by allocating instance IDs and running a protocol between the during part and each spawned supply-instance. The LLL demand-matcher should probably do the same thing.
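
One possible shape for such an instance-ID protocol, sketched under assumptions (the `keep-alive` struct, the `start-supply-instance` helper, `run-supply-task`, and the exact `#:assertions` syntax are illustrative, not the actual during/spawn machinery): the matcher allocates a fresh id per demand instance, keeps (keep-alive id) asserted for exactly as long as the original demand persists, and spawns the service with its interest in that assertion already installed, so the service cannot miss the eventual withdrawal.

(struct keep-alive (id) #:prefab)  ; instance-scoped assertion owned by the matcher

(define (start-supply-instance id spec)
  ;; #:assertions installs (observe (keep-alive id)) atomically with the
  ;; spawn, transferring responsibility for noticing the withdrawal from
  ;; the matcher to the new instance without a gap.
  (spawn #:assertions (observe (keep-alive id))
         (stop-when (retracted (keep-alive id)))
         (run-supply-task spec)))

The point is that demand/supply bookkeeping lives in one place: the matcher retracts (keep-alive id) itself when demand for spec goes away, rather than expecting each service to re-derive that fact.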

tonyg commented Aug 27, 2017

Relevant comment from example-demand-matcher-glitch-bug.rkt:

;; Example showing the consequences of not honouring the requirement
;; of the current LLL demand-matcher that supply tasks must *reliably*
;; terminate when their demand is not present. In this case, demand
;; changes too quickly: it exists for long enough to start the task,
;; but is withdrawn before the task itself has a chance to detect it.
;; Because the task (as currently implemented) does not use the "learn
;; negative knowledge" pattern to detect the *absence* of some
;; assertion, it does not terminate as it is supposed to.
;;
;; Specifically, here, the port 6000 server is started, but by the
;; time it starts monitoring demand for its services, the demand is
;; already gone, replaced with demand for port 5999. This causes
;; connections to be accepted on port 6000 that then go nowhere.
;;
;; One "fix" is to use #:assertions to give the TCP listener actor
;; some initial interests, thus transferring responsibility
;; atomically. This has been implemented (in commit 2a0197b). However,
;; this doesn't completely eliminate all possible instances where
;; demand may change too quickly. See example-demand-matcher-glitch-bug2.rkt.
;;
;; Of course, the real "fix" is for the TCP listener actor to use a
;; `flush!` to robustly detect that demand for its services no longer
;; exists even at startup time.
;;
;; A speculative idea, if we set aside the (in principle) documented
;; requirement that the LLL demand-matcher places on its supply tasks,
;; is to use a kind of contract-monitor to enforce the invariant that
;; demand *cannot* fluctuate too rapidly. One might write that "if
;; (listen 6000) is asserted, then if (listen 6000) is retracted,
;; (observe (listen 6000)) must have been asserted in the causal
;; history of the retraction", but what does "causal history" mean,
;; precisely? And how can it be soundly and efficiently tracked?
;;
;; The only "fix" I have thought of that solves the problem, is
;; currently implementable, and allows supply tasks to escape
;; responsibility for noticing their own superfluity is to modify the
;; demand-matcher to do something like what `during/spawn` does: use
;; an auxiliary protocol to centralise tracking of demand and supply
;; at the demand-matcher rather than delegating it to the services.
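
For concreteness, here is one way the flush!-based startup check described in the comment above might look. This is a sketch under assumptions: `define/query-value` and `on-start` are used in their usual Syndicate actor-library roles, `listen`/`tcp-listener` are the same stand-in structs as before, and `stop-current-facet` is a placeholder for whatever teardown call the real service would use.

(spawn (define/query-value demand? #f (listen 6000) #t) ; #t while (listen 6000) is asserted
       (assert (tcp-listener 6000))
       (stop-when (retracted (listen 6000)))            ; ordinary shutdown path
       (on-start
        (flush!)                  ; round-trip: by now our interests have been routed
        (unless (demand?)         ; demand vanished before we finished booting...
          (stop-current-facet)))) ; ...so shut down instead of lingering (name assumed)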

So, what do we do? Leave demand-matcher alone, and "repair" its clients that don't honour its precondition? Or alter demand-matcher to remove the state-tracking it's doing in favour of something more like the state-tracking implicit in the implementation of during/spawn?
