Fixes #26
This is quite a huge patch which does the following:
- A new `rpc` binary in `/cmd/rpc`. This needs to be built BEFORE you run `go test`, with the same `-tags` that are used in `go test`. This binary acts as an RPC server.
- A new environment variable, `COMPLEMENT_CRYPTO_RPC_BINARY`, which, if set to the binary path, will enable RPC tests in `internal/tests`. If this is not set, RPC tests are ignored.
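Concretely, the build-then-test flow might look like this (the tag name, output path, and package paths here are illustrative, not the repository's actual values):

```shell
# Illustrative only: build the RPC server binary with the SAME build tags
# you will pass to `go test`, then point the tests at it via the env var.
go build -tags "rust" -o ./rpc.bin ./cmd/rpc
COMPLEMENT_CRYPTO_RPC_BINARY=$(pwd)/rpc.bin go test -tags "rust" ./internal/tests/...
```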
- A new `deploy.RPCLanguageBindings`, which takes in just the RPC binary path and the language string you want to run. This binding has the following behaviour:
  - `LanguageBindings.MustCreateClient`: when called, the RPC server is spawned in a child process which listens on a random high-numbered port, communicated via stdout. An RPC call is made with the client creation options and language to construct the client. This is why you need to build the RPC server with the same tags: it makes a real client at this point, which needs the FFI bindings / dist directory to be embedded for rust/js respectively. Returns a new type, `deploy.RPCClient`, which implements `api.Client`.
  - `LanguageBindings.PreTestRun|PostTestRun`: these do nothing in RPC mode. Logs are instead handled when the client is created/closed in the RPC server. A new `contextID` has been added to the interface in an attempt to allow logs to be split correctly (the same file paths as today for the main process, and `$user_$device`-scoped for the RPC process). TODO: logs don't always seem to come through?
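The spawn-and-dial handshake `MustCreateClient` performs could be sketched roughly as follows. All names and the `PORT` stdout format are assumptions for illustration, not the actual implementation:

```go
package main

import (
	"bufio"
	"fmt"
	"net/rpc"
	"os/exec"
	"strings"
)

// parsePortLine extracts the port from the single startup line the child
// process is assumed to print on stdout, e.g. "PORT 45123". The exact
// handshake format is an assumption for this sketch.
func parsePortLine(line string) string {
	return strings.TrimSpace(strings.TrimPrefix(line, "PORT "))
}

// dialRPCServer sketches the handshake: spawn the RPC binary as a child
// process, read the randomly chosen high-numbered port from its stdout,
// then dial it with net/rpc.
func dialRPCServer(binaryPath string) (*rpc.Client, error) {
	cmd := exec.Command(binaryPath)
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return nil, err
	}
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	line, err := bufio.NewReader(stdout).ReadString('\n')
	if err != nil {
		return nil, err
	}
	return rpc.Dial("tcp", "127.0.0.1:"+parsePortLine(line))
}

func main() {
	// dialRPCServer needs a real binary; just demo the port parsing here.
	fmt.Println(parsePortLine("PORT 45123\n"))
}
```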
The `api.Client` implementation (`deploy.RPCClient`) for the most part just proxies the existing interface one-to-one. For example, `MustBackpaginate` does an RPC call with the same name, but bundles up the args/return params into a form that can be serialised by `net/rpc`.
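A sketch of that bundling, assuming hypothetical struct and field names (not the real wire format): the arguments and return value become plain exported structs that `net/rpc`'s gob encoding can handle.

```go
package main

import (
	"fmt"
	"net"
	"net/rpc"
)

// BackpaginateArgs/BackpaginateReply bundle MustBackpaginate's parameters
// and return value into exported structs that net/rpc can serialise.
// Field names are illustrative.
type BackpaginateArgs struct {
	RoomID string
	Count  int
}
type BackpaginateReply struct {
	Token string // pagination token handed back to the caller
}

// Server is a stand-in for the RPC server's receiver type.
type Server struct{}

// MustBackpaginate mirrors the api.Client method of the same name: the
// client proxies its arguments over the RPC boundary one-to-one.
func (s *Server) MustBackpaginate(args BackpaginateArgs, reply *BackpaginateReply) error {
	reply.Token = fmt.Sprintf("token-for-%s-%d", args.RoomID, args.Count)
	return nil
}

func main() {
	srv := rpc.NewServer()
	srv.Register(&Server{})
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	go srv.Accept(ln)
	client, err := rpc.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	var reply BackpaginateReply
	if err := client.Call("Server.MustBackpaginate", BackpaginateArgs{RoomID: "!r:x", Count: 5}, &reply); err != nil {
		panic(err)
	}
	fmt.Println(reply.Token) // token-for-!r:x-5
}
```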
The exception is with the waiters: `WaitUntilEventInRoom` returns a new type, `deploy.RPCWaiter`, which implements `api.Waiter`. Calling `WaitUntilEventInRoom` "registers" a waiter on the RPC server, which returns a unique waiter ID that is used in all calls relating to that particular waiter.
`Waitf` on this waiter does NOT just do a simple RPC call, but a complex dance to keep the API expressive. The problem lies in the function signature of `WaitUntilEventInRoom`, which allows an arbitrary function to be the `checker`. We can't pass arbitrary functions over the RPC boundary. Instead, `Waitf` calls the RPC function `WaiterStart`, which calls `TryWaitf` server-side. The checker function used server-side always returns `false` (so `TryWaitf` will always time out with an error), but it also stashes each event it was asked to check. The client then calls `WaiterPoll` every 100ms, which returns the stashed events in a form that can be serialised. Back in the client process, the arbitrary checker function is run against these events, and if one passes the check, great! If not, `WaiterPoll` is called again, until the RPC server decides to give up and time out the client.
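Under those constraints, the client-side half of the dance might look roughly like this self-contained sketch. Only the `WaiterStart`/`WaiterPoll` names, the stashing behaviour, and the 100ms poll interval come from the description above; every type and signature is an assumption:

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"net/rpc"
	"time"
)

// Event is a stand-in for the real event type; plain exported fields so
// net/rpc can serialise it.
type Event struct {
	Type string
	Body string
}

// StartArgs is an illustrative args struct for WaiterStart.
type StartArgs struct {
	RoomID string
}

// WaiterServer is a minimal stand-in for the server side: WaiterStart
// registers a waiter (the real server calls TryWaitf with a checker that
// always returns false but stashes each event it sees), and WaiterPoll
// returns whatever has been stashed so far.
type WaiterServer struct {
	stashed []Event
}

func (s *WaiterServer) WaiterStart(args StartArgs, waiterID *int) error {
	*waiterID = 1 // a real server would allocate a unique ID per waiter
	return nil
}

func (s *WaiterServer) WaiterPoll(waiterID int, stashed *[]Event) error {
	*stashed = s.stashed
	return nil
}

// RPCWaiter is the client-side half: it runs the arbitrary checker
// locally against events polled from the server.
type RPCWaiter struct {
	client *rpc.Client
	roomID string
}

func (w *RPCWaiter) Waitf(timeout time.Duration, checker func(Event) bool) error {
	var waiterID int
	if err := w.client.Call("WaiterServer.WaiterStart", StartArgs{RoomID: w.roomID}, &waiterID); err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		var stashed []Event
		if err := w.client.Call("WaiterServer.WaiterPoll", waiterID, &stashed); err != nil {
			return err // e.g. the server gave up and timed out this waiter
		}
		for _, ev := range stashed {
			if checker(ev) {
				return nil // a stashed event passed the local check
			}
		}
		time.Sleep(100 * time.Millisecond) // poll interval from the text
	}
	return errors.New("Waitf: timed out")
}

func main() {
	srv := rpc.NewServer()
	srv.Register(&WaiterServer{stashed: []Event{{Type: "m.room.message", Body: "hello"}}})
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	go srv.Accept(ln)
	client, err := rpc.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	w := &RPCWaiter{client: client, roomID: "!room:example"}
	err = w.Waitf(time.Second, func(ev Event) bool { return ev.Body == "hello" })
	fmt.Println("wait result:", err)
}
```

The key design point this illustrates: the checker never crosses the wire. Only serialisable events do, and the expressive `func`-based API survives because the check runs client-side.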