Ready for production? #24
Hi @DaniGuardiola, I put this project together as a prototype, but it quickly became more than that. The test coverage is extensive and the "equivalence" tests demonstrate that nameko-grpc and the official Python client behave in the same way -- at least at the application level (i.e. gRPC messages sent/received, status codes etc.; there may be differences in terms of bytes on the wire).

I have not had an opportunity to use it in production yet, but it is likely that my company will be doing so in the next couple of months, so if you do decide to use it you will have someone along on the journey with you.

I think gRPC is fantastic and conceptually pairs very well with Nameko. I have a (draft) blog post on the subject that you might like to read: https://gist.github.com/mattbennett/1fdc9df9ccde3cd4798af5e47a714fce
Never mind, typo in my code.
I'm going to post the little issues I find here to avoid spamming the issues tab, also because most of them will probably be little mistakes on my side, as I'm not very familiar with the project (if that's alright with you).
Some questions:
Thanks a lot!!
Not yet. This would be a great PR :)
Will do 👍
AMQP-RPC isn't necessarily replaced by gRPC. You can use one or both or neither, up to you. If your service isn't using any of Nameko's "built-in" AMQP extensions, you don't need a broker.
@iky and I went around the houses on this one. gRPC is different to all the other Nameko extensions in that it supports streaming requests and responses. The approach I chose to take in #20 was to emit a new record for every message in a stream. In contrast, the opentracing standard (which supports gRPC) emits one span for every request/response (i.e. all messages in a stream end up in the same span). Since opentracing is a standard, we decided that this was a better approach, and #20 kind of died on the vine. I think the next step for tracing is to expand nameko-tracer to support the opentracing standard, and then nameko-grpc can do whatever it needs to do to emit opentracing-compatible gRPC traces.
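The difference between the two tracing approaches can be shown with a toy sketch. None of these function or field names come from nameko-tracer or opentracing; they are invented purely for illustration of "one record per stream message" versus "one span per call, with messages attached as logs":

```python
def trace_per_message(method_name, messages):
    """#20-style: emit one record for every message in the stream."""
    return [{"operation": method_name, "message": m} for m in messages]


def trace_per_span(method_name, messages):
    """opentracing-style: one span per call; stream messages become span logs."""
    return {
        "operation": method_name,
        "logs": [{"index": i, "message": m} for i, m in enumerate(messages)],
    }


stream = ["req-1", "req-2", "req-3"]
records = trace_per_message("multiply_stream", stream)
span = trace_per_span("multiply_stream", stream)

assert len(records) == 3       # three separate trace records
assert len(span["logs"]) == 3  # a single span containing all three messages
```

The second shape is what an opentracing-compatible tracer would produce: the stream is a single operation, so its messages end up inside one span rather than as independent records.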
You'll have to expand on this a bit for me. Nameko's AMQP-RPC is completely dynamic, whereas gRPC uses a predefined protobuf specification. I don't think there's any scope for defining the interface dynamically. Perhaps I'm misunderstanding you?
I've never actually done any work with type annotations. We aren't supporting Python 2.x so I don't see why we can't make use of this.
Any help would be great. Support for secure connections is something that would be helpful, and overall, testing and real-world use is the most significant thing that would move the project forwards.
Hi @mattbennett, thanks for your answers.
About the decorator, here's what I mean. Currently, you would declare a service like this:

```python
from nameko_grpc.entrypoint import Grpc

from proto.service_pb2 import MultiplyResponse
from proto.service_pb2_grpc import exampleStub

grpc = Grpc.implementing(exampleStub)


class ExampleService:
    name = "example"

    @grpc
    def multiply(self, request, context):
        result = request.number1 * (request.number2 or 2)
        return MultiplyResponse(result=result)
```

What I'm proposing is something like the following:

```python
from typing import Dict

from nameko_grpc.entrypoint import Grpc

from proto.service_pb2_grpc import exampleStub

grpc = Grpc.implementing(exampleStub)


class ExampleService:
    name = "example"

    @grpc
    def multiply(self, number1: int, number2: int = 2) -> Dict[str, int]:
        """
        Multiplies two numbers.

        :param number1: The first number to multiply.
        :param number2: The second number to multiply.
        :return: The product of number1 times number2.
        """
        result = number1 * number2
        return {"result": result}
```

I believe it should be achievable. The parameters can be passed by destructuring (or however this is called) the request object's content (as a dict) into keyword args:

```python
# wherever this is called...
returned_dict = example_service.multiply(**request.__dict__)
```

The response might be a little trickier, but should be achievable as well by finding the response class corresponding to the method:

```python
response_cls = get_response_class_somehow(method_name_or_something)
response_instance = response_cls(**returned_dict)
# then do whatever's done with the response instance
```

I see some big advantages to this method:
What do you think?
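As an aside on the mechanics: the unpack-then-construct pattern described above can be sketched with plain dataclasses standing in for the proto-generated classes. (Real protobuf messages don't expose their fields via `__dict__`, so an actual implementation would need to read the message descriptor instead; the class and field names below are purely illustrative.)

```python
from dataclasses import dataclass, fields


@dataclass
class MultiplyRequest:  # stand-in for the proto-generated request class
    number1: int
    number2: int = 2


@dataclass
class MultiplyResponse:  # stand-in for the proto-generated response class
    result: int


def multiply(number1: int, number2: int = 2) -> dict:
    # the plain-Python service method from the proposal
    return {"result": number1 * number2}


# entrypoint side: unpack the request fields into keyword arguments...
request = MultiplyRequest(number1=3, number2=4)
kwargs = {f.name: getattr(request, f.name) for f in fields(request)}
returned_dict = multiply(**kwargs)

# ...and rebuild the response object from the returned dict
response = MultiplyResponse(**returned_dict)
assert response.result == 12
```

The same field-name/value mapping exists for generated protobuf messages, just accessed through the descriptor rather than dataclass introspection.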
I should add that this isn't compatible with having the current decorator as well, either with a different name (ex.
Another thing that could be useful is being able to specify the port(s) as well. Ports can be secure and insecure, and there's apparently no limit in gRPC.
I see what you mean about the decorator now. You're breaking up the request schema into its components and exposing each one as a separate argument. It does look more similar to the Nameko AMQP RPC entrypoint, but I don't think it'll bring all the advantages you list. And/or there are other ways to achieve those advantages:
Protobuf message definitions can use nested types (example) so you'd still need to import proto-defined classes. You can only really expose the first "layer" in the method signature.
You could still use type hints on the proto-defined classes. You'd need this anyway for any nested types. I haven't looked, but I would be surprised if there wasn't gRPC support in some IDEs.
I think this is probably invalidated by the nested types.
Perhaps I'm misunderstanding the goal here, but doesn't protobuf give you this out of the box? Given the (protobuf) definition for a service, you can generate a client in whatever language you choose. And you can inspect the definition for the types and (I assume) docstrings.
Yep, this would be great :)
@mattbennett alright, makes sense. I've had some time to dive deeper into protobuf and I understand now why my idea doesn't really make sense. The only thing that could maybe be an improvement is removing the
Ideally this would be possible as well:
In a way in which both decorators work without causing a significant performance or memory impact (maybe by using some dependency injection pattern, or some kind of memoization). This would make unused

Thanks!
I considered this when building the prototype. Keeping it the same as standard gRPC helps the code look familiar and would make it easier to transition existing Python gRPC servers to Nameko services, for example. Plus if nameko-grpc removed it from the signature, we'd have to provide some alternative mechanism to access it. If you really wanted to remove
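For what it's worth, hiding the context argument without changing the entrypoint itself could also be done with a small wrapper decorator layered on top of the existing one. A minimal sketch (the `without_context` name is made up and is not part of nameko-grpc; the dict-based request is a stand-in for a real message object):

```python
import functools


def without_context(method):
    """Hypothetical wrapper: drop the gRPC context argument before
    calling the decorated service method."""
    @functools.wraps(method)
    def wrapper(self, request, context):
        return method(self, request)
    return wrapper


class ExampleService:
    # in a real service this would sit underneath the @grpc entrypoint decorator
    @without_context
    def multiply(self, request):
        return request["number1"] * request["number2"]


service = ExampleService()
# the caller (the entrypoint) still passes context; the method never sees it
result = service.multiply({"number1": 3, "number2": 4}, context=None)
assert result == 12
```

Methods that do need the context would simply skip the wrapper, so both signatures can coexist without any change to the entrypoint.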
Hello there, thanks for your work on Nameko :)
We want to use nameko + grpc in our company and we're wondering about the state of this project. We are still far from a production version ourselves so we don't really need nameko-grpc to be production ready right now, but we would like to know about the roadmap so that we can decide whether to use it or not.
If we do decide to use it we could report any bugs and suggestions we find along the way, possibly even contribute with PRs at some point.
Thank you!