Replies: 3 comments 4 replies
-
@erkist yes, this is a great topic for a discussion, and I think we should support both. In my head, because we are defining events here, you can do both even if you are not running a pipeline. The event-driven approach only needs a correlation key between all the events to aggregate the data related to the value being produced (an artifact, a service, or whatever the pipelines are producing). What about TaskRuns? If all TaskRuns share the same correlation ID, that can be used for the same purposes (whether that correlation ID is the pipelineRun id or just a correlation id, it should be technically the same). To your point, the ESCI Flow Run Id is exactly that: the correlation key shared between the different activities.
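To make the correlation idea concrete, here is a minimal sketch of aggregating events by a shared key. The event shapes, types, and the `correlationid` field name are all hypothetical illustrations, not anything defined by a spec; only the grouping idea matters:

```python
from collections import defaultdict

# Hypothetical events emitted by different activities (TaskRuns, builds,
# packaging). Field names here are invented for illustration only.
events = [
    {"type": "taskrun.started",   "correlationid": "run-1", "task": "build"},
    {"type": "taskrun.finished",  "correlationid": "run-1", "task": "build"},
    {"type": "taskrun.started",   "correlationid": "run-2", "task": "test"},
    {"type": "artifact.packaged", "correlationid": "run-1", "artifact": "app:1.0"},
]

def aggregate_by_correlation(events):
    """Group events that share a correlation key, regardless of whether
    that key is a pipelineRun id or a plain correlation id."""
    runs = defaultdict(list)
    for ev in events:
        runs[ev["correlationid"]].append(ev)
    return dict(runs)

runs = aggregate_by_correlation(events)
print(sorted(runs))        # ['run-1', 'run-2']
print(len(runs["run-1"]))  # 3
```

The consumer never needs to know whether a pipeline orchestrator exists; the shared key alone is enough to reconstruct what happened in one run.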
-
I think I understand what you mean by push and pull pipelines. In my mind, "pull pipelines" are async pipelines.
In the Jenkins example, the Jenkins pipeline plugin would be your pipeline orchestrator. The trigger event might be someone pushing a button, a cron trigger, or some other kind of event. Even if the orchestrator starts the execution of the pipeline, it does so as a reaction to an event of some kind. Once the pipeline is complete it may generate an event, and that might be the trigger for the next pipeline.
In the case of Tekton, Tekton is your pipeline orchestrator, which takes care of executing the various parts of the pipeline (tasks, in Tekton terminology). The execution of a pipeline in Tekton is initiated by submitting a PipelineRun.
"Push" pipelines usually have a static definition, which allows us to visualise and analyse them even before execution.
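As a sketch of that contrast, a push pipeline reduces to a static definition plus an orchestrator loop that drives every step itself. The task names and runner below are hypothetical, not taken from Jenkins or Tekton:

```python
# Static definition: this list exists before any execution, which is why
# a push pipeline can be visualised and analysed up front.
pipeline = ["checkout", "build", "test", "deploy"]

def run_pipeline(pipeline, run_task):
    """The orchestrator pushes every step in order; a step runs because
    the orchestrator invokes it, not because it observed an event."""
    results = []
    for task in pipeline:
        results.append((task, run_task(task)))
    return results

log = run_pipeline(pipeline, lambda task: f"{task}: ok")
print([t for t, _ in log])  # ['checkout', 'build', 'test', 'deploy']
```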
-
When reading https://github.com/cloudevents/spec/blob/v1.0.1/primer.md#cloudevents-concepts, and especially the following part:
I fail to see how a push pipeline would fit into this picture. For me, a push pipeline needs to know all the steps beforehand, and thus the different parts cannot be deployed independently. Does this make sense, and what do you think about it?
-
I'm having trouble painting a mental picture of what exactly can be run in a CD pipeline as defined in the Continuous Delivery bucket PR #17.
The PR states:
This matches quite well with a Jenkins pipeline, in my view, where a pipeline, which is runnable, consists of a set of stages (named Tasks in our spec).
I would call this a "push pipeline", i.e. there is a controlling system that starts an instance of the pipeline and executes everything in it, step by step.
There is an alternative way of driving pipelines, which I would call a "pull pipeline" or "event-driven pipeline", where the pipeline consists of a bunch of distributed runnable tasks which (at least conceptually) all wait for "some value to be produced" that should cause them to run. These "values produced" could be anything from an artifact being released, to a test suite failing, to a new environment having been created.
In these scenarios, there isn't really a pipeline running in any system, and it doesn't make sense to talk about a pipeline run being finished; instead, the focus is on what value has been produced and what activities are or have been run as part of the pipeline instance.
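A minimal sketch of this event-driven shape, assuming an in-memory event bus and made-up event types (nothing here is from Eiffel, ESCI, or the spec): no central runner exists, each task only subscribes to the "value produced" it cares about and may emit new events in turn.

```python
from collections import defaultdict

subscribers = defaultdict(list)  # event type -> tasks waiting on it
produced = []                    # record of every event emitted

def on(event_type):
    """Register a task to react to a given event type."""
    def register(task):
        subscribers[event_type].append(task)
        return task
    return register

def emit(event_type, payload):
    """Publish an event; any waiting tasks run as a reaction."""
    produced.append(event_type)
    for task in subscribers[event_type]:
        task(payload)

@on("artifact.released")
def run_test_suite(artifact):
    emit("testsuite.finished", {"artifact": artifact, "result": "passed"})

@on("testsuite.finished")
def deploy(report):
    emit("environment.updated", report["artifact"])

emit("artifact.released", "app:1.0")
print(produced)  # ['artifact.released', 'testsuite.finished', 'environment.updated']
```

Note there is no pipeline object anywhere: the "pipeline" is only visible after the fact, as the chain of events that were produced.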
Eiffel has support for this event-driven approach through various chaining concepts (e.g. CAUSE and CONTEXT); ESCI explicitly defines a Flow Run Id that is shared between all activities run as part of the same flow/pipeline.
I hope we can find a way to support both push and pull pipelines, but right now I can't figure out exactly what minimal parts are needed to support both. Does anyone have any suggestions?