
communication between micro processor and the workflow engine #3

Open · rsoika opened this issue Sep 9, 2024 · 6 comments

@rsoika (Member) commented Sep 9, 2024

Concept for how the communication between the microprocessor and the workflow engine should work:

  • The workflow engine should accept commands to receive a process definition (BPMN XML), start the process, stop the process, and accept external data that advances the execution of a task.
  • Use peer-to-peer computing and mesh networking to remove the Internet and/or WAN dependency within the walls of the facility.
@gmillinger (Collaborator) commented:

Addressing each point in separate comments.

-- Requirement:
The workflow engine should accept commands to receive a process definition (BPMN XML), start the process, stop the process, and accept external data that advances the execution of a task.

-- Explanation:
This explanation reflects how I have done this in the past; I would expect there are better ways to do it. It is important to keep the design constraints in mind when thinking this through: the solution must be lightweight, have minimal dependencies, run on computing resources such as a Raspberry Pi, and so on. This rules out some of the heavyweight enterprise application infrastructures/platforms. Simplicity is the key concept.

This breaks down into an architectural description. Each software component is a stand-alone running program, thought of as a service. For example, the workflow engine is a service that manages one or more running process definition instances and consumes and emits events using a publish-subscribe design pattern. The workflow engine is considered the hub, which exposes a WebSocket for the exchange of the event messages. Other clients or services can make a secure connection to the engine to receive and publish events. Examples of such client services are automation equipment and user interfaces such as a web page. The event messages can be considered "commands" such as "load bpmn xml", "start process", "stop process", and so on. The event messages also publish state changes of the process and the context data captured at the state change.
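
Purely as an illustration of that hub idea (not existing Imixs code), a minimal Jakarta WebSocket endpoint could look roughly like the sketch below; the endpoint path, the command strings, and the plain re-broadcast of messages are assumptions made for the example:

```java
import java.io.IOException;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;

import jakarta.websocket.OnClose;
import jakarta.websocket.OnMessage;
import jakarta.websocket.OnOpen;
import jakarta.websocket.Session;
import jakarta.websocket.server.ServerEndpoint;

// Sketch of the workflow engine acting as a publish-subscribe hub.
// Clients (automation equipment, web UIs) connect and exchange event
// messages such as "load bpmn xml", "start process", "stop process".
@ServerEndpoint("/engine/events")          // endpoint path is assumed
public class WorkflowEngineHub {

    private static final Set<Session> clients = new CopyOnWriteArraySet<>();

    @OnOpen
    public void onOpen(Session session) {
        clients.add(session);
    }

    @OnClose
    public void onClose(Session session) {
        clients.remove(session);
    }

    @OnMessage
    public void onMessage(String message, Session sender) throws IOException {
        // Interpret the message as a command for the engine
        // (parsing and the actual engine calls are omitted here) ...
        // ... and re-publish resulting events to all subscribers.
        for (Session client : clients) {
            if (client.isOpen()) {
                client.getBasicRemote().sendText(message);
            }
        }
    }
}
```

Dispatching the commands to the actual engine and filtering which subscribers receive which events is left out; the point is only the hub-and-spoke exchange of event messages over a single WebSocket.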

It may make more sense to do this locally with a type of plugin design pattern, but there is still a need to communicate more broadly with other workflow engines on the same network. This describes a decentralized approach with many workflow engines running on the same network, each on an individual computer such as a Raspberry Pi acting as the workstation. The same requirement exists within a centralized application architecture, but that is probably better served by a different method of communication or an enterprise infrastructure.

The most important thing is meeting the requirement. Whatever technical solution meets the requirements and constraints can be used.

@gmillinger (Collaborator) commented:

-- Requirement
Use of peer-to-peer computing and mesh networking to remove the Internet and/or WAN dependency within the walls of the facility.

-- Explanation
The hardware/computing architecture of our current platform has not changed much since 2005, yes, 20 years. It is a very traditional client/server setup, usually consisting of three server-grade computers.

The App Servers run Windows Terminal Services with workstation thin-client terminals using RDP for the UI. There are two App Servers that are exact duplicates, with software that keeps them identical. Both computers have redundant NICs and a RAID 5 storage system. Replicated MS SQL Server databases run on each App Server for workflow definitions, persistence, and historical process execution data. As you can see, the system is almost bullet-proof except for the network infrastructure within the manufacturing plant; the network is also very redundant but is the weakest link. The RDP session starts an instance of our workflow engine that is dedicated to the processes run at that workstation. A typical facility has between 20 and 50 workstations, in some cases more.

The third server is a Database Server with RAID 5 storage. The MS SQL Server is linked to the App Servers, and historical data is flushed to the Database Server at scheduled intervals and purged from the App Server DB. All historical reporting and analytics are performed on the Database Server.

You can see how this has many moving parts and can get very expensive. Nowadays the App and Database Servers are virtualized on-premises, but there are times when uptime does not meet the requirements.

The challenge: create an infrastructure with the same reliability and reduce the cost.

The experiment:

  • Set up a Raspberry Pi 4 (RPi) with 8 GB RAM and 32 GB microSD storage with a minimal Ubuntu install.
  • Configure the Raspberry Pi as an access point and add mesh networking capability so the devices can talk directly with each other. Use wired networking with a failover to wireless in case the wired network fails.
  • Build a very simple state machine that can execute a workflow defined with BPMN 2 but only supports tasks, gateways, events, and parallel paths (see the sketch after this list). Use Node.js as the runtime for the workflow state machine, which is a JavaScript module, and make it accessible through Express.js HTTP and WebSockets.
  • Use a NoSQL database such as CouchDB for workflow state and historical execution records.
  • The workflows are defined to run specifically on one workstation, and the tasks are not passed to other roles as they would be in a business process.
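
For illustration only, here is a much-reduced sketch of such a workflow state machine, written in Java here rather than as the JavaScript module the experiment actually used; BPMN parsing, parallel paths, and persistence are omitted, and all names are invented for the example:

```java
import java.util.List;
import java.util.Map;

// Minimal sketch: nodes are events, tasks, or gateways, and external
// data advances a single "token" from node to node. This is not the
// original experiment code, only an illustration of the idea.
public class MicroStateMachine {

    enum NodeType { EVENT, TASK, GATEWAY }

    record Node(String id, NodeType type, List<String> next) {}

    private final Map<String, Node> nodes;
    private String currentNodeId;   // position of the single token

    public MicroStateMachine(Map<String, Node> nodes, String startEventId) {
        this.nodes = nodes;
        this.currentNodeId = startEventId;
    }

    /** Advance the token with external data, e.g. from a workstation device. */
    public String advance(Map<String, String> data) {
        Node current = nodes.get(currentNodeId);
        String nextId = switch (current.type()) {
            // a gateway picks its outgoing path from the supplied data,
            // falling back to its first successor
            case GATEWAY -> data.getOrDefault("decision", current.next().get(0));
            // tasks and events simply move to their single successor
            default -> current.next().isEmpty() ? null : current.next().get(0);
        };
        if (nextId != null) {
            currentNodeId = nextId;
        }
        return currentNodeId;
    }

    public String currentNode() {
        return currentNodeId;
    }
}
```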

The result:

  • The experiment's hardware and software architecture outperformed the traditional client/server architecture and could scale with additional RPis costing about 80 USD each. Pre-configured microSD cards made adding or replacing workstations trivial.
  • The total number of workflow instances was many times higher than could be achieved with the App Servers.
  • The mesh networking and NoSQL database replication between all RPis created a highly redundant system.
  • All the software used was open source.
  • Cost savings were thousands of USD compared to the client/server architecture.

@rsoika (Member, Author) commented Sep 19, 2024

OK, I understand.

So we can think about an architecture like this:

  1. We have one Enterprise Workflow Engine that holds the meta models and controls the micro workflow engines. This engine can run on a modern Jakarta EE application server.
  2. We have one or many workstations (Raspberry Pi) running the micro workflow engine.

To set up a new workstation, a system engineer just needs to:

  1. Create a meta model with the BPMN modelling tool, defining a kind of human-centric meta process.
  2. Deploy the meta model on the Enterprise server. The meta model holds information about the workstation devices.
  3. Create a new Micro Workflow Model with the BPMN modelling tool.
  4. The Micro Model holds information about the endpoint of the Enterprise Workflow Engine and optional endpoints of other workstations.
  5. Next, build the Raspberry Pi image (with Maven) and burn it onto an SD card.
  6. Put the new SD card into the workstation (Raspberry Pi) and reboot the device.

What happens now (in my vision):

  1. Imixs-Micro starts and automatically loads all local BPMN Micro Workflow Models from the SD card.
  2. According to the model definition, the micro engine automatically connects to the Enterprise Workflow Engine (via WebSockets) and says 'hello' (see the sketch after this list).
  3. Now the workstation is ready to accept commands over the WebSocket, typically something like 'start workflow 1.0.0 '.
  4. Optionally, the workstation also says 'hello' to other workstations in the network.
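
A rough sketch of steps 2 and 3 from the workstation's side, assuming a plain Jakarta WebSocket client; the endpoint URI, the workstation id, and the message format are assumptions, not the actual Imixs-Micro protocol:

```java
import java.io.IOException;
import java.net.URI;
import java.util.concurrent.CountDownLatch;

import jakarta.websocket.ClientEndpoint;
import jakarta.websocket.ContainerProvider;
import jakarta.websocket.OnMessage;
import jakarta.websocket.OnOpen;
import jakarta.websocket.Session;
import jakarta.websocket.WebSocketContainer;

// Sketch of a workstation (micro engine) registering itself with the
// Enterprise Workflow Engine and waiting for commands.
@ClientEndpoint
public class MicroEngineClient {

    @OnOpen
    public void onOpen(Session session) {
        try {
            // step 2: say 'hello' and announce this workstation
            session.getBasicRemote().sendText("hello workstation-01");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @OnMessage
    public void onMessage(String command, Session session) {
        // step 3: react to commands such as "start workflow 1.0.0"
        System.out.println("received command: " + command);
    }

    public static void main(String[] args) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        // URI of the Enterprise Workflow Engine -- assumed here; in the vision
        // it would be taken from the Micro Workflow Model on the SD card
        container.connectToServer(MicroEngineClient.class,
                URI.create("ws://enterprise-engine:8080/engine/events"));
        new CountDownLatch(1).await();   // keep the client process running
    }
}
```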

[image]

The 'Create' event holds a SignalAdapter (this is what you would expect in a task) that calls the Micro Controller. The configuration is done in the BPMN event details. For example, it may look like this:

[image]

Each time the micro workflow completes, it triggers an event on the Enterprise Workflow Engine with optional metadata.

rsoika added a commit that referenced this issue Sep 21, 2024
@rsoika (Member, Author) commented Sep 21, 2024

Hi @gmillinger, I think I have now found a working architecture.
At its core, I think we need to distinguish between two different kinds of message flows:

  • the Meta Process <-> Workstation communication
  • the Workstation <-> Workstation communication

I created a new architecture overview document here

https://github.com/imixs/imixs-micro/blob/main/README.md

@gmillinger (Collaborator) commented:

Hi @rsoika, the thought process looks really good. I am going to take some time to run the concept through my use cases to flesh out details. I have a list of tasks for this project; I will add a task to formalize the use cases and add them to the project documents.

@rsoika (Member, Author) commented Sep 22, 2024

I have now also added JUnit tests to verify the WebSocket behavior.
In addition, I will add a Docker Compose file to simulate the complete technical setup...
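
Purely as a hypothetical illustration of such a test (not the actual tests added to the repository), a JUnit 5 WebSocket check could look like this; the endpoint URI and the command string are assumptions:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.net.URI;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import org.junit.jupiter.api.Test;

import jakarta.websocket.ClientEndpoint;
import jakarta.websocket.ContainerProvider;
import jakarta.websocket.OnMessage;
import jakarta.websocket.Session;

// Hypothetical JUnit 5 test: connect to a locally running engine endpoint,
// send a command, and wait for any event to come back.
public class WebSocketBehaviorTest {

    private static final CountDownLatch received = new CountDownLatch(1);

    @ClientEndpoint
    public static class TestClient {
        @OnMessage
        public void onMessage(String message) {
            received.countDown();   // any published event counts as success
        }
    }

    @Test
    public void engineShouldAnswerCommands() throws Exception {
        Session session = ContainerProvider.getWebSocketContainer()
                .connectToServer(TestClient.class,
                        URI.create("ws://localhost:8080/engine/events"));
        session.getBasicRemote().sendText("start process");
        assertTrue(received.await(5, TimeUnit.SECONDS),
                "expected an event from the engine within 5 seconds");
        session.close();
    }
}
```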
