- Map<String, Object> outEventMap = event.getSubset(fieldsToSend).getRaw();
- try {
- String json = new String(dataFormatDefinition.fromMap(outEventMap));
- Request.Post(REST_ENDPOINT_URI).body(new StringEntity(json, Charsets.UTF_8)).execute();
- } catch (SpRuntimeException e) {
- LOG.error("Could not parse incoming event");
- } catch (IOException e) {
- LOG.error("Could not reach endpoint at {}", REST_ENDPOINT_URI);
- }
- }
-
- @Override
- public void onDetach() throws SpRuntimeException {
-
- }
-}
-
-```
-The only class variable you need to change right now is ``REST_ENDPOINT_URI``. Change this URL to the one provided by your request bin.
-In the ``onEvent`` method, we use a helper method to get a subset of the incoming event.
-Finally, we convert the resulting ``Map`` to a JSON string and call the endpoint.
-
-
-## Preparing the service
-The final step is to register the sink as a pipeline element.
-
-Go to the class `Init` and register the sink:
-```java
-.registerPipelineElement(new RestSink())
-```
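-
-For context, this call is part of the service definition in ``Init``. A minimal sketch (identifiers follow the archetype defaults; your generated class may register additional elements, formats and protocols):
-```java
-@Override
-public SpServiceDefinition provideServiceDefinition() {
-  return SpServiceDefinitionBuilder.create("org.apache.streampipes",
-      "human-readable service name",
-      "human-readable service description", 8090)
-      .registerPipelineElement(new RestSink())
-      .build();
-}
-```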
-
-## Starting the service
-
-
-Tip
-
-Once you start the service, it will register itself in StreamPipes using its hostname. The hostname is auto-discovered and should work out of the box.
-In some cases, the detected hostname is not resolvable from within a container (where the core is running). In this case, provide an SP_HOST environment variable to override the auto-discovery.
-
-
-
-
-
-
-Tip
-
-The default port of all pipeline element services as defined in the `create` method is port 8090.
-If you'd like to run multiple services at the same time on your development machine, change the port here. As an alternative, you can provide an environment variable `SP_PORT` which overrides the port setting. This is useful for using different configs for dev and prod environments.
-
-
-
-Now we are ready to start our service!
-
-Configure your IDE to provide an environment variable called ``SP_DEBUG`` with value ``true`` when starting the project.
-
-Execute the main method in the class `Init` we've just created. The service automatically registers itself in StreamPipes.
-
-To install the created element, open the StreamPipes UI and follow the manual provided in the [user guide](03_use-install-pipeline-elements.md).
-
-## Read more
-
-Congratulations! You've just created your first data sink for StreamPipes.
-There are many more things to explore and data sinks can be defined in much more detail using multiple wrappers.
-Follow our [SDK guide](../dev-guide-sdk-guide-sinks) to see what's possible!
diff --git a/website-v2/versioned_docs/version-0.70.0/06_extend-tutorial-data-sources.md b/website-v2/versioned_docs/version-0.70.0/06_extend-tutorial-data-sources.md
deleted file mode 100644
index a2d95f094..000000000
--- a/website-v2/versioned_docs/version-0.70.0/06_extend-tutorial-data-sources.md
+++ /dev/null
@@ -1,214 +0,0 @@
----
-id: extend-tutorial-data-sources
-title: "Tutorial: Data Sources"
-sidebar_label: "Tutorial: Data Sources"
-original_id: extend-tutorial-data-sources
----
-
-In this tutorial, we will add a new data source consisting of a single data stream. The source will be provided as a standalone component (i.e., the description will be accessible through an integrated web server).
-
-## Objective
-
-We are going to create a new data stream that is produced by a GPS sensor installed in a delivery vehicle.
-The sensor produces a continuous stream of events that contain the current timestamp, the current lat/lng position of the vehicle and the plate number of the vehicle.
-Events are published in a JSON format as follows:
-```json
-{
- "timestamp" : 145838399,
- "latitude" : 37.04,
- "longitude" : 17.04,
- "plateNumber" : "KA-AB 123"
-}
-```
-
-These events are published to a Kafka broker using the topic `org.streampipes.tutorial.vehicle`.
-
-In the following section, we show how to describe this stream in a form that allows you to import and use it in StreamPipes.
-
-## Project setup
-
-Instead of creating a new project from scratch, we recommend using the Maven archetype to create a new project skeleton (streampipes-archetype-extensions-jvm).
-Enter the following command in a command line of your choice (Apache Maven needs to be installed):
-
-```
-mvn archetype:generate \
--DarchetypeGroupId=org.apache.streampipes -DarchetypeArtifactId=streampipes-archetype-extensions-jvm \
--DarchetypeVersion=0.69.0 -DgroupId=my.groupId \
--DartifactId=my-source -DclassNamePrefix=MySource -DpackageName=mypackagename
-```
-
-You will see a project structure similar to the structure shown in the [archetypes](06_extend-archetypes.md) section.
-
-
-
-Tip
-
-Besides the basic project skeleton, the sample project also includes an example Dockerfile you can use to package your application into a Docker container.
-
-
-
-## Adding a data stream description
-
-Now we will add a new data stream definition.
-First, create a new class `MyVehicleStream` which should look as follows:
-
-```java
-
-package org.apache.streampipes.pe.example;
-
-import org.apache.streampipes.model.SpDataStream;
-import org.apache.streampipes.sources.AbstractAdapterIncludedStream;
-
-public class MyVehicleStream extends AbstractAdapterIncludedStream {
-
- @Override
- public SpDataStream declareModel() {
- return null;
- }
-
- @Override
- public void executeStream() {
-
- }
-}
-```
-
-This class extends the class ``AbstractAdapterIncludedStream``, which indicates that this source continuously produces data (configured in the ``executeStream()`` method).
-In contrast, the class `AbstractAlreadyExistingStream` indicates that we only want to describe an already existing stream (e.g., a stream that already sends data to an existing Kafka broker).
-
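-For comparison, a minimal sketch of the alternative case (the class name is illustrative; it assumes ``AbstractAlreadyExistingStream`` only requires the model definition, since no data needs to be produced):
-```java
-public class MyExistingVehicleStream extends AbstractAlreadyExistingStream {
-
-  @Override
-  public SpDataStream declareModel() {
-    // Only describe the stream here; the data is already produced elsewhere.
-    return null;
-  }
-}
-```
-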
-Next, we will add the definition of the data stream. Add the following code inside of the `declareModel` method:
-```java
-return DataStreamBuilder.create("org.streampipes.tutorial.vehicle.position", "Vehicle Position", "An event stream " +
- "that produces current vehicle positions")
-```
-
-This line creates a new instance of the SDK's `DataStreamBuilder` by providing three basic parameters:
-The first parameter must be a unique identifier of your data stream.
-The second and third parameters indicate a label and a description of your stream.
-These values will later be used in the StreamPipes UI to display stream details in a human-readable manner.
-
-Next, we will add the properties as stated above to the stream definition by adding the following lines:
-```java
-.property(EpProperties.timestampProperty("timestamp"))
-.property(EpProperties.stringEp(Labels.from("plate-number", "Plate Number", "Denotes the plate number of the vehicle"), "plateNumber", "http://my.company/plateNumber"))
-.property(EpProperties.doubleEp(Labels.from("latitude", "Latitude", "Denotes the latitude value of the vehicle's position"), "latitude", Geo.lat))
-.property(EpProperties.doubleEp(Labels.from("longitude", "Longitude", "Denotes the longitude value of the vehicle's position"), "longitude", Geo.lng))
-```
-These four _event properties_ compose our _event schema_. An event property must, at least, provide the following attributes:
-
-* **Runtime Name**. The runtime name indicates the key of the property at runtime, e.g., if our JSON message contains a structure such as `{"plateNumber" : "KA-F 123"}`, the runtime name must be `plateNumber`.
-* **Runtime Type**. An event property must have a primitive type (we will later see how to model more complex properties such as lists and nested properties).
- The type must be one of the `XMLSchema` primitive types; however, the SDK provides convenience methods to set the property type.
-* **Domain Property**. The domain property indicates the semantics of the event property. For instance, the `latitude` property is linked to the `http://www.w3.org/2003/01/geo/wgs84_pos#lat` property of the WGS84 vocabulary.
- The domain property should be a URI that is part of an existing or domain-specific vocabulary. The SDK provides convenience methods for popular vocabularies (e.g., Schema.org, Dolce or WGS84).
-
-In order to complete the minimum required specification of an event stream, we need to provide information on the transport format and protocol of the data stream at runtime.
-
-This can be achieved by extending the builder with the respective properties:
-```java
-.format(Formats.jsonFormat())
-.protocol(Protocols.kafka("localhost", 9094, "TOPIC_SHOULD_BE_CHANGED"))
-.build();
-```
-
-Set ``org.streampipes.tutorial.vehicle`` as your new topic by replacing the term ``TOPIC_SHOULD_BE_CHANGED``.
-
-In this example, we defined that the data stream consists of events in a JSON format and that Kafka is used as a message broker to transmit events.
-The last ``build()`` method call triggers the construction of the data stream definition.
-
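-Putting the snippets together, the completed ``declareModel`` method looks as follows (imports of the SDK classes ``DataStreamBuilder``, ``EpProperties``, ``Labels``, ``Formats``, ``Protocols`` and ``Geo`` are omitted):
-```java
-@Override
-public SpDataStream declareModel() {
-  return DataStreamBuilder.create("org.streampipes.tutorial.vehicle.position", "Vehicle Position",
-      "An event stream that produces current vehicle positions")
-      .property(EpProperties.timestampProperty("timestamp"))
-      .property(EpProperties.stringEp(Labels.from("plate-number", "Plate Number",
-          "Denotes the plate number of the vehicle"), "plateNumber", "http://my.company/plateNumber"))
-      .property(EpProperties.doubleEp(Labels.from("latitude", "Latitude",
-          "Denotes the latitude value of the vehicle's position"), "latitude", Geo.lat))
-      .property(EpProperties.doubleEp(Labels.from("longitude", "Longitude",
-          "Denotes the longitude value of the vehicle's position"), "longitude", Geo.lng))
-      .format(Formats.jsonFormat())
-      .protocol(Protocols.kafka("localhost", 9094, "org.streampipes.tutorial.vehicle"))
-      .build();
-}
-```
-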
-That's it! In the next section, we will connect the data stream to a source and inspect the generated RDF description.
-
-## Creating some dummy data
-
-Let's assume our stream should produce some random values that are sent to StreamPipes. We'll add a very simple data simulator to the ``executeStream`` method as follows:
-
-```java
-@Override
- public void executeStream() {
-
- SpKafkaProducer producer = new SpKafkaProducer("localhost:9094", "my-topic", Collections.emptyList());
- Random random = new Random();
- Runnable runnable = () -> {
- for (;;) {
- JsonObject jsonObject = new JsonObject();
- jsonObject.addProperty("timestamp", System.currentTimeMillis());
- jsonObject.addProperty("plateNumber", "KA-FZ 1");
- jsonObject.addProperty("latitude", random.nextDouble());
- jsonObject.addProperty("longitude", random.nextDouble());
-
- producer.publish(jsonObject.toString());
-
- try {
- TimeUnit.SECONDS.sleep(1);
- } catch (InterruptedException e) {
- e.printStackTrace();
- }
-
- }
- };
-
- new Thread(runnable).start();
- }
-```
-
-Change the topic and the URL of your Kafka broker so that they match the values defined in the stream description.
-
-## Registering the data stream
-
-You need to register the stream in the service definition. Open the ``Init`` class and register the ``MyVehicleStream``:
-
-```java
-
- @Override
- public SpServiceDefinition provideServiceDefinition() {
- return SpServiceDefinitionBuilder.create("org.apache.streampipes",
- "human-readable service name",
- "human-readable service description", 8090)
- .registerPipelineElement(new ExampleDataProcessor())
- .registerPipelineElement(new ExampleDataSink())
- .registerPipelineElement(new MyVehicleStream())
- .registerMessagingFormats(
- new JsonDataFormatFactory(),
- new CborDataFormatFactory(),
- new SmileDataFormatFactory(),
- new FstDataFormatFactory())
- .registerMessagingProtocols(
- new SpKafkaProtocolFactory(),
- new SpJmsProtocolFactory(),
- new SpMqttProtocolFactory())
- .build();
- }
-
-```
-
-You can remove the other two example classes if you want.
-
-## Starting the service
-
-
-
-Tip
-
-Once you start the service, it will register itself in StreamPipes using its hostname. The hostname is auto-discovered and should work out of the box.
-In some cases, the detected hostname is not resolvable from within a container (where the core is running). In this case, provide an SP_HOST environment variable to override the auto-discovery.
-
-
-
-Now we are ready to start our first container!
-
-Execute the main method in the class `Init`, open a web browser and navigate to http://localhost:8090, or change the port according to the value of the ``SP_PORT`` variable in the env file.
-
-Configure your IDE to provide an environment variable called ``SP_DEBUG`` with value ``true`` when starting the project.
-
-You should see something as follows:
-
-
-
-Click on the link of the data source to see the generated description of the pipeline element.
-
-
-
-The container automatically registers itself in StreamPipes.
-
-To install the newly created element, open the StreamPipes UI and install the source via the ``Install Pipeline Elements`` section.
-
-## Read more
-
-Congratulations! You've just created your first pipeline element for StreamPipes.
-There are many more things to explore and data sources can be defined in much more detail.
diff --git a/website-v2/versioned_docs/version-0.70.0/07_technicals-architecture.md b/website-v2/versioned_docs/version-0.70.0/07_technicals-architecture.md
deleted file mode 100644
index 4ef1a54f4..000000000
--- a/website-v2/versioned_docs/version-0.70.0/07_technicals-architecture.md
+++ /dev/null
@@ -1,63 +0,0 @@
----
-id: technicals-architecture
-title: Architecture
-sidebar_label: Architecture
-original_id: technicals-architecture
----
-
-
-The following picture illustrates the high-level architecture of StreamPipes:
-
-
-
-Users mainly interact with the _Pipeline Editor_ (besides other UI components) to create stream processing pipelines based on data streams, data processors and data sinks.
-These reusable pipeline elements are provided by self-contained _pipeline element containers_, each of them having a semantic description that specifies their characteristics (e.g., input, output and required user input for data processors).
-Each pipeline element container has a REST endpoint that provides these characteristics as a JSON-LD document.
-
-Pipeline element containers are built using one of several provided _wrappers_.
-Wrappers abstract from the underlying runtime stream processing framework.
-Currently, the StreamPipes framework provides wrappers for Apache Flink, Esper and algorithms running directly on the JVM.
-
-The _pipeline manager_ manages the definition and execution of pipelines.
-When creating pipelines, the manager continuously matches the pipeline against its semantic description and provides user guidance in the form of recommendations.
-Once a pipeline is started, the pipeline manager invokes the corresponding pipeline element containers.
-The container prepares the actual execution logic and submits the program to the underlying execution engine, e.g., the program is deployed in the Apache Flink cluster.
-
-Pipeline elements exchange data using one or more message brokers and protocols (e.g., Kafka or MQTT).
-StreamPipes does not rely on a specific broker or message format, but negotiates suitable brokers based on the capabilities of connected pipeline elements.
-
-Thus, StreamPipes provides a higher-level abstraction of existing stream processing technology, enabling domain experts to create streaming analytics pipelines in a self-service manner.
-
-## Semantic description
-Pipeline elements in StreamPipes are meant to be reusable:
-
-* Data processors and data sinks are generic (or domain-specific) elements that express their requirements and are able to operate on any stream that satisfies these requirements.
-* Data processors and data sinks can be manually configured by offering possible configuration parameters which users can individually define when creating pipelines.
-* Data streams can be connected to any data processor or data sink that matches the capabilities of the stream.
-
-When users create pipelines by connecting a data stream with a data processor (or further processors), the pipeline manager _matches_ the input stream of a data processor against its requirements.
-This matching is performed based on the _semantic description_ of each element.
-The semantic description (technically an RDF graph serialized as JSON-LD) can be best understood by seeing it as an envelope around a pipeline element.
-It only provides metadata information, while we don't rely on any RDF at runtime for exchanging events between pipeline elements.
-While RDF-based metadata ensures good understanding of stream capabilities, lightweight event formats at runtime (such as JSON or Thrift) ensure fast processing of events.
-
-Let's look at an example stream that produces a continuous stream of vehicle positions as illustrated below:
-
-
-
-While the runtime layer produces plain JSON by submitting actual values of the position and the vehicle's plate number, the description layer describes various characteristics of the stream:
-For instance, it defines the event schema (including, besides the data type and the runtime name of each property, a more fine-grained meaning of the property), quality aspects (e.g., the measurement unit of a property or the frequency) and the grounding (e.g., the format used at runtime and the communication protocol used for transmitting events).
-
-The same accounts for data processors and data sinks:
-
-
-
-Data processors (and, with some differences, data sinks) are annotated by providing metadata information on their required input and output.
-For instance, we can define minimum schema requirements (such as geospatial coordinates that need to be provided by any stream that is connected to a processor), but also required (minimum or maximum) quality levels and supported transport protocols and formats.
-In addition, the semantic description provides the required configuration parameters that users can define during the pipeline definition process.
-
-Once new pipeline elements are imported into StreamPipes, we store all information provided by the description layer in a central repository and use this information to guide users through the pipeline definition process.
-
-Don't worry - you will never be required to model RDF by yourself.
-Our SDK provides convenience methods that help create the description automatically.
-
diff --git a/website-v2/versioned_docs/version-0.70.0/07_technicals-configuration.md b/website-v2/versioned_docs/version-0.70.0/07_technicals-configuration.md
deleted file mode 100644
index 459909ac6..000000000
--- a/website-v2/versioned_docs/version-0.70.0/07_technicals-configuration.md
+++ /dev/null
@@ -1,59 +0,0 @@
----
-id: technicals-configuration
-title: Configuration
-sidebar_label: Configuration
-original_id: technicals-configuration
----
-
-On this page we explain how the StreamPipes configuration works.
-StreamPipes allows the individual services (pipeline element containers and third-party services) to store configuration parameters in a distributed key-value store.
-This has the advantage that individual services do not need to store any configurations on the local file system, enabling us to run containers anywhere.
-As a key-value store, we use [Consul](https://www.consul.io/), which is an essential service for all our services.
-
-
-
-
-## Edit Configurations
-All services in StreamPipes can have configuration parameters.
-You can either change them in the Consul user interface (which by default runs on port 8500) or directly in the StreamPipes Configurations Page.
-Once a new pipeline element container is started, it is registered in Consul and the parameters can be edited in the configuration page, as shown below.
-To store changes in Consul, the update button must be clicked.
-
-
-
-
-
-## Configuration for Developers
-We provide a Configurations API for the use of configuration parameters in your services.
-Each processing element project has a “config” package [[Example]](https://github.com/apache/streampipes-extensions/tree/dev/streampipes-sinks-internal-jvm/src/main/java/org/streampipes/sinks/internal/jvm/config).
-This package usually contains two classes.
-One containing unique keys for the configuration values and one containing the getter and setter methods to access these values.
-For the naming of configuration keys, we recommend using “SP” as a prefix.
-As we explain later, it is possible to set default configurations as environment variables; this prefix makes them unique on your server.
-A configuration entry needs a unique config key. For this key, a value can be specified containing the configuration, for example the port number of the service.
-For each configuration, a description explaining the parameter can be provided; furthermore, the data type must be specified and whether it is a password or not.
-Below, the schema of a configuration item is shown on the left and an example of a port configuration on the right.
-
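-As a sketch, the keys class of this pattern could look as follows (all names are illustrative, not taken from an actual project):
-```java
-public class MySinkConfigKeys {
-  // Unique keys, prefixed with "SP" so that the matching environment
-  // variables are unique on the server.
-  public static final String HOST = "SP_MY_SINK_HOST";
-  public static final String PORT = "SP_MY_SINK_PORT";
-  public static final String SERVICE_NAME = "SP_MY_SINK_SERVICE_NAME";
-}
-```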
-
-
-As a developer, you can add as many new configurations to services as you wish, but there are some that are required for all processing element containers.
-Those are **the host**, **the port**, and **the name** of the service.
-
-## Default Values
-You can provide default values for the configurations, which are used when a configuration is read for the first time.
-The first option is to register a configuration parameter in the Config class.
-This is a fallback value, which is used if nothing else is defined.
-Since this value is static, we offer a second option.
-It is possible to provide a default value by setting an environment variable.
-In this case, the convention is that the key of a configuration parameter must be used as the environment variable.
-Now, this value is used instead of the value defined in the Config class.
-During development, the configuration values often need to be changed for debugging purposes; therefore, we provide an .env file in all processing element projects and archetypes.
-This file can be used by your IDE to set the environment variables (e.g., via the [Intellij Plugin](https://plugins.jetbrains.com/plugin/7861-envfile)).
-When you need to change a variable at runtime, you can do this in the StreamPipes configurations as explained before.
-Those changes take effect immediately, without requiring a container restart.
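-
-The resulting lookup order can be sketched in plain Java (illustrative only; the actual resolution is handled by the Configurations API):
-```java
-public class ConfigLookupSketch {
-  // Order: (1) a value already stored in Consul wins and can be changed at
-  // runtime, (2) otherwise an environment variable named after the config
-  // key is used, (3) otherwise the static fallback from the Config class.
-  static String resolvePort() {
-    String port = System.getenv("SP_MY_SINK_PORT"); // (2) environment variable
-    return port != null ? port : "8090";            // (3) static default
-  }
-}
-```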
-
-
-
-Installed pipeline elements
-
-Be cautious: when a configuration is used in the semantic description of a processing element that is already installed in StreamPipes, you have to reload this element in StreamPipes (My Elements -> Reload).
-In addition, changes might affect already running pipelines.
-
diff --git a/website-v2/versioned_docs/version-0.70.0/07_technicals-messaging.md b/website-v2/versioned_docs/version-0.70.0/07_technicals-messaging.md
deleted file mode 100644
index 64d9a2ef8..000000000
--- a/website-v2/versioned_docs/version-0.70.0/07_technicals-messaging.md
+++ /dev/null
@@ -1,8 +0,0 @@
----
-id: technicals-messaging
-title: Messaging
-sidebar_label: Messaging
-original_id: technicals-messaging
----
-
-tbd
\ No newline at end of file
diff --git a/website-v2/versioned_docs/version-0.70.0/07_technicals-runtime-wrappers.md b/website-v2/versioned_docs/version-0.70.0/07_technicals-runtime-wrappers.md
deleted file mode 100644
index dedc3ee18..000000000
--- a/website-v2/versioned_docs/version-0.70.0/07_technicals-runtime-wrappers.md
+++ /dev/null
@@ -1,8 +0,0 @@
----
-id: technicals-runtime-wrappers
-title: Runtime Wrappers
-sidebar_label: Runtime Wrappers
-original_id: technicals-runtime-wrappers
----
-
-tbd
\ No newline at end of file
diff --git a/website-v2/versioned_docs/version-0.70.0/dev-guide-archetype.md b/website-v2/versioned_docs/version-0.70.0/dev-guide-archetype.md
deleted file mode 100644
index 6b2486911..000000000
--- a/website-v2/versioned_docs/version-0.70.0/dev-guide-archetype.md
+++ /dev/null
@@ -1,7 +0,0 @@
----
-id: dev-guide-archetype
-title: Start Developing
-sidebar_label: Start Developing
-original_id: dev-guide-archetype
----
-
diff --git a/website-v2/versioned_docs/version-0.70.0/dev-guide-processor-sdk.md b/website-v2/versioned_docs/version-0.70.0/dev-guide-processor-sdk.md
deleted file mode 100644
index 5ceca4bd7..000000000
--- a/website-v2/versioned_docs/version-0.70.0/dev-guide-processor-sdk.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-id: dev-guide-sdk-guide-processors
-title: "SDK Guide: Data Processors"
-sidebar_label: "SDK Guide: Data Processors"
-original_id: dev-guide-sdk-guide-processors
----
-
-## Project Setup
-(coming soon, please check the [tutorial](../dev-guide-tutorial-processors) to learn how to define data processors)
-
-## SDK reference
-The complete SDK reference for defining data processors will follow soon. Please check the SDK's Javadoc for now!
diff --git a/website-v2/versioned_docs/version-0.70.0/dev-guide-sink-sdk.md b/website-v2/versioned_docs/version-0.70.0/dev-guide-sink-sdk.md
deleted file mode 100644
index d2e253441..000000000
--- a/website-v2/versioned_docs/version-0.70.0/dev-guide-sink-sdk.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-id: dev-guide-sdk-guide-sinks
-title: "SDK Guide: Data Sinks"
-sidebar_label: "SDK Guide: Data Sinks"
-original_id: dev-guide-sdk-guide-sinks
----
-
-## Project Setup
-(coming soon, please check the [tutorial](../dev-guide-tutorial-processors) to learn how to define sinks)
-
-## SDK reference
-The complete SDK reference for defining data sinks will follow soon. Please check the SDK's Javadoc for now!
\ No newline at end of file
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.coindesk.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.coindesk.md
deleted file mode 100644
index 7d5e9b8e8..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.coindesk.md
+++ /dev/null
@@ -1,45 +0,0 @@
----
-id: org.apache.streampipes.connect.adapters.coindesk
-title: Coindesk Bitcoin Price
-sidebar_label: Coindesk Bitcoin Price
-original_id: org.apache.streampipes.connect.adapters.coindesk
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-This adapter continuously provides the current Bitcoin price from the Coindesk API.
-
-## Configuration
-
-### Currency
-
-The currency in which the price should be provided.
-
-
-***
-
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.flic.mqtt.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.flic.mqtt.md
deleted file mode 100644
index 4eb8ec0dd..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.flic.mqtt.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-id: org.apache.streampipes.connect.adapters.flic.mqtt
-title: Flic MQTT
-sidebar_label: Flic MQTT
-original_id: org.apache.streampipes.connect.adapters.flic.mqtt
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Connect Flic Smart Button over MQTT
-
-***
-
-## Required input
-
-This adapter uses the MQTT protocol and requires the data in the following exemplary JSON format:
-`{ "timestamp": 1584973344615, "click_type": "SINGLE", "button_id": "button1" }`.
-***
-
-## Configuration
-
-### Broker URL
-
-Example: tcp://test-server.com:1883 (protocol and port are required)
-
-### Access Mode
-
-The user can choose between unauthenticated or authenticated access.
-
-### Topic
-The topic the MQTT broker publishes to.
-
-## Output
-
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.iex.stocks.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.iex.stocks.md
deleted file mode 100644
index a3ea09745..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.iex.stocks.md
+++ /dev/null
@@ -1,48 +0,0 @@
----
-id: org.apache.streampipes.connect.adapters.iex.stocks
-title: IEX Cloud Stock Quotes
-sidebar_label: IEX Cloud Stock Quotes
-original_id: org.apache.streampipes.connect.adapters.iex.stocks
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-This adapter provides stock quote events from the IEXCloud API. An API key from IEXCloud is required.
-Visit IEX Cloud for more info.
-
-***
-
-## Configuration
-
-
-### API Token
-A valid API token from the IEXCloud API.
-
-### Stock Symbol
-A stock symbol that should be monitored.
-
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.image.set.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.image.set.md
deleted file mode 100644
index f9c910c4d..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.image.set.md
+++ /dev/null
@@ -1,39 +0,0 @@
----
-id: org.apache.streampipes.connect.adapters.image.set
-title: Image Upload (Set)
-sidebar_label: Image Upload (Set)
-original_id: org.apache.streampipes.connect.adapters.image.set
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-Upload a zip file of images and create an event per image
-
-
-***
-
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.mysql.set.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.mysql.set.md
deleted file mode 100644
index 4ccecc38e..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.mysql.set.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-id: org.apache.streampipes.connect.adapters.mysql.set
-title: MySQL Set Adapter
-sidebar_label: MySQL Set Adapter
-original_id: org.apache.streampipes.connect.adapters.mysql.set
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Creates a data set from an SQL table
-
-
-***
-
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.mysql.stream.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.mysql.stream.md
deleted file mode 100644
index 594d70d11..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.mysql.stream.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-id: org.apache.streampipes.connect.adapters.mysql.stream
-title: MySql Stream Adapter
-sidebar_label: MySql Stream Adapter
-original_id: org.apache.streampipes.connect.adapters.mysql.stream
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Creates a data stream from an SQL table
-
-
-***
-
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.nswaustralia.trafficcamera.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.nswaustralia.trafficcamera.md
deleted file mode 100644
index 98d33411c..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.nswaustralia.trafficcamera.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-id: org.apache.streampipes.connect.adapters.nswaustralia.trafficcamera
-title: NSW Traffic Cameras
-sidebar_label: NSW Traffic Cameras
-original_id: org.apache.streampipes.connect.adapters.nswaustralia.trafficcamera
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Traffic camera images produced by NSW Australia
-
-
-***
-
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.plc4x.modbus.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.plc4x.modbus.md
deleted file mode 100644
index ddc8a8ee4..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.plc4x.modbus.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-id: org.apache.streampipes.connect.adapters.plc4x.modbus
-title: PLC4X MODBUS
-sidebar_label: PLC4X MODBUS
-original_id: org.apache.streampipes.connect.adapters.plc4x.modbus
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Connects to a PLC via the Modbus protocol.
-
-
-***
-
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.plc4x.s7.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.plc4x.s7.md
deleted file mode 100644
index 9a94da164..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.plc4x.s7.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-id: org.apache.streampipes.connect.adapters.plc4x.s7
-title: PLC4X S7
-sidebar_label: PLC4X S7
-original_id: org.apache.streampipes.connect.adapters.plc4x.s7
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Connects to a Siemens S7 PLC.
-
-
-***
-
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.simulator.machine.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.simulator.machine.md
deleted file mode 100644
index a9c4fae12..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.simulator.machine.md
+++ /dev/null
@@ -1,41 +0,0 @@
----
-id: org.apache.streampipes.connect.adapters.simulator.machine
-title: Machine Data Simulator
-sidebar_label: Machine Data Simulator
-original_id: org.apache.streampipes.connect.adapters.simulator.machine
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Publishes various simulated machine sensor data at a configurable time interval (in milliseconds).
-Sensors are:
-* flowrate
-* pressure
-* waterlevel
-***
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.simulator.randomdataset.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.simulator.randomdataset.md
deleted file mode 100644
index 248070299..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.simulator.randomdataset.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-id: org.apache.streampipes.connect.adapters.simulator.randomdataset
-title: Random Data Simulator (Set)
-sidebar_label: Random Data Simulator (Set)
-original_id: org.apache.streampipes.connect.adapters.simulator.randomdataset
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Publishes a bounded stream of random events.
-
-
-***
-
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.simulator.randomdatastream.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.simulator.randomdatastream.md
deleted file mode 100644
index 12564088c..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.simulator.randomdatastream.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-id: org.apache.streampipes.connect.adapters.simulator.randomdatastream
-title: Random Data Simulator (Stream)
-sidebar_label: Random Data Simulator (Stream)
-original_id: org.apache.streampipes.connect.adapters.simulator.randomdatastream
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Publishes a continuous stream of random events
-
-
-***
-
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.wikipedia.edit.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.wikipedia.edit.md
deleted file mode 100644
index 015004fd4..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.wikipedia.edit.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-id: org.apache.streampipes.connect.adapters.wikipedia.edit
-title: Wikipedia Edits
-sidebar_label: Wikipedia Edits
-original_id: org.apache.streampipes.connect.adapters.wikipedia.edit
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Continuously publishes recent Wikipedia edits
-
-
-***
-
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.wikipedia.new.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.wikipedia.new.md
deleted file mode 100644
index f656f7e64..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.wikipedia.new.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-id: org.apache.streampipes.connect.adapters.wikipedia.new
-title: Wikipedia New Articles
-sidebar_label: Wikipedia New Articles
-original_id: org.apache.streampipes.connect.adapters.wikipedia.new
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Continuously publishes articles created on Wikipedia
-
-
-***
-
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.protocol.stream.file.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.protocol.stream.file.md
deleted file mode 100644
index 9db6eedd0..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.protocol.stream.file.md
+++ /dev/null
@@ -1,39 +0,0 @@
----
-id: org.apache.streampipes.connect.protocol.stream.file
-title: File Stream
-sidebar_label: File Stream
-original_id: org.apache.streampipes.connect.protocol.stream.file
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Continuously streams the content from a file
-
-***
-
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processor.geo.flink.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processor.geo.flink.md
deleted file mode 100644
index 93e1e970b..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processor.geo.flink.md
+++ /dev/null
@@ -1,52 +0,0 @@
----
-id: org.apache.streampipes.processor.geo.flink
-title: Spatial Grid Enrichment
-sidebar_label: Spatial Grid Enrichment
-original_id: org.apache.streampipes.processor.geo.flink
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Groups spatial events into cells of a given size.
-The result is a chessboard-like grid into which the geo coordinates are placed. The user can define the coordinates of the first cell.
-
-***
-
-## Required input
-Requires a latitude and longitude in the data stream.
-
-## Configuration
-
-* Latitude property
-* Longitude property
-* The size of the cell
-* Latitude and longitude of the first cell
-
-## Output
-Appends the grid cell coordinates to the input event
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.aggregation.flink.aggregation.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.aggregation.flink.aggregation.md
deleted file mode 100644
index 5e9e9e808..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.aggregation.flink.aggregation.md
+++ /dev/null
@@ -1,62 +0,0 @@
----
-id: org.apache.streampipes.processors.aggregation.flink.aggregation
-title: Aggregation
-sidebar_label: Aggregation
-original_id: org.apache.streampipes.processors.aggregation.flink.aggregation
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Performs different aggregation functions based on a sliding time window (e.g., average, sum, min, max)
-
-***
-
-## Required input
-
-The aggregation processor requires a data stream that has at least one field containing a numerical value.
-
-***
-
-## Configuration
-
-### Group by
-The aggregation function can be calculated separately (partitioned) by the selected field value.
-
-### Output every
-The frequency (in seconds) at which aggregated values are emitted.
-
-### Time window
-The size of the time window in seconds.
-
-### Aggregated Value
-The field used for calculating the aggregation value.
-
-## Output
-
-This processor appends the latest aggregated value to every input event that arrives.
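-
-For example, with *Time window* = 10 seconds and *Output every* = 1 second, the average of the selected field is recalculated every second over the last 10 seconds, and each incoming event is enriched with the most recently calculated value.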
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.aggregation.flink.eventcount.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.aggregation.flink.eventcount.md
deleted file mode 100644
index d744a88d2..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.aggregation.flink.eventcount.md
+++ /dev/null
@@ -1,57 +0,0 @@
----
-id: org.apache.streampipes.processors.aggregation.flink.eventcount
-title: Event Counter
-sidebar_label: Event Counter
-original_id: org.apache.streampipes.processors.aggregation.flink.eventcount
----
-
-
-
-
-
-
-
-***
-
-## Description
-Counts the number of events arriving within a time window. An event is emitted every time the time window expires.
-
-***
-
-## Required input
-There is no specific input required.
-
-***
-
-## Configuration
-Time Window: The scale and size of the time window.
-
-### TimeWindowSize
-Specifies the size of the time window.
-
-### Time Window Scale
-Specifies the scale/unit of the time window. There are three different time scales to choose from: seconds, minutes or hours.
-
-## Output
-```
-{
- "timestamp": 1601301980014,
- "count": 12
-}
-```
\ No newline at end of file
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.enricher.jvm.sizemeasure.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.enricher.jvm.sizemeasure.md
deleted file mode 100644
index 520018f6c..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.enricher.jvm.sizemeasure.md
+++ /dev/null
@@ -1,50 +0,0 @@
----
-id: org.apache.streampipes.processors.enricher.jvm.sizemeasure
-title: Size Measure
-sidebar_label: Size Measure
-original_id: org.apache.streampipes.processors.enricher.jvm.sizemeasure
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Measures the size of an incoming event by serializing it and appends the result to the event.
-
-***
-
-## Required input
-The size measure processor does not have any specific input requirements.
-
-***
-
-## Configuration
-
-You can specify if the size should be in Bytes, Kilobytes (1024 Bytes) or in Megabytes (1024 Kilobytes).
-
-## Output
-The size measure processor appends the size of the event (excluding the field that is being added) as a double. The rest of the event stays the same.
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.pattern-detection.flink.absence.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.pattern-detection.flink.absence.md
deleted file mode 100644
index bebd7623b..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.pattern-detection.flink.absence.md
+++ /dev/null
@@ -1,54 +0,0 @@
----
-id: org.apache.streampipes.processors.pattern-detection.flink.absence
-title: Absence
-sidebar_label: Absence
-original_id: org.apache.streampipes.processors.pattern-detection.flink.absence
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Detects whether an event does not arrive within a specified time after the occurrence of another event.
-
-***
-
-## Required input
-
-
-***
-
-## Configuration
-
-Describe the configuration parameters here
-
-### 1st parameter
-
-
-### 2nd parameter
-
-## Output
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.pattern-detection.flink.peak-detection.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.pattern-detection.flink.peak-detection.md
deleted file mode 100644
index 4c003114c..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.pattern-detection.flink.peak-detection.md
+++ /dev/null
@@ -1,54 +0,0 @@
----
-id: org.apache.streampipes.processors.pattern-detection.flink.peak-detection
-title: Peak Detection
-sidebar_label: Peak Detection
-original_id: org.apache.streampipes.processors.pattern-detection.flink.peak-detection
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Detect peaks in time series data.
-
-***
-
-## Required input
-
-
-***
-
-## Configuration
-
-Describe the configuration parameters here
-
-### 1st parameter
-
-
-### 2nd parameter
-
-## Output
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.pattern-detection.flink.sequence.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.pattern-detection.flink.sequence.md
deleted file mode 100644
index 4605707a8..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.pattern-detection.flink.sequence.md
+++ /dev/null
@@ -1,54 +0,0 @@
----
-id: org.apache.streampipes.processors.pattern-detection.flink.sequence
-title: Sequence
-sidebar_label: Sequence
-original_id: org.apache.streampipes.processors.pattern-detection.flink.sequence
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Detects a sequence of events in the following form: Event A followed by Event B within X seconds. In addition, both streams can be matched by a common property value (e.g., a.machineId = b.machineId).
-
-***
-
-## Required input
-
-
-***
-
-## Configuration
-
-Describe the configuration parameters here
-
-### 1st parameter
-
-
-### 2nd parameter
-
-## Output
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.siddhi.frequencychange.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.siddhi.frequencychange.md
deleted file mode 100644
index e4a99cd7e..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.siddhi.frequencychange.md
+++ /dev/null
@@ -1,59 +0,0 @@
----
-id: org.apache.streampipes.processors.siddhi.frequencychange
-title: Frequency Change
-sidebar_label: Frequency Change
-original_id: org.apache.streampipes.processors.siddhi.frequencychange
----
-
-
-
-
-
-***
-
-## Description
-
-Detects when the frequency of the event stream changes.
-
-***
-
-## Required input
-
-Does not have any specific input requirements.
-
-***
-
-## Configuration
-
-### Time Unit
-
-The time unit of the window, e.g., hrs, min, or sec.
-
-### Percentage of Increase/Decrease
-
-Specifies the increase in percent (e.g., 100 indicates an increase by 100 percent within the specified time window).
-
-### Time window length
-
-The time duration of the window in seconds.
-
-## Output
-
-Outputs event if there is a frequency change according to the provided configuration.
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.siddhi.sequence.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.siddhi.sequence.md
deleted file mode 100644
index 994baf584..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.siddhi.sequence.md
+++ /dev/null
@@ -1,36 +0,0 @@
----
-id: org.apache.streampipes.processors.siddhi.sequence
-title: Sequence Detection
-sidebar_label: Sequence Detection
-original_id: org.apache.streampipes.processors.siddhi.sequence
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Merges events from two event streams when the top event arrives first and then the bottom event.
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.statistics.flink.statistics-summary.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.statistics.flink.statistics-summary.md
deleted file mode 100644
index d989db1ce..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.statistics.flink.statistics-summary.md
+++ /dev/null
@@ -1,44 +0,0 @@
----
-id: org.apache.streampipes.processors.statistics.flink.statistics-summary
-title: Statistics Summary
-sidebar_label: Statistics Summary
-original_id: org.apache.streampipes.processors.statistics.flink.statistics-summary
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Calculate simple descriptive summary statistics for each selected list property.
-
-The statistics contain:
-* Minimum
-* Maximum
-* Sum
-* Standard Deviation
-* Variance
-
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.textmining.jvm.languagedetection.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.textmining.jvm.languagedetection.md
deleted file mode 100644
index 0d05118cb..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.textmining.jvm.languagedetection.md
+++ /dev/null
@@ -1,170 +0,0 @@
----
-id: org.apache.streampipes.processors.textmining.jvm.languagedetection
-title: Language Detection
-sidebar_label: Language Detection
-original_id: org.apache.streampipes.processors.textmining.jvm.languagedetection
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Detects the language of incoming text. For proper detection, each text should contain at least two sentences.
-
-Supported languages:
-* Afrikaans (afr)
-* Arabic (ara)
-* Asturian (ast)
-* Azerbaijani (aze)
-* Bashkir (bak)
-* Belarusian (bel)
-* Bengali (ben)
-* Bosnian (bos)
-* Breton (bre)
-* Bulgarian (bul)
-* Catalan (cat)
-* Cebuano (ceb)
-* Czech (ces)
-* Chechen (che)
-* Mandarin Chinese (cmn)
-* Welsh (cym)
-* Danish (dan)
-* German (deu)
-* Standard Estonian (ekk)
-* Greek, Modern (ell)
-* English (eng)
-* Esperanto (epo)
-* Estonian (est)
-* Basque (eus)
-* Faroese (fao)
-* Persian (fas)
-* Finnish (fin)
-* French (fra)
-* Western Frisian (fry)
-* Irish (gle)
-* Galician (glg)
-* Swiss German (gsw)
-* Gujarati (guj)
-* Hebrew (heb)
-* Hindi (hin)
-* Croatian (hrv)
-* Hungarian (hun)
-* Armenian (hye)
-* Indonesian (ind)
-* Icelandic (isl)
-* Italian (ita)
-* Javanese (jav)
-* Japanese (jpn)
-* Kannada (kan)
-* Georgian (kat)
-* Kazakh (kaz)
-* Kirghiz (kir)
-* Korean (kor)
-* Latin (lat)
-* Latvian (lav)
-* Limburgan (lim)
-* Lithuanian (lit)
-* Luxembourgish (ltz)
-* Standard Latvian (lvs)
-* Malayalam (mal)
-* Marathi (mar)
-* Minangkabau (min)
-* Macedonian (mkd)
-* Maltese (mlt)
-* Mongolian (mon)
-* Maori (mri)
-* Malay (msa)
-* Min Nan Chinese (nan)
-* Low German (nds)
-* Nepali (nep)
-* Dutch (nld)
-* Norwegian Nynorsk (nno)
-* Norwegian Bokmål (nob)
-* Occitan (oci)
-* Panjabi (pan)
-* Iranian Persian (pes)
-* Plateau Malagasy (plt)
-* Western Panjabi (pnb)
-* Polish (pol)
-* Portuguese (por)
-* Pushto (pus)
-* Romanian (ron)
-* Russian (rus)
-* Sanskrit (san)
-* Sinhala (sin)
-* Slovak (slk)
-* Slovenian (slv)
-* Somali (som)
-* Spanish (spa)
-* Albanian (sqi)
-* Serbian (srp)
-* Sundanese (sun)
-* Swahili (swa)
-* Swedish (swe)
-* Tamil (tam)
-* Tatar (tat)
-* Telugu (tel)
-* Tajik (tgk)
-* Tagalog (tgl)
-* Thai (tha)
-* Turkish (tur)
-* Ukrainian (ukr)
-* Urdu (urd)
-* Uzbek (uzb)
-* Vietnamese (vie)
-* Volapük (vol)
-* Waray (war)
-* Zulu (zul)
-
-***
-
-## Required input
-
-A stream with a string property which contains a text.
-The longer the text, the higher the accuracy of the language detector.
-
-
-***
-
-## Configuration
-
-Simply assign the correct output of the previous stream to the language detector input.
-To use this component, you have to download or train an OpenNLP model:
-https://opennlp.apache.org/models.html
-
-## Output
-
-Adds two fields to the event:
-1. String Property: The acronym of the detected language, as listed above.
-2. Double Property: The confidence of the detector that it found the correct language. Between 0 (not certain at all) and 1 (very certain).
-
-
-**Example:**
-
-Input: `(text: "Hi, how are you?")`
-
-Output: `(text: "Hi, how are you?", language: "eng", confidenceLanguage: 0.89)`
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.transformation.flink.field-converter.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.transformation.flink.field-converter.md
deleted file mode 100644
index c577f1297..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.transformation.flink.field-converter.md
+++ /dev/null
@@ -1,55 +0,0 @@
----
-id: org.apache.streampipes.processors.transformation.flink.field-converter
-title: Field Converter
-sidebar_label: Field Converter
-original_id: org.apache.streampipes.processors.transformation.flink.field-converter
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Converts a string value to a number data type.
-
-
-***
-
-## Required input
-This processor requires an event that contains at least one string valued field.
-
-***
-
-## Configuration
-
-### Field
-Specifies the string field that is converted.
-
-### Datatype
-Specifies the target datatype depending on the previously specified string field.
-
-## Output
-The output event contains the converted field in the specified target datatype.
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.transformation.flink.processor.boilerplate.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.transformation.flink.processor.boilerplate.md
deleted file mode 100644
index 998c90ccf..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.transformation.flink.processor.boilerplate.md
+++ /dev/null
@@ -1,50 +0,0 @@
----
-id: org.apache.streampipes.processors.transformation.flink.processor.boilerplate
-title: Boilerplate Removal
-sidebar_label: Boilerplate Removal
-original_id: org.apache.streampipes.processors.transformation.flink.processor.boilerplate
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Removes boilerplate tags from HTML and extracts fulltext
-
-***
-
-## Required input
-Requires a Text field containing the HTML
-
-***
-
-## Configuration
-
-Select the extractor type and output mode
-
-## Output
-Appends a new text field containing the content of the HTML page without the boilerplate.
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.transformation.jvm.processor.state.buffer.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.transformation.jvm.processor.state.buffer.md
deleted file mode 100644
index 309e8b44a..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.transformation.jvm.processor.state.buffer.md
+++ /dev/null
@@ -1,55 +0,0 @@
----
-id: org.apache.streampipes.processors.transformation.jvm.processor.state.buffer
-title: State Buffer
-sidebar_label: State Buffer
-original_id: org.apache.streampipes.processors.transformation.jvm.processor.state.buffer
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Buffers sensor values while the state does not change.
-Select a state field in the event. Events are buffered as long as the state field does not change; when it changes, a result event is emitted.
-
-***
-
-## Required input
-
-Define the state and sensor value field
-
-### Timestamp
-A mapping property for a timestamp field
-
-### State
-Select the field representing the state
-
-### Sensor value to cache
-Select the field with the numerical values to buffer
-
-## Output
-Emits a new event on state change, with the fields `timestamp`, `state`, and a list containing all buffered sensor values.
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.transformation.jvm.processor.state.labeler.buffer.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.transformation.jvm.processor.state.labeler.buffer.md
deleted file mode 100644
index 8c3c4e352..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.transformation.jvm.processor.state.labeler.buffer.md
+++ /dev/null
@@ -1,70 +0,0 @@
----
-id: org.apache.streampipes.processors.transformation.jvm.processor.state.labeler.buffer
-title: State Buffer Labeler
-sidebar_label: State Buffer Labeler
-original_id: org.apache.streampipes.processors.transformation.jvm.processor.state.labeler.buffer
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Applies a rule to a time series recorded during a state of a machine (e.g., when the minimum value is lower than 10, add the label `not ok`, otherwise add the label `ok`).
-
-
-***
-
-## Required input
-
-Requires a list with sensor values and a field defining the state
-
-### Sensor values
-
-An array representing sensor values recorded during the state.
-
-### State field
-
-A field representing the state during which the sensor values were recorded.
-
-***
-
-## Configuration
-
-### Select a specific state
-When you are interested in the values of a specific state, add it here. All other states will be ignored. To get results for all states, enter `*`.
-
-### Operation
-Operation that will be performed on the sensor values (calculate `maximum`, `average`, or `minimum`).
-
-### Condition
-Define a rule for which label to add. Example: `<;5;nok` means: when the calculated value is smaller than 5, add the label `nok`.
-The default label can be defined with `*;nok`.
-The first rule that evaluates to true defines the label. Rules are applied in the same order as defined here.
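-
-For example, with the operation `minimum` and the rules `<;5;nok` and `*;ok` (in this order), a state whose minimum sensor value is 3 is labeled `nok`, while a state whose minimum is 7 falls through to the default rule and is labeled `ok`.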
-
-
-## Output
-Appends a new field with the label defined in the Condition Configuration
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.protocol.set.file.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.protocol.set.file.md
deleted file mode 100644
index 37adf4d65..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.protocol.set.file.md
+++ /dev/null
@@ -1,39 +0,0 @@
----
-id: org.apache.streampipes.protocol.set.file
-title: File Set
-sidebar_label: File Set
-original_id: org.apache.streampipes.protocol.set.file
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Reads the content from a local file.
-
-***
-
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.protocol.set.http.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.protocol.set.http.md
deleted file mode 100644
index 8f316fe29..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.protocol.set.http.md
+++ /dev/null
@@ -1,39 +0,0 @@
----
-id: org.apache.streampipes.protocol.set.http
-title: HTTP Set
-sidebar_label: HTTP Set
-original_id: org.apache.streampipes.protocol.set.http
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Regularly polls an HTTP endpoint.
-
-***
-
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.sinks.databases.flink.elasticsearch.md b/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.sinks.databases.flink.elasticsearch.md
deleted file mode 100644
index 5e29bbbb4..000000000
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.sinks.databases.flink.elasticsearch.md
+++ /dev/null
@@ -1,61 +0,0 @@
----
-id: org.apache.streampipes.sinks.databases.flink.elasticsearch
-title: Elasticsearch
-sidebar_label: Elasticsearch
-original_id: org.apache.streampipes.sinks.databases.flink.elasticsearch
----
-
-
-
-
-
-
-
-
-
-***
-
-## Description
-
-Stores data in an Elasticsearch database.
-
-***
-
-## Required input
-
-This sink requires an event that provides a timestamp value (a field that is marked to be of type ``http://schema.org/DateTime``).
-
-***
-
-## Configuration
-
-The following configuration parameters are required:
-
-### Timestamp Field
-
-The field which contains the required timestamp.
-
-### Index Name
-
-The name of the Elasticsearch index where events are stored.
-
-## Output
-
-(not applicable for data sinks)
diff --git a/website-v2/versioned_docs/version-0.70.0/01_try-installation.md b/website-v2/versioned_docs/version-0.95.1/01_try-installation.md
similarity index 51%
rename from website-v2/versioned_docs/version-0.70.0/01_try-installation.md
rename to website-v2/versioned_docs/version-0.95.1/01_try-installation.md
index e179eb2d5..ea6b45d79 100644
--- a/website-v2/versioned_docs/version-0.70.0/01_try-installation.md
+++ b/website-v2/versioned_docs/version-0.95.1/01_try-installation.md
@@ -2,7 +2,6 @@
id: try-installation
title: Installation
sidebar_label: Installation
-original_id: try-installation
---
import DownloadSection from '@site/src/components/download/DownloadSection.tsx';
@@ -15,22 +14,15 @@ recommend looking at our Kubernetes support, which is also part of the installat
The Docker-based installation requires **Docker** and **Docker Compose** to be installed on the target machine.
Installation instructions can be found below.
-
-
Install Docker
-
Go to https://docs.docker.com/installation/ and follow the instructions to install Docker for your OS. Make sure docker can be started as a non-root user (described in the installation manual, don’t forget to log out and in again) and check that Docker is installed correctly by executing docker-run hello-world
-
-
-
-
Configure Docker
-
By default, Docker uses only a limited number of CPU cores and memory.
- If you run StreamPipes on Windows or on a Mac you need to adjust the default settings.
- To do that, click on the Docker icon in your tab bar and open the preferences.
- Go to the advanced preferences and set the **number of CPUs to 6** (recommended) and the **Memory to 4GB**.
- After changing the settings, Docker needs to be restarted.
+:::info Install Docker
+Go to https://docs.docker.com/installation/ and follow the instructions to install Docker for your OS. Make sure
+docker can be started as a non-root user (described in the installation manual, don’t forget to log out and in
+again) and check that Docker is installed correctly by executing `docker run hello-world`
+:::
### Supported operating systems
-The Docker-based installation supports the operating systems **Linux**, **Mac OS X** and **Windows 10**. Older windows
+The Docker-based installation supports the operating systems **Linux**, **Mac OS X**, and **Windows 10 or later**. Older Windows
versions are not fully compatible with Docker. Linux VMs running under Windows might cause network problems with Docker,
therefore some manual work might be needed to make StreamPipes run properly.
@@ -41,7 +33,7 @@ best experience), Firefox or Edge.
## Install StreamPipes
-
+
## Setup StreamPipes
@@ -55,17 +47,16 @@ On the login page, enter your credentials, then you should be forwarded to the h
Congratulations! You've successfully managed to install StreamPipes. Now we're ready to build our first pipeline!
-
-
-
Errors during the installation process
-
In most cases, errors during the installation are due to an under-powered system.
-If there is a problem with any of the components, please restart the whole system (docker-compose down
and eventually also delete the volumes).
- Please also make sure that your system meets the hardware requirements as mentioned in the first section of the installation guide.
-
+:::danger Errors during the installation process
+In most cases, errors during the installation are due to an under-powered system.
+If there is a problem with any of the components, please restart the whole system (`docker-compose
+down` and, if necessary, also delete the volumes).
+Please also make sure that you've assigned enough memory to Docker.
+:::
## Next Steps
-That's it! To ease your first steps with StreamPipes, we've created an [interactive tutorial](try-tutorial).
+That's it! Have a look at the usage guide to learn how to use Apache StreamPipes.
diff --git a/website-v2/versioned_docs/version-0.70.0/01_try-overview.md b/website-v2/versioned_docs/version-0.95.1/01_try-overview.md
similarity index 89%
rename from website-v2/versioned_docs/version-0.70.0/01_try-overview.md
rename to website-v2/versioned_docs/version-0.95.1/01_try-overview.md
index 48be14900..29a059d4a 100644
--- a/website-v2/versioned_docs/version-0.70.0/01_try-overview.md
+++ b/website-v2/versioned_docs/version-0.95.1/01_try-overview.md
@@ -2,12 +2,13 @@
id: user-guide-introduction
title: Apache StreamPipes Documentation
sidebar_label: Overview
-original_id: user-guide-introduction
---
This is the documentation of Apache StreamPipes.
-
+
+
@@ -85,7 +86,7 @@ This is the documentation of Apache StreamPipes.
Tutorial Data Sources 🔗,
Tutorial Data Processors 🔗,
Tutorial Data Sinks 🔗,
-
Event Model 🔗,
+
Event Model 🔗,
Stream Requirements 🔗,
Static Properties 🔗,
Output Strategies 🔗
@@ -119,4 +120,15 @@ This is the documentation of Apache StreamPipes.
+
diff --git a/website-v2/versioned_docs/version-0.70.0/01_try-tutorial.md b/website-v2/versioned_docs/version-0.95.1/01_try-tutorial.md
similarity index 97%
rename from website-v2/versioned_docs/version-0.70.0/01_try-tutorial.md
rename to website-v2/versioned_docs/version-0.95.1/01_try-tutorial.md
index fb7f86174..c13d6f1dc 100644
--- a/website-v2/versioned_docs/version-0.70.0/01_try-tutorial.md
+++ b/website-v2/versioned_docs/version-0.95.1/01_try-tutorial.md
@@ -2,7 +2,6 @@
id: try-tutorial
title: Interactive Tutorial
sidebar_label: Interactive Tutorial
-original_id: try-tutorial
---
Once you've installed StreamPipes and see the home screen, you'll see a number of modules that are part of the StreamPipes toolbox.
diff --git a/website-v2/versioned_docs/version-0.70.0/02_concepts-adapter.md b/website-v2/versioned_docs/version-0.95.1/02_concepts-adapter.md
similarity index 73%
rename from website-v2/versioned_docs/version-0.70.0/02_concepts-adapter.md
rename to website-v2/versioned_docs/version-0.95.1/02_concepts-adapter.md
index 584f5a543..a94c38268 100644
--- a/website-v2/versioned_docs/version-0.70.0/02_concepts-adapter.md
+++ b/website-v2/versioned_docs/version-0.95.1/02_concepts-adapter.md
@@ -2,7 +2,6 @@
id: concepts-adapter
title: Data Adapters
sidebar_label: Data Adapters
-original_id: concepts-adapter
---
tbd
diff --git a/website-v2/versioned_docs/version-0.70.0/02_concepts-data-streams.md b/website-v2/versioned_docs/version-0.95.1/02_concepts-data-streams.md
similarity index 71%
rename from website-v2/versioned_docs/version-0.70.0/02_concepts-data-streams.md
rename to website-v2/versioned_docs/version-0.95.1/02_concepts-data-streams.md
index a8f25015d..329f9b908 100644
--- a/website-v2/versioned_docs/version-0.70.0/02_concepts-data-streams.md
+++ b/website-v2/versioned_docs/version-0.95.1/02_concepts-data-streams.md
@@ -2,7 +2,6 @@
id: concepts-data-streams
title: Data Streams
sidebar_label: Data Streams
-original_id: concepts-data-streams
---
tbd
\ No newline at end of file
diff --git a/website-v2/versioned_docs/version-0.70.0/02_concepts-glossary.md b/website-v2/versioned_docs/version-0.95.1/02_concepts-glossary.md
similarity index 70%
rename from website-v2/versioned_docs/version-0.70.0/02_concepts-glossary.md
rename to website-v2/versioned_docs/version-0.95.1/02_concepts-glossary.md
index 68a33967c..b401d1829 100644
--- a/website-v2/versioned_docs/version-0.70.0/02_concepts-glossary.md
+++ b/website-v2/versioned_docs/version-0.95.1/02_concepts-glossary.md
@@ -2,7 +2,6 @@
id: concepts-glossary
title: Glossary
sidebar_label: Glossary
-original_id: concepts-glossary
---
tbd
\ No newline at end of file
diff --git a/website-v2/versioned_docs/version-0.95.1/02_concepts-overview.md b/website-v2/versioned_docs/version-0.95.1/02_concepts-overview.md
new file mode 100644
index 000000000..f26f3cc07
--- /dev/null
+++ b/website-v2/versioned_docs/version-0.95.1/02_concepts-overview.md
@@ -0,0 +1,46 @@
+---
+id: concepts-overview
+title: StreamPipes Concepts
+sidebar_label: Overview
+---
+
+To understand how StreamPipes works, it is helpful to understand a few core concepts, which are illustrated below.
+These encompass the entire data journey within StreamPipes: Starting with data collection ([adapters](#adapter)),
+through data exchange ([data streams](#data-stream)) and data processing ([data processors](#data-processor) and [pipelines](#pipeline)),
+to data persistence and distribution ([data sinks](#data-sink)).
+
+
+
+## Adapter
+An adapter connects to any external data source (e.g., OPC-UA, MQTT, S7 PLC, Modbus) and forwards the events it receives to the internal StreamPipes system.
+Adapters can be created by using one of the predefined adapters available in our marketplace [StreamPipes Connect](./03_use-connect.md).
+An overview of all available adapters can be found under the menu bar **📚 Pipeline Elements**.
+When you select one of these adapters, you can easily connect to the data source using an intuitive and convenient UI dialog (see the Connect section for more details).
+Alternatively, you can define your own adapter by [using the provided Software Development Kit (SDK)](./06_extend-tutorial-adapters.md).
+Creating an adapter is always the first step when you want to get data into StreamPipes and process it further.
+
+## Data Stream
+**Data streams** are the primary source for working with events in StreamPipes.
+A stream is an ordered sequence of events, where an event typically consists of one or more observation values and additional metadata.
+The `structure` (or `schema` as we call it) of an event provided by a data stream is stored in StreamPipes' internal semantic schema registry.
+Data streams are primarily created by adapters, but can also be created by a [StreamPipes Function](./06_extend-sdk-functions.md).
+
+## Data Processor
+**Data processors** in StreamPipes transform one or more input streams into an output stream.
+Such transformations can be simple, such as filtering based on a predefined rule, or more complex, such as applying rule-based or learning-based algorithms to the data.
+Data processors can be applied to any data stream that meets the input requirements of a processor.
+In addition, most processors can be configured by providing custom parameters directly in the user interface.
+Processing elements define stream requirements, which are a set of minimum characteristics that an incoming event stream must provide.
+Data processors can maintain state or perform stateless operations.
+
+## Data Sink
+**Data sinks** consume event streams similar to data processors, but do not provide an output data stream.
+As such, data sinks typically perform some action or trigger a visualization as a result of a stream transformation.
+Similar to data processors, sinks also require the presence of specific input requirements from each bound data stream and can be customized.
+StreamPipes provides several internal data sinks, for example, to generate notifications, visualize live data, or persist historical data from incoming streams.
+In addition, StreamPipes provides several data sinks to forward data streams to external systems such as databases.
+
+## Pipeline
+A pipeline in Apache StreamPipes describes the transformation process from a data stream to a data sink.
+Typically, a pipeline consists of at least one data stream, zero or more data processors, and at least one data sink.
+Pipelines are created graphically by users using the [Pipeline Editor](./03_use-pipeline-editor.md) and can be started and stopped at any time.
diff --git a/website-v2/versioned_docs/version-0.70.0/02_concepts-pipeline.md b/website-v2/versioned_docs/version-0.95.1/02_concepts-pipeline.md
similarity index 70%
rename from website-v2/versioned_docs/version-0.70.0/02_concepts-pipeline.md
rename to website-v2/versioned_docs/version-0.95.1/02_concepts-pipeline.md
index 282642fc4..3d2c5369b 100644
--- a/website-v2/versioned_docs/version-0.70.0/02_concepts-pipeline.md
+++ b/website-v2/versioned_docs/version-0.95.1/02_concepts-pipeline.md
@@ -2,7 +2,6 @@
id: concepts-pipelines
title: Pipelines
sidebar_label: Pipelines
-original_id: concepts-pipelines
---
tbd
\ No newline at end of file
diff --git a/website-v2/versioned_docs/version-0.95.1/02_introduction.md b/website-v2/versioned_docs/version-0.95.1/02_introduction.md
new file mode 100644
index 000000000..6dc71b384
--- /dev/null
+++ b/website-v2/versioned_docs/version-0.95.1/02_introduction.md
@@ -0,0 +1,85 @@
+---
+id: introduction
+title: Introduction
+sidebar_label: Introduction
+---
+
+## What is StreamPipes?
+
+Apache StreamPipes is a self-service Industrial IoT toolbox to enable non-technical users to connect, analyze and
+explore IoT data streams. The main goal of StreamPipes is to help users bridge the gap between operational
+technology (OT) and information technology (IT). This is achieved by providing a set of tools which help to make
+industrial data accessible for downstream tasks such as data analytics and condition monitoring.
+When working with industrial data and especially when building upon an open source stack for such tasks, users are often
+faced with the management and integration of a variety of different tools for data connectivity, messaging &
+integration, data enrichment, data storage, visualization and analytics. This results in increasing operational
+complexity and software stacks that are hard to manage.
+
+Apache StreamPipes addresses this problem: It provides a complete toolbox with a variety of different tools to easily
+gather data from OT systems such as Programmable Logic Controllers (PLCs), industrial protocols (e.g., OPC-UA or
+Modbus), IT protocols (e.g., MQTT) and others. Data is integrated in the form of live data streams. Based on connected
+data, StreamPipes provides another module called the pipeline editor, which can be used to apply real-time analytics
+algorithms on the connected data streams. To this end, a library of pre-defined algorithms can be used. Out of the box,
+StreamPipes provides more than 100 pipeline elements tailored to manufacturing data analytics. This includes simple
+rule-based algorithms (e.g., flank detection, peak detection, boolean timers), as well as the possibility to integrate
+more sophisticated ML-based algorithms. Finally, the pipeline editor allows integration with third-party systems by
+using a variety of data sinks (e.g., to forward data to messaging brokers such as Apache Kafka, MQTT or RocketMQ, to
+store data in databases such as PostgreSQL or Redis, or to trigger notifications). Besides pipelines, an included data
+explorer allows users to visually analyze industrial IoT data. For this purpose, a number of visualizations are integrated
+that allow non-technical users to quickly get first insights. Examples are correlations between several sensor values,
+value heatmaps, distributions or time-series visualizations. Further tools include a dashboard used for real-time
+monitoring, e.g., for visualizing live KPIs at shopfloor level.
+
+But StreamPipes is much more than just the user interface and an orchestration system for pipelines: It can be used as a
+whole developer platform for Industrial IoT applications. Apache StreamPipes is made for extensibility: it provides
+several extension points, which allow the definition of custom algorithms, additional interfaces to third-party tools
+and proprietary data sources.
+
+StreamPipes includes developer support for Java and Python, making it easy to integrate custom-trained machine learning
+models into the data processing environment. With the built-in Python support, it is also possible to run online machine
+learning methods directly on data streams gathered by StreamPipes.
+
+## Where does StreamPipes help?
+
+Being positioned in the industrial IoT domain, the overall goal of StreamPipes is to help manufacturing companies to
+quickly build up an industrial IoT infrastructure and to analyse IIoT data without the need for manual programming.
+Oftentimes, StreamPipes is compared to other tools in this area such as Node-RED for visual wiring of pipelines, which
+is often used together with Grafana for data visualization and InfluxDB for time-series storage. The disadvantage of
+such architectures is the system complexity beyond the first prototype, especially when it comes to production
+deployments. Maintaining and securing multiple software instances is often a hard task requiring substantial
+development effort. In addition, implementing single-sign-on and providing a unified user experience is another hurdle.
+This is where StreamPipes, as a single integrated tool with production-critical features such as access and role
+management, provides many advantages.
+StreamPipes already has a wide range of users in the manufacturing domain. It helps users to quickly take the first steps
+related to industrial analytics but can also be used for monitoring whole production facilities, analysing data streams
+from multiple plants and sensors in real time using the integrated algorithm toolbox. Customization to individual use
+cases is easy due to several extension points:
+
+* Software development kit for adapters, data processors and sinks: The functionality of StreamPipes can be extended by
+ using the integrated SDK. For instance, it is possible to integrate custom-tailored algorithms for proprietary sensors
+ or models into the toolbox. Additional algorithms and data sinks can be installed at runtime.
+* Additional user interface plugins: StreamPipes allows extending the default installation with additional UI views,
+ making use of a micro frontend approach. For instance, users can extend the system with custom-tailored views for a
+ specific machine or plant. Developers can use a platform API to communicate with the core StreamPipes instance.
+* UI customization: To ensure a consistent look and feel, StreamPipes can be customized to the company’s corporate
+ identity.
+
+## How does StreamPipes technically work in a nutshell?
+
+
+
+
+
+To foster extensibility, Apache StreamPipes is based on a microservice architecture as illustrated above. The main
+services provided or used by StreamPipes are a) the user interface, b) the core, c) a time-series storage, d) a
+publish/subscribe messaging layer and e) extensions services. Adapters are created via the user interface using an
+intuitive configuration wizard and connect to the underlying source systems. Raw events coming from adapters can be
+pre-processed (e.g., measurement unit conversions or datatype conversions). Afterwards, events are sent to the message
+broker, which is the central backbone to provide IIoT data to internal and external applications.
+
+Besides adapters, extensions microservices can also integrate additional business logic in the form of data processors and
+data sinks. StreamPipes comes with over 100 built-in processors and sinks, covering basic use cases out-of-the-box. The StreamPipes core takes care of the orchestration of these pipeline elements and communicates with the user
+interface. In addition, a time-series storage ensures persistence and can be used by any extensions service to write
+data into the internal storage. The StreamPipes core provides a query interface to access historical data, which is, for
+instance, used by the data explorer UI component. The user interface itself provides several built-in modules but can
+also be extended with additional micro frontends.
diff --git a/website-v2/versioned_docs/version-0.70.0/03_use-configurations.md b/website-v2/versioned_docs/version-0.95.1/03_use-configurations.md
similarity index 98%
rename from website-v2/versioned_docs/version-0.70.0/03_use-configurations.md
rename to website-v2/versioned_docs/version-0.95.1/03_use-configurations.md
index fda2f1169..4aa953fbb 100644
--- a/website-v2/versioned_docs/version-0.70.0/03_use-configurations.md
+++ b/website-v2/versioned_docs/version-0.95.1/03_use-configurations.md
@@ -2,7 +2,6 @@
id: use-configurations
title: Configurations
sidebar_label: Configurations
-original_id: use-configurations
---
The configuration section is an admin-only interface for system-wide settings.
diff --git a/website-v2/versioned_docs/version-0.70.0/03_use-connect.md b/website-v2/versioned_docs/version-0.95.1/03_use-connect.md
similarity index 99%
rename from website-v2/versioned_docs/version-0.70.0/03_use-connect.md
rename to website-v2/versioned_docs/version-0.95.1/03_use-connect.md
index 34495cb6e..ba8146481 100644
--- a/website-v2/versioned_docs/version-0.70.0/03_use-connect.md
+++ b/website-v2/versioned_docs/version-0.95.1/03_use-connect.md
@@ -2,7 +2,6 @@
id: use-connect
title: StreamPipes Connect
sidebar_label: StreamPipes Connect
-original_id: use-connect
---
StreamPipes Connect is the module to connect external data sources with Apache StreamPipes directly from the user interface.
diff --git a/website-v2/versioned_docs/version-0.70.0/03_use-dashboard.md b/website-v2/versioned_docs/version-0.95.1/03_use-dashboard.md
similarity index 99%
rename from website-v2/versioned_docs/version-0.70.0/03_use-dashboard.md
rename to website-v2/versioned_docs/version-0.95.1/03_use-dashboard.md
index 4fc75c851..339bf6b57 100644
--- a/website-v2/versioned_docs/version-0.70.0/03_use-dashboard.md
+++ b/website-v2/versioned_docs/version-0.95.1/03_use-dashboard.md
@@ -2,7 +2,6 @@
id: use-dashboard
title: Live Dashboard
sidebar_label: Live Dashboard
-original_id: use-dashboard
---
The live dashboard can be used to visualize live data of data streams using a set of visualizations
diff --git a/website-v2/versioned_docs/version-0.70.0/03_use-data-explorer.md b/website-v2/versioned_docs/version-0.95.1/03_use-data-explorer.md
similarity index 99%
rename from website-v2/versioned_docs/version-0.70.0/03_use-data-explorer.md
rename to website-v2/versioned_docs/version-0.95.1/03_use-data-explorer.md
index af31c65ab..f84323bf6 100644
--- a/website-v2/versioned_docs/version-0.70.0/03_use-data-explorer.md
+++ b/website-v2/versioned_docs/version-0.95.1/03_use-data-explorer.md
@@ -2,7 +2,6 @@
id: use-data-explorer
title: Data Explorer
sidebar_label: Data Explorer
-original_id: use-data-explorer
---
The data explorer can be used to visualize and explore data streams that are persisted by using the **Data Lake** sink.
diff --git a/website-v2/versioned_docs/version-0.70.0/03_use-install-pipeline-elements.md b/website-v2/versioned_docs/version-0.95.1/03_use-install-pipeline-elements.md
similarity index 78%
rename from website-v2/versioned_docs/version-0.70.0/03_use-install-pipeline-elements.md
rename to website-v2/versioned_docs/version-0.95.1/03_use-install-pipeline-elements.md
index 10be7e572..852693200 100644
--- a/website-v2/versioned_docs/version-0.70.0/03_use-install-pipeline-elements.md
+++ b/website-v2/versioned_docs/version-0.95.1/03_use-install-pipeline-elements.md
@@ -2,7 +2,6 @@
id: use-install-pipeline-elements
title: Install Pipeline Elements
sidebar_label: Install Pipeline Elements
-original_id: use-install-pipeline-elements
---
## Install Pipeline Elements
diff --git a/website-v2/versioned_docs/version-0.70.0/03_use-managing-pipelines.md b/website-v2/versioned_docs/version-0.95.1/03_use-managing-pipelines.md
similarity index 98%
rename from website-v2/versioned_docs/version-0.70.0/03_use-managing-pipelines.md
rename to website-v2/versioned_docs/version-0.95.1/03_use-managing-pipelines.md
index 1aba73b1c..2c64b53ee 100644
--- a/website-v2/versioned_docs/version-0.70.0/03_use-managing-pipelines.md
+++ b/website-v2/versioned_docs/version-0.95.1/03_use-managing-pipelines.md
@@ -2,7 +2,6 @@
id: use-managing-pipelines
title: Managing Pipelines
sidebar_label: Managing Pipelines
-original_id: use-managing-pipelines
---
The pipeline view lists all created pipelines and provides several views and actions to manage the lifecycle of pipelines.
diff --git a/website-v2/versioned_docs/version-0.70.0/03_use-notifications.md b/website-v2/versioned_docs/version-0.95.1/03_use-notifications.md
similarity index 97%
rename from website-v2/versioned_docs/version-0.70.0/03_use-notifications.md
rename to website-v2/versioned_docs/version-0.95.1/03_use-notifications.md
index 627efca69..b5c64ed98 100644
--- a/website-v2/versioned_docs/version-0.70.0/03_use-notifications.md
+++ b/website-v2/versioned_docs/version-0.95.1/03_use-notifications.md
@@ -2,7 +2,6 @@
id: use-notifications
title: Notifications
sidebar_label: Notifications
-original_id: use-notifications
---
The notification module can be used to create internal notifications.
diff --git a/website-v2/versioned_docs/version-0.70.0/03_use-pipeline-editor.md b/website-v2/versioned_docs/version-0.95.1/03_use-pipeline-editor.md
similarity index 99%
rename from website-v2/versioned_docs/version-0.70.0/03_use-pipeline-editor.md
rename to website-v2/versioned_docs/version-0.95.1/03_use-pipeline-editor.md
index 9762e7819..f09cf8486 100644
--- a/website-v2/versioned_docs/version-0.70.0/03_use-pipeline-editor.md
+++ b/website-v2/versioned_docs/version-0.95.1/03_use-pipeline-editor.md
@@ -2,7 +2,6 @@
id: use-pipeline-editor
title: Pipeline Editor
sidebar_label: Pipeline Editor
-original_id: use-pipeline-editor
---
The pipeline editor module supports building pipelines that transform a data stream using a set of resuable data processors and data sinks.
diff --git a/website-v2/versioned_docs/version-0.95.1/05_deploy-choosing-the-right-flavor.md b/website-v2/versioned_docs/version-0.95.1/05_deploy-choosing-the-right-flavor.md
new file mode 100644
index 000000000..a140bf46e
--- /dev/null
+++ b/website-v2/versioned_docs/version-0.95.1/05_deploy-choosing-the-right-flavor.md
@@ -0,0 +1,47 @@
+---
+id: choosing-the-right-flavor
+title: Choosing the right flavor
+sidebar_label: Service selection options
+---
+
+
+## Introduction
+
+StreamPipes comes with many different options to customize a deployment. This section introduces the various options you can choose from when installing StreamPipes.
+
+You can choose between various **deployment modes**, two different core packages, and several extension packages, which are described below.
+
+## Deployment Mode
+
+For the deployment mode, you can choose between a standard multi-container `Docker-Compose` installation and a `Kubernetes` installation.
+We provide several `Docker-Compose` files for the various options shown here, as well as a `helm chart`.
+See [Docker Deployment](05_deploy-docker.md) and [Kubernetes Deployment](05_deploy-kubernetes.md) for more details.
+
+### Running StreamPipes in a non-containerized environment
+
+Of course, it is also possible to launch StreamPipes in a non-containerized environment.
+You will need to build your own executable binaries by running `mvn package`.
+In addition, you need to install the required third-party services (see [Architecture](07_technicals-architecture.md)) and configure the environment variables as described in [Environment Variables](05_deploy-environment-variables.md).
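+
+A minimal sketch of such a start, assuming locally running third-party services and the default Maven module layout (the JAR path below is an assumption, not an official artifact name):
+
+```bash
+# Build the executable binaries (skip tests to speed things up)
+mvn clean package -DskipTests
+
+# Point the core to locally running third-party services
+# (variable names are taken from the environment variable reference)
+export SP_COUCHDB_HOST=localhost
+export SP_TS_STORAGE_HOST=localhost
+
+# Start the core service
+java -jar streampipes-service-core/target/streampipes-service-core.jar
+```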
+
+## Core Service
+
+We provide two different pre-packaged versions of core services. The default `streampipes-service-core` is a packaged JAR file which includes client libraries for the various messaging systems StreamPipes supports, at the cost of a larger file size.
+In case you plan to run StreamPipes on resource-constrained hardware, we recommend switching to the `streampipes-service-core-minimal` package, which only includes support for MQTT and NATS, but has a smaller file size and slightly improved startup performance.
+
+## Extension Services
+
+Similar to the core, we provide several pre-packaged extension services which differ mainly in file size and in the number of supported adapters, pipeline elements, and messaging systems.
+
+The following packages exist:
+
+* `streampipes-extensions-all-jvm` is the largest package and includes all official StreamPipes adapters and pipeline elements. It also includes support for all messaging systems StreamPipes currently supports.
+* `streampipes-extensions-all-iiot` is a subset of the aforementioned package and excludes adapters and pipeline elements which are often not relevant for IIoT use cases. For instance, the package excludes text mining-related pipeline elements.
+* `streampipes-extensions-iiot-minimal` is a subset of the aforementioned package and includes only support for the lightweight messaging systems MQTT and NATS.
+
+Generally speaking, if you plan to deploy StreamPipes on a resource-limited edge device (for instance, a device with less than 4 GB of memory), we recommend a combination of the `streampipes-service-core-minimal` and `streampipes-extensions-iiot-minimal` packages.
+In other cases, it depends on the use case and on whether you need all adapters and pipeline elements or whether the IIoT-related extensions are sufficient. A sketch of such an edge configuration follows below.
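+
+As a sketch, an edge deployment could be pinned in the `.env` file read by `docker-compose`. The `SP_CORE_IMAGE` and `SP_EXTENSIONS_IMAGE` variable names are hypothetical and depend on how your compose file references the images; the `apachestreampipes` registry name is taken from the Kubernetes parameter tables in this guide:
+
+```bash
+# Hypothetical .env entries selecting the minimal packages for an edge device
+SP_VERSION=0.95.1
+SP_CORE_IMAGE=apachestreampipes/streampipes-service-core-minimal
+SP_EXTENSIONS_IMAGE=apachestreampipes/streampipes-extensions-iiot-minimal
+```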
+
+## Messaging System
+
+StreamPipes can be configured to use different messaging systems for exchanging events between adapters and pipeline elements.
+The section [Messaging](07_technicals-messaging.md) includes detailed information on the configuration of messaging systems.
diff --git a/website-v2/versioned_docs/version-0.95.1/05_deploy-docker.md b/website-v2/versioned_docs/version-0.95.1/05_deploy-docker.md
new file mode 100644
index 000000000..881e16b43
--- /dev/null
+++ b/website-v2/versioned_docs/version-0.95.1/05_deploy-docker.md
@@ -0,0 +1,104 @@
+---
+id: deploy-docker
+title: Docker Deployment
+sidebar_label: Docker Deployment
+---
+
+StreamPipes Compose is a simple collection of user-friendly `docker-compose` files that lets you easily gain first-hand experience with Apache StreamPipes.
+
+> **NOTE**: We recommend using StreamPipes Compose only for an initial try-out and testing. If you are a developer and
+> want to develop new pipeline elements or core features, use the [StreamPipes CLI](06_extend-cli.md).
+
+#### TL;DR: A one-liner to rule them all :-)
+
+```bash
+docker-compose up -d
+```
+Go to http://localhost to finish the installation in the browser. Once finished, switch to the pipeline editor and start the interactive tour or check the [online tour](https://streampipes.apache.org/docs/docs/user-guide-tour/) to learn how to create your first pipeline!
+
+## Prerequisites
+* Docker >= 17.06.0
+* Docker-Compose >= 1.17.0 (Compose file format: 3.4)
+* Google Chrome (recommended), Mozilla Firefox, Microsoft Edge
+
+Tested on: **macOS, Linux, Windows 10 upwards** (CMD, PowerShell, GitBash)
+
+**macOS** and **Windows** users can easily get Docker and Docker-Compose on their systems by installing **Docker for Mac/Windows** (recommended).
+
+> **NOTE**: On purpose, we disabled all port mappings except HTTP port **80**, which is used to access the StreamPipes UI, to provide a minimal surface for conflicting ports.
+
+## Usage
+We provide several options to get you going:
+
+- **default**: Default docker-compose file, called `docker-compose.yml`.
+- **nats**: The standard installation which uses Nats as the message broker, called `docker-compose.nats.yml`.
+- **full**: Contains experimental Flink wrappers, called `docker-compose.full.yml`.
+- **quickstart**: Contains predefined example assets, called `docker-compose.quickstart.yml`. The Quickstart mode is a user-friendly feature which comes with predefined example assets like pipelines, dashboards, and data views. These ready-to-use components allow first-time users to get a feel for StreamPipes in IIoT with ease, serving as a practical demonstration of how StreamPipes can be utilized for efficient monitoring and analysis. We highly recommend that first-time users begin with the Quickstart mode to understand the simplicity and convenience that StreamPipes brings to the IIoT platform. Please follow the [User Guide for Quickstart Mode](user-guide-for-quickstart.md) if you want to explore it.
+
+
+:::info
+
+Other options include configurations for the internally used message broker. The current default is `Kafka`, but you can also start StreamPipes with `Nats`, `MQTT` or `Apache Pulsar`.
+Use one of the other provided docker-compose files.
+
+:::
+
+**Starting** the **default** option is as easy as simply running:
+> **NOTE**: Starting might take a while since `docker-compose up` also initially pulls all Docker images from Dockerhub.
+
+```bash
+docker-compose up -d
+# go to `http://localhost` after all services are started
+```
+After all containers are successfully started, just go to your browser and visit http://localhost to finish the installation. Once finished, switch to the pipeline editor and start the interactive tour or check the [online tour](https://streampipes.apache.org/docs/docs/user-guide-tour/) to learn how to create your first pipeline!
+
+**Stopping** the **default** option is similarly easy:
+```bash
+docker-compose down
+# if you want to remove mapped data volumes, run:
+# docker-compose down -v
+```
+
+Starting the **nats** option is almost the same, just specify the `docker-compose.nats.yml` file:
+```bash
+docker-compose -f docker-compose.nats.yml up -d
+# go to `http://localhost` after all services are started
+```
+**Stopping** the **nats** option:
+```bash
+docker-compose -f docker-compose.nats.yml down
+```
+
+
+Starting the **full** option is almost the same, just specify the `docker-compose.full.yml` file:
+```bash
+docker-compose -f docker-compose.full.yml up -d
+#go to `http://localhost` after all services are started
+```
+Stopping the **full** option:
+```bash
+docker-compose -f docker-compose.full.yml down
+#docker-compose -f docker-compose.full.yml down -v
+```
+Starting the **quickstart** option:
+```bash
+docker-compose -f docker-compose.quickstart.yml build script-runner
+docker-compose -f docker-compose.quickstart.yml up -d
+#go to `http://localhost` after all services are started
+```
+Stopping the **quickstart** option:
+```bash
+docker-compose -f docker-compose.quickstart.yml down
+```
+
+## Update services
+To actively pull the latest available Docker images, use:
+```bash
+docker-compose pull
+```
+
+## Upgrade
+To upgrade to another StreamPipes version, simply edit the `SP_VERSION` in the `.env` file.
+```
+SP_VERSION=
+```
diff --git a/website-v2/versioned_docs/version-0.95.1/05_deploy-environment-variables.md b/website-v2/versioned_docs/version-0.95.1/05_deploy-environment-variables.md
new file mode 100644
index 000000000..c4066fb70
--- /dev/null
+++ b/website-v2/versioned_docs/version-0.95.1/05_deploy-environment-variables.md
@@ -0,0 +1,88 @@
+---
+id: deploy-environment-variables
+title: Environment Variables
+sidebar_label: Environment Variables
+---
+
+## Introduction
+
+A StreamPipes installation can be configured in many ways by providing environment variables.
+The following tables list the available environment variables along with a short description.
+
+## StreamPipes Core Service
+
+### Internal
+
+| Env Variable Name | Default Value | Description |
+|--------------------------------|---------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------|
+| SP_DEBUG | false | Should only be set for local development to reroute traffic to localhost |
+| SP_INITIAL_ADMIN_EMAIL | admin@streampipes.apache.org | Installation-time variable for defining the default user name |
+| SP_INITIAL_ADMIN_PASSWORD | admin | Installation-time variable for defining the default user password |
+| SP_INITIAL_SERVICE_USER        | sp-service-client                                         | Installation-time variable for defining the initial service user (must be the same as the user configured in the extensions service) |
+| SP_INITIAL_SERVICE_USER_SECRET | my-apache-streampipes-secret-key-change-me | Installation-time variable for defining the initial service secret (minimum 35 chars) |
+| SP_JWT_SECRET | Empty for Docker, Auto-generated for K8s | JWT secret, base64-encoded, minimum 256 bits |
+| SP_JWT_SIGNING_MODE | HMAC | HMAC or RSA, RSA can be used to authenticate Core-Extensions communication |
+| SP_JWT_PRIVATE_KEY_LOC         | Empty                                                     | Required if SP_JWT_SIGNING_MODE=RSA, path to the private key, can be generated in the UI (Settings->Security->Generate Key Pair)  |
+| SP_ENCRYPTION_PASSCODE | eGgemyGBoILAu3xckolp for Docker, Auto-generated for K8s | Encryption passcode for `SecretStaticProperties` |
+| SP_PRIORITIZED_PROTOCOL | kafka | Messaging layer for data exchange between extensions |
+
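+As a sketch, selected variables can be overridden without editing the main compose file by relying on Compose's standard merge mechanism (a `docker-compose.override.yml` is picked up automatically; the core service name `backend` is an assumption and may differ in your setup):
+
+```bash
+# Write an override file that sets core variables, then start as usual
+cat > docker-compose.override.yml <<'EOF'
+services:
+  backend:
+    environment:
+      - SP_PRIORITIZED_PROTOCOL=nats
+      - SP_INITIAL_ADMIN_EMAIL=admin@streampipes.apache.org
+EOF
+docker-compose up -d
+```
+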
+
+### Third-party services
+
+| Env Variable Name | Default Value | Description |
+|------------------------|---------------|---------------------------------------------------------------------------|
+| SP_COUCHDB_HOST | couchdb | The hostname or IP of the CouchDB database |
+| SP_COUCHDB_PROTOCOL | http | The protocol (http or https) of the CouchDB database |
+| SP_COUCHDB_PORT | 5984 | The port of the CouchDB database |
+| SP_COUCHDB_USER | admin | The user of the CouchDB database (must have permissions to add databases) |
+| SP_COUCHDB_PASSWORD | admin | The password of the CouchDB user |
+| SP_TS_STORAGE_HOST | influxdb | The hostname of the timeseries storage (currently InfluxDB) |
+| SP_TS_STORAGE_PORT | 8086 | The port of the timeseries storage |
+| SP_TS_STORAGE_PROTOCOL | http | The protocol of the timeseries storage (http or https) |
+| SP_TS_STORAGE_BUCKET | sp | The InfluxDB storage bucket name |
+| SP_TS_STORAGE_ORG | sp | The InfluxDB storage org |
+| SP_TS_STORAGE_TOKEN | sp-admin | The InfluxDB storage token |
+
+The InfluxDB itself can be configured by providing the variables `DOCKER_INFLUXDB_INIT_PASSWORD` and `DOCKER_INFLUXDB_INIT_ADMIN_TOKEN`. See the `docker-compose` file for details.
+
+## StreamPipes Extensions Service
+
+### Internal
+
+| Env Variable Name | Default Value | Description |
+|--------------------------------|--------------------------------------------|--------------------------------------------------------------------------------------------------------------------|
+| SP_CLIENT_USER | Empty | Service account for communication with Core |
+| SP_CLIENT_SECRET | Empty | Service secret for communication with Core |
+| SP_EXT_AUTH_MODE               | sp-service-client                           | When set to AUTH: all interfaces are only accessible with authentication (requires SP_JWT_PRIVATE_KEY_LOC in Core)   |
+| SP_JWT_PUBLIC_KEY_LOC | my-apache-streampipes-secret-key-change-me | Path to the public key of the corresponding SP_JWT_PRIVATE_KEY defined in Core |
+
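+Since the client credentials must match the initial service user configured in the core (see the core variables above), a sketch using the default values looks as follows; change both secrets for any real deployment:
+
+```bash
+# Extensions service account: must mirror SP_INITIAL_SERVICE_USER and
+# SP_INITIAL_SERVICE_USER_SECRET of the core service
+export SP_CLIENT_USER=sp-service-client
+export SP_CLIENT_SECRET=my-apache-streampipes-secret-key-change-me
+```
+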
+### Third-party services
+
+The following variables are only required for extensions which require access to the internal time-series storage (the `Data Lake Sink`).
+
+| Env Variable Name | Default Value | Description |
+|------------------------|---------------|---------------------------------------------------------------------------|
+| SP_TS_STORAGE_HOST | influxdb | The hostname of the timeseries storage (currently InfluxDB) |
+| SP_TS_STORAGE_PORT | 8086 | The port of the timeseries storage |
+| SP_TS_STORAGE_PROTOCOL | http | The protocol of the timeseries storage (http or https) |
+| SP_TS_STORAGE_BUCKET | sp | The InfluxDB storage bucket name |
+| SP_TS_STORAGE_ORG | sp | The InfluxDB storage org |
+| SP_TS_STORAGE_TOKEN | sp-admin | The InfluxDB storage token |
+
+
+## Recommended variables
+
+For a standard deployment, it is recommended to customize the following variables:
+
+* Initial admin password (SP_INITIAL_ADMIN_PASSWORD, Core)
+* Initial client secret (SP_INITIAL_SERVICE_USER_SECRET, Core)
+* Extensions client user and secret (SP_CLIENT_USER / SP_CLIENT_SECRET, Extensions)
+* Encryption passcode (SP_ENCRYPTION_PASSCODE, Core)
+* CouchDB password (SP_COUCHDB_PASSWORD, Core + Extensions + CouchDB)
+* InfluxDB storage password (DOCKER_INFLUXDB_INIT_PASSWORD, InfluxDB)
+* InfluxDB storage token (SP_TS_STORAGE_TOKEN, Core + Extensions)
+ * DOCKER_INFLUXDB_INIT_ADMIN_TOKEN (InfluxDB service)
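+
+A sketch of a corresponding `.env` file for a Docker-based setup, assuming your compose file passes these variables through to the respective services (all values are placeholders):
+
+```bash
+SP_INITIAL_ADMIN_PASSWORD=change-me
+SP_INITIAL_SERVICE_USER_SECRET=change-me-with-at-least-35-characters
+SP_ENCRYPTION_PASSCODE=change-me-passcode
+SP_COUCHDB_PASSWORD=change-me-couchdb
+DOCKER_INFLUXDB_INIT_PASSWORD=change-me-influxdb
+# The storage token is shared between InfluxDB, the core, and the extensions service
+DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=change-me-token
+SP_TS_STORAGE_TOKEN=change-me-token
+```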
+
+## Auto-generation of variables in K8s setups
+
+See the [Kubernetes Guide](05_deploy-kubernetes.md) for an overview of auto-generated variables.
diff --git a/website-v2/versioned_docs/version-0.95.1/05_deploy-kubernetes.md b/website-v2/versioned_docs/version-0.95.1/05_deploy-kubernetes.md
new file mode 100644
index 000000000..6734676b4
--- /dev/null
+++ b/website-v2/versioned_docs/version-0.95.1/05_deploy-kubernetes.md
@@ -0,0 +1,269 @@
+---
+id: deploy-kubernetes
+title: Kubernetes Deployment
+sidebar_label: Kubernetes Deployment
+---
+
+## Prerequisites
+Requires Helm (https://helm.sh/) and an actively running Kubernetes cluster.
+
+## Usage
+We provide helm chart options to get you going in the `installer/k8s` folder.
+
+**Starting** the default helm chart option is as easy as simply running the following command from the root of this folder:
+> **NOTE**: Starting might take a while since we also initially pull all Docker images from Dockerhub.
+
+```bash
+helm install streampipes ./
+```
+After a while, all containers should have started successfully, indicated by the `Running` status.
+
+The `values.yaml` file contains several configuration options to customize your StreamPipes installation. See the section below for all configuration options.
+
+## Ingress
+
+The helm chart provides several options to configure an Ingress or to define an IngressRoute that directly integrates with Traefik.
+
+## Dynamic Volume Provisioning
+
+You can override the `storageClassName` variable to configure StreamPipes for dynamic volume provisioning.
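+
+A sketch combining this with the Ingress parameters listed below (the storage class name and hostname are placeholders):
+
+```bash
+# Install with a custom storage class and a public Ingress host
+helm install streampipes ./ \
+  --set streampipes.core.persistence.storageClassName=standard \
+  --set streampipes.ingress.active=true \
+  --set streampipes.ingress.host=streampipes.example.com
+```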
+
+## Parameters
+
+Here is an overview of the supported parameters to configure StreamPipes.
+
+### Common parameters
+
+| Parameter Name | Description | Value |
+|--------------------------------------------------|---------------------------------------------------------|-----------------------------------------|
+| deployment | Deployment type (lite or full) | lite |
+| preferredBroker | Preferred broker for deployment | "nats" |
+| monitoringSystem | Enable monitoring system (true/false) | false |
+| pullPolicy | Image pull policy | "Always" |
+| restartPolicy | Restart policy for the container | Always |
+| persistentVolumeReclaimPolicy | Reclaim policy for persistent volumes | "Delete" |
+| persistentVolumeAccessModes | Access mode for persistent volumes | "ReadWriteOnce" |
+| initialDelaySeconds | Initial delay for liveness and readiness probes | 60 |
+| periodSeconds | Interval between liveness and readiness probes | 30 |
+| failureThreshold | Number of consecutive failures for readiness probes | 30 |
+| hostPath | Host path for the application | "" |
+
+### StreamPipes common parameters
+
+| Parameter Name | Description | Value |
+|-------------------------------------------------|---------------------------------------------------------|------------------------------------------|
+| streampipes.version | StreamPipes version | "0.93.0-SNAPSHOT" |
+| streampipes.registry | StreamPipes registry URL | "apachestreampipes" |
+| streampipes.auth.secretName | The secret name for storing secrets | "sp-secrets" |
+| streampipes.auth.users.admin.user | The initial admin user | "admin@streampipes.apache.org" |
+| streampipes.auth.users.admin.password | The initial admin password (leave empty for autogen) | "admin" |
+| streampipes.auth.users.service.user | The initial service account user | "sp-service-client" |
+| streampipes.auth.users.service.secret | The initial service account secret | empty (auto-generated) |
+| streampipes.auth.encryption.passcode | Passcode for value encryption | empty (auto-generated) |
+| streampipes.core.appName | StreamPipes backend application name | "backend" |
+| streampipes.core.port | StreamPipes backend port | 8030 |
+| streampipes.core.persistence.storageClassName | Storage class name for backend PVs | "hostpath" |
+| streampipes.core.persistence.storageSize | Size of the backend PV | "1Gi" |
+| streampipes.core.persistence.claimName | Name of the backend PersistentVolumeClaim | "backend-pvc" |
+| streampipes.core.persistence.pvName | Name of the backend PersistentVolume | "backend-pv" |
+| streampipes.core.service.name | Name of the backend service | "backend" |
+| streampipes.core.service.port | TargetPort of the StreamPipes backend service | 8030 |
+| streampipes.ui.appName | StreamPipes UI application name | "ui" |
+| streampipes.ui.resolverActive | Flag for enabling DNS resolver for Nginx proxy | true |
+| streampipes.ui.port | StreamPipes UI port | 8088 |
+| streampipes.ui.resolver | DNS resolver for Nginx proxy | "kube-dns.kube-system.svc.cluster.local" |
+| streampipes.ui.service.name | Name of the UI service | "ui" |
+| streampipes.ui.service.type | Type of the UI service | "ClusterIP" |
+| streampipes.ui.service.nodePort | Node port for the UI service | 8088 |
+| streampipes.ui.service.port | TargetPort of the StreamPipes UI service | 8088 |
+| streampipes.ingress.active | Flag for enabling Ingress for StreamPipes | false |
+| streampipes.ingress.annotations | Annotations for Ingress | {} |
+| streampipes.ingress.host | Hostname for Ingress | "" |
+| streampipes.ingressroute.active | Flag for enabling IngressRoute for StreamPipes | true |
+| streampipes.ingressroute.annotations | Annotations for IngressRoute | {} |
+| streampipes.ingressroute.entryPoints | Entry points for IngressRoute | ["web", "websecure"] |
+| streampipes.ingressroute.host | Hostname for IngressRoute | "" |
+| streampipes.ingressroute.certResolverActive | Flag for enabling certificate resolver for IngressRoute | true |
+| streampipes.ingressroute.certResolver | Certificate resolver for IngressRoute | "" |
+
+
+### Extensions common parameters
+
+| Parameter Name | Description | Value |
+|-------------------------------------------------|---------------------------------------------------------|------------------------------------------|
+| extensions.iiot.appName | IIoT extensions application name | extensions-all-iiot |
+| extensions.iiot.port | Port for the IIoT extensions application | 8090 |
+| extensions.iiot.service.name | Name of the IIoT extensions service | extensions-all-iiot |
+| extensions.iiot.service.port | TargetPort of the IIoT extensions service | 8090 |
+
+
+### External common parameters
+
+#### Couchdb common parameters
+
+| Parameter Name | Description | Value |
+|-------------------------------------------------|----------------------------------------------------------|------------------------------------------|
+| external.couchdb.appName | CouchDB application name | "couchdb" |
+| external.couchdb.version | CouchDB version | 3.3.1 |
+| external.couchdb.user | CouchDB admin username | "admin" |
+| external.couchdb.password | CouchDB admin password | empty (auto-generated) |
+| external.couchdb.port | Port for the CouchDB service | 5984 |
+| external.couchdb.service.name | Name of the CouchDB service | "couchdb" |
+| external.couchdb.service.port | TargetPort of the CouchDB service | 5984 |
+| external.couchdb.persistence.storageClassName | Storage class name for CouchDB PVs | "hostpath" |
+| external.couchdb.persistence.storageSize | Size of the CouchDB PV | "1Gi" |
+| external.couchdb.persistence.claimName | Name of the CouchDB PersistentVolumeClaim | "couchdb-pvc" |
+| external.couchdb.persistence.pvName | Name of the CouchDB PersistentVolume | "couchdb-pv" |
+
+#### Influxdb common parameters
+
+| Parameter Name | Description | Value |
+|-------------------------------------------------|----------------------------------------------------------|------------------------------------------|
+| external.influxdb.appName | InfluxDB application name | "influxdb" |
+| external.influxdb.version | InfluxDB version | 2.6 |
+| external.influxdb.username | InfluxDB admin username | "admin" |
+| external.influxdb.password | InfluxDB admin password | empty (auto-generated) |
+| external.influxdb.adminToken | InfluxDB admin token | empty (auto-generated) |
+| external.influxdb.initOrg | InfluxDB initial organization | "sp" |
+| external.influxdb.initBucket | InfluxDB initial bucket | "sp" |
+| external.influxdb.initMode | InfluxDB initialization mode | "setup" |
+| external.influxdb.apiPort | Port number for the InfluxDB service (API) | 8083 |
+| external.influxdb.httpPort | Port number for the InfluxDB service (HTTP) | 8086 |
+| external.influxdb.grpcPort | Port number for the InfluxDB service (gRPC) | 8090 |
+| external.influxdb.service.name | Name of the InfluxDB service | "influxdb" |
+| external.influxdb.service.apiPort | TargetPort of the InfluxDB service for API | 8083 |
+| external.influxdb.service.httpPort | TargetPort of the InfluxDB service for HTTP | 8086 |
+| external.influxdb.service.grpcPort | TargetPort of the InfluxDB service for gRPC | 8090 |
+| external.influxdb.persistence.storageClassName | Storage class name for InfluxDB PVs | "hostpath" |
+| external.influxdb.persistence.storageSize | Size of the InfluxDB PV | "1Gi" |
+| external.influxdb.persistence.storageSizeV1 | Size of the InfluxDB PV for v1 databases | "1Gi" |
+| external.influxdb.persistence.claimName | Name of the InfluxDBv2 PersistentVolumeClaim | "influxdb2-pvc" |
+| external.influxdb.persistence.claimNameV1 | Name of the InfluxDBv1 PersistentVolumeClaim | "influxdb-pvc" |
+| external.influxdb.persistence.pvName | Name of the InfluxDBv2 PersistentVolume | "influxdb2-pv" |
+| external.influxdb.persistence.pvNameV1 | Name of the InfluxDBv1 PersistentVolume | "influxdb-pv" |
+
+
+#### Nats common parameters
+
+| Parameter Name | Description | Value |
+|-------------------------------------------------|----------------------------------------------------------|------------------------------------------|
+| external.nats.appName | NATS application name | "nats" |
+| external.nats.port | Port for the NATS service | 4222 |
+| external.nats.version | NATS version | |
+| external.nats.service.type | Type of the NATS service | "NodePort" |
+| external.nats.service.externalTrafficPolicy | External traffic policy for the NATS service | "Local" |
+| external.nats.service.name | Name of the NATS service | "nats" |
+| external.nats.service.port | TargetPort of the NATS service | 4222 |
+
+
+#### Kafka common parameters
+
+| Parameter Name | Description | Value |
+|-------------------------------------------------|----------------------------------------------------------|------------------------------------------|
+| external.kafka.appName | Kafka application name | "kafka" |
+| external.kafka.version | Kafka version | 2.2.0 |
+| external.kafka.port | Port for the Kafka service | 9092 |
+| external.kafka.external.hostname                 | Hostname which will be advertised to external clients (clients which use the default port 9094)           | "localhost"                               |
+| external.kafka.service.name | Name of the Kafka service | "kafka" |
+| external.kafka.service.port | TargetPort of the Kafka service | 9092 |
+| external.kafka.service.portOutside | Port for Kafka client outside of the cluster | 9094 |
+| external.kafka.persistence.storageClassName | Storage class name for Kafka PVs | "hostpath" |
+| external.kafka.persistence.storageSize | Size of the Kafka PV | "1Gi" |
+| external.kafka.persistence.claimName | Name of the Kafka PersistentVolumeClaim | "kafka-pvc" |
+| external.kafka.persistence.pvName | Name of the Kafka PersistentVolume | "kafka-pv" |
+|
+
+#### Zookeeper common parameters
+
+| Parameter Name | Description | Value |
+|-------------------------------------------------|----------------------------------------------------------|------------------------------------------|
+| external.zookeeper.appName | ZooKeeper application name | "zookeeper" |
+| external.zookeeper.version | ZooKeeper version | 3.4.13 |
+| external.zookeeper.port | Port for the ZooKeeper service | 2181 |
+| external.zookeeper.service.name | Name of the ZooKeeper service | "zookeeper" |
+| external.zookeeper.service.port | TargetPort of the ZooKeeper service | 2181 |
+| external.zookeeper.persistence.storageClassName | Storage class name for ZooKeeper PVs | "hostpath" |
+| external.zookeeper.persistence.storageSize | Size of the ZooKeeper PV | "1Gi" |
+| external.zookeeper.persistence.claimName | Name of the ZooKeeper PersistentVolumeClaim | "zookeeper-pvc" |
+| external.zookeeper.persistence.pvName | Name of the ZooKeeper PersistentVolume | "zookeeper-pv" |
+
+
+#### Pulsar common parameters
+
+| Parameter Name | Description | Value |
+|-------------------------------------------------|----------------------------------------------------------|------------------------------------------|
+| external.pulsar.appName | pulsar application name | "pulsar" |
+| external.pulsar.version | pulsar version | 3.0.0 |
+| external.pulsar.port | Port for the pulsar service | 6650 |
+| external.pulsar.service.name | Name of the pulsar service | "pulsar" |
+| external.pulsar.service.port | TargetPort of the pulsar service | 6650 |
+| external.pulsar.persistence.storageClassName | Storage class name for pulsar PVs | "hostpath" |
+| external.pulsar.persistence.storageSize | Size of the pulsar PV | "1Gi" |
+| external.pulsar.persistence.claimName | Name of the pulsar PersistentVolumeClaim | "pulsar-pvc" |
+| external.pulsar.persistence.pvName | Name of the pulsar PersistentVolume | "pulsar-pv" |
+
+### Monitoring common parameters
+
+#### Monitoring - Prometheus
+
+| Parameter Name | Description | Value |
+|-------------------------------------------------|----------------------------------------------------------|------------------------------------------|
+| prometheus.appName | Prometheus application name | "prometheus" |
+| prometheus.version | Prometheus version | 2.45.0 |
+| prometheus.port | Prometheus port | 9090 |
+| prometheus.service.name | Prometheus service name | "prometheus" |
+| prometheus.service.port | Prometheus service port | 9090 |
+| prometheus.persistence.storageClassName | Prometheus storage class name | "hostpath" |
+| prometheus.persistence.storageSize | Prometheus storage size | "2Gi" |
+| prometheus.persistence.claimName | Prometheus PVC claim name | "prometheus-pvc" |
+| prometheus.persistence.pvName | Prometheus PV name | "prometheus-pv" |
+| prometheus.persistence.tokenStorageSize | Prometheus token storage size | "16Ki" |
+| prometheus.config.scrapeInterval | Prometheus scrape interval | 10s |
+| prometheus.config.evaluationInterval | Prometheus evaluation interval | 15s |
+| prometheus.config.backendJobName | Prometheus backend job name | "backend" |
+| prometheus.config.extensionsName | Prometheus extensions job name | "extensions-all-iiot" |
+| prometheus.config.tokenFileName | Prometheus token file name | "token" |
+| prometheus.config.tokenFileDir                   | Prometheus token file directory                            | "/opt/data"                               |
+
+#### Monitoring - Grafana
+
+| Parameter Name | Description | Value |
+|-------------------------------------------------|----------------------------------------------------------|------------------------------------------|
+| grafana.appName | Grafana application name | "grafana" |
+| grafana.version | Grafana version | 10.1.2 |
+| grafana.port | Grafana port | 3000 |
+| grafana.service.name | Grafana service name | "grafana" |
+| grafana.service.port | Grafana service port | 3000 |
+| grafana.persistence.storageClassName | Grafana storage class name | "hostpath" |
+| grafana.persistence.storageSize | Grafana storage size | "1Gi" |
+| grafana.persistence.claimName | Grafana PVC claim name | "grafana-pvc" |
+| grafana.persistence.pvName | Grafana PV name | "grafana-pv" |
+
+
+## Auto-generation of parameters
+
+The helm chart includes a `secrets.yaml` file which auto-generates several settings as follows:
+
+```yaml
+
+apiVersion: v1
+kind: Secret
+metadata:
+ name: sp-secrets
+ namespace: {{ .Release.Namespace | quote }}
+type: Opaque
+data:
+ sp-initial-admin-password: {{ ternary (randAlphaNum 10) .Values.streampipes.auth.users.admin.password (empty .Values.streampipes.auth.users.admin.password) | b64enc | quote }}
+ sp-initial-client-secret: {{ ternary (randAlphaNum 35) .Values.streampipes.auth.users.service.secret (empty .Values.streampipes.auth.users.service.secret) | b64enc | quote }}
+ sp-encryption-passcode: {{ ternary (randAlphaNum 20) .Values.streampipes.auth.encryption.passcode (empty .Values.streampipes.auth.encryption.passcode) | b64enc | quote }}
+ sp-couchdb-password: {{ ternary (randAlphaNum 20) .Values.external.couchdb.password (empty .Values.external.couchdb.password) | b64enc | quote }}
+ sp-ts-storage-password: {{ ternary (randAlphaNum 20) .Values.external.influxdb.password (empty .Values.external.influxdb.password) | b64enc | quote }}
+ sp-ts-storage-token: {{ ternary (randAlphaNum 20) .Values.external.influxdb.adminToken (empty .Values.external.influxdb.adminToken) | b64enc | quote }}
+
+```
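+
+For example, once the chart is deployed, an auto-generated value can be read back from the `sp-secrets` secret defined above and decoded with standard `kubectl`:
+
+```bash
+# decode the auto-generated initial admin password
+kubectl get secret sp-secrets -o jsonpath='{.data.sp-initial-admin-password}' | base64 --decode
+```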
+
+
+## Deleting the current Helm chart deployment
+```bash
+helm uninstall streampipes
+```
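+
+Note that, depending on the storage class, persistent volumes and claims created for the services above may survive an uninstall. A sketch for removing the claims named in the tables above (verify the names in your cluster first):
+
+```bash
+# remove leftover PersistentVolumeClaims after uninstalling
+kubectl delete pvc prometheus-pvc grafana-pvc pulsar-pvc
+```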
diff --git a/website-v2/versioned_docs/version-0.70.0/05_deploy-security.md b/website-v2/versioned_docs/version-0.95.1/05_deploy-security.md
similarity index 99%
rename from website-v2/versioned_docs/version-0.70.0/05_deploy-security.md
rename to website-v2/versioned_docs/version-0.95.1/05_deploy-security.md
index b9958c436..cae5bdcbf 100644
--- a/website-v2/versioned_docs/version-0.70.0/05_deploy-security.md
+++ b/website-v2/versioned_docs/version-0.95.1/05_deploy-security.md
@@ -2,7 +2,6 @@
id: deploy-security
title: Security
sidebar_label: Security
-original_id: deploy-security
---
## Overriding default settings
diff --git a/website-v2/versioned_docs/version-0.70.0/05_deploy-use-ssl.md b/website-v2/versioned_docs/version-0.95.1/05_deploy-use-ssl.md
similarity index 97%
rename from website-v2/versioned_docs/version-0.70.0/05_deploy-use-ssl.md
rename to website-v2/versioned_docs/version-0.95.1/05_deploy-use-ssl.md
index f8c44a68b..d5762b8dc 100644
--- a/website-v2/versioned_docs/version-0.70.0/05_deploy-use-ssl.md
+++ b/website-v2/versioned_docs/version-0.95.1/05_deploy-use-ssl.md
@@ -2,7 +2,6 @@
id: deploy-use-ssl
title: Use SSL
sidebar_label: Use SSL
-original_id: deploy-use-ssl
---
This page explains how SSL Certificates can be used to provide transport layer security between your Browser and the Streampipes Backend.
diff --git a/website-v2/versioned_docs/version-0.70.0/06_extend-archetypes.md b/website-v2/versioned_docs/version-0.95.1/06_extend-archetypes.md
similarity index 73%
rename from website-v2/versioned_docs/version-0.70.0/06_extend-archetypes.md
rename to website-v2/versioned_docs/version-0.95.1/06_extend-archetypes.md
index 2880d9289..a6907f0ee 100644
--- a/website-v2/versioned_docs/version-0.70.0/06_extend-archetypes.md
+++ b/website-v2/versioned_docs/version-0.95.1/06_extend-archetypes.md
@@ -2,7 +2,6 @@
id: extend-archetypes
title: Maven Archetypes
sidebar_label: Maven Archetypes
-original_id: extend-archetypes
---
In this tutorial we explain how you can use the Maven archetypes to develop your own StreamPipes processors and sinks.
@@ -21,7 +20,10 @@ First, the ``groupId`` of the resulting Maven artifact must be set.
We use ``groupId``: ``org.example`` and ``artifactId``: ``ExampleProcessor``.
You can keep the default values for the other settings, confirm them by hitting enter.
-The current {sp.version} is 0.69.0 (for a pre-release version, use the SNAPSHOT appendix, e.g. 0.69.0-SNAPSHOT)
+:::info Choosing the right version
+Make sure that the version used to create your archetype matches your running Apache StreamPipes version.
+In the example below, replace `{sp.version}` with the proper version, e.g., `0.92.0`.
+:::
```bash
mvn archetype:generate \
@@ -29,25 +31,6 @@ mvn archetype:generate \
-DarchetypeArtifactId=streampipes-archetype-extensions-jvm \
-DarchetypeVersion={sp.version}
```
-
- Other archetypes
-
-## Processors Flink
-```bash
-mvn archetype:generate \
- -DarchetypeGroupId=org.apache.streampipes \
- -DarchetypeArtifactId=streampipes-archetype-pe-processors-flink \
- -DarchetypeVersion={sp.version}
-```
-
-## Sinks Flink
-```bash
-mvn archetype:generate \
- -DarchetypeGroupId=org.apache.streampipes \
- -DarchetypeArtifactId=streampipes-archetype-pe-sinks-flink \
- -DarchetypeVersion={sp.version}
-```
-
## Project structure
@@ -61,5 +44,3 @@ For details, have a look at the other parts of the Developer Guide, where these
## Next steps
Click [here](06_extend-first-processor.md) to learn how to create your first data processor.
-
-
diff --git a/website-v2/versioned_docs/version-0.70.0/06_extend-cli.md b/website-v2/versioned_docs/version-0.95.1/06_extend-cli.md
similarity index 86%
rename from website-v2/versioned_docs/version-0.70.0/06_extend-cli.md
rename to website-v2/versioned_docs/version-0.95.1/06_extend-cli.md
index ee7c7d765..e5f93cfd6 100644
--- a/website-v2/versioned_docs/version-0.70.0/06_extend-cli.md
+++ b/website-v2/versioned_docs/version-0.95.1/06_extend-cli.md
@@ -2,7 +2,6 @@
id: extend-cli
title: StreamPipes CLI
sidebar_label: StreamPipes CLI
-original_id: extend-cli
---
The StreamPipes command-line interface (CLI) is focused on developers in order to provide an easy entrypoint to set up a suitable dev environment, either planning on developing
@@ -24,7 +23,7 @@ pipeline-element
streampipes env --set pipeline-element
streampipes up -d
```
-> **NOTE**: use `./streampipes` if you haven't add it to the PATH and sourced it (see section "Run `streampipes` from anywhere?").
+> **NOTE**: use `./installer/cli/streampipes` if you haven't added it to the PATH and sourced it (see section "Run `streampipes` from anywhere?").
## Prerequisites
The CLI is basically a wrapper around multiple `docker` and `docker-compose` commands plus some additional sugar.
@@ -35,9 +34,9 @@ The CLI is basically a wrapper around multiple `docker` and `docker-compose` com
* For Windows Developer: GitBash only
-Tested on: **macOS**, **Linux**, **Windows***)
+Tested on: **macOS**, **Linux**, **Windows**
-> **NOTE**: *) If you're using Windows the CLI only works in combination with GitBash - CMD, PowerShell won't work.
+> **NOTE**: If you're using Windows the CLI only works in combination with GitBash - CMD, PowerShell won't work.
## CLI commands overview
@@ -83,7 +82,7 @@ streampipes env --set pipeline-element
```
**Start** environment ( default: `dev` mode). Here the service definition in the selected environment is used to start the multi-container landscape.
-> **NOTE**: `dev` mode is enabled by default since we rely on open ports to core service such as `consul`, `couchdb`, `kafka` etc. to reach from the IDE when developing. If you don't want to map ports (except the UI port), then use the `--no-ports` flag.
+> **NOTE**: `dev` mode is enabled by default since we rely on open ports to core services such as `couchdb` or `kafka`, which need to be reachable from the IDE when developing. If you don't want to map ports (except the UI port), then use the `--no-ports` flag.
```bash
streampipes up -d
@@ -92,7 +91,7 @@ streampipes up -d
```
Now you're good to go to write your new pipeline element :tada: :tada: :tada:
-> **HINT for extensions**: Use our [Maven archetypes](https://streampipes.apache.org/docs/docs/dev-guide-archetype/) to setup a project skeleton and use your IDE of choice for development. However, we do recommend using IntelliJ.
+> **HINT for extensions**: Use our [Maven archetypes](https://streampipes.apache.org/docs/docs/extend-archetypes/) to set up a project skeleton and use your IDE of choice for development. However, we do recommend using IntelliJ.
> **HINT for core**: To work on `backend` or `ui` features you need to set the template to `backend` and clone the core repository [streampipes](https://github.com/apache/streampipes) - check the prerequisites there for more information.
@@ -105,12 +104,12 @@ streampipes down
## Additionally, useful commands
-**Start individual services only?** We got you! You chose a template that suits your needs and now you only want to start individual services from it, e.g. only Kafka and Consul.
+**Start individual services only?** We got you! You chose a template that suits your needs and now you only want to start individual services from it, e.g. only Kafka and InfluxDB.
> **NOTE**: the service names need to be present and match your current `.spenv` environment.
```bash
-streampipes up -d kafka consul
+streampipes up -d kafka influxdb
```
**Get current environment** (if previously set using `streampipes env --set `).
@@ -131,8 +130,8 @@ streampipes pull
**Restart** all services of current environment or specific services
```bash
streampipes restart
-# restart backend & consul
-# streampipes restart backend consul
+# restart backend
+# streampipes restart backend
```
**Clean** your system and remove created StreamPipes Docker volumes, StreamPipes docker network and dangling StreamPipes images of old image layers.
@@ -184,7 +183,7 @@ For **macOS**, or **Linux**:
export PATH="/path/to/streampipes-installer/installer/cli:$PATH"
```
-For **Windows 10**, e.g. check this [documentation](https://helpdeskgeek.com/windows-10/add-windows-path-environment-variable/).
+For **Windows** add `installer\cli` to environment variables, e.g. check this [documentation](https://helpdeskgeek.com/windows-10/add-windows-path-environment-variable/).
## Upgrade to new version
diff --git a/website-v2/versioned_docs/version-0.95.1/06_extend-client.md b/website-v2/versioned_docs/version-0.95.1/06_extend-client.md
new file mode 100644
index 000000000..f584c3d2c
--- /dev/null
+++ b/website-v2/versioned_docs/version-0.95.1/06_extend-client.md
@@ -0,0 +1,204 @@
+---
+id: extend-client
+title: StreamPipes Client
+sidebar_label: StreamPipes Client
+---
+
+
+:::info Looking for Python support?
+
+This section explains how to use the Apache StreamPipes Java Client. Please read the Python docs to find out how to use
+the client for Python.
+
+:::
+
+## About the StreamPipes client
+
+Sometimes you don't want to write your own extensions to StreamPipes, but want to interact with StreamPipes from
+an external application.
+One example is to influence the lifecycle of pipelines - think of a feature which automatically starts or stops specific
+pipelines that monitor the production of a specific product.
+
+Another example is to gather live data from Apache StreamPipes, e.g., to consume data that has been previously connected
+by an external, standalone application.
+
+For such use cases, we provide the StreamPipes client, which is currently available in Python and Java. This section
+covers the usage of the Java client.
+
+## Using the StreamPipes client
+
+:::info Choosing the right version
+
+Your client library version should match the installed Apache StreamPipes version. Replace `${streampipes.version}` with
+the version of your installation, e.g., `0.92.0`.
+
+:::
+
+In your Java project, add the following dependency to your pom file:
+
+```xml
+<dependency>
+    <groupId>org.apache.streampipes</groupId>
+    <artifactId>streampipes-client</artifactId>
+    <version>${streampipes.version}</version>
+</dependency>
+```
+
+## Obtaining an API token
+
+
+
+To communicate with Apache StreamPipes, you need to provide proper credentials. There are two ways to obtain
+credentials:
+
+* An API token, which is bound to a user. The API token can be generated from the UI by clicking on the user icon and
+  then navigating to `Profile/API`.
+* A service user, which can be created by users with role `Admin`.
+
+Service users can have their own permissions, while API tokens inherit all permissions from the corresponding user.
+
+## Connecting to StreamPipes
+
+Once you have your API token and configured your dependencies, you can connect to an Apache StreamPipes instance as
+follows:
+
+```java
+
+CredentialsProvider credentials = StreamPipesCredentials
+    .withApiKey("admin@streampipes.apache.org", "YOUR_API_KEY");
+
+// Create an instance of the StreamPipes client
+StreamPipesClient client = StreamPipesClient
+    .create("localhost", 8082, credentials, true);
+
+```
+
+The following configurations are required:
+
+* The `withApiKey` method expects the username and the API key. Alternatively, use the `withServiceToken` method to
+ authenticate as a service user.
+* The client instance requires the hostname or IP address of your running StreamPipes instance. In addition, you need to
+ provide the port, the credentials object and a flag which needs to be set in case the StreamPipes instance is not
+ served over HTTPS.
+* There are short-hand convenience options to create a client instance.
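+
+As an alternative to the API key, a service user authenticates via the `withServiceToken` method mentioned above. A minimal sketch, where the service user name and secret are placeholders you need to replace with your own values:
+
+```java
+// authenticate as a service user (names are illustrative placeholders)
+CredentialsProvider serviceCredentials = StreamPipesCredentials
+    .withServiceToken("my-service-user", "YOUR_SERVICE_TOKEN");
+```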
+
+## Working with the client
+
+Here are some examples of how you can work with the StreamPipes client:
+
+```java
+
+// Get streams
+List<SpDataStream> streams = client.streams().all();
+
+// Get a specific stream
+Optional<SpDataStream> stream = client.streams().get("STREAM_ID");
+
+// see the schema of a data stream
+EventSchema schema = stream.get().getEventSchema();
+
+// print the list of fields of this stream
+List<EventProperty> fields = schema.getEventProperties();
+
+// Get all pipelines
+List<Pipeline> pipelines = client.pipelines().all();
+
+// Start a pipeline
+PipelineOperationStatus status = client.pipelines().start(pipelines.get(0));
+
+// Stop a pipeline by providing a pipeline id
+PipelineOperationStatus stopStatus = client.pipelines().stop("PIPELINE_ID");
+
+// Get all pipeline element templates
+List<PipelineElementTemplate> templates = client.pipelineElementTemplates().all();
+
+// Get all data sinks
+List<DataSinkInvocation> dataSinks = client.sinks().all();
+
+
+```
+
+## Consuming live data
+
+StreamPipes supports a variety of messaging protocols to internally handle data streams. If you plan to gather live data
+from the client library, you also need to add one or more of the supported messaging
+protocols to the pom file. The default protocol depends on the StreamPipes configuration and is set in the `.env` file
+in your installation folder.
+
+```xml
+<!-- Kafka -->
+<dependency>
+    <groupId>org.apache.streampipes</groupId>
+    <artifactId>streampipes-messaging-kafka</artifactId>
+    <version>${streampipes.version}</version>
+</dependency>
+
+<!-- NATS -->
+<dependency>
+    <groupId>org.apache.streampipes</groupId>
+    <artifactId>streampipes-messaging-nats</artifactId>
+    <version>${streampipes.version}</version>
+</dependency>
+
+<!-- MQTT -->
+<dependency>
+    <groupId>org.apache.streampipes</groupId>
+    <artifactId>streampipes-messaging-mqtt</artifactId>
+    <version>${streampipes.version}</version>
+</dependency>
+```
+
+In addition, add the message format that is used internally by StreamPipes. The default message format used by
+StreamPipes is JSON, so let's include the dependency as well:
+
+```xml
+<dependency>
+    <groupId>org.apache.streampipes</groupId>
+    <artifactId>streampipes-dataformat-json</artifactId>
+    <version>${streampipes.version}</version>
+</dependency>
+```
+
+Once you've imported the dependencies, it is easy to consume live data. First, register the protocols and formats in
+your client instance:
+
+```java
+
+client.registerProtocol(new SpKafkaProtocolFactory());
+
+// or Nats:
+client.registerProtocol(new SpNatsProtocolFactory());
+
+// data format:
+client.registerDataFormat(new JsonDataFormatFactory());
+
+```
+
+Then, you are ready to consume data:
+
+```java
+
+client.streams().subscribe(streams.get(0), new EventProcessor() {
+    @Override
+    public void onEvent(Event event) {
+        // example: print the event's key-value pairs
+        MapUtils.debugPrint(System.out, "event", event.getRaw());
+    }
+});
+
+```
+
+:::tip
+
+There are many more options to work with the StreamPipes Client - e.g., you can trigger emails directly from the API.
+Just explore the various classes and interfaces provided by the client!
+
+:::
diff --git a/website-v2/versioned_docs/version-0.95.1/06_extend-customize-ui.md b/website-v2/versioned_docs/version-0.95.1/06_extend-customize-ui.md
new file mode 100644
index 000000000..c09823ffb
--- /dev/null
+++ b/website-v2/versioned_docs/version-0.95.1/06_extend-customize-ui.md
@@ -0,0 +1,226 @@
+---
+id: extend-customize-ui
+title: UI customization
+sidebar_label: UI customization
+---
+
+
+## Custom theme
+
+It is possible to use a custom theme with individual styles, logos and images instead of the default StreamPipes theme.
+
+In this section, we describe the necessary steps to build and deploy a custom theme.
+
+
+### Prerequisite: Learn how to run and build the UI
+
+To use a custom theme, it is required to build the UI with the custom settings.
+In general, the UI can be found in the `ui` folder of the source code.
+
+Perform the following steps to build the UI:
+
+```bash
+
+# Install all necessary packages
+npm install
+
+# Start the UI for development purposes
+npm run start
+
+# Build the StreamPipes UI
+npm run build
+
+```
+
+## Customizable assets
+
+The following assets can be provided in a customized theme:
+
+* **Logo** This is the main logo image, which is shown e.g., on the login page.
+* **Navigation Logo** This is the logo which appears in the top navigation bar after successful login
+* **Favicon** The favicon is shown in the browser navbar. It is also used as the loading animation in StreamPipes.
+* **String constants** Customizable strings, e.g., when you want to use a different application name than **Apache StreamPipes**.
+* **Theme variables** An SCSS file which defines custom colors and layouts.
+
+## Customize constants
+
+To customize constants, you can create a custom file `app.constants.ts` and modify the content based on the template below:
+
+```typescript
+
+import {Injectable} from '@angular/core';
+
+@Injectable()
+export class AppConstants {
+
+ public readonly APP_NAME = "Apache StreamPipes";
+ public readonly APP_TITLE = 'Apache StreamPipes';
+ public readonly EMAIL = "admin@streampipes.apache.org";
+}
+
+
+```
+
+## Customize theme
+
+To customize the theme, we provide a file named `variables.scss` which can be overridden with default color and style settings.
+
+See the example below:
+
+```scss
+
+/*!
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+$sp-color-primary: rgb(57, 181, 74);
+$sp-color-primary-600: #06c12a;
+
+$sp-color-accent: #1b1464;
+
+$sp-color-accent-light-blue: rgb(59, 92, 149);
+$sp-color-accent-light: rgb(156, 156, 156);
+$sp-color-accent-light-transparent: rgba(156, 156, 156, 0.4);
+
+$sp-color-accent-dark: #83a3de;
+
+$sp-color-adapter: #7f007f;
+$sp-color-stream: #ffeb3b;
+$sp-color-processor: #009688;
+$sp-color-sink: #3f51b5;
+
+$sp-color-error: #b71c1c;
+
+body {
+ --color-data-view: rgb(122, 206, 227);
+ --color-dashboard: rgb(76, 115, 164);
+ --color-adapter: rgb(182, 140, 97);
+ --color-data-source: #ffeb3b;
+ --color-pipeline: rgb(102, 185, 114);
+ --color-measurement: rgb(39, 164, 155);
+ --color-file: rgb(163, 98, 190);
+
+ --button-border-radius: 5px;
+ --iconbar-width: 35px;
+ --navbar-icon-border-radius: 0;
+ --navbar-icon-padding: 0;
+}
+
+:root {
+ --color-loading-bar: #{$sp-color-accent};
+}
+
+.dark-mode {
+ --color-primary: #{$sp-color-primary};
+ --color-accent: #{$sp-color-accent-dark};
+ --color-bg-outer: var(--color-bg-1);
+ --color-bg-page-container: var(--color-bg-0);
+ --color-bg-main-panel-header: var(--color-bg-0);
+ --color-bg-main-panel-content: var(--color-bg-0);
+ --color-bg-navbar-icon: inherit;
+ --color-bg-navbar-icon-selected: inherit;
+ --color-bg-0: #121212;
+ --color-bg-1: #282828;
+ --color-bg-2: #404040;
+ --color-bg-3: #424242;
+ --color-bg-4: #5f5f5f;
+ --color-bg-dialog: rgb(66, 66, 66);
+ --color-shadow: #c4c4c4;
+ --color-pe: #404040;
+ --color-default-text: rgba(255, 255, 255, 0.87);
+ --color-warn: #b36161;
+
+ --color-tab-border: #cccccc;
+
+ --color-navigation-bg: var(--color-primary);
+ --color-navigation-link-text: var(--color-bg-0);
+ --color-navigation-text: #121212;
+ --color-navigation-selected: #{$sp-color-primary};
+ --color-navigation-hover: #{$sp-color-primary-600};
+ --color-navigation-bg-selected: var(--color-bg-1);
+ --color-navigation-divider: #{$sp-color-primary};
+
+ --content-box-color: #404040;
+ --canvas-color: linear-gradient(
+ 90deg,
+ rgba(50, 50, 50, 0.5) 10%,
+ transparent 0%
+ ),
+ linear-gradient(rgba(50, 50, 50, 0.5) 10%, transparent 0%);
+}
+
+.light-mode {
+ --color-primary: #{$sp-color-primary};
+ --color-accent: #{$sp-color-accent};
+ --color-bg-outer: var(--color-bg-1);
+ --color-bg-page-container: var(--color-bg-0);
+ --color-bg-main-panel-header: var(--color-bg-0);
+ --color-bg-main-panel-content: var(--color-bg-0);
+ --color-bg-navbar-icon: inherit;
+ --color-bg-navbar-icon-selected: inherit;
+ --color-bg-0: #ffffff;
+ --color-bg-1: #fafafa;
+ --color-bg-2: #f1f1f1;
+ --color-bg-3: rgb(224, 224, 224);
+ --color-bg-4: rgb(212, 212, 212);
+ --color-bg-dialog: #ffffff;
+ --color-shadow: #555;
+ --color-pe: #ffffff;
+ --color-default-text: #121212;
+ --color-warn: #b71c1c;
+
+ --color-tab-border: #cccccc;
+
+ --color-navigation-bg: var(--color-primary);
+ --color-navigation-link-text: var(--color-bg-0);
+ --color-navigation-text: #ffffff;
+ --color-navigation-selected: #{$sp-color-primary};
+ --color-navigation-hover: #{$sp-color-primary-600};
+ --color-navigation-bg-selected: var(--color-bg-1);
+ --color-navigation-divider: var(--color-primary);
+
+ --content-box-color: rgb(156, 156, 156);
+ --canvas-color: linear-gradient(
+ 90deg,
+ rgba(208, 208, 208, 0.5) 10%,
+ transparent 0%
+ ),
+ linear-gradient(rgba(208, 208, 208, 0.5) 10%, transparent 0%);
+}
+
+```
+## Run a customized build
+
+To create a new UI build with customized themes, use the following command:
+
+```bash
+
+UI_LOC=PATH_TO_FOLDER_WITH_CUSTOM_TEMPLATES \
+THEME_LOC=$UI_LOC/_variables.scss \
+LOGO_HEADER_LOC=$UI_LOC/img/logo.png \
+FAVICON_LOC=$UI_LOC/img/favicon.png \
+LOGO_NAV_LOC=$UI_LOC/img/logo-navigation.png \
+CONSTANTS_FILE=$UI_LOC/app.constants.ts \
+npm run build
+
+```
+
+First, we create a helper environment variable that links to a folder which includes custom logos, the theme file and constants.
+Next, we set the variables above to override default logos and stylings.
+Finally, the usual build process is executed.
+
+Once finished, you've successfully customized an Apache StreamPipes instance!
diff --git a/website-v2/versioned_docs/version-0.70.0/06_extend-first-processor.md b/website-v2/versioned_docs/version-0.95.1/06_extend-first-processor.md
similarity index 95%
rename from website-v2/versioned_docs/version-0.70.0/06_extend-first-processor.md
rename to website-v2/versioned_docs/version-0.95.1/06_extend-first-processor.md
index aa00bbb5d..96080508f 100644
--- a/website-v2/versioned_docs/version-0.70.0/06_extend-first-processor.md
+++ b/website-v2/versioned_docs/version-0.95.1/06_extend-first-processor.md
@@ -2,7 +2,6 @@
id: extend-first-processor
title: Your first data processor
sidebar_label: Your first data processor
-original_id: extend-first-processor
---
In this section, we will explain how to start a pipeline element service and install it using the StreamPipes UI.
@@ -27,22 +26,19 @@ Once you start an extensions service, you will see the chosen IP in printed in t
If you see such an IP or the extensions service complains that it cannot resolve the IP, you can manually set the IP address of the extensions service. You can do so by providing an SP_HOST
environment variable.
-To check if the service is up and running, open the browser on *'localhost:8090'* (or the port defined in the service definition). The machine-readable description of the processor should be visible as shown below.
-
+To check if the service is up and running, open the browser on *'localhost:8090'* (or the port defined in the service definition). The machine-readable description of the processor should be visible as shown below.
+
-
-
Common Problems
-
+:::caution Common Problems
If the service description is not shown on 'localhost:8090', you might have to change the port address.
This needs to be done in the configuration of your service, further explained in the configurations part of the developer guide.
If the service does not show up in the StreamPipes installation menu, click on 'MANAGE ENDPOINTS' and add 'http://YOUR_IP_OR_DNS_NAME:8090'.
Use the IP or DNS name you provided as the SP_HOST variable or the IP (if resolvable) found by the auto-discovery service printed in the console.
After adding the endpoint, a new processor with the name *Example* should show up.
-
-
+:::
Now you can go to StreamPipes.
Your new processor *'Example'* should now show up in the installation menu ("Install Pipeline Elements" in the left navigation bar).
diff --git a/website-v2/versioned_docs/version-0.70.0/06_extend-sdk-event-model.md b/website-v2/versioned_docs/version-0.95.1/06_extend-sdk-event-model.md
similarity index 98%
rename from website-v2/versioned_docs/version-0.70.0/06_extend-sdk-event-model.md
rename to website-v2/versioned_docs/version-0.95.1/06_extend-sdk-event-model.md
index f4bb8ed1e..42cc8d472 100644
--- a/website-v2/versioned_docs/version-0.70.0/06_extend-sdk-event-model.md
+++ b/website-v2/versioned_docs/version-0.95.1/06_extend-sdk-event-model.md
@@ -2,7 +2,6 @@
id: extend-sdk-event-model
title: "SDK Guide: Event Model"
sidebar_label: "SDK: Event Model"
-original_id: extend-sdk-event-model
---
## Introduction
@@ -11,7 +10,7 @@ This guide explains the usage of the event model to manipulate runtime events fo
## Prerequisites
-This guide assumes that you are already familiar with the basic setup of [data processors](extend-first-processor).
+This guide assumes that you are already familiar with the basic setup of [data processors](06_extend-first-processor.md).
### Property Selectors
diff --git a/website-v2/versioned_docs/version-0.70.0/06_extend-sdk-functions.md b/website-v2/versioned_docs/version-0.95.1/06_extend-sdk-functions.md
similarity index 94%
rename from website-v2/versioned_docs/version-0.70.0/06_extend-sdk-functions.md
rename to website-v2/versioned_docs/version-0.95.1/06_extend-sdk-functions.md
index 77d2bb966..659690b71 100644
--- a/website-v2/versioned_docs/version-0.70.0/06_extend-sdk-functions.md
+++ b/website-v2/versioned_docs/version-0.95.1/06_extend-sdk-functions.md
@@ -2,7 +2,6 @@
id: extend-sdk-functions
title: "SDK Guide: Functions"
sidebar_label: "SDK: Functions"
-original_id: extend-sdk-functions
---
## Introduction
@@ -25,11 +24,13 @@ and run until the service is stopped.
## Writing a function
-
-
Work in Progress
-
Functions are currently in preview mode and are not yet recommended for production usage.
-APIs are subject to change in a future version.
-
+:::caution Work in Progress
+
+Functions are currently in preview mode and are not yet recommended for production usage.
+APIs are subject to change in a future version.
+
+:::
+
To define a function, create a new extensions service using the [Maven Archetypes](06_extend-archetypes.md) or use an already existing service.
diff --git a/website-v2/versioned_docs/version-0.70.0/06_extend-sdk-migration-sd.md b/website-v2/versioned_docs/version-0.95.1/06_extend-sdk-migration-sd.md
similarity index 84%
rename from website-v2/versioned_docs/version-0.70.0/06_extend-sdk-migration-sd.md
rename to website-v2/versioned_docs/version-0.95.1/06_extend-sdk-migration-sd.md
index 054d11c60..1fd5200ad 100644
--- a/website-v2/versioned_docs/version-0.70.0/06_extend-sdk-migration-sd.md
+++ b/website-v2/versioned_docs/version-0.95.1/06_extend-sdk-migration-sd.md
@@ -2,7 +2,6 @@
id: extend-sdk-migration-service-discovery
title: "Migration Guide: New Service Discovery in 0.69.0"
sidebar_label: "Migration Guide: 0.69.0"
-original_id: extend-sdk-migration-service-discovery
---
@@ -99,15 +98,16 @@ Configs can be easily accessed from the ``EventProcessorRuntimeContext`` (or ``E
### Service Discovery
-An extensions service can be started by executing the Init class. StreamPipes will now automatically select the proper service IP address and register the service in Consul.
+An extensions service can be started by executing the Init class.
+StreamPipes will now automatically select the proper service IP address and register the service at the backend.
You can inspect the selected IP address in the console:
```
-16:41:58.342 SP [main] INFO o.a.s.commons.networking.Networking - Using auto-discovered IP: 172.30.80.1
-16:41:58.364 SP [main] INFO o.a.s.commons.networking.Networking - Using port from provided environment variable SP_PORT: 6025
-16:41:58.367 SP [main] INFO o.a.s.c.init.DeclarersSingleton - Registering 0 configs in key/value store
-16:41:58.400 SP [main] INFO o.a.s.s.consul.ConsulProvider - Checking if consul is available...
-16:41:58.419 SP [main] INFO o.a.s.s.consul.ConsulProvider - Successfully connected to Consul
+2024-05-16T11:03:37.158+02:00 INFO --- [ main] o.a.s.commons.networking.Networking : Using auto-discovered IP: 192.168.178.22
+2024-05-16T11:03:37.158+02:00 INFO --- [ main] o.a.s.commons.networking.Networking : Using port from provided environment variable SP_PORT: 7023
+2024-05-16T11:03:37.372+02:00 INFO --- [ main] a.s.s.e.StreamPipesExtensionsServiceBase : Registering service org.apache.streampipes.extensions.all.jvm with id org.apache.streampipes.extensions.all.jvm-FUt84Y at core
+2024-05-16T11:03:37.814+02:00 INFO --- [ main] o.a.s.s.extensions.CoreRequestSubmitter : Successfully registered service at core.
+2024-05-16T11:03:37.814+02:00 INFO --- [ main] a.s.s.e.StreamPipesExtensionsServiceBase : Registering 1 service configs for service org.apache.streampipes.extensions.all.jvm
```
In some (rare) cases, a non-resolvable IP will be selected. In this case, you can manually override the IP by providing a ``SP_HOST`` environment variable. This falls back to a similar behaviour as in pre-0.69.0-versions and will use the manually provided IP.
diff --git a/website-v2/versioned_docs/version-0.95.1/06_extend-sdk-migrations.md b/website-v2/versioned_docs/version-0.95.1/06_extend-sdk-migrations.md
new file mode 100644
index 000000000..bf822c1b8
--- /dev/null
+++ b/website-v2/versioned_docs/version-0.95.1/06_extend-sdk-migrations.md
@@ -0,0 +1,179 @@
+---
+id: extend-sdk-migration
+title: "SDK Guide: Pipeline Element Migration"
+sidebar_label: "SDK: PE Migration"
+---
+
+Pipeline element migrations allow you to automatically update and migrate both existing and future pipeline elements when a new
+version of StreamPipes is installed. This means that whenever you upgrade StreamPipes, all existing and future
+pipeline elements will be directly compatible with the new version without any manual interaction. Pipeline elements
+include adapters, data processors, and data sinks.
+
+:::info
+Migrations will make their debut in StreamPipes version `0.93.0` and will be an integral part of the system going
+forward.
+However, it's important to note that this feature is not available in any of the previous versions of StreamPipes. To
+take full advantage of migrations and their benefits, it is recommended to upgrade to version `0.93.0` or later.
+This will ensure that you have access to the latest enhancements and maintain compatibility with the evolving StreamPipes
+platform.
+:::
+
+## Define Migrations
+
+Whenever a pipeline element, be it an adapter, data processor, or data sink, undergoes changes that result in
+modifications to its configuration options, developers must additionally create a migration procedure. This migration
+process should be capable of smoothly transitioning all affected instances from the previous version to the new one.
+The migration itself is automatically managed and executed by StreamPipes. Developers are only responsible for two key
+aspects:
+
+* **Implementing the concrete migration**: Developers need to craft the specific migration logic that facilitates the
+ seamless transition of configuration options.
+* **Registering the migration**: Developers should register their migration procedures at the extensions service,
+ allowing StreamPipes to identify and apply the necessary updates to affected instances.
+
+By adhering to these two essential tasks, developers can ensure a hassle-free evolution of pipeline elements while
+StreamPipes handles the orchestration of the migration process.
+
+The following gives a concrete example of creating a migration for
+the [S7 adapter](./pe/org.apache.streampipes.connect.iiot.adapters.plc4x.s7.md).
+Thereby, we assume this adapter has received a new input element which determines whether the connection should be made
+authenticated or not.
+This is represented by a simple boolean that is visualized as a toggle button in the UI.
+
+### Implementing a Concrete Migration
+
+StreamPipes offers three distinct migration mechanisms tailored to specific types of pipeline
+elements: `IAdapterMigrator`, `IDataProcessorMigrator`, and `IDataSinkMigrator`.
+These migration mechanisms are presented as interfaces and require the implementation of two fundamental methods:
+
+* `config()`: This method defines the configuration for the migration, encompassing all essential metadata related to
+ the migration process.
+* `migrate()`: Within this method, the actual migration logic is to be implemented. It serves as the operational core
+ for facilitating the migration for the respective pipeline element.
+
+In accordance with the example described above, we will implement the `Plc4xS7AdapterMigrationV1` in the following.
+
+:::note
+Before we begin, it's important to familiarize ourselves with two key conventions that guide our approach to migrations:
+
+* To maintain clarity and organization, all migration classes associated with a specific pipeline element are located
+ within a dedicated sub-package named `migration`. This sub-package is nested within the package of the respective
+ pipeline element.
+* Migration classes are named according to a specific schema: `<NameOfPipelineElement>MigrationV<targetVersion>`. For
+ example, if you are working on a migration for the PLC4x S7 adapter targeting version 1, the migration class would be
+ named `Plc4xS7AdapterMigrationV1`.
+:::
+
+Let's begin with providing the migration's configuration:
+
+```java
+@Override
+public ModelMigratorConfig config() {
+ return new ModelMigratorConfig(
+ "org.apache.streampipes.connect.iiot.adapters.plc4x.s7",
+ SpServiceTagPrefix.ADAPTER,
+ 0,
+ 1
+ );
+}
+```
+
+The migration config consists of the following four parts:
+
+* `targetAppId`: this needs to equal the app id of the targeted element
+* `modelType`: the type of the element to be migrated, this can be one
+ of: `SpServiceTagPrefix.ADAPTER`, `SpServiceTagPrefix.DATA_PROCESSOR`, `SpServiceTagPrefix.DATA_SINK`.
+* `fromVersion`: the version of the element that the migration expects as input
+* `toVersion`: the version the element has after the migration (needs to be at least `fromVersion + 1`)
+
+The second step is to implement the actual migration logic.
+In our example, we need to extend the existing static properties by an additional boolean property.
+
+```java
+@Override
+public MigrationResult<AdapterDescription> migrate(AdapterDescription element, IStaticPropertyExtractor extractor) throws RuntimeException {
+
+ var config = element.getConfig();
+
+ var slideToggle = new SlideToggleStaticProperty();
+ slideToggle.setDefaultValue(false);
+ slideToggle.setLabel("Authentication required?");
+ config.add(slideToggle);
+
+ element.setConfig(config);
+ return MigrationResult.success(element);
+}
+```
+
+We've completed all the necessary steps for our migration. The final task remaining is to register the migration within
+the service definition.
+
+### Registering the Migration
+
+The migration is only sent to the StreamPipes core service once it has been registered in the service definition.
+Therefore, we need to add the migration to the same service definition as the element to migrate.
+In our example this is defined in `ConnectAdapterIiotInit`:
+
+```java jsx {22-24} showLineNumbers
+@Override
+public SpServiceDefinition provideServiceDefinition() {
+ return SpServiceDefinitionBuilder.create("connect-adapter-iiot",
+ "StreamPipes connect worker containing adapters relevant for the IIoT",
+ "",
+ 8001)
+ .registerAdapter(new MachineDataSimulatorAdapter())
+ .registerAdapter(new FileReplayAdapter())
+ .registerAdapter(new IfmAlMqttAdapter())
+ .registerAdapter(new RosBridgeAdapter())
+ .registerAdapter(new OpcUaAdapter())
+ .registerAdapter(new Plc4xS7Adapter())
+ .registerAdapter(new Plc4xModbusAdapter())
+ .registerAdapter(new KafkaProtocol())
+ .registerAdapter(new MqttProtocol())
+ .registerAdapter(new NatsProtocol())
+ .registerAdapter(new HttpStreamProtocol())
+ .registerAdapter(new PulsarProtocol())
+ .registerAdapter(new RocketMQProtocol())
+ .registerAdapter(new HttpServerProtocol())
+ .registerAdapter(new TubeMQProtocol())
+ .registerMigrators(
+ new Plc4xS7AdapterMigrationV1()
+ )
+      .build();
+}
+```
+
+
+
+## How Migrations are Handled Internally
+
+Migrations are handled by an interplay between the Extension Service, which provides the migrations,
+and the StreamPipes Core Service, which manages the migrations, as shown in the figure below:
+
+
+When an extensions service is initiated and has successfully registered itself with the core, it proceeds to send a
+request to the core. This request includes a comprehensive list of all available migrations that have been registered
+for it. Since this collection of migrations may encompass multiple migrations that affect the same pipeline element,
+the migrations are first de-duplicated and then sorted based on their version range before being transmitted.
+
+Upon receiving these migrations, the core's actions can be categorized into two distinct parts:
+
+* Update descriptions for new elements
+* Update descriptions for existing elements
+
+### Update Descriptions for New Elements
+
+Each migration transmitted from the extensions service to the core triggers the core to update the description of the
+corresponding element stored in CouchDB. This is achieved by requesting the current configuration from the extensions
+service and subsequently overwriting the existing configuration in the storage.
+
+### Update Descriptions for Existing Elements
+
+For each migration sent from the extensions service to the core, the core conducts a thorough check to determine if any
+existing elements are affected by this migration. If such elements are identified, the extensions service is requested
+to execute the migration on behalf of the core.
+
+In scenarios where multiple applicable migrations exist for a single pipeline element, they are sequentially applied.
+Success in this process allows the core to seamlessly update the configuration. However, if any issues arise, the
+corresponding pipeline element is halted. In the case of processors and sinks, the associated pipeline is even marked
+with a `needs attention` label, which becomes apparent in the UI.
diff --git a/website-v2/versioned_docs/version-0.70.0/06_extend-sdk-output-strategies.md b/website-v2/versioned_docs/version-0.95.1/06_extend-sdk-output-strategies.md
similarity index 96%
rename from website-v2/versioned_docs/version-0.70.0/06_extend-sdk-output-strategies.md
rename to website-v2/versioned_docs/version-0.95.1/06_extend-sdk-output-strategies.md
index fb0412f8e..feb224856 100644
--- a/website-v2/versioned_docs/version-0.70.0/06_extend-sdk-output-strategies.md
+++ b/website-v2/versioned_docs/version-0.95.1/06_extend-sdk-output-strategies.md
@@ -2,7 +2,6 @@
id: extend-sdk-output-strategies
title: "SDK Guide: Output Strategies"
sidebar_label: "SDK: Output Strategies"
-original_id: extend-sdk-output-strategies
---
## Introduction
@@ -11,12 +10,12 @@ As the exact input schema of a processor is usually not yet known at development
The following reference describes how output strategies can be defined using the SDK.
-
-
Code on Github
-
For all examples, the code can be found on Github.
-
+:::tip Code on Github
+
+For all examples, the code can be found on [Github](https://www.github.com/apache/streampipes-examples/tree/dev/streampipes-pipeline-elements-examples-processors-jvm/src/main/java/org/apache/streampipes/pe/examples/jvm/outputstrategy/)
+
+:::
+
## Reference
diff --git a/website-v2/versioned_docs/version-0.70.0/06_extend-sdk-static-properties.md b/website-v2/versioned_docs/version-0.95.1/06_extend-sdk-static-properties.md
similarity index 92%
rename from website-v2/versioned_docs/version-0.70.0/06_extend-sdk-static-properties.md
rename to website-v2/versioned_docs/version-0.95.1/06_extend-sdk-static-properties.md
index 51fec4c3e..39c2d5d6c 100644
--- a/website-v2/versioned_docs/version-0.70.0/06_extend-sdk-static-properties.md
+++ b/website-v2/versioned_docs/version-0.95.1/06_extend-sdk-static-properties.md
@@ -2,7 +2,6 @@
id: extend-sdk-static-properties
title: "SDK Guide: Static Properties"
sidebar_label: "SDK: Static Properties"
-original_id: extend-sdk-static-properties
---
## Introduction
@@ -11,12 +10,11 @@ Processing elements can specify required static properties, which will render di
The following reference describes how static properties can be defined using the SDK.
-
-
Code on Github
-
For all examples, the code can be found on Github.
-
+:::tip Code on Github
+
+For all examples, the code can be found on [Github](https://github.com/apache/streampipes-examples/tree/dev/streampipes-pipeline-elements-examples-processors-jvm/src/main/java/org/apache/streampipes/pe/examples/jvm/staticproperty).
+
+:::
## Reference
@@ -170,12 +168,12 @@ To extract the selected value, use the following method from the parameter extra
String selectedSingleValue = extractor.selectedSingleValue("id", String.class);
```
-
-
Declaring options
-
Sometimes, you may want to use an internal name that differs from the display name of an option.
-For that, you can use the method Options.from(Tuple2{'<'}String, String{'>'}) and the extractor method selectedSingleValueInternalName.
-
+:::tip Declaring options
+
+Sometimes, you may want to use an internal name that differs from the display name of an option.
+For that, you can use the method Options.from(Tuple2{'<'}String, String{'>'}) and the extractor method selectedSingleValueInternalName.
+:::
### Multi-Value Selections
@@ -259,9 +257,11 @@ The UI will render a single-value parameter based on the options provided at run
The parameter extraction does not differ from the extraction of static single-value parameters.
-
-
Multi-value selections
-
Although this example shows the usage of runtime-resolvable selections using single value selections, the same also works for multi-value selections!
-
+
+:::info Multi-value selections
+
+Although this example shows the usage of runtime-resolvable selections using single value selections, the same also works for multi-value selections!
+
+:::
diff --git a/website-v2/versioned_docs/version-0.70.0/06_extend-sdk-stream-requirements.md b/website-v2/versioned_docs/version-0.95.1/06_extend-sdk-stream-requirements.md
similarity index 93%
rename from website-v2/versioned_docs/version-0.70.0/06_extend-sdk-stream-requirements.md
rename to website-v2/versioned_docs/version-0.95.1/06_extend-sdk-stream-requirements.md
index 98a9215a7..409c5164d 100644
--- a/website-v2/versioned_docs/version-0.70.0/06_extend-sdk-stream-requirements.md
+++ b/website-v2/versioned_docs/version-0.95.1/06_extend-sdk-stream-requirements.md
@@ -2,7 +2,6 @@
id: extend-sdk-stream-requirements
title: "SDK Guide: Stream Requirements"
sidebar_label: "SDK: Stream Requirements"
-original_id: extend-sdk-stream-requirements
---
## Introduction
@@ -11,12 +10,15 @@ Data processors and data sinks can define ``StreamRequirements``. Stream require
Once users create pipelines in the StreamPipes Pipeline Editor, these requirements are verified against the connected event stream.
By using this feature, StreamPipes ensures that only pipeline elements can be connected that are syntactically and semantically valid.
-This guide covers the creation of stream requirements. Before reading this section, we recommend that you make yourself familiar with the SDK guide on [data processors](dev-guide-processor-sdk.md) and [data sinks](dev-guide-sink-sdk.md).
+This guide covers the creation of stream requirements. Before reading this section, we recommend that you make yourself familiar with the SDK guide on [data processors](extend-first-processor).
+
+
+:::tip Code on Github
+
+For all examples, the code can be found on [Github](https://www.github.com/apache/streampipes-examples/tree/dev/streampipes-pipeline-elements-examples-processors-jvm/src/main/java/org/apache/streampipes/pe/examples/jvm/requirements/).
+
+:::
-
-
Code on Github
-
For all examples, the code can be found on Github.
-
## The StreamRequirementsBuilder
diff --git a/website-v2/versioned_docs/version-0.70.0/06_extend-setup.md b/website-v2/versioned_docs/version-0.95.1/06_extend-setup.md
similarity index 94%
rename from website-v2/versioned_docs/version-0.70.0/06_extend-setup.md
rename to website-v2/versioned_docs/version-0.95.1/06_extend-setup.md
index bea5d2c5f..12a96eb8e 100644
--- a/website-v2/versioned_docs/version-0.70.0/06_extend-setup.md
+++ b/website-v2/versioned_docs/version-0.95.1/06_extend-setup.md
@@ -2,7 +2,6 @@
id: extend-setup
title: Development Setup
sidebar_label: Development Setup
-original_id: extend-setup
---
Pipeline elements in StreamPipes are provided as standalone microservices. New pipeline elements can be easily developed using the provided Maven archetypes and can be installed in StreamPipes at runtime.
@@ -11,7 +10,7 @@ In this section, we describe our recommended minimum setup for locally setting u
## IDE & required dev tools
StreamPipes does not have specific requirements on the IDE - so feel free to choose the IDE of your choice.
-The only requirements in terms of development tools are that you have Java 8 and Maven installed.
+The only requirements in terms of development tools are that you have Java 17 and Maven installed.
## StreamPipes CLI: Docker-based local StreamPipes instance
In order to quickly test developed pipeline elements without needing to install all services required by StreamPipes, we provide a CLI tool that allows you to selectively start StreamPipes components.
@@ -48,4 +47,4 @@ Create the Maven archetype as described in the [Maven Archetypes](06_extend-arch
### Examples
-We provide several examples that explain the usage of some concepts in this [Github repo](https://github.com/apache/incubator-streampipes-examples).
+We provide several examples that explain the usage of some concepts in this [Github repo](https://github.com/apache/streampipes-examples).
diff --git a/website-v2/versioned_docs/version-0.95.1/06_extend-tutorial-adapters.md b/website-v2/versioned_docs/version-0.95.1/06_extend-tutorial-adapters.md
new file mode 100644
index 000000000..4e95cba9a
--- /dev/null
+++ b/website-v2/versioned_docs/version-0.95.1/06_extend-tutorial-adapters.md
@@ -0,0 +1,612 @@
+---
+id: extend-tutorial-adapters
+title: "Tutorial: Build Custom Adapters"
+sidebar_label: "Tutorial: Adapters"
+---
+
+In this tutorial, we will create a new data source consisting of a single data stream.
+By the end of the tutorial, you will be able to implement custom adapters that allow you to connect to data sources
+other than those officially supported by StreamPipes.
+To do this, we will split the tutorial into two parts.
+The [first part](#building-a-basic-adapter) focuses on creating the adapter and defining the event stream.
+At the end, we will have a working adapter that produces an event stream that can be used in StreamPipes.
+This adapter does not provide any way to configure its behavior, so the
+[second part](#building-a-more-advanced-adapter-by-processing-ui-input) of the tutorial
+shows how we can extend our existing adapter to be configurable via the UI.
+
+:::info
+This tutorial shows how to build your own type of adapter.
+It is intended for people who are interested in extending StreamPipes to meet their own needs.
+If you are here to explore StreamPipes and are interested in using an adapter, you may want to
+continue [here](./03_use-connect.md).
+:::
+
+## Objective
+
+We are going to create an adapter that will simulate a stream of data generated by a control station in a logistics
+center that is used to sort packages.
+This station consists of two sensors: a light barrier that detects when a package passes through, and a weight sensor.
+
+The station produces a continuous stream of events containing the current timestamp, an indicator of whether a package
+is present or the conveyor is empty, and the weight of the package in kilograms.
+The events are published in JSON format as follows:
+
+```json
+{
+ "timestamp": 1697720916959,
+ "parcelPresent": true,
+ "weight": 3.520
+}
+```
+
+In the following section, we will show you how to develop an adapter that is capable of generating this stream so that
+it is available for further processing in StreamPipes.
+
+## Project Setup
+
+Instead of creating a new project from scratch, we recommend using our Maven archetype to create a new project
+skeleton (`streampipes-archetype-extensions-jvm`).
+Enter the following command in a command line of your choice (please ensure
+that [Apache Maven](https://maven.apache.org/install.html) is installed):
+
+```bash
+mvn archetype:generate \
+-DarchetypeGroupId=org.apache.streampipes -DarchetypeArtifactId=streampipes-archetype-extensions-jvm \
+-DarchetypeVersion=0.93.0 -DgroupId=org.apache.streampipes \
+-DartifactId=streampipes-archetype-extensions-jvm -DclassNamePrefix=ParcelControlStation -DpackageName=parcelcontrol
+```
+
+This command will ask you for input twice; you can skip both prompts by hitting *enter*.
+The first dialog sets the version to use for our `streampipes-archetype-extensions-jvm` module.
+Feel free to change this if you like.
+
+```bash
+Define value for property 'version' 1.0-SNAPSHOT: :
+
+ Y: :
+```
+
+The `mvn archetype:generate` command generates some required files, the required file structure, and some boilerplate
+code.
+The generated file structure should look like the following:
+
+:::info
+Note that you can customize the parameters of the mvn command to affect the file structure and file naming.
+:::
+
+```bash
+
+|streampipes-archetype-extensions # name is determined by '-DartifactId'
+|-- development
+| |-- env
+|-- src
+| |-- main
+| | |-- java.org.apache.streampipes # name after .java. is determined by '-DgroupId'
+| | | |-- pe.parcelcontrol # name after .pe. is determined by '-DpackageName'
+| | | | |-- ParcelControlStationDataProcessor.java # class name is determined by '-DclassNamePrefix'
+| | | | |-- ParcelControlStationDataSink.java
+| | | | |-- ParcelControlStationGenericAdapter.java
+| | | | |-- ParcelControlStationSpecificAdapter.java
+| | | |-- Init.java
+| | |-- resources
+| | | |-- org.apache.streampipes.pe.parcelcontrol.genericadapter
+| | | | |-- documentation.md
+| | | | |-- icon.png
+| | | | |-- strings.en
+| | | |-- org.apache.streampipes.pe.parcelcontrol.processor
+| | | | |-- documentation.md
+| | | | |-- icon.png
+| | | | |-- strings.en
+| | | |-- org.apache.streampipes.pe.parcelcontrol.sink
+| | | | |-- documentation.md
+| | | | |-- icon.png
+| | | | |-- strings.en
+| | | |-- org.apache.streampipes.pe.parcelcontrol.specificadapter
+| | | | |-- documentation.md
+| | | | |-- icon.png
+| | | | |-- strings.en
+| |-- test.java.org.apache.streampipes # name after .java. is determined by '-DgroupId'
+| | |-- InitTest.java
+|-- Dockerfile
+|-- pom.xml
+
+```
+
+:::tip
+In addition to the basic project skeleton, the sample project also includes a sample `Dockerfile` that you can use to
+package your application into a Docker container.
+:::
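+
+For instance, the generated service could be packaged and started as follows (a sketch; the image tag is an arbitrary placeholder):
+
+```bash
+# build the extensions service image from the generated Dockerfile
+docker build -t parcel-control-extensions .
+
+# run it, exposing the default extensions service port
+docker run --rm -p 8090:8090 parcel-control-extensions
+```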
+
+## Building a Basic Adapter
+
+In the following, we will demonstrate how to use the boilerplate code generated by the Maven plugin (
+see [Project setup](#project-setup)).
+Within this section, we will focus on creating an event stream that can be used within StreamPipes.
+The following section shows how to configure the created adapter with UI input.
+
+Attentive readers may have noticed that two adapter classes have been generated.
+We will focus on the `ParcelControlStationSpecificAdapter` first; the `ParcelControlStationGenericAdapter` will
+be used later for more advanced adapter features.
+First, let us take a look at the `ParcelControlStationSpecificAdapter.java` file as generated by the Maven
+archetype.
+
+```java jsx showLineNumbers
+package org.apache.streampipes.pe.parcelcontrol;
+
+import org.apache.streampipes.commons.exceptions.connect.AdapterException;
+import org.apache.streampipes.extensions.api.connect.IAdapterConfiguration;
+import org.apache.streampipes.extensions.api.connect.IEventCollector;
+import org.apache.streampipes.extensions.api.connect.StreamPipesAdapter;
+import org.apache.streampipes.extensions.api.connect.context.IAdapterGuessSchemaContext;
+import org.apache.streampipes.extensions.api.connect.context.IAdapterRuntimeContext;
+import org.apache.streampipes.extensions.api.extractor.IAdapterParameterExtractor;
+import org.apache.streampipes.model.AdapterType;
+import org.apache.streampipes.model.connect.guess.GuessSchema;
+import org.apache.streampipes.sdk.builder.adapter.AdapterConfigurationBuilder;
+import org.apache.streampipes.sdk.builder.adapter.GuessSchemaBuilder;
+import org.apache.streampipes.sdk.helpers.Labels;
+import org.apache.streampipes.sdk.helpers.Locales;
+import org.apache.streampipes.sdk.utils.Assets;
+
+import java.util.HashMap;
+import java.util.Map;
+
+public class ParcelControlStationSpecificAdapter implements StreamPipesAdapter {
+
+ private boolean running = false;
+
+ @Override
+ public IAdapterConfiguration declareConfig() {
+ return AdapterConfigurationBuilder.create(
+ "org.apache.streampipes.pe.parcelcontrol.specificadapter",
+ ParcelControlStationSpecificAdapter::new
+ )
+ .withAssets(Assets.DOCUMENTATION, Assets.ICON)
+ .withCategory(AdapterType.Manufacturing)
+ .withLocales(Locales.EN)
+ .buildConfiguration();
+ }
+
+ @Override
+ public void onAdapterStarted(IAdapterParameterExtractor extractor,
+ IEventCollector collector,
+ IAdapterRuntimeContext adapterRuntimeContext) throws AdapterException {
+
+ Runnable demo = () -> {
+ while (running) {
+ // make event
+        Map<String, Object> event = new HashMap<>();
+ // forward the event to the adapter pipeline
+ collector.collect(event);
+ }
+ };
+ running = true;
+ new Thread(demo).start();
+ }
+
+ @Override
+ public void onAdapterStopped(IAdapterParameterExtractor extractor,
+ IAdapterRuntimeContext adapterRuntimeContext) throws AdapterException {
+
+ // do cleanup
+ running = false;
+ }
+
+ @Override
+ public GuessSchema onSchemaRequested(IAdapterParameterExtractor extractor,
+ IAdapterGuessSchemaContext adapterGuessSchemaContext) throws AdapterException {
+
+ // build the schema by adding properties to the schema builder and a preview if possible
+ return GuessSchemaBuilder
+ .create()
+ .build();
+ }
+ }
+
+```
+
+The class extends `StreamPipesAdapter`, which is the interface that all adapters within StreamPipes must implement.
+This interface requires us to implement four methods:
+
+* `declareConfig()`: This method is expected to return the configuration of the adapter. The configuration includes
+ metadata about the adapter and its input parameters.
+* `onAdapterStarted()`: This method is expected to contain the actual adapter logic. It is called when the adapter is
+ started, and is responsible for sending incoming data to StreamPipes as an event.
+* `onAdapterStopped()`: This method is called when the adapter is stopped and is responsible for exiting the adapter
+  gracefully; it usually performs some cleanup tasks.
+* `onSchemaRequested()`: This method is expected to return the schema of the event stream. This is ideally done
+ dynamically based on some incoming data (*guess*) or provided statically if not otherwise possible.
+
+### Describing the Adapter via the Configuration
+
+The standard code generated here is already sufficient for us.
+So let's have a quick look at the important aspects:
+
+* `Line 4`: Here we define a unique identifier for our adapter. This allows us to identify all instances of the same
+ adapter. Including your own namespace is always a good choice to avoid conflicts.
+* `Line 7`: Here we define what assets are available for this adapter. In this case, we provide a documentation file and
+  an icon. Both assets are located in the `resources` directory (see file tree above).
+* `Line 8`: This defines a rough categorization along predefined adapter types.
+* `Line 9`: Here we define which locales are available for this adapter. Since we only provide one `strings.en` file so
+ far (see file tree above), the current selection is sufficient. Theoretically you can support multiple languages, but
+ this is not fully supported yet.
+
+```java jsx {4,7-9} showLineNumbers
+ @Override
+ public IAdapterConfiguration declareConfig() {
+ return AdapterConfigurationBuilder.create(
+ "org.apache.streampipes.pe.parcelcontrol.specificadapter",
+ ParcelControlStationSpecificAdapter::new
+ )
+ .withAssets(Assets.DOCUMENTATION, Assets.ICON)
+ .withCategory(AdapterType.Manufacturing)
+ .withLocales(Locales.EN)
+ .buildConfiguration();
+ }
+```
+
+Before we continue, let's quickly have a look at the `strings.en` file that defines our locales.
+Here we can define a meaningful and human-readable adapter title in the first line and a short description in the second:
+
+```text
+org.apache.streampipes.pe.parcelcontrol.specificadapter.title=Parcel Control Station (simple)
+org.apache.streampipes.pe.parcelcontrol.specificadapter.description=This adapter simulates data coming from a parcel control station in a logistics center.
+```
+
+Now that we have successfully configured our adapter and prepared all descriptive elements, we can focus on the actual logic.
+
+### Creating the Data Stream
+
+The logic that creates events which are then distributed via StreamPipes is defined in `onAdapterStarted()`.
+Within this method, connectors usually connect to the data source and extract data.
+In our case, however, we simply want to create some sample data directly.
+The two main parts that should always happen within this method are highlighted in the provided skeleton code:
+
+* `Line 10`: Creating an event is crucial for our adapters. This event is then filled with data by the adapter before it
+ is distributed.
+* `Line 13`: The event must finally be passed to the `collector`, which then takes the data and distributes it within
+ StreamPipes in the form of a [data stream](./02_concepts-overview.md#data-stream).
+
+```java jsx {10,13} showLineNumbers
+@Override
+public void onAdapterStarted(IAdapterParameterExtractor extractor,
+ IEventCollector collector,
+ IAdapterRuntimeContext adapterRuntimeContext) throws AdapterException {
+
+ Runnable demo = () -> {
+ while (running) {
+
+ // make event
+      Map<String, Object> event = new HashMap<>();
+
+ // forward the event to the adapter pipeline
+ collector.collect(event);
+ }
+ };
+ running = true;
+ new Thread(demo).start();
+}
+```
+
+So the only thing left to do is to create the actual events.
+In our scenario, we want to create two types of events: one describing an empty conveyor and one describing a detected
+and weighed parcel.
+To keep the implementation simple, we want to emit a parcel event every five seconds. We can implement this as
+follows:
+
+```java
+ Runnable parcelControl = () -> {
+ while (running) {
+
+      // get the current time in milliseconds and derive the time in seconds
+      long timestamp = System.currentTimeMillis();
+      long timeInSeconds = timestamp / 1000;
+
+ // make event
+      Map<String, Object> event = new HashMap<>();
+ event.put("timestamp", timestamp);
+
+ if (timeInSeconds % 5 == 0) {
+ event.put("parcelPresent", true);
+ event.put("weight", ThreadLocalRandom.current().nextDouble(0, 10));
+
+ } else {
+ event.put("parcelPresent", false);
+ event.put("weight", 0);
+ }
+
+ // forward the event to the adapter pipeline
+ collector.collect(event);
+
+ try {
+ Thread.sleep(1000);
+ } catch (InterruptedException e) {
+ throw new RuntimeException(e);
+ }
+ }
+ };
+ running = true;
+ new Thread(parcelControl).start();
+```
+
+This is already enough to get a data stream into StreamPipes.
+As the next step, we need to describe the event schema.
+
+### Defining the Event Schema
+
+In StreamPipes, each data stream comes with an event schema that describes what information the event contains,
+in what data formats, and some semantic type information.
+This allows StreamPipes to provide easy and convenient stream handling with a lot of automatic conversions and
+validations, for example when checking whether a particular data processor is suitable for a given event stream.
+This event schema is provided by `onSchemaRequested()`:
+
+```java
+@Override
+public GuessSchema onSchemaRequested(IAdapterParameterExtractor extractor,
+ IAdapterGuessSchemaContext adapterGuessSchemaContext) throws AdapterException {
+
+ // build the schema by adding properties to the schema builder and a preview if possible
+ return GuessSchemaBuilder
+ .create()
+ .build();
+ }
+
+```
+
+Normally, the event schema is determined automatically and dynamically, since an adapter is usually quite generic (read
+more in the [Advanced section](#advanced)).
+But in our case, we already know the event schema, and it never changes, so we can just define it:
+
+```java jsx {3,13-20} showLineNumbers
+@Override
+public GuessSchema onSchemaRequested(IAdapterParameterExtractor extractor,
+ IAdapterGuessSchemaContext adapterGuessSchemaContext) throws AdapterException {
+
+ // build the schema by adding properties to the schema builder and a preview if possible
+ return GuessSchemaBuilder.create()
+ .property(timestampProperty("timestamp"))
+ .sample("timestamp", System.currentTimeMillis())
+ .property(PrimitivePropertyBuilder
+ .create(Datatypes.Boolean, "parcelPresent")
+ .label("Parcel Present")
+ .description("Indicates if a parcel is weighed.")
+ .domainProperty(SO.BOOLEAN)
+ .scope(PropertyScope.MEASUREMENT_PROPERTY)
+ .build())
+ .sample("parcelPresent", true)
+ .property(PrimitivePropertyBuilder
+ .create(Datatypes.Double, "weight")
+ .label("Parcel Weight")
+ .description("Parcel weight")
+ .domainProperty(SO.WEIGHT)
+ .scope(PropertyScope.MEASUREMENT_PROPERTY)
+ .build())
+ .sample("weight", 3.520)
+ .build();
+```
+
+An attribute of an event is referred to as a `property` in StreamPipes.
+So in our case, we have three properties.
+Since StreamPipes creates a sample event in the UI when configuring the adapter (
+see [here](./03_use-connect.md#schema-editor)),
+providing a meaningful sample value for every property allows StreamPipes to demonstrate its full potential.
+
+Since every event schema is required to have a timestamp property, we provide a convenience definition (see `line 3`).
+For all other properties, the recommended way of definition is using the `PrimitivePropertyBuilder` (see `lines 13-20`), which
+consists of the following steps:
+
+* `Line 14`: Every property must have a specified data type and a property name.
+* `Line 15`: In addition to the property name we can define a label that is designed for the end user and shown in the
+ UI.
+* `Line 16`: Assigns a human-readable description to the event property. The description is used in the StreamPipes UI
+  to better explain the meaning of the property to users.
+* `Line 17`: Specifies the semantics of the property (e.g., whether a double value represents a weight or a temperature).
+* `Line 18`: Assigns a property scope to the event property. This determines how the property is handled internally.
+
+:::note
+StreamPipes does not require you to provide all of this information about a property.
+Anything beyond line `14` (up to line `20`) is optional, but the more you provide, the better StreamPipes can show its
+full potential and feature richness.
+:::
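+
+As a sketch of the minimal case, the `weight` property could also be defined with nothing but its data type and runtime name:
+
+```java
+// Minimal property definition: only the data type and the runtime name are given
+.property(PrimitivePropertyBuilder
+    .create(Datatypes.Double, "weight")
+    .build())
+```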
+
+This makes our adapter almost complete; there is only one small step left.
+
+### Defining the Adapter Termination
+
+As a final step, we need to define what should happen if the adapter is stopped.
+In general, the adapter should not fire any events after that.
+Normally, this step includes things like closing connections and clearing resources.
+In our case this is quite simple, we just need to stop our thread:
+
+```java
+@Override
+public void onAdapterStopped(IAdapterParameterExtractor extractor,
+ IAdapterRuntimeContext adapterRuntimeContext) throws AdapterException {
+
+ // do cleanup
+ running = false;
+}
+```
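+
+Note that the worker thread sleeps between events, so it may emit one final event before it observes the flag change. A slightly more robust variant (a sketch, assuming the thread started in `onAdapterStarted()` is kept in an additional class variable) also interrupts the thread so that the sleep ends immediately; the loop's `catch (InterruptedException e)` block should then break out of the loop instead of rethrowing:
+
+```java
+// Sketch: assumes an additional class variable `private Thread worker;`
+// that holds the thread created in onAdapterStarted()
+@Override
+public void onAdapterStopped(IAdapterParameterExtractor extractor,
+                             IAdapterRuntimeContext adapterRuntimeContext) throws AdapterException {
+  running = false;
+  if (worker != null) {
+    worker.interrupt(); // wakes the worker from Thread.sleep()
+  }
+}
+```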
+
+Now it's time to start our adapter and observe it in action!
+
+### Register and Run the Adapter
+
+Before we actually use our adapter, let's take a quick look at the `Init` class. This class is responsible for
+registering our adapter service with the core to make the adapter available in StreamPipes.
+This is done within `provideServiceDefinition()`. Since we don't have the generic adapter ready yet,
+we'll comment out its registration (`line 7`). Now we can run the `Init` class to register the adapter with your running
+StreamPipes instance. If you don't have a running instance at hand,
+you can take a look at our [Installation Guide](./01_try-installation.md).
+
+```java jsx {7-8} showLineNumbers
+@Override
+public SpServiceDefinition provideServiceDefinition() {
+ return SpServiceDefinitionBuilder.create("org.apache.streampipes",
+ "human-readable service name",
+ "human-readable service description", 8090)
+ .registerRuntimeProvider(new StandaloneStreamPipesRuntimeProvider())
+ //.registerAdapter(new ParcelControlStationGenericAdapter())
+ .registerAdapter(new ParcelControlStationSpecificAdapter())
+ .registerMessagingFormats(
+ new JsonDataFormatFactory(),
+ new CborDataFormatFactory(),
+ new SmileDataFormatFactory(),
+ new FstDataFormatFactory())
+ .registerMessagingProtocols(
+ new SpKafkaProtocolFactory(),
+ new SpJmsProtocolFactory(),
+ new SpMqttProtocolFactory(),
+ new SpNatsProtocolFactory(),
+ new SpPulsarProtocolFactory())
+ .build();
+}
+ ```
+
+:::tip
+When executing the `main()` method of the `Init` class, make sure that all environment variables from
+the `development/env` file are set.
+If they are not set, the adapter may not be able to register with StreamPipes.
+:::
+
+Once you see the following log message in the console, the adapter is ready, and you can switch to the UI of your
+StreamPipes instance.
+
+```bash
+s.s.e.c.ConnectWorkerRegistrationService : Successfully connected to master. Worker is now running.
+```
+
+Please go to the connect module and click on `New Adapter`,
+you should now be able to see your adapter `Parcel Control Station (simple)`:
+
+
+The adapter now runs successfully in StreamPipes. You can play around with the data stream the adapter produces,
+or continue with the next section to learn how to make an adapter configurable through the UI.
+
+### Building a more Advanced Adapter by Processing UI Input
+
+In this section, we will extend our previously built adapter by adding the ability to configure the minimum and maximum
+parcel weight in the UI, from which the weight value is then sampled.
+The beauty of building adapters for StreamPipes is that you don't have to worry about the UI.
+StreamPipes provides a set of pre-built input elements for adapters that you can simply add to your adapter
+configuration.
+So the first thing we need to customize is `declareConfig()`:
+
+```java jsx {10-11} showLineNumbers
+@Override
+public IAdapterConfiguration declareConfig() {
+ return AdapterConfigurationBuilder.create(
+ "org.apache.streampipes.pe.parcelcontrol.specificadapter",
+ ParcelControlStationSpecificAdapter::new
+ )
+ .withAssets(Assets.DOCUMENTATION, Assets.ICON)
+ .withCategory(AdapterType.Manufacturing)
+ .withLocales(Locales.EN)
+ .requiredFloatParameter(Labels.withId("min-weight"), 0.0f)
+ .requiredFloatParameter(Labels.withId("max-weight"), 10.f)
+ .buildConfiguration();
+}
+
+```
+
+In lines `10-11` we have introduced two input parameters that expect float values as input. They have a default value
+of `0` and `10`, respectively. The defined identifiers (`min-weight` and `max-weight`) can be used to provide a caption and
+a description via the `strings.en` file:
+
+```text
+min-weight.title=Minimum Parcel Weight
+min-weight.description=The lower bound from which the weight values are sampled randomly.
+
+max-weight.title=Maximum Parcel Weight
+max-weight.description=The upper bound from which the weight values are sampled randomly.
+```
+
+As a last step, we now need to modify the calculation of the parcel weight, so that the provided parameters are actually
+applied.
+This is done in `onAdapterStarted()`.
+
+```java jsx {6-9,24} showLineNumbers
+@Override
+public void onAdapterStarted(IAdapterParameterExtractor extractor,
+ IEventCollector collector,
+ IAdapterRuntimeContext adapterRuntimeContext) throws AdapterException {
+
+ var ex = extractor.getStaticPropertyExtractor();
+
+ float minWeight = ex.singleValueParameter("min-weight", Float.class);
+ float maxWeight = ex.singleValueParameter("max-weight", Float.class);
+
+ Runnable parcelControl = () -> {
+ while (running) {
+
+      // get the current time in milliseconds and derive the time in seconds
+      long timestamp = System.currentTimeMillis();
+      long timeInSeconds = timestamp / 1000;
+
+ // make event
+      Map<String, Object> event = new HashMap<>();
+ event.put("timestamp", timestamp);
+
+ if (timeInSeconds % 5 == 0) {
+ event.put("parcelPresent", true);
+ event.put("weight", ThreadLocalRandom.current().nextDouble(minWeight, maxWeight));
+
+ } else {
+ event.put("parcelPresent", false);
+ event.put("weight", 0);
+ }
+
+ // forward the event to the adapter pipeline
+ collector.collect(event);
+
+ try {
+ Thread.sleep(1000);
+ } catch (InterruptedException e) {
+ throw new RuntimeException(e);
+ }
+ }
+ };
+ running = true;
+ new Thread(parcelControl).start();
+}
+```
+
+* line `6-9`: We use a `StaticPropertyExtractor` to retrieve both user inputs.
+* line `24`: We calculate the parcel weight by sampling between the configured minimum and maximum values.
+
+You can now run the `main()` method of the `Init` class to register the adapter with StreamPipes.
+The UI dialog to create a new instance of our parcel control station adapter now looks like this:
+
+
+:::caution
+If you have already registered the parcel adapter before, please make sure to uninstall it in `Install Pipeline Elements`
+before you restart the execution of the `Init` class.
+Otherwise, the changes made in this section will have no effect.
+:::
+
+### Read More
+
+Congratulations! You've just created your first StreamPipes adapter 🎉
+
+There are many more things to explore and data sources can be defined in much more detail.
+If this is of interest to you, the [advanced section](#advanced) will satisfy your needs.
+
+If anything within this tutorial did not work for you or you had problems following it,
+please feel free to provide some feedback by opening an [issue on GitHub](https://github.com/apache/streampipes/issues/new?assignees=&labels=bug%2Cdocumentation%2Cwebsite&projects=&template=doc_website_issue_report.yml).
+
+
diff --git a/website-v2/versioned_docs/version-0.95.1/06_extend-tutorial-data-processors.md b/website-v2/versioned_docs/version-0.95.1/06_extend-tutorial-data-processors.md
new file mode 100644
index 000000000..4c2d5ee25
--- /dev/null
+++ b/website-v2/versioned_docs/version-0.95.1/06_extend-tutorial-data-processors.md
@@ -0,0 +1,454 @@
+---
+id: extend-tutorial-data-processors
+title: "Tutorial: Data Processors"
+sidebar_label: "Tutorial: Data Processors"
+---
+
+In this tutorial, we will add a new data processor.
+
+From an architectural point of view, we will create a self-contained service that includes the description of the data
+processor and an implementation.
+
+## Objective
+
+We are going to create a new data processor that realizes a simple geofencing algorithm - we detect vehicles that enter
+a specified radius around a user-defined location.
+This pipeline element will be a generic element that works with any event stream that provides geospatial coordinates in
+the form of a latitude/longitude pair.
+
+The algorithm outputs every location event once the position has entered the geofence.
+
+:::note
+
+The implementation in this tutorial is pretty simple - our processor will fire an event every time the GPS location is
+inside the geofence.
+In a real-world application, you would probably want to define a pattern that recognizes the _first_ event at which a vehicle
+enters the geofence.
+
+This can be easily done using a CEP library.
+
+:::
+
+## Project setup
+
+Instead of creating a new project from scratch, we recommend using the Maven archetype to create a new project
+skeleton (streampipes-archetype-extensions-jvm).
+Enter the following command in a command line of your choice (Apache Maven needs to be installed):
+
+```bash
+mvn archetype:generate \
+-DarchetypeGroupId=org.apache.streampipes -DarchetypeArtifactId=streampipes-archetype-extensions-jvm \
+-DarchetypeVersion=0.93.0 -DgroupId=my.groupId \
+-DartifactId=my-example -DclassNamePrefix=MyExample -DpackageName=mypackagename
+```
+
+You will see a project structure similar to the structure shown in the [archetypes](06_extend-archetypes.md) section.
+
+:::tip
+
+Besides the basic project skeleton, the sample project also includes an example Dockerfile you can use to package your
+application into a Docker container.
+
+:::
+
+Now you're ready to create your first data processor for StreamPipes!
+
+## Adding data processor requirements
+
+First, we will add a new stream requirement.
+Create a new class `GeofencingProcessor` which should look as follows:
+
+```java
+package org.apache.streampipes.pe.example;
+
+import org.apache.streampipes.extensions.api.pe.IStreamPipesDataProcessor;
+import org.apache.streampipes.extensions.api.pe.config.IDataProcessorConfiguration;
+import org.apache.streampipes.extensions.api.pe.context.EventProcessorRuntimeContext;
+import org.apache.streampipes.extensions.api.pe.param.IDataProcessorParameters;
+import org.apache.streampipes.extensions.api.pe.routing.SpOutputCollector;
+import org.apache.streampipes.model.DataProcessorType;
+import org.apache.streampipes.model.runtime.Event;
+import org.apache.streampipes.sdk.builder.ProcessingElementBuilder;
+import org.apache.streampipes.sdk.builder.StreamRequirementsBuilder;
+import org.apache.streampipes.sdk.builder.processor.DataProcessorConfiguration;
+import org.apache.streampipes.sdk.helpers.EpProperties;
+import org.apache.streampipes.sdk.helpers.EpRequirements;
+import org.apache.streampipes.sdk.helpers.Labels;
+import org.apache.streampipes.sdk.helpers.OutputStrategies;
+import org.apache.streampipes.sdk.helpers.SupportedFormats;
+import org.apache.streampipes.sdk.helpers.SupportedProtocols;
+import org.apache.streampipes.sdk.utils.Assets;
+import org.apache.streampipes.vocabulary.SO;
+
+public class GeofencingProcessor implements IStreamPipesDataProcessor {
+
+ private static final String LATITUDE_CENTER = "latitude-center";
+ private static final String LONGITUDE_CENTER = "longitude-center";
+
+
+  @Override
+  public IDataProcessorConfiguration declareConfig() {
+ return DataProcessorConfiguration.create(
+ GeofencingProcessor::new,
+ ProcessingElementBuilder.create(
+ "org.apache.streampipes.tutorial-geofencing"
+ )
+ .category(DataProcessorType.ENRICH)
+ .withAssets(Assets.DOCUMENTATION, Assets.ICON)
+ .build());
+ }
+
+ @Override
+ public void onPipelineStarted(IDataProcessorParameters params,
+ SpOutputCollector collector,
+ EventProcessorRuntimeContext runtimeContext) {
+
+ }
+
+ @Override
+ public void onEvent(Event event,
+ SpOutputCollector collector) {
+
+ }
+
+ @Override
+ public void onPipelineStopped() {
+
+ }
+}
+
+
+```
+
+In this class, we need to implement four methods: The `declareConfig` method is used to define abstract stream
+requirements such as event properties that must be present in any input stream that is later connected to the element
+using the StreamPipes UI.
+The second method, `onPipelineStarted`, is triggered once a pipeline is started.
+The `onEvent` method is called for every incoming event.
+Finally, the `onPipelineStopped` method is called once the pipeline is stopped.
+
+Similar to data sources, the SDK provides a builder class to generate the description for data processors.
+
+The current code within the `declareConfig` method creates a new data processor with the ID `org.apache.streampipes.tutorial-geofencing`.
+The ID is used as the internal ID of the data processor, but is also used to reference additional assets in the `resources` folder, such as a `strings.en` file, used to configure labels and descriptions, and a `documentation.md` file, which will later serve as a markdown documentation in the UI.
+But first, we will add some _stream requirements_ to the description. As we'd like to develop a generic pipeline element that
+works with any event that provides a lat/lng pair, we define two stream requirements as stated below:
+
+```java
+.requiredStream(StreamRequirementsBuilder
+ .create()
+ .requiredPropertyWithUnaryMapping(
+ EpRequirements.domainPropertyReq(Geo.LAT),
+ Labels.from("latitude-field","Latitude","The event property containing the latitude value"),
+ PropertyScope.MEASUREMENT_PROPERTY
+ )
+ .requiredPropertyWithUnaryMapping(
+ EpRequirements.domainPropertyReq(Geo.LNG),
+ Labels.from("longitude-field","Longitude","The event property containing the longitude value"),
+ PropertyScope.MEASUREMENT_PROPERTY
+ )
+ .build())
+```
+
+The first line, `.requiredStream()`, defines that we want a data processor with exactly one input stream. Adding more
+stream requirements would create elements with multiple input connectors in StreamPipes.
+Stream requirements can be assigned by using the `StreamRequirementsBuilder` class.
+In our example, we define two requirements, so-called _domain property requirements_. In contrast to _data type
+requirements_ where we'd expect an event property with a field of a specific data type (e.g., float), domain property
+requirements expect a specific semantic type (called domain property), e.g., from a vocabulary such as the WGS84 Geo vocab.
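+
+As a rough sketch of the difference (both helper methods come from the `EpRequirements` class used above):
+
+```java
+// Data type requirement: any float-valued property satisfies it
+EpRequirements.datatypeReq(Datatypes.Float);
+
+// Domain property requirement: only properties annotated with the WGS84
+// latitude semantic type satisfy it, regardless of their concrete data type
+EpRequirements.domainPropertyReq(Geo.LAT);
+```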
+
+Once a pipeline is deployed, we are interested in the actual field (and its field name) that contains the latitude and
+longitude values.
+In some cases, there might be more than one field that satisfies a property requirement, and we would like users to
+select the property the geofencing component should operate on.
+Therefore, our example uses the method `requiredPropertyWithUnaryMapping`, which will map a requirement to a real event
+property of an input stream and let the user choose the appropriate field in the StreamPipes UI when pipelines are
+defined.
+
+Finally, the `PropertyScope` indicates that the required property is a measurement value (in contrast to a dimension
+value). This allows us later to provide improved user guidance in the pipeline editor.
+
+Similar to mapping properties, text parameters have an internal ID (here: `radius`), a label and a description.
+In addition, we can assign a _value specification_ to the parameter indicating the value range we support.
+Our example supports a radius value between 0 and 1000 with a granularity of 1.
+In the StreamPipes UI, a required text parameter is rendered as a text input field; in case we provide an optional value
+specification, a slider input is automatically generated.
+
+For now, we've assigned parameters with an internal ID, a label and a description.
+To decouple human-readable labels and description from the actual data processor description, it is possible to extract the strings to a properties file.
+In the `resources` folder, switch to a folder with the same name as the data processor's ID. If you've used the Maven archetype to build your project, there should be a `strings.en` file.
+In this file, we can configure labels and descriptions. For instance, instead of writing
+
+```java
+
+.requiredPropertyWithUnaryMapping(
+ EpRequirements.domainPropertyReq(Geo.LAT),
+ Labels.from("latitude-field","Latitude","The event property containing the latitude value"),
+ PropertyScope.MEASUREMENT_PROPERTY
+ )
+
+```
+
+it is recommended to write
+
+```java
+
+.requiredPropertyWithUnaryMapping(
+ EpRequirements.domainPropertyReq(Geo.LAT),
+ Labels.withId("latitude-field"),
+ PropertyScope.MEASUREMENT_PROPERTY
+ )
+
+```
+
+and add the following line to the `strings.en` file:
+
+```properties
+
+latitude-field.title=Latitude
+latitude-field.description=The event property containing the latitude value
+
+```
+
+This feature will also ease future internationalization efforts.
+
+Besides requirements, users should be able to define the center coordinate of the Geofence and the size of the fence
+defined as a radius around the center in meters.
+The radius can be defined by adding a simple required text field to the description:
+
+```java
+.requiredIntegerParameter("radius","Geofence Size","The size of the circular geofence in meters.",0,1000,1)
+```
+
+Such user-defined parameters are called _static properties_. There are many different types of static properties (see
+the [Processor SDK](06_extend-sdk-static-properties.md) for an overview). Similar to stream requirements, it is also recommended to use `Labels.withId("radius")` and move labels and descriptions to the resource file.
+
+In this example, we'll further add two very simple input fields to let users provide latitude and longitude of the
+geofence center.
+
+Add the following lines to the `declareConfig` method:
+
+```java
+  .requiredFloatParameter(Labels.from(LATITUDE_CENTER,"Latitude","The latitude value"))
+  .requiredFloatParameter(Labels.from(LONGITUDE_CENTER,"Longitude","The longitude value"))
+
+```
+
+Now we need to define the output of our geofencing pipeline element.
+As explained in the first section, the element should fire every time some geo-located entity arrives within the defined
+geofence.
+Therefore, the processor outputs the same schema as it receives as an input.
+Although we don't know the exact input right now as it depends on the stream users connect in StreamPipes when creating
+pipelines, we can define an _output strategy_ as follows:
+
+```java
+.outputStrategy(OutputStrategies.keep())
+```
+
+This defines a _KeepOutputStrategy_, i.e., the input event schema is not modified by the processor.
+There are many more output strategies you can define depending on the functionality you desire, e.g., _AppendOutput_ for
+defining a processor that enriches events or _CustomOutput_ in case you would like users to select the output by
+themselves.
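+
+As a sketch of the append case, a variant of our processor that adds a boolean flag to each event could declare its output as follows (the `insideGeofence` field name is made up for this illustration):
+
+```java
+// Hypothetical variant: enrich each event with a boolean flag instead of
+// keeping the input schema unchanged
+.outputStrategy(OutputStrategies.append(
+    EpProperties.booleanEp(
+        Labels.from("inside-geofence", "Inside Geofence",
+            "True while the position is inside the geofence"),
+        "insideGeofence",
+        SO.BOOLEAN)
+))
+```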
+
+That's it! We've now defined input requirements, required user input and an output strategy.
+In the next section, you will learn how to extract these parameters once the pipeline element is invoked after a
+pipeline was created.
+
+## Pipeline element invocation
+
+Once users start a pipeline that uses our geofencing component, the _onPipelineStarted_ method in our class is called. The
+interface `IDataProcessorParameters` includes convenient access to the user-configured parameters a user has selected in the pipeline
+editor and information on the actual streams that are connected to the pipeline element.
+
+Next, we are interested in the fields of the input event stream that contains the latitude and longitude value we would
+like to compute against the geofence center location as follows:
+
+```java
+ String latitudeFieldName = params.extractor().mappingPropertyValue("latitude-field");
+ String longitudeFieldName = params.extractor().mappingPropertyValue("longitude-field");
+```
+
+We use the same `internalId` we've used to define the mapping property requirements in the `declareConfig` method.
+
+Next, for extracting the geofence center coordinates, add two class variables `centerLatitude` and `centerLongitude` and
+assign the selected values using the following statements:
+
+```java
+ this.centerLatitude = params.extractor().singleValueParameter(LATITUDE_CENTER,Float.class);
+ this.centerLongitude = params.extractor().singleValueParameter(LONGITUDE_CENTER,Float.class);
+```
+
+The radius value can be extracted as follows:
+
+```java
+  int radius = params.extractor().singleValueParameter("radius", Integer.class);
+```
+
+Great! That's all we need to describe a data processor for usage in StreamPipes. Your processor class should look as
+follows:
+
+```java
+
+package org.apache.streampipes.pe.example;
+
+import org.apache.streampipes.extensions.api.pe.IStreamPipesDataProcessor;
+import org.apache.streampipes.extensions.api.pe.config.IDataProcessorConfiguration;
+import org.apache.streampipes.extensions.api.pe.context.EventProcessorRuntimeContext;
+import org.apache.streampipes.extensions.api.pe.param.IDataProcessorParameters;
+import org.apache.streampipes.extensions.api.pe.routing.SpOutputCollector;
+import org.apache.streampipes.model.DataProcessorType;
+import org.apache.streampipes.model.runtime.Event;
+import org.apache.streampipes.model.schema.PropertyScope;
+import org.apache.streampipes.sdk.builder.ProcessingElementBuilder;
+import org.apache.streampipes.sdk.builder.StreamRequirementsBuilder;
+import org.apache.streampipes.sdk.builder.processor.DataProcessorConfiguration;
+import org.apache.streampipes.sdk.helpers.EpProperties;
+import org.apache.streampipes.sdk.helpers.EpRequirements;
+import org.apache.streampipes.sdk.helpers.Labels;
+import org.apache.streampipes.sdk.helpers.Locales;
+import org.apache.streampipes.sdk.helpers.OutputStrategies;
+import org.apache.streampipes.sdk.utils.Assets;
+import org.apache.streampipes.vocabulary.Geo;
+import org.apache.streampipes.vocabulary.SO;
+
+public class GeofencingProcessor implements IStreamPipesDataProcessor {
+
+ private static final String LATITUDE_CENTER = "latitude-center";
+ private static final String LONGITUDE_CENTER = "longitude-center";
+
+ private float centerLatitude;
+ private float centerLongitude;
+ private String latitudeFieldName;
+ private String longitudeFieldName;
+
+ private int radius;
+
+  @Override
+  public IDataProcessorConfiguration declareConfig() {
+ return DataProcessorConfiguration.create(
+ GeofencingProcessor::new,
+        ProcessingElementBuilder.create("org.apache.streampipes.tutorial-geofencing")
+ .category(DataProcessorType.ENRICH)
+ .withAssets(Assets.DOCUMENTATION, Assets.ICON)
+ .withLocales(Locales.EN)
+ .requiredStream(StreamRequirementsBuilder
+ .create()
+            .requiredPropertyWithUnaryMapping(EpRequirements.domainPropertyReq(Geo.LAT),
+ Labels.from("latitude-field", "Latitude", "The event " +
+ "property containing the latitude value"), PropertyScope.MEASUREMENT_PROPERTY)
+            .requiredPropertyWithUnaryMapping(EpRequirements.domainPropertyReq(Geo.LNG),
+ Labels.from("longitude-field", "Longitude", "The event " +
+ "property containing the longitude value"), PropertyScope.MEASUREMENT_PROPERTY)
+ .build())
+ .outputStrategy(OutputStrategies.keep())
+ .requiredIntegerParameter("radius", "Geofence Size", "The size of the circular geofence in meters.", 0, 1000, 1)
+ .requiredFloatParameter(Labels.from(LATITUDE_CENTER, "Latitude", "The latitude value"))
+ .requiredFloatParameter(Labels.from(LONGITUDE_CENTER, "Longitude", "The longitude value"))
+ .build()
+ );
+ }
+
+ @Override
+ public void onPipelineStarted(IDataProcessorParameters params,
+ SpOutputCollector collector,
+ EventProcessorRuntimeContext runtimeContext) {
+ this.centerLatitude = params.extractor().singleValueParameter(LATITUDE_CENTER, Float.class);
+ this.centerLongitude = params.extractor().singleValueParameter(LONGITUDE_CENTER, Float.class);
+ this.latitudeFieldName = params.extractor().mappingPropertyValue("latitude-field");
+ this.longitudeFieldName = params.extractor().mappingPropertyValue("longitude-field");
+ this.radius = params.extractor().singleValueParameter("radius", Integer.class);
+ }
+
+ @Override
+ public void onEvent(Event event,
+ SpOutputCollector collector) {
+
+ }
+
+ @Override
+ public void onPipelineStopped() {
+
+ }
+}
+
+```
+
+## Adding an implementation
+
+Everything we need to do now is to add an implementation.
+
+Add the following piece of code to the onEvent method, which realizes the Geofencing functionality:
+
+```java
+
+ @Override
+ public void onEvent(Event event,
+ SpOutputCollector collector) {
+ float latitude = event.getFieldBySelector(latitudeFieldName).getAsPrimitive().getAsFloat();
+ float longitude = event.getFieldBySelector(longitudeFieldName).getAsPrimitive().getAsFloat();
+
+    float distance = distFrom(latitude, longitude, centerLatitude, centerLongitude);
+
+    if (distance <= radius) {
+ collector.collect(event);
+ }
+ }
+
+ public static float distFrom(float lat1, float lng1, float lat2, float lng2) {
+ double earthRadius = 6371000;
+ double dLat = Math.toRadians(lat2-lat1);
+ double dLng = Math.toRadians(lng2-lng1);
+ double a = Math.sin(dLat/2)*Math.sin(dLat/2) +
+ Math.cos(Math.toRadians(lat1))*Math.cos(Math.toRadians(lat2)) *
+ Math.sin(dLng/2)*Math.sin(dLng/2);
+
+ double c = 2*Math.atan2(Math.sqrt(a),Math.sqrt(1-a));
+
+ return(float)(earthRadius*c);
+ }
+```
+
+We won't go into details here as this isn't StreamPipes-related code, but in general the class extracts the latitude and
+longitude fields from the input event (provided as an `Event` object) and calculates the distance between the
+geofence center and these coordinates.
+If the distance is below the given radius, the event is forwarded to the next operator.
+
+See the [event model](06_extend-sdk-event-model.md) guide to learn how to extract parameters from events.
+
+## Registering the pipeline element
+
+The final step is to register the data processor in the `Init` class. Add the following line to
+the `SpServiceDefinitionBuilder`:
+
+```java
+ .registerPipelineElement(new GeofencingProcessor())
+```
+
+## Starting the service
+
+:::tip
+
+Once you start the service, it will register in StreamPipes with the hostname. The hostname will be auto-discovered and
+should work out-of-the-box.
+In some cases, the detected hostname is not resolvable from within a container (where the core is running). In this
+case, provide an `SP_HOST` environment variable to override the auto-discovery.
+
+:::
+
+:::tip
+
+The default port of all pipeline element services as defined in the `create` method is port 8090.
+If you'd like to run multiple services at the same time on your development machine, change the port here. As an
+alternative, you can also provide an env variable `SP_PORT` which overrides the port settings. This makes it easy to use
+different configs for dev and prod environments.
+
+:::
+
+Now we are ready to start our service!
+
+Configure your IDE to provide an environment variable called ``SP_DEBUG`` with value ``true`` when starting the project.
+
+Execute the main method in the class `Init` we've just created.
+
+The service automatically registers itself in StreamPipes.
+To install the newly created element, open the StreamPipes UI and follow the manual provided in
+the [user guide](03_use-install-pipeline-elements.md).
+
+## Read more
+
+Congratulations! You've just created your first data processor for StreamPipes.
+There are many more things to explore and data processors can be defined in much more detail using multiple wrappers.
+Follow our [SDK guide](06_extend-sdk-static-properties.md) to see what's possible!
diff --git a/website-v2/versioned_docs/version-0.95.1/06_extend-tutorial-data-sinks.md b/website-v2/versioned_docs/version-0.95.1/06_extend-tutorial-data-sinks.md
new file mode 100644
index 000000000..09baeff71
--- /dev/null
+++ b/website-v2/versioned_docs/version-0.95.1/06_extend-tutorial-data-sinks.md
@@ -0,0 +1,272 @@
+---
+id: extend-tutorial-data-sinks
+title: "Tutorial: Data Sinks"
+sidebar_label: "Tutorial: Data Sinks"
+---
+
+In this tutorial, we will add a new data sink using the standalone wrapper.
+
+From an architectural point of view, we will create a self-contained service that includes the description of the data
+sink and a corresponding implementation.
+
+## Objective
+
+We are going to create a new data sink that calls an external HTTP endpoint to forward data to an external service.
+
+For each incoming event, an external service is invoked using an HTTP POST request. In this example, we'll call an
+endpoint provided by [RequestBin](https://requestbin.com/).
+To set up your own endpoint, go to [https://requestbin.com/](https://requestbin.com/) and click "Create a request bin".
+Copy the URL of the newly created endpoint.
+
+## Project setup
+
+Instead of creating a new project from scratch, we recommend using the Maven archetype to create a new project
+skeleton (streampipes-archetype-extensions-jvm).
+Enter the following command in a command line of your choice (Apache Maven needs to be installed):
+
+```bash
+mvn archetype:generate -DarchetypeGroupId=org.apache.streampipes \
+-DarchetypeArtifactId=streampipes-archetype-extensions-jvm -DarchetypeVersion=0.93.0 \
+-DgroupId=org.streampipes.tutorial -DartifactId=sink-tutorial -DclassNamePrefix=Rest -DpackageName=mypackage
+```
+
+You will see a project structure similar to the structure shown in the [archetypes](06_extend-archetypes.md) section.
+
+:::tip
+
+Besides the basic project skeleton, the sample project also includes an example Dockerfile you can use to package your
+application into a Docker container.
+
+:::
+
+Now you're ready to create your first data sink for StreamPipes!
+
+## Adding data sink requirements
+
+First, we will add a new stream requirement.
+Create a class `RestSink` which should look as follows:
+
+```java
+package org.apache.streampipes.pe.example;
+
+import org.apache.streampipes.extensions.api.pe.IStreamPipesDataSink;
+import org.apache.streampipes.extensions.api.pe.config.IDataSinkConfiguration;
+import org.apache.streampipes.extensions.api.pe.context.EventSinkRuntimeContext;
+import org.apache.streampipes.extensions.api.pe.param.IDataSinkParameters;
+import org.apache.streampipes.model.DataSinkType;
+import org.apache.streampipes.model.runtime.Event;
+import org.apache.streampipes.model.schema.PropertyScope;
+import org.apache.streampipes.sdk.builder.DataSinkBuilder;
+import org.apache.streampipes.sdk.builder.StreamRequirementsBuilder;
+import org.apache.streampipes.sdk.builder.sink.DataSinkConfiguration;
+import org.apache.streampipes.sdk.helpers.EpRequirements;
+import org.apache.streampipes.sdk.helpers.Labels;
+import org.apache.streampipes.sdk.helpers.Locales;
+import org.apache.streampipes.sdk.utils.Assets;
+
+public class RestSink implements IStreamPipesDataSink {
+
+ @Override
+ public IDataSinkConfiguration declareConfig() {
+ return DataSinkConfiguration.create(
+ RestSink::new,
+ DataSinkBuilder.create("org.apache.streampipes.tutorial.pe.sink.rest")
+ .category(DataSinkType.NOTIFICATION)
+ .withAssets(Assets.DOCUMENTATION, Assets.ICON)
+ .withLocales(Locales.EN)
+ .requiredStream(StreamRequirementsBuilder
+ .create()
+ .requiredPropertyWithNaryMapping(EpRequirements.anyProperty(), Labels.withId(
+ "fields-to-send"), PropertyScope.NONE)
+ .build())
+ .build()
+ );
+ }
+
+ @Override
+ public void onPipelineStarted(IDataSinkParameters params,
+ EventSinkRuntimeContext eventSinkRuntimeContext) {
+
+ }
+
+ @Override
+ public void onEvent(Event event) {
+
+ }
+
+ @Override
+ public void onPipelineStopped() {
+
+ }
+
+}
+```
+
+In this class, we need to implement four methods: The `declareConfig` method is used to define abstract stream
+requirements such as event properties that must be present in any input stream that is later connected to the element
+using the StreamPipes UI.
+The second method, `onPipelineStarted`, is called once a pipeline using this sink is started. The third method, `onEvent`, is
+called for every incoming event. Finally, `onPipelineStopped` is called once the pipeline is stopped.
+
+The `DataSinkBuilder` within the ``declareConfig`` method describes the properties of our data sink:
+
+* ``category`` defines a category for this sink.
+* ``withAssets`` denotes that we will provide an external documentation file and an icon, which can be found in
+ the ``resources`` folder
+* ``withLocales`` defines that we will provide an external language file, also available in the ``resources`` folder
+* ``requiredStream`` defines requirements any input stream connected to this sink must provide. In this case, we do not
+  have any specific requirements, we just forward all incoming events to the REST sink. However, we want to display a
+  list of available fields from the connected input event, from which users can select a subset. This is done by
+  defining a mapping from the empty requirement. This will later on render a selection dialog in the pipeline editor.
+
+The ``onPipelineStarted`` method is called when a pipeline containing the sink is started. Once a pipeline is started, we
+would like to extract user-defined parameters.
+In this example, we simply extract the fields selected by users that should be forwarded to the REST sink.
+
+## Pipeline element invocation
+
+Once users start a pipeline that uses our REST sink, the `onPipelineStarted` method in our class is called. The
+interface `IDataSinkParameters` includes methods to extract the configuration parameters a user has selected in
+the pipeline editor and information on the actual streams that are connected to the pipeline element.
+
+## Adding an implementation
+
+Now we'll add a proper implementation (i.e., the REST call executed for every incoming event).
+
+Our final class should look as follows:
+
+```java
+package org.apache.streampipes.pe.example;
+
+import org.apache.streampipes.commons.exceptions.SpRuntimeException;
+import org.apache.streampipes.dataformat.SpDataFormatDefinition;
+import org.apache.streampipes.dataformat.json.JsonDataFormatDefinition;
+import org.apache.streampipes.extensions.api.pe.IStreamPipesDataSink;
+import org.apache.streampipes.extensions.api.pe.config.IDataSinkConfiguration;
+import org.apache.streampipes.extensions.api.pe.context.EventSinkRuntimeContext;
+import org.apache.streampipes.extensions.api.pe.param.IDataSinkParameters;
+import org.apache.streampipes.model.DataSinkType;
+import org.apache.streampipes.model.runtime.Event;
+import org.apache.streampipes.model.schema.PropertyScope;
+import org.apache.streampipes.sdk.builder.DataSinkBuilder;
+import org.apache.streampipes.sdk.builder.StreamRequirementsBuilder;
+import org.apache.streampipes.sdk.builder.sink.DataSinkConfiguration;
+import org.apache.streampipes.sdk.helpers.EpRequirements;
+import org.apache.streampipes.sdk.helpers.Labels;
+import org.apache.streampipes.sdk.helpers.Locales;
+import org.apache.streampipes.sdk.utils.Assets;
+
+import com.google.common.base.Charsets;
+import org.apache.http.client.fluent.Request;
+import org.apache.http.entity.StringEntity;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Map;
+
+public class RestSink implements IStreamPipesDataSink {
+
+ private static final Logger LOG = LoggerFactory.getLogger(RestSink.class);
+
+  private static final String REST_ENDPOINT_URI = "YOUR_REQUEST_BIN_URL"; // replace with your RequestBin URL
+  private List<String> fieldsToSend;
+ private SpDataFormatDefinition dataFormatDefinition;
+
+ @Override
+ public IDataSinkConfiguration declareConfig() {
+ return DataSinkConfiguration.create(
+ RestSink::new,
+ DataSinkBuilder.create("org.apache.streampipes.tutorial.pe.sink.rest")
+ .category(DataSinkType.NOTIFICATION)
+ .withAssets(Assets.DOCUMENTATION, Assets.ICON)
+ .withLocales(Locales.EN)
+ .requiredStream(StreamRequirementsBuilder
+ .create()
+ .requiredPropertyWithNaryMapping(EpRequirements.anyProperty(), Labels.withId(
+ "fields-to-send"), PropertyScope.NONE)
+ .build())
+ .build()
+ );
+ }
+
+ @Override
+ public void onPipelineStarted(IDataSinkParameters params,
+ EventSinkRuntimeContext eventSinkRuntimeContext) {
+ this.dataFormatDefinition = new JsonDataFormatDefinition();
+ this.fieldsToSend = params.extractor().mappingPropertyValues("fields-to-send");
+ }
+
+ @Override
+ public void onEvent(Event event) {
+    Map<String, Object> outEventMap = event.getSubset(fieldsToSend).getRaw();
+ try {
+ String json = new String(dataFormatDefinition.fromMap(outEventMap));
+ Request.Post(REST_ENDPOINT_URI).body(new StringEntity(json, Charsets.UTF_8)).execute();
+ } catch (SpRuntimeException e) {
+ LOG.error("Could not parse incoming event");
+ } catch (IOException e) {
+ LOG.error("Could not reach endpoint at {}", REST_ENDPOINT_URI);
+ }
+ }
+
+ @Override
+ public void onPipelineStopped() {
+
+ }
+}
+
+```
+
+The only class variable you need to change right now is the `REST_ENDPOINT_URI`. Change this URL to the URL provided by
+your request bin.
+In the ``onEvent`` method, we use a helper method to get a subset of the incoming event.
+Finally, we convert the resulting ``Map`` to a JSON string and call the endpoint.
+
+## Preparing the service
+
+The final step is to register the sink as a pipeline element.
+
+Go to the class `Init` and register the sink:
+
+```java
+.registerPipelineElement(new RestSink())
+```
+
+## Starting the service
+
+:::tip
+
+Once you start the service, it will register in StreamPipes with the hostname. The hostname will be auto-discovered and
+should work out-of-the-box.
+In some cases, the detected hostname is not resolvable from within a container (where the core is running). In this
+case, provide an `SP_HOST` environment variable to override the auto-discovery.
+
+:::
+
+:::tip
+
+The default port of all pipeline element services as defined in the `create` method is port 8090.
+If you'd like to run multiple services at the same time on your development machine, change the port here. As an
+alternative, you can also provide an env variable `SP_PORT` which overrides the port settings. This makes it easy to use
+different configs for dev and prod environments.
+
+:::
+
+Now we are ready to start our service!
+
+Configure your IDE to provide an environment variable called ``SP_DEBUG`` with value ``true`` when starting the project.
+
+Execute the main method in the class `Init` we've just created. The service automatically registers itself in
+StreamPipes.
+
+To install the created element, open the StreamPipes UI and follow the manual provided in
+the [user guide](03_use-install-pipeline-elements.md).
+
+## Read more
+
+Congratulations! You've just created your first data sink for StreamPipes.
+There are many more things to explore and data sinks can be defined in much more detail using multiple wrappers.
+Follow our [SDK guide](../dev-guide-sdk-guide-sinks) to see what's possible!
diff --git a/website-v2/versioned_docs/version-0.95.1/07_technicals-architecture.md b/website-v2/versioned_docs/version-0.95.1/07_technicals-architecture.md
new file mode 100644
index 000000000..e62ba4625
--- /dev/null
+++ b/website-v2/versioned_docs/version-0.95.1/07_technicals-architecture.md
@@ -0,0 +1,110 @@
+---
+id: technicals-architecture
+title: Architecture
+sidebar_label: Architecture
+---
+
+## Architecture
+
+
+
+Apache StreamPipes implements a microservice architecture as shown in the figure above.
+
+## StreamPipes Core
+
+The StreamPipes Core is the central component to manage all StreamPipes resources.
+It delegates the management of adapters, pipeline elements, pipelines and functions to registered extensions services (see below) and monitors the execution of extensions.
+The Core also provides internal REST interfaces to communicate with the user interface, as well as public REST interfaces that can be used by external applications and StreamPipes clients.
+
+Configuration and user data are stored in an Apache CouchDB database.
+
+## StreamPipes Extensions
+
+An Apache StreamPipes extensions service is a microservice which contains the implementation of specific adapters, data streams, data processors, data sinks and functions.
+Multiple extension services can be part of a single StreamPipes installation.
+Each service might provide its own set of extensions. Extensions services register with the StreamPipes Core at startup. Users are able to install all or a subset of extensions of each service.
+This allows StreamPipes to be extended at runtime by starting a new service with additional extensions.
+
+Extensions can be built using the SDK (see [Extending StreamPipes](06_extend-setup.md)).
+Extensions services can be provided either in Java or in Python.
+
+:::info
+
+As of version 0.93.0, the Python SDK supports functions only. If you would like to develop pipeline elements in Python as well, let us know in a [Github discussions](https://github.com/apache/streampipes/discussions) comment, so that we can better prioritize development.
+
+:::
+
+
+An extensions service interacts with the core by receiving control messages to invoke or detach an extension.
+In addition, the core regularly fetches monitoring and log data from each registered extensions service.
+
+
+## StreamPipes Client
+
+The Apache StreamPipes Client is a lightweight library for Java and Python which can be used to interact with StreamPipes resources programmatically.
+For instance, users use the client to influence the control flow of pipelines, to download raw data from the data lake APIs or to realize custom applications with live data.
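+
+As a rough sketch of the Java client (exact method names and packages may differ slightly between client versions; host, port, and credentials here are assumptions for this example):
+
+```java
+import org.apache.streampipes.client.StreamPipesClient;
+import org.apache.streampipes.client.StreamPipesCredentials;
+
+public class ClientExample {
+  public static void main(String[] args) {
+    // Authenticate with an API key created in the StreamPipes UI (assumed values)
+    var credentials = StreamPipesCredentials.from("user@example.org", "YOUR_API_KEY");
+    var client = StreamPipesClient.create("localhost", 8082, credentials, true);
+
+    // Print the names of all pipelines visible to this user
+    client.pipelines().all().forEach(pipeline -> System.out.println(pipeline.getName()));
+  }
+}
+```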
+
+
+## Third-party systems
+
+In addition to the core components, an Apache StreamPipes version uses several third-party services, which are part of the standard installation.
+
+* Configurations and user data are stored in an [Apache CouchDB](https://couchdb.apache.org) database.
+* Time-series data is stored in an [InfluxDB](https://github.com/influxdata/influxdb) database.
+* Events are exchanged over a messaging system. Users can choose from various messaging systems that StreamPipes supports. Currently, we support [Apache Kafka](https://kafka.apache.org), [Apache Pulsar](https://pulsar.apache.org), [MQTT](https://mqtt.org/) and [NATS](https://nats.io/). The selection of the right messaging system depends on the use case. See [Messaging](07_technicals-messaging.md) for more information.
+
+:::info
+
+Versions prior to 0.93.0 included Consul for service discovery and registration. Starting from 0.93.0 onwards, we switched to an internal service discovery mechanism.
+
+:::
+
+All mentioned third-party services are part of the default installation and are auto-configured during the installation process.
+
+## Programming Languages
+
+Apache StreamPipes is mainly written in Java.
+Services are based on Spring Boot.
+The included [Python integration](https://streampipes.apache.org/docs/docs/python/latest/) is written in Python.
+
+The user interface is mainly written in TypeScript using the Angular framework.
+
+
+## Data Model
+
+Internally, Apache StreamPipes realizes a stream processing layer where events are continuously exchanged over a messaging system.
+When building a pipeline, data processors consume data from a topic assigned by the core and publish data back to another topic, which is also assigned by the core.
+
+At runtime, events have a flat and easily understandable data structure, consisting of key/value pairs. Events are serialized in JSON, although StreamPipes can be configured to use other (binary) message formats.
+
+This allows for easy integration with other systems which want to consume data from StreamPipes, since an event could look as simple as this:
+
+```json
+{
+ "timestamp": 1234556,
+ "deviceId": "ABC",
+ "temperature": 37.5
+}
+```
+
+However, this wouldn't be very expressive, right? To [assist users](07_technicals-user-guidance.md), StreamPipes provides a rich description layer for events. So under the hood, for the `temperature` field shown above, StreamPipes can also store the following:
+
+```json
+{
+ "label": "Temperature",
+ "description": "Measures the temperature during leakage tests",
+ "measurementUnit": "https://qudt.org/vocab/unit/DEG_C",
+ "runtimeName": "temperature",
+ "runtimeType": "xsd:float",
+ "semanticType": "https://my-company-vocabulary/leakage-test-temperature"
+}
+```
+
+By separating the description layer from the runtime representation, we get a good trade-off between expressivity, readability for humans and lightweight runtime message formats.
+The schema is stored in an internal schema registry and available to the client APIs and user interface views to improve validation and user guidance.
+
+StreamPipes also supports arrays and nested structures, although we recommend using flat events where possible to ease integration with downstream systems (such as time-series storage).
+
+
+
+
diff --git a/website-v2/versioned_docs/version-0.95.1/07_technicals-messaging.md b/website-v2/versioned_docs/version-0.95.1/07_technicals-messaging.md
new file mode 100644
index 000000000..d5308a6d8
--- /dev/null
+++ b/website-v2/versioned_docs/version-0.95.1/07_technicals-messaging.md
@@ -0,0 +1,65 @@
+---
+id: technicals-messaging
+title: Messaging
+sidebar_label: Messaging
+---
+
+## Architecture
+
+To exchange messages at runtime between individual [Extensions Services](07_technicals-architecture.md), StreamPipes uses external messaging systems.
+This corresponds to an event-driven architecture with a central message broker and decoupled services which consume and produce events from the messaging system.
+
+There are many different open source messaging systems on the market, which each have individual strengths.
+To provide a flexible system which matches different needs, StreamPipes can be configured to use various messaging systems.
+
+## Supported messaging systems
+
+The following messaging systems are currently supported:
+
+* Apache Kafka
+* Apache Pulsar
+* MQTT
+* NATS
+
+## Configure StreamPipes to use another messaging system
+
+Configuring StreamPipes for one of these messaging systems is an installation-time configuration.
+We currently do not recommend changing the configuration at runtime.
+
+The protocol can be configured with the environment variable `SP_PRIORITIZED_PROTOCOL` assigned to the core with one of the following values:
+
+```bash
+SP_PRIORITIZED_PROTOCOL=kafka # Use Kafka as protocol
+SP_PRIORITIZED_PROTOCOL=pulsar # Use Pulsar as protocol
+SP_PRIORITIZED_PROTOCOL=mqtt # Use MQTT as protocol
+SP_PRIORITIZED_PROTOCOL=nats # Use NATS as protocol
+```
+
+Note that each extension service can support an arbitrary number of protocols. For instance, you can have a lightweight extension service which only supports NATS, but have another, cloud-centered service which supports Kafka, both registered at the Core.
+When multiple protocols are supported by two pipeline elements, StreamPipes selects one based on a priority, which can be configured in the [Configuration View](03_use-configurations.md).
+StreamPipes ensures that only pipeline elements which have a commonly supported protocol can be connected.
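+
+As a sketch (the service ID and name are made up for this example), such a lightweight service would simply register a single protocol factory in its service definition:
+
+```java
+// Hypothetical NATS-only extensions service
+SpServiceDefinitionBuilder.create("my.company.nats.extensions",
+        "NATS-only extensions service", "", 8090)
+    .registerMessagingProtocols(new SpNatsProtocolFactory())
+    .build();
+```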
+
+Note that you might need to change the installation files. For the `Docker-Compose` based installation, we provide various compose files for different messaging setups. For the `Kubernetes` installation, we provide variables which can be set in the helm chart's `values.yaml` file.
+
+### Configure broker addresses
+
+By default, StreamPipes assumes that the messaging system is started from its own environment, e.g., the system configured in the selected `Docker-Compose` file.
+
+Besides that, it is also possible to let StreamPipes connect to an externally provided messaging system. For this purpose, various environment variables exist.
+
+* `SP_PRIORITIZED_PROTOCOL` to set the prioritized protocol to either `kafka`, `mqtt`, `nats` or `pulsar`
+
+* `SP_KAFKA_HOST`, `SP_KAFKA_PORT` to configure Kafka access
+* `SP_MQTT_HOST`, `SP_MQTT_PORT` to configure MQTT access
+* `SP_NATS_HOST`, `SP_NATS_PORT` to configure NATS access
+* `SP_PULSAR_URL` to configure Pulsar access
+
+
+Most settings can also be set in the UI under `Settings->Messaging`.
+
+:::warning Installation-time configurations
+Although it is currently possible to change messaging settings in the user interface, we do not support dynamic modification of messaging systems.
+Choosing a proper system is considered an installation-time setting which should not be changed afterwards.
+Already existing adapters and pipeline elements are not properly updated after changes to the messaging layer.
+:::
+
diff --git a/website-v2/versioned_docs/version-0.95.1/07_technicals-runtime-wrappers.md b/website-v2/versioned_docs/version-0.95.1/07_technicals-runtime-wrappers.md
new file mode 100644
index 000000000..9cebadfd2
--- /dev/null
+++ b/website-v2/versioned_docs/version-0.95.1/07_technicals-runtime-wrappers.md
@@ -0,0 +1,37 @@
+---
+id: technicals-runtime-wrappers
+title: Runtime Wrappers
+sidebar_label: Runtime Wrappers
+---
+
+## Overview
+
+In general, StreamPipes has an exchangeable runtime layer, i.e., the actual processing of incoming events can be delegated to a third-party stream processing system such as Kafka Streams or Apache Flink.
+
+The default runtime wrapper is the StreamPipes Native Wrapper, called the `StandaloneWrapper`.
+
+Although not recommended for production, we invite interested developers to check out our experimental wrappers:
+
+* Kafka Streams runtime wrapper at [https://github.com/apache/streampipes/tree/dev/streampipes-wrapper-kafka-streams](https://github.com/apache/streampipes/tree/dev/streampipes-wrapper-kafka-streams)
+* Apache Flink runtime wrapper at [https://github.com/apache/streampipes/tree/dev/streampipes-wrapper-flink](https://github.com/apache/streampipes/tree/dev/streampipes-wrapper-flink)
+
+## Assigning a runtime wrapper to an extension service
+
+Runtime wrappers can be assigned in the `Service Definition` of the `Init` class of an extension service:
+
+```java
+
+ @Override
+ public SpServiceDefinition provideServiceDefinition(){
+ return SpServiceDefinitionBuilder.create("org.apache.streampipes.extensions.all.jvm",
+ "StreamPipes Extensions (JVM)",
+ "",8090)
+ ...
+ .registerRuntimeProvider(new StandaloneStreamPipesRuntimeProvider())
+ ...
+ .build();
+ }
+
+```
+
+Please let us know through our communication channels if you are interested in this feature and if you are willing to contribute!
diff --git a/website-v2/versioned_docs/version-0.70.0/07_technicals-user-guidance.md b/website-v2/versioned_docs/version-0.95.1/07_technicals-user-guidance.md
similarity index 70%
rename from website-v2/versioned_docs/version-0.70.0/07_technicals-user-guidance.md
rename to website-v2/versioned_docs/version-0.95.1/07_technicals-user-guidance.md
index 0141dabc1..697411861 100644
--- a/website-v2/versioned_docs/version-0.70.0/07_technicals-user-guidance.md
+++ b/website-v2/versioned_docs/version-0.95.1/07_technicals-user-guidance.md
@@ -2,7 +2,6 @@
id: technicals-user-guidance
title: User Guidance
sidebar_label: User Guidance
-original_id: technicals-user-guidance
---
tbd
\ No newline at end of file
diff --git a/website-v2/versioned_docs/version-0.70.0/08_debugging.md b/website-v2/versioned_docs/version-0.95.1/08_debugging.md
similarity index 70%
rename from website-v2/versioned_docs/version-0.70.0/08_debugging.md
rename to website-v2/versioned_docs/version-0.95.1/08_debugging.md
index 33bedad41..95892c175 100644
--- a/website-v2/versioned_docs/version-0.70.0/08_debugging.md
+++ b/website-v2/versioned_docs/version-0.95.1/08_debugging.md
@@ -2,7 +2,6 @@
id: debugging-debugging
title: Debugging
sidebar_label: Debugging
-original_id: debugging-debugging
---
tbd
\ No newline at end of file
diff --git a/website-v2/versioned_docs/version-0.70.0/08_monitoring.md b/website-v2/versioned_docs/version-0.95.1/08_monitoring.md
similarity index 70%
rename from website-v2/versioned_docs/version-0.70.0/08_monitoring.md
rename to website-v2/versioned_docs/version-0.95.1/08_monitoring.md
index 0712b98e9..6680b5d86 100644
--- a/website-v2/versioned_docs/version-0.70.0/08_monitoring.md
+++ b/website-v2/versioned_docs/version-0.95.1/08_monitoring.md
@@ -2,7 +2,6 @@
id: debugging-monitoring
title: Monitoring
sidebar_label: Monitoring
-original_id: debugging-monitoring
---
tbd
\ No newline at end of file
diff --git a/website-v2/versioned_docs/version-0.70.0/09_contribute.md b/website-v2/versioned_docs/version-0.95.1/09_contribute.md
similarity index 96%
rename from website-v2/versioned_docs/version-0.70.0/09_contribute.md
rename to website-v2/versioned_docs/version-0.95.1/09_contribute.md
index 43ea625f3..119568929 100644
--- a/website-v2/versioned_docs/version-0.70.0/09_contribute.md
+++ b/website-v2/versioned_docs/version-0.95.1/09_contribute.md
@@ -2,7 +2,6 @@
id: community-contribute
title: Contribute
sidebar_label: Contribute
-original_id: community-contribute
---
## Contribute
diff --git a/website-v2/versioned_docs/version-0.70.0/09_get-help.md b/website-v2/versioned_docs/version-0.95.1/09_get-help.md
similarity index 88%
rename from website-v2/versioned_docs/version-0.70.0/09_get-help.md
rename to website-v2/versioned_docs/version-0.95.1/09_get-help.md
index 0e564825f..077f0b62f 100644
--- a/website-v2/versioned_docs/version-0.70.0/09_get-help.md
+++ b/website-v2/versioned_docs/version-0.95.1/09_get-help.md
@@ -2,7 +2,6 @@
id: community-get-help
title: Get Help
sidebar_label: Get Help
-original_id: community-get-help
---
The Apache StreamPipes community is happy to help with any questions or problems you might have.
@@ -12,7 +11,7 @@ Subscribe to our user mailing list to ask a question.
[Mailing Lists](https://streampipes.apache.org/mailinglists.html)
-To subscribe to the user list, send an email to [users-subscribe@streampipes.apache.org](mailto:users-subscribe@streampipes.apache.org)
+To subscribe to the user list, send an email to [users-subscribe@streampipes.apache.org](mailto:users-subscribe@streampipes.apache.org)
You can also ask questions on our Github discussions page:
[Github Discussions](https://github.com/apache/streampipes/discussions)
diff --git a/website-v2/versioned_docs/version-0.70.0/faq-common-problems.md b/website-v2/versioned_docs/version-0.95.1/faq-common-problems.md
similarity index 99%
rename from website-v2/versioned_docs/version-0.70.0/faq-common-problems.md
rename to website-v2/versioned_docs/version-0.95.1/faq-common-problems.md
index c7a61f147..14195c0f0 100644
--- a/website-v2/versioned_docs/version-0.70.0/faq-common-problems.md
+++ b/website-v2/versioned_docs/version-0.95.1/faq-common-problems.md
@@ -2,7 +2,6 @@
id: faq-common-problems
title: Common Problems
sidebar_label: Common Problems
-original_id: faq-common-problems
---
* Windows 10: Consul, Kafka, Zookeeper, or Kafka-Rest did not start
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.image.stream.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.connect.adapters.image.stream.md
similarity index 94%
rename from website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.image.stream.md
rename to website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.connect.adapters.image.stream.md
index 03d8f81cf..a03806d9e 100644
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.image.stream.md
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.connect.adapters.image.stream.md
@@ -2,7 +2,6 @@
id: org.apache.streampipes.connect.adapters.image.stream
title: Image Upload (Stream)
sidebar_label: Image Upload (Stream)
-original_id: org.apache.streampipes.connect.adapters.image.stream
---
+
+
+
+
+
+
+
+***
+
+## Description
+
+This adapter enables the integration of IO-Link sensor data produced by an ifm IO-Link Master
+(e.g., AL1350) with Apache StreamPipes. To use this adapter, you need to configure your IO-Link
+master to publish events to an MQTT broker. This can be achieved through a REST interface or via
+the browser at `http://##IP_OF_IO_LINK_MASTER##/web/subscribe`. For detailed instructions,
+please refer to the ifm documentation.
+
+### Requirements
+
+The JSON events should include the following information:
+
+- `deviceinfo.serialnumber`
+- Only the `pdin` value is required for each port (e.g., `port[0]`).
+- The event `timer[1].datachanged` can be used as a trigger.
+
+Using this adapter, you can create a stream for sensors of the same type.
+
+### Restrictions
+This version supports a single IO-Link master. If you want to connect multiple masters, they must have the same setup.
+If you have different requirements, please inform us through the mailing list or GitHub discussions.
+
+***
+
+## Configuration
+
+Here is a list of the configuration parameters you must provide.
+
+### Broker URL
+
+Enter the URL of the broker, including the protocol (e.g., `tcp://10.20.10.3:1883`).
+
+### Access Mode
+
+If necessary, provide broker credentials.
+
+### Ports
+
+Select the ports that are connected to the IO-Link sensors.
+
+### Sensor Type
+
+Choose the type of sensor you want to connect. (**IMPORTANT:** Currently, only the VVB001 is supported)
+
+## Output
+
+The output includes all values from the selected sensor type. Here is an example for the `VVB001` sensor:
+```json
+{
+ "aPeak": 6.6,
+ "aRms": 1.8,
+ "crest": 3.7,
+ "out1": true,
+ "out2": true,
+ "port": "000000001234",
+ "status": 0,
+ "temperature": 22,
+ "timestamp": 1685525380729,
+ "vRms": 0.0023
+}
+```
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.netio.mqtt.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.connect.iiot.adapters.netio.mqtt.md
similarity index 87%
rename from website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.netio.mqtt.md
rename to website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.connect.iiot.adapters.netio.mqtt.md
index dca15946d..80cb04156 100644
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.netio.mqtt.md
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.connect.iiot.adapters.netio.mqtt.md
@@ -1,8 +1,7 @@
---
-id: org.apache.streampipes.connect.adapters.netio.mqtt
+id: org.apache.streampipes.connect.iiot.adapters.netio.mqtt
title: NETIO MQTT M2M
sidebar_label: NETIO MQTT M2M
-original_id: org.apache.streampipes.connect.adapters.netio.mqtt
---
+
+# Open Industry 4.0 (OI4)
+
+
+
+
+
+---
+
+
+
+The OI4 adapter facilitates the integration of any OT-device compliant with the OI4 standard into Apache StreamPipes.
+For detailed information about this standard, please refer to their [development guide](https://openindustry4.com/fileadmin/Dateien/Downloads/OEC_Development_Guideline_V1.1.1.pdf).
+
+### Requirements
+
+Your OI4-compatible device should emit data via an MQTT broker.
+
+### Restrictions
+
+This adapter exclusively allows data consumption from a specific MQTT topic.
+If you have different requirements, please notify us through the mailing list or GitHub discussions.
+
+---
+
+## Configuration
+
+Below is a list of the configuration parameters you need to provide.
+
+### Broker URL
+
+Enter the URL of the broker, including the protocol and port number (e.g., `tcp://10.20.10.3:1883`).
+
+### Access Mode
+
+Choose between unauthenticated access or input your credentials for authenticated access.
+
+### Sensor Description
+
+You should provide information about the sensor you want to connect to. This can be achieved in two ways:
+
+a) **By Type**: Specify the type of sensor you want to connect to, e.g., `'VVB001'`. <br/>
+b) **By IODD**: Simply upload the IODD description of the respective sensor. Please note: This feature is not yet available! If you're interested in this feature, please notify us through the mailing list or GitHub discussions and share your use case with us.
+
+### Selected Sensors
+
+Configure which sensors of the master device you want to connect to. You can either select `All`, which will provide data from all sensors available on the respective MQTT topic, or choose `Custom Selection` and provide a list of sensor IDs in a comma-separated string (e.g., `000008740649,000008740672`).
+
+## Output
+
+The output consists of all values from the selected sensor type. Below is an example for the `VVB001` sensor:
+
+```json
+{
+ "a-Rms": 1.8,
+ "OUT2": true,
+ "SensorID": "000008740649",
+ "Temperature": 22,
+ "Crest": 3.7,
+ "v-Rms": 0.0023,
+ "OUT1": true,
+ "Device status": 0,
+ "timestamp": 1685525380729
+}
+```
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.opcua.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.connect.iiot.adapters.opcua.md
similarity index 91%
rename from website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.opcua.md
rename to website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.connect.iiot.adapters.opcua.md
index 3fc7169e2..76a65ca55 100644
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.opcua.md
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.connect.iiot.adapters.opcua.md
@@ -1,8 +1,7 @@
---
-id: org.apache.streampipes.connect.adapters.opcua
+id: org.apache.streampipes.connect.iiot.adapters.opcua
title: OPC UA
sidebar_label: OPC UA
-original_id: org.apache.streampipes.connect.adapters.opcua
---
+
+
+
+
+
+
+
+***
+
+## Description
+
+The Modbus adapter allows connecting to a PLC using the Modbus specification.
+
+***
+
+## Configuration
+
+The following configuration options are available when creating the adapter:
+
+### PLC Address
+
+The IP address of the Modbus device without any prefix, which will be added automatically when creating the adapter.
+
+### PLC Port
+
+The PLC port refers to the port of the PLC, such as 502.
+
+### Node ID
+
+The Node ID refers to the ID of the specific device.
+
+### Nodes
+
+The `Nodes` section requires configuration options for the individual nodes.
+Nodes can be either imported from a comma-separated CSV file, or can be directly assigned in the configuration menu.
+
+The following fields must be provided for each node:
+
+* Runtime Name: Refers to the field to internally identify the node, e.g., in the data explorer or pipeline editor.
+* Node Address: Refers to the address of the Node in Modbus, e.g., 1
+* Object Type: Can be selected from the available options `DiscreteInput`, `Coil`, `InputRegister`,
+ or `HoldingRegister`.
+
+An example CSV file looks as follows:
+
+```
+Runtime Name,Node Address,Object Type,
+field1,1,Coil
+temperature,2,Coil
+```
+
+Note that the CSV header must exactly match the titles `Runtime Name`, `Node Address` and `Object Type`.
diff --git a/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.connect.iiot.adapters.plc4x.s7.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.connect.iiot.adapters.plc4x.s7.md
new file mode 100644
index 000000000..9e22be6df
--- /dev/null
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.connect.iiot.adapters.plc4x.s7.md
@@ -0,0 +1,96 @@
+---
+id: org.apache.streampipes.connect.iiot.adapters.plc4x.s7
+title: PLC4X S7
+sidebar_label: PLC4X S7
+---
+
+
+
+
+
+
+
+
+
+***
+
+## Description
+
+The adapter allows connecting to a Siemens S7 PLC.
+
+***
+
+## Configuration
+
+The following configuration options are available when creating an adapter:
+
+### PLC Address
+
+This field requires the PLC address in the form of the IP without the prefixed protocol (e.g., 192.168.34.56).
+
+In addition to the pure IP, other parameters supported by Apache PLC4X can be provided as URL parameters:
+
+* `local-rack`
+* `local-slot`
+* `local-tsap`
+* `remote-rack`
+* `remote-slot`
+
+Additional configs are separated by `&`.
+
+Example address: `192.68.34.56?remote-rack=0&remote-slot=3&controller-type=S7_400`
+
+See the Apache PLC4X documentation for more information.
+
+### Polling Interval
+
+The polling interval requires a number in milliseconds, which represents the interval in which the adapter will poll the
+PLC for new data. For instance, a polling interval of 1000 milliseconds will configure the adapter to send a request to
+the PLC every second.
+
+### Nodes
+
+In the Nodes section, the PLC nodes that should be gathered are defined.
+There are two options to define the nodes:
+
+* Manual configuration: The address must be assigned manually by providing a runtime name, the node name and the
+ datatype. The `Runtime Name` will be the StreamPipes-internal name of the field, which will also show up in the data
+ explorer and pipeline editor. The `Node Name` refers to the node address of the PLC, e.g., `%Q0.4`. Finally, the data
+ type can be selected from the available selection. Currently available data types
+ are `Bool`, `Byte`, `Int`, `Word`, `Real`, `Char`, `String`, `Date`, `Time of Day` and `Date and Time`.
+* Instead of providing the node information manually, a CSV file can be uploaded. The CSV file can, for instance, be
+ exported from TIA and then be enriched with the appropriate runtime names. This is especially useful when many fields
+ should be added as nodes. Here is an example export enriched with the runtime name:
+
+```
+Runtime Name,Path,Data Type,Node Name
+I_High_sensor,Tag table_1,Bool,%I0.0,
+I_Low_sensor,Tag table_1,Bool,%I0.1,
+I_Pallet_sensor,Tag table_1,Bool,%I0.2,
+I_Loaded,Tag table_1,Bool,%I0.3,
+```
+
+Note that the CSV can contain additional columns, but only the columns `Runtime Name`, `Data Type` and `Node Name` are
+used, while all other columns will be ignored.
+
+## Best Practices
+
+Instead of creating a large event containing all nodes that should be available in StreamPipes, consider grouping the
+fields logically into smaller adapters.
+This eases the definition of pipelines for users and simplifies future modifications.
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.ros.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.connect.iiot.adapters.ros.md
similarity index 87%
rename from website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.ros.md
rename to website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.connect.iiot.adapters.ros.md
index 12e7a3db4..aeac39947 100644
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.adapters.ros.md
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.connect.iiot.adapters.ros.md
@@ -1,8 +1,7 @@
---
-id: org.apache.streampipes.connect.adapters.ros
+id: org.apache.streampipes.connect.iiot.adapters.ros
title: ROS Bridge
sidebar_label: ROS Bridge
-original_id: org.apache.streampipes.connect.adapters.ros
---
+
+
+
+
+
+
+
+***
+
+## Description
+
+This adapter publishes simulated machine sensor data at a configurable time interval. It is ideal for exploring the
+capabilities of StreamPipes without needing your own data or for testing purposes. Three different sensor scenarios are
+available:
+
+* Flowrate
+* Pressure
+* Water Level
+
+All scenarios include an error or anomaly condition, making them suitable for trend detection, anomaly detection, and
+similar applications.
+
+### Flowrate Sensor
+
+This scenario simulates a flowrate sensor in a piping system, including a sensor defect situation. The generated data
+stream includes:
+
+- **Sensor ID**: The identifier or name of the sensor, such as `sensor01`.
+- **Mass Flow**: Numeric value denoting the current mass flow in the sensor, ranging from 0 to 10.
+- **Volume Flow**: Numeric value denoting the current volume flow, ranging from 0 to 10.
+- **Temperature**: Numeric value denoting the current temperature in degrees Celsius, ranging from 40 to 100.
+- **Density**: Numeric value denoting the current density of the fluid, ranging from 40 to 50.
+- **Sensor Fault Flags**: Boolean indicator of sensor issues.
+
+The sensor defect scenario is as follows: Normally, temperature values range between 40 and 50 degrees Celsius. After
+thirty seconds, the simulation switches to defect mode for another thirty seconds, with temperatures ranging from 80 to
+100 degrees Celsius and `Sensor Fault Flags` set to `true`.
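+
+For illustration, a single flowrate event might look like the following (the exact runtime field names are an assumption, not taken from the adapter definition):
+
+```json
+{
+  "sensorId": "sensor01",
+  "massFlow": 4.8,
+  "volumeFlow": 5.1,
+  "temperature": 43.2,
+  "density": 45.7,
+  "sensorFaultFlags": false,
+  "timestamp": 1715593295000
+}
+```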
+
+### Pressure Sensor
+
+This scenario simulates a pressure sensor in a gas tank, including an anomaly situation. The generated data stream
+includes:
+
+- **Sensor ID**: The identifier or name of the sensor, such as `sensor01`.
+- **Pressure**: Numeric value denoting the current pressure in the tank, ranging from 10 to 70.
+
+The anomaly scenario is as follows: Normally, pressure values range between 10 and 40. After thirty seconds, the
+simulation switches to anomaly mode for another thirty seconds, with pressure values ranging from 40 to 70.
+
+### Water Level Sensor
+
+This scenario simulates a sensor in a water tank, including an overflow situation. The generated data stream includes:
+
+- **Sensor ID**: The identifier or name of the sensor, such as `sensor01`.
+- **Level**: Numeric value denoting the current water level in the tank, ranging from 20 to 80.
+- **Overflow**: Boolean indicator of tank overflow.
+
+The overflow scenario is as follows: Normally, level values range between 20 and 30. After thirty seconds, the
+simulation switches to overflow mode for another thirty seconds, with level values ranging from 60 to 80 and `Overflow`
+set to `true`.
+
+## Configuration
+
+When creating the adapter, the following parameters can be configured:
+
+- **Wait Time**: The time in milliseconds between two sensor events. Defaults to 1000 (1 second).
+- **Sensor**: Select one of the sensor scenarios described above: `flowrate`, `pressure`, `waterlevel`.
+
+***
\ No newline at end of file
diff --git a/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.connect.iiot.protocol.stream.file.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.connect.iiot.protocol.stream.file.md
new file mode 100644
index 000000000..79efd7ca3
--- /dev/null
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.connect.iiot.protocol.stream.file.md
@@ -0,0 +1,90 @@
+---
+id: org.apache.streampipes.connect.iiot.protocol.stream.file
+title: File Stream
+sidebar_label: File Stream
+---
+
+
+
+
+
+
+
+
+
+***
+
+## Description
+
+The File Stream Adapter enables continuous streaming of file contents to Apache StreamPipes, creating a data stream for utilization within StreamPipes. It's particularly handy when you prefer not to connect directly to the data source via StreamPipes or for testing and demonstration purposes. Currently, it supports the following file types:
+
+- CSV
+- JSON
+- XML
+
+### Example
+
+Suppose we have a CSV file (`temperature.csv`) containing data from a temperature sensor recording data every second:
+
+```text
+time,temperature
+1715593295000,36.3
+1715593296000,37.5
+1715593297000,37.0
+1715593298000,37.2
+1715593299000,37.2
+1715593210000,37.6
+1715593211000,37.4
+1715593212000,37.5
+1715593213000,37.5
+1715593214000,37.7
+```
+
+When creating a new File Stream Adapter:
+- Upload the file
+- Select `yes` for `Replay Once`
+- Choose `CSV` as the `Format` with `,` as the `delimiter`, check `Header`
+
+After creating the adapter, it will output one line of the CSV as an event every second.
+Further details on configuration options are provided below.
+
+---
+
+## Configuration
+
+### File
+
+This section determines the file to be streamed by the adapter. Options include:
+
+- `Choose existing file`: Select from files already present in StreamPipes.
+- `Upload new file`: Upload a new file, which then also becomes available for other adapters. Supports `.csv`, `.json`, and `.xml` file types.
+
+### Overwrite file time
+Enable this option to always pass the current system time as the timestamp when emitting an event. If your file lacks timestamp information, this should be enabled. Conversely, if your file has timestamp information, enabling this option will overwrite it with the current system time. By default, this option is disabled, leaving timestamp information unaffected.
+
+### Replay Once
+Determines whether all data contained in the file is replayed only once or in a loop until the adapter is manually stopped.
+When replaying in a loop, events from the file are emitted multiple times. In this case, it is recommended to enable `Overwrite file time` if the resulting stream is to be persisted in StreamPipes, otherwise existing events with the same timestamp will be overwritten.
+
+### Replay Speed
+
+Configures the event frequency:
+- **Keep original time**: Events are emitted based on the timestamp information in the file.
+- **Fastest**: All data in the file is replayed as quickly as possible, with no waiting time.
+- **Speed Up Factor**: Adjusts the waiting time of the adapter based on the provided speed up factor, considering the time between two events in the file.
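+
+For example, with a speed-up factor of 2.0 and events recorded 1,000 ms apart in the file, the adapter waits roughly 500 ms between two emitted events.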
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.protocol.stream.http.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.connect.iiot.protocol.stream.http.md
similarity index 83%
rename from website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.protocol.stream.http.md
rename to website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.connect.iiot.protocol.stream.http.md
index ae603e97f..e24df3a09 100644
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.protocol.stream.http.md
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.connect.iiot.protocol.stream.http.md
@@ -1,8 +1,7 @@
---
-id: org.apache.streampipes.connect.protocol.stream.http
+id: org.apache.streampipes.connect.iiot.protocol.stream.http
title: HTTP Stream
sidebar_label: HTTP Stream
-original_id: org.apache.streampipes.connect.protocol.stream.http
---
+
+
+
+
+
+
+
+***
+
+## Description
+
+Consumes events from a NATS broker.
+
+***
+
+## Configuration
+
+### NATS Subject
+
+The subject (topic) from which events should be received. Currently, when using wildcard subjects, all messages need to have the same format.
+
+### NATS Broker URL
+
+The URL to connect to the NATS broker. Multiple URLs can be provided, separated by commas (,)
+(e.g., `nats://localhost:4222,nats://localhost:4223`).
+
+### Username
+
+The username to authenticate the client with the NATS broker.
+
+This configuration is optional.
+
+### Password
+
+The password to authenticate the client with the NATS broker.
+
+This configuration is optional.
+
+### NATS Connection Properties
+
+Any other connection configuration that the NATS client can be created with.
+These can be provided as key-value pairs separated by colons (:) and commas (,)
+(e.g., `io.nats.client.reconnect.max:1, io.nats.client.timeout:1000`).
+
+This configuration is optional.
+
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.protocol.stream.pulsar.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.connect.iiot.protocol.stream.pulsar.md
similarity index 83%
rename from website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.protocol.stream.pulsar.md
rename to website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.connect.iiot.protocol.stream.pulsar.md
index 0796346fe..f9adc56ce 100644
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.connect.protocol.stream.pulsar.md
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.connect.iiot.protocol.stream.pulsar.md
@@ -1,8 +1,7 @@
---
-id: org.apache.streampipes.connect.protocol.stream.pulsar
+id: org.apache.streampipes.connect.iiot.protocol.stream.pulsar
title: Apache Pulsar
sidebar_label: Apache Pulsar
-original_id: org.apache.streampipes.connect.protocol.stream.pulsar
---
-
+
@@ -50,4 +49,4 @@ Describe the configuration parameters here
Field that contains the image.
-## Output
+## Output
\ No newline at end of file
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processor.imageclassification.jvm.image-cropper.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processor.imageclassification.jvm.image-cropper.md
similarity index 88%
rename from website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processor.imageclassification.jvm.image-cropper.md
rename to website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processor.imageclassification.jvm.image-cropper.md
index c4f96577b..e2cd6b0e6 100644
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processor.imageclassification.jvm.image-cropper.md
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processor.imageclassification.jvm.image-cropper.md
@@ -2,7 +2,6 @@
id: org.apache.streampipes.processor.imageclassification.jvm.image-cropper
title: Image Cropper
sidebar_label: Image Cropper
-original_id: org.apache.streampipes.processor.imageclassification.jvm.image-cropper
---
-
+
@@ -38,7 +37,7 @@ Image Enrichment: Crops an + image based on + given bounding box coordinates
## Required input
An image and an array with bounding boxes.
-A box consists of the x and y coordinates in the image as well as the height and width
+A box consists of the x and y coordinates in the image as well as the height and width
## Output
-A new event for each box containing the cropped image
+A new event for each box containing the cropped image
\ No newline at end of file
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processor.imageclassification.jvm.image-enricher.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processor.imageclassification.jvm.image-enricher.md
similarity index 86%
rename from website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processor.imageclassification.jvm.image-enricher.md
rename to website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processor.imageclassification.jvm.image-enricher.md
index e008be9ec..8a09f3ae8 100644
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processor.imageclassification.jvm.image-enricher.md
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processor.imageclassification.jvm.image-enricher.md
@@ -2,7 +2,6 @@
id: org.apache.streampipes.processor.imageclassification.jvm.image-enricher
title: Image Enricher
sidebar_label: Image Enricher
-original_id: org.apache.streampipes.processor.imageclassification.jvm.image-enricher
---
@@ -35,32 +34,39 @@ original_id: org.apache.streampipes.processors.changedetection.jvm.cusum
## Description
-Performs change detection on a single dimension of the incoming data stream. A change is detected if the cumulative deviation from the mean exceeds a certain threshold. This implementation tracks the mean and the standard deviation using Welford's algorithm, which is well suited for data streams.
+Performs change detection on a single dimension of the incoming data stream. This implementation tracks the mean and the
+standard deviation using Welford's algorithm, which is well suited for data streams. A change is detected if the
+cumulative deviation from the mean exceeds a certain threshold.
***
## Required input
-The cusum processor requires a data stream that has at least one field containing a numerical value.
+The Welford change detection processor requires a data stream that has at least one field containing a numerical value.
***
## Configuration
### Value to observe
-Specify the dimension of the data stream (e.g. the temperature) on which to perform change detection.
+
+Specify the dimension of the data stream (e.g. the temperature) on which to perform change detection.
### Parameter `k`
-`k` controls the sensitivity of the change detector. Its unit are standard deviations. For an observation `x_n`, the Cusum value is `S_n = max(0, S_{n-1} - z-score(x_n) - k)`. Thus, the cusum-score `S` icnreases if `S_{n-1} - z-score(x_n) > k`.
+
+`k` controls the sensitivity of the change detector. Its unit is standard deviations. For an observation `x_n`, the
+Cusum value is `S_n = max(0, S_{n-1} - z-score(x_n) - k)`. Thus, the cusum score `S` increases
+if `S_{n-1} - z-score(x_n) > k`.
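+
+A minimal sketch of how Welford's algorithm maintains the running mean and standard deviation used for the z-score (illustrative, not the processor's actual code):
+
+```java
+// Welford's online algorithm: numerically stable running mean/variance.
+class Welford {
+  private long n = 0;
+  private double mean = 0.0;
+  private double m2 = 0.0; // sum of squared deviations from the current mean
+
+  void update(double x) {
+    n++;
+    double delta = x - mean;
+    mean += delta / n;
+    m2 += delta * (x - mean);
+  }
+
+  double std() { return n > 1 ? Math.sqrt(m2 / (n - 1)) : 0.0; }
+
+  double zScore(double x) {
+    double s = std();
+    return s == 0.0 ? 0.0 : (x - mean) / s;
+  }
+}
+```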
### Parameter `h`
-The alarm theshold in standard deviations. An alarm occurs if `S_n > h`
+
+The alarm threshold in standard deviations. An alarm occurs if `S_n > h`.
## Output
-This processor outputs the original data stream plus
+This processor outputs the original data stream plus
-- `cusumLow`: The cusum value for negative changes
-- `cusumHigh`: The cusum value for positive changes
+- `cumSumLow`: The cumulative sum value for negative changes
+- `cumSumHigh`: The cumulative sum value for positive changes
- `changeDetectedLow`: Boolean indicating if a negative change was detected
-- `changeDetectedHigh`: Boolean indicating if a positive change was detected
+- `changeDetectedHigh`: Boolean indicating if a positive change was detected
\ No newline at end of file
diff --git a/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.enricher.jvm.jseval.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.enricher.jvm.jseval.md
new file mode 100644
index 000000000..3fe018306
--- /dev/null
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.enricher.jvm.jseval.md
@@ -0,0 +1,55 @@
+---
+id: org.apache.streampipes.processors.enricher.jvm.jseval
+title: JavaScript Eval
+sidebar_label: JavaScript Eval
+---
+
+
+
+
+
+
+
+
+
+***
+
+## Description
+A pipeline element that allows writing user defined JavaScript function to enrich events.
+
+***
+
+## Required input
+This processor does not have any specific input requirements.
+
+***
+
+## Configuration
+Users can specify their custom enrichment logic within the `process` method. Please note that the `process` function
+must have the following format and must return a map of data that is compatible with the output schema.
+```javascript
+ function process(event) {
+ // do processing here.
+ // return a map with fields that matched defined output schema.
+ return {id: event.id, tempInCelsius: (event.tempInKelvin - 273.15)};
+ }
+```
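+
+For instance, assuming the function above and an input event like the first snippet below, the processor would emit the second (values are illustrative):
+
+```json
+{ "id": "sensor-1", "tempInKelvin": 300.15 }
+```
+
+```json
+{ "id": "sensor-1", "tempInCelsius": 27.0 }
+```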
+
+## Output
+A new event with the user defined output schema.
\ No newline at end of file
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.enricher.flink.processor.math.mathop.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.enricher.jvm.processor.math.mathop.md
similarity index 85%
rename from website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.enricher.flink.processor.math.mathop.md
rename to website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.enricher.jvm.processor.math.mathop.md
index e619125c7..859708971 100644
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.enricher.flink.processor.math.mathop.md
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.enricher.jvm.processor.math.mathop.md
@@ -1,8 +1,7 @@
---
-id: org.apache.streampipes.processors.enricher.flink.processor.math.mathop
+id: org.apache.streampipes.processors.enricher.jvm.processor.math.mathop
title: Math
sidebar_label: Math
-original_id: org.apache.streampipes.processors.enricher.flink.processor.math.mathop
---
-
+
***
## Description
-Appends the current time in ms to the event payload.
+
+This processing element detects when a numeric property changes from one configured value to another.
***
## Required input
-The timestamp enricher works with any input event.
+The required input is a number.
***
## Configuration
+Value of last event (example: 0)
+
+Value of current event (example: 5)
-(no further configuration required)
## Output
-This processor appends the current system time to every input event.
+A boolean value is returned when the input changes.
\ No newline at end of file
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.filters.jvm.compose.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.filters.jvm.compose.md
similarity index 93%
rename from website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.filters.jvm.compose.md
rename to website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.filters.jvm.compose.md
index c02b529f8..8cb669890 100644
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.filters.jvm.compose.md
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.filters.jvm.compose.md
@@ -2,7 +2,6 @@
id: org.apache.streampipes.processors.filters.jvm.compose
title: Compose
sidebar_label: Compose
-original_id: org.apache.streampipes.processors.filters.jvm.compose
---
-
***
+
## Description
-Triggers an event when the input data stream stops sending events
+Smooths the data stream by the mean/median of the last n values.
***
## Required input
-
-Does not have any specific input requirements.
-
+A numerical field is required.
***
## Configuration
-
-### Time Window Length (Seconds)
-
-Specifies the size of the time window in seconds.
+### N Value
+Specifies the number of previous data points which are used to smooth the data.
+### Method
+Specifies the method which is used to smooth the data. Choose between mean and median.
## Output
-
-Outputs a similar event like below.
-
-```
-{
- 'timestamp': 1621243855401,
- 'message': 'Event stream has stopped'
-}
-```
\ No newline at end of file
+Appends a field with the smoothed data.
\ No newline at end of file
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.filters.jvm.numericalfilter.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.filters.jvm.numericalfilter.md
similarity index 95%
rename from website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.filters.jvm.numericalfilter.md
rename to website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.filters.jvm.numericalfilter.md
index 82b4c5b7c..55c320801 100644
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.filters.jvm.numericalfilter.md
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.filters.jvm.numericalfilter.md
@@ -2,7 +2,6 @@
id: org.apache.streampipes.processors.filters.jvm.numericalfilter
title: Numerical Filter
sidebar_label: Numerical Filter
-original_id: org.apache.streampipes.processors.filters.jvm.numericalfilter
---
+
+
+
+
+
+
+
+
+***
+
+## Description
+
+The **Swinging Door Trending (SDT)** algorithm is a linear trend compression algorithm.
+In essence, it replaces a series of continuous `(timestamp, value)` points with a straight line determined by the start and end points.
+
+The **Swinging Door Trending (SDT) Filter Processor** can extract and forward the characteristic events of the original stream.
+In general, this filter can also be used to reduce the frequency of original data in a lossy way.
+
+***
+
+## Required Inputs
+
+The processor works with any input event that has **one field containing a timestamp** and
+**one field containing a numerical value**.
+
+***
+
+## Configuration
+
+### Timestamp Field
+Specifies the name of the timestamp field to which the SDT algorithm should be applied.
+
+### Value Field
+Specifies the name of the value field to which the SDT algorithm should be applied.
+
+### Compression Deviation
+**Compression Deviation** is the most important parameter in SDT that represents the maximum difference
+between the current sample and the current linear trend.
+
+**Compression Deviation** needs to be greater than 0 to perform compression.
+
+### Compression Minimum Time Interval
+**Compression Minimum Time Interval** is a parameter that measures the time distance between two stored data points
+and is used for noise reduction.
+
+If the time interval between the current point and the last stored point is less than or equal to its value, the
+current point will NOT be stored, regardless of the compression deviation.
+
+The default value is `0` with time unit ms.
+
+### Compression Maximum Time Interval
+**Compression Maximum Time Interval** is a parameter that measures the time distance between two stored data points.
+
+If the time interval between the current point and the last stored point is greater than or equal to its value, the
+current point will be stored, regardless of the compression deviation.
+
+The default value is `9,223,372,036,854,775,807` (`Long.MAX_VALUE`), with time unit ms.
+
+***
+
+## Output
+The characteristic event stream forwarded by the SDT filter.
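+
+A compact sketch of the core swinging-door logic (simplified: it ignores the minimum/maximum time interval handling and stores the current point when the doors close, which real implementations refine):
+
+```java
+// Simplified Swinging Door Trending: forward a point only when it can no
+// longer be represented by a line through the last stored point +/- E.
+class SdtSketch {
+  private final double e; // compression deviation (> 0)
+  private long t0;
+  private double v0;
+  private double upper = Double.NEGATIVE_INFINITY; // upper door slope
+  private double lower = Double.POSITIVE_INFINITY; // lower door slope
+  private boolean initialized = false;
+
+  SdtSketch(double compressionDeviation) { this.e = compressionDeviation; }
+
+  /** Returns true if (t, v) is a characteristic point and should be forwarded. */
+  boolean accept(long t, double v) {
+    if (!initialized) { store(t, v); return true; }
+    double dt = t - t0;
+    upper = Math.max(upper, (v - v0 - e) / dt);
+    lower = Math.min(lower, (v - v0 + e) / dt);
+    if (upper > lower) { // the two doors have swung shut: start a new segment
+      store(t, v);
+      return true;
+    }
+    return false;
+  }
+
+  private void store(long t, double v) {
+    t0 = t; v0 = v;
+    upper = Double.NEGATIVE_INFINITY;
+    lower = Double.POSITIVE_INFINITY;
+    initialized = true;
+  }
+}
+```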
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.filters.jvm.textfilter.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.filters.jvm.textfilter.md
similarity index 94%
rename from website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.filters.jvm.textfilter.md
rename to website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.filters.jvm.textfilter.md
index c6e5fa9ce..ce5c254b6 100644
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.filters.jvm.textfilter.md
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.filters.jvm.textfilter.md
@@ -2,7 +2,6 @@
id: org.apache.streampipes.processors.filters.jvm.textfilter
title: Text Filter
sidebar_label: Text Filter
-original_id: org.apache.streampipes.processors.filters.jvm.textfilter
---
+
+
+
+
+
+
+
+***
+
+## Description
+
+Creates a buffer polygon geometry from a geometry
+***
+
+## Required inputs
+
+* JTS Geometry
+* EPSG Code
+* Distance
+* Cap Style
+* Join Style
+* Mitre-Limit
+* Side
+* Simplify Factor
+* Quadrant Segments
+***
+
+## Configuration
+
+### Geometry field
+Input Geometry
+
+### EPSG field
+Integer value representing EPSG code
+
+### Distance
+The buffer distance around the geometry in meters
+
+### Cap Style
+Defines the endcap style of the buffer.
+* CAP_ROUND - the usual round end caps
+* CAP_FLAT - end caps are truncated flat at the line ends
+* CAP_SQUARE - end caps are squared off at the buffer distance beyond the line ends
+
+### Simplify Factor
+The default simplify factor provides an accuracy of about 1%, which matches the accuracy of the
+default Quadrant Segments parameter.
+
+### Quadrant Segments
+The default number of facets into which to divide a fillet of 90 degrees.
+
+### Join Style
+Defines the corners in a buffer
+* JOIN_ROUND - the usual round join
+* JOIN_MITRE - corners are "sharp" (up to a distance limit)
+* JOIN_BEVEL - corners are beveled (clipped off).
+
+### Mitre-Limit
+Mitre ratio limit (only affects mitered join style)
+
+### Side
+`left` or `right` performs a single-sided buffer on the geometry, with the buffered side
+relative to the direction of the line or polygon.
+
+***
+
+## Output
+A polygon geometry with EPSG code. Shape is defined by input parameters.
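+
+A minimal sketch of how such a buffer can be computed with the JTS API (illustrative; the processor's actual parameter wiring may differ):
+
+```java
+import org.locationtech.jts.geom.Coordinate;
+import org.locationtech.jts.geom.Geometry;
+import org.locationtech.jts.geom.GeometryFactory;
+import org.locationtech.jts.operation.buffer.BufferOp;
+import org.locationtech.jts.operation.buffer.BufferParameters;
+
+public class BufferSketch {
+  public static void main(String[] args) {
+    Geometry point = new GeometryFactory().createPoint(new Coordinate(8.12, 41.23));
+    // quadrant segments, end cap style, join style, mitre limit
+    BufferParameters params = new BufferParameters(
+        8, BufferParameters.CAP_ROUND, BufferParameters.JOIN_ROUND, 5.0);
+    Geometry buffer = BufferOp.bufferOp(point, 10.0, params); // distance in CRS units
+    System.out.println(buffer.getGeometryType()); // Polygon
+  }
+}
+```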
+
+
+### Example
+
diff --git a/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.geo.jvm.jts.processor.bufferpoint.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.geo.jvm.jts.processor.bufferpoint.md
new file mode 100644
index 000000000..04346ce85
--- /dev/null
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.geo.jvm.jts.processor.bufferpoint.md
@@ -0,0 +1,82 @@
+---
+id: org.apache.streampipes.processors.geo.jvm.jts.processor.bufferpoint
+title: Geo Buffer Point
+sidebar_label: Geo Buffer Point
+---
+
+
+
+
+
+
+
+
+
+***
+
+## Description
+
+Creates a buffer polygon geometry from a point geometry
+***
+
+## Required inputs
+
+* JTS Geometry
+* EPSG Code
+* Distance
+* Cap Style
+* Simplify Factor
+* Quadrant Segments
+***
+
+## Configuration
+
+### Geometry Field
+Input Point Geometry
+
+### EPSG field
+Integer value representing EPSG code
+
+### Distance
+The buffer distance around the geometry in meters
+
+### Cap Style
+Defines the endcap style of the buffer.
+CAP_ROUND - the usual round end caps
+CAP_SQUARE - end caps are squared off at the buffer distance beyond the line ends
+
+
+### Simplify Factor
+The default simplify factor provides an accuracy of about 1%, which matches the accuracy of the
+default Quadrant Segments parameter.
+
+### Quadrant Segments
+The default number of facets into which to divide a fillet of 90 degrees.
+
+***
+
+## Output
+A polygon geometry with EPSG code. Shape is defined by input parameters.
+
+
+
+
+
+### Example
+
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.geo.jvm.jts.processor.setEPSG.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.geo.jvm.jts.processor.epsg.md
similarity index 51%
rename from website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.geo.jvm.jts.processor.setEPSG.md
rename to website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.geo.jvm.jts.processor.epsg.md
index 5a61d273e..294124c81 100644
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.geo.jvm.jts.processor.setEPSG.md
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.geo.jvm.jts.processor.epsg.md
@@ -1,8 +1,7 @@
---
-id: org.apache.streampipes.processors.geo.jvm.jts.processor.setEPSG
-title: EPSG Code
-sidebar_label: EPSG Code
-original_id: org.apache.streampipes.processors.geo.jvm.jts.processor.setEPSG
+id: org.apache.streampipes.processors.geo.jvm.jts.processor.epsg
+title: Geo EPSG Code
+sidebar_label: Geo EPSG Code
---
LineString (empty)
+ * Point(8.12 41.23) --> LineString(empty)
* Second Event:
* Point(8.56 41.25) --> LineString(8.12 41.23, 8.56 41.25)
* Second Event:
diff --git a/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.geo.jvm.jts.processor.validation.complex.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.geo.jvm.jts.processor.validation.complex.md
new file mode 100644
index 000000000..9cd2db4b2
--- /dev/null
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.geo.jvm.jts.processor.validation.complex.md
@@ -0,0 +1,86 @@
+---
+id: org.apache.streampipes.processors.geo.jvm.jts.processor.validation.complex
+title: Geo Geometry Topology Validation Filter
+sidebar_label: Geo Geometry Topology Validation Filter
+---
+
+
+
+
+
+
+
+
+
+***
+
+## Description
+Validates the geometry against the topology errors defined by JTS:
+
+* **HOLE_OUTSIDE_SHELL**: Indicates that a hole of a polygon lies partially or completely in the exterior of the shell
+* **NESTED_HOLES**: Indicates that a hole lies in the interior of another hole in the same polygon
+* **DISCONNECTED_INTERIOR**: Indicates that the interior of a polygon is disjoint (often caused by set of contiguous holes splitting the polygon into two parts)
+* **SELF_INTERSECTION**: Indicates that two rings of a polygonal geometry intersect
+* **RING_SELF_INTERSECTION**: Indicates that a ring self-intersects
+* **NESTED_SHELLS**: Indicates that a polygon component of a MultiPolygon lies inside another polygonal component
+* **DUPLICATE_RINGS**: Indicates that a polygonal geometry contains two rings which are identical
+* **TOO_FEW_POINTS**: Indicates that either a LineString contains a single point or a LinearRing contains 2 or 3 points
+* **RING_NOT_CLOSED**: Indicates that a ring is not correctly closed (the first and the last coordinate are different)
+
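+A minimal sketch of how such a check can be expressed with the JTS API (illustrative only):
+
+```java
+import org.locationtech.jts.geom.Geometry;
+import org.locationtech.jts.io.WKTReader;
+import org.locationtech.jts.operation.valid.IsValidOp;
+import org.locationtech.jts.operation.valid.TopologyValidationError;
+
+public class TopologyValidationSketch {
+  public static void main(String[] args) throws Exception {
+    // A "bowtie" polygon whose ring crosses itself
+    Geometry g = new WKTReader().read("POLYGON ((0 0, 2 2, 2 0, 0 2, 0 0))");
+    TopologyValidationError error = new IsValidOp(g).getValidationError();
+    if (error != null) {
+      System.out.println(error.getMessage()); // e.g., a ring self-intersection
+    }
+  }
+}
+```
+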
+
+***
+
+## Required inputs
+
+* JTS Geometry
+* EPSG Code
+* Validation Type
+* Log Output Option
+
+
+***
+
+## Configuration
+
+### Point Geometry Field
+Input Point Geometry
+
+### EPSG field
+Integer value representing EPSG code
+
+### Validation Output
+Choose the output result of the filter.
+* Valid - all valid events are passed through
+* Invalid - all invalid events are passed through
+
+
+### Log Output Option
+Option to activate log output to the pipeline logger window with a detailed reason why the geometry is invalid.
+
+
+***
+
+### Default Validation Checks
+
+## Output
+
+All events that match the validation output.
+
+### Example
diff --git a/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.geo.jvm.jts.processor.validation.simple.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.geo.jvm.jts.processor.validation.simple.md
new file mode 100644
index 000000000..21c77b1b1
--- /dev/null
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.geo.jvm.jts.processor.validation.simple.md
@@ -0,0 +1,80 @@
+---
+id: org.apache.streampipes.processors.geo.jvm.jts.processor.validation.simple
+title: Geo Geometry Validation Filter
+sidebar_label: Geo Geometry Validation Filter
+---
+
+
+
+
+
+
+
+
+
+***
+
+## Description
+
+Checks whether the event's geometry is simple and/or empty.
+
+***
+
+## Required inputs
+
+* JTS Geometry
+* EPSG Code
+* Validation Type
+* Validation Output
+
+
+***
+
+## Configuration
+
+Validates the geometry against different validation categories.
+
+
+### Point Geometry Field
+Input Point Geometry
+
+### EPSG field
+Integer value representing EPSG code
+
+### Validation Type
+* IsEmpty - Geometry is empty.
+* IsSimple - Geometry is simple. The SFS definition of simplicity follows the general rule that a Geometry is simple if it has no points of self-tangency, self-intersection or other anomalous points.
+ * Valid polygon geometries are simple, since their rings must not self-intersect.
+ * Linear rings have the same semantics.
+ * Linear geometries are simple if they do not self-intersect at points other than boundary points.
+ * Zero-dimensional geometries (points) are simple if they have no repeated points.
+ * Empty Geometries are always simple!
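+
+These checks correspond to the JTS predicates `isEmpty()` and `isSimple()`; a minimal sketch (illustrative only):
+
+```java
+import org.locationtech.jts.geom.Geometry;
+import org.locationtech.jts.io.WKTReader;
+
+public class SimpleValidationSketch {
+  public static void main(String[] args) throws Exception {
+    // A line that crosses itself at (1, 1): not empty, not simple
+    Geometry g = new WKTReader().read("LINESTRING (0 0, 2 2, 0 2, 2 0)");
+    System.out.println(g.isEmpty());  // IsEmpty validation -> false
+    System.out.println(g.isSimple()); // IsSimple validation -> false
+  }
+}
+```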
+
+### Validation Output
+Choose the output result of the filter.
+* Valid - all valid events are passed through
+* Invalid - all invalid events are passed through
+
+***
+
+## Output
+
+All events that match the validation output.
+
+### Example
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.geo.jvm.processor.distancecalculator.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.geo.jvm.latlong.processor.distancecalculator.haversine.md
similarity index 79%
rename from website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.geo.jvm.processor.distancecalculator.md
rename to website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.geo.jvm.latlong.processor.distancecalculator.haversine.md
index 8a86104e7..075dd23d0 100644
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.geo.jvm.processor.distancecalculator.md
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.geo.jvm.latlong.processor.distancecalculator.haversine.md
@@ -1,8 +1,7 @@
---
-id: org.apache.streampipes.processors.geo.jvm.processor.distancecalculator
-title: Distance Calculator
-sidebar_label: Distance Calculator
-original_id: org.apache.streampipes.processors.geo.jvm.processor.distancecalculator
+id: org.apache.streampipes.processors.geo.jvm.latlong.processor.distancecalculator.haversine
+title: Geo Distance Calculator (Haversine)
+sidebar_label: Geo Distance Calculator (Haversine)
---
+
+
+
+
+
+
+
+***
+
+## Overview
+
+The "Datetime From String" processor is a handy tool that helps convert human-readable datetime information into a
+format that machines can understand. This is particularly useful when dealing with data that includes dates and times.
+
+### Why Use This Processor?
+
+In the context of event streams, you may encounter dates and times formatted for human readability but not necessarily
+optimized for computer processing. The "Datetime From String" processor addresses this by facilitating the conversion
+of human-readable datetime information within your continuous stream of events.
+
+***
+
+## How It Works
+
+When you input a data stream into this processor containing a datetime in a specific format (such as
+"2023-11-24 15:30:00"), it undergoes a transformation. The processor converts it into a computer-friendly
+format called a ZonedDateTime object.
+
+### Example
+
+Let's say you have an event stream with a property containing values like "2023-11-24 15:30:00" and you want to make
+sure your computer understands it. You can use
+this processor to convert it into a format that's machine-friendly.
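+
+The conversion is conceptually similar to the following `java.time` snippet (a sketch with an assumed pattern and zone, not the processor's actual code):
+
+```java
+import java.time.LocalDateTime;
+import java.time.ZoneId;
+import java.time.ZonedDateTime;
+import java.time.format.DateTimeFormatter;
+
+public class DatetimeSketch {
+  public static void main(String[] args) {
+    DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
+    LocalDateTime local = LocalDateTime.parse("2023-11-24 15:30:00", fmt);
+    // The configured time zone supplies the missing zone information
+    ZonedDateTime zoned = local.atZone(ZoneId.of("Europe/Berlin"));
+    long timestringInMillis = zoned.toInstant().toEpochMilli();
+    System.out.println(timestringInMillis); // UNIX timestamp in milliseconds
+  }
+}
+```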
+
+***
+
+## Getting Started
+
+To use this processor, you need one thing in your data:
+
+1. **Datetime String**: This is the name of the event property that contains the human-readable datetime string, like "2023-11-24 15:30:00".
+
+
+### Configuration
+
+The only thing you need to configure is the time zone.
+1. **Time Zone**: Specify the time zone that applies to your datetime if it doesn't already have this information.
+This ensures that the processor understands the context of your datetime.
+
+## Output
+
+After the conversion happens, the processor adds a new piece of information to your data stream:
+
+* **timestringInMillis**: This is the transformed datetime in a format that computers can easily work with (UNIX timestamp in milliseconds).
+* **timeZone**: The name of the timezone the `dateTime` value refers to. Can be used to reconstitute the actual time.
\ No newline at end of file
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.transformation.jvm.duration-value.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.transformation.jvm.duration-value.md
similarity index 90%
rename from website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.transformation.jvm.duration-value.md
rename to website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.transformation.jvm.duration-value.md
index 3ed23c240..0d1a66b2e 100644
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.transformation.jvm.duration-value.md
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.transformation.jvm.duration-value.md
@@ -2,7 +2,6 @@
id: org.apache.streampipes.processors.transformation.jvm.duration-value
title: Calculate Duration
sidebar_label: Calculate Duration
-original_id: org.apache.streampipes.processors.transformation.jvm.duration-value
---
+
+
+Enrich a data stream by dynamically adding fields based on user-provided static metadata configuration.
+
+---
+
+## Description
+
+The Static Metadata Enricher is designed to enrich a data stream by dynamically adding fields based on user-provided
+metadata configuration. Users can specify static properties, and the processor will process each event, adding fields
+according to the provided key-value pairs. The output strategy is determined dynamically based on the provided metadata.
+For added convenience, users also have the option of uploading a CSV file with metadata information.
+
+### Configuration
+
+For each metadata entry, configure the following three options:
+
+- **Runtime Name:** A unique identifier for the property during runtime.
+- **Value:** The value associated with the property.
+- **Data Type:** The data type of the property value.
+
+#### Using CSV Option
+
+Alternatively, you can utilize the CSV upload feature by creating a CSV file with the following format:
+
+```
+Runtime Name,Runtime Value,Data Type
+sensorType,Temperature,String
+maxSensorValue,100.0,Float
+minSensorValue,0,Float
+```
+
+## Example
+### Input Event
+
+```json
+{
+ "reading": 25.5
+}
+```
+
+### Output Event
+
+```json
+{
+ "reading": 25.5,
+ "sensorType": "Temperature",
+ "maxSensorValue": 100.0,
+ "minSensorValue": 0.0
+}
+```
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.transformation.jvm.processor.stringoperator.state.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.transformation.jvm.processor.stringoperator.state.md
similarity index 93%
rename from website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.transformation.jvm.processor.stringoperator.state.md
rename to website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.transformation.jvm.processor.stringoperator.state.md
index 5f533eb01..02fbfa2cb 100644
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.transformation.jvm.processor.stringoperator.state.md
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.transformation.jvm.processor.stringoperator.state.md
@@ -2,7 +2,6 @@
id: org.apache.streampipes.processors.transformation.jvm.processor.stringoperator.state
title: String To State
sidebar_label: String To State
-original_id: org.apache.streampipes.processors.transformation.jvm.processor.stringoperator.state
---
+
+
+
+
+
+
+
+***
+
+## Description
+
+This processor rounds numeric values to the given decimal places.
+It supports multiple rounding strategies.
+
+***
+
+## Required input
+
+This processor requires an event that provides numerical properties.
+
+***
+
+## Configuration
+
+### Fields to Be Rounded
+
+Select which fields of the event should be rounded.
+
+### Number of Digits
+
+Specify the number of digits after the decimal point to round/keep, e.g., if number is 2.8935 and 'digits' is 3,
+the result will be 2.894.
+
+### Mode of Rounding
+
+Specify the mode of rounding.
+Supported rounding modes:
+* `UP`: Rounding mode to round away from zero. Always increments the digit prior to a non-zero discarded fraction. Note that this rounding mode never decreases the magnitude of the calculated value.
+* `DOWN`: Rounding mode to round towards zero. Never increments the digit prior to a discarded fraction (i.e., truncates). Note that this rounding mode never increases the magnitude of the calculated value.
+* `CEILING`: Rounding mode to round towards positive infinity. If the result is positive, behaves as for `UP`; if negative, behaves as for `DOWN`. Note that this rounding mode never decreases the calculated value
+* `FLOOR`: Rounding mode to round towards negative infinity. If the result is positive, behave as for `DOWN`; if negative, behave as for `UP`. Note that this rounding mode never increases the calculated value.
+* `HALF_UP`: Rounding mode to round towards "nearest neighbor" unless both neighbors are equidistant, in which case round up. Behaves as for `UP` if the discarded fraction is ≥ 0.5; otherwise, behaves as for `DOWN`.
+* `HALF_DOWN`: Rounding mode to round towards "nearest neighbor" unless both neighbors are equidistant, in which case round down. Behaves as for `UP` if the discarded fraction is > 0.5; otherwise, behaves as for `DOWN`.
+* `HALF_EVEN`: Rounding mode to round towards the "nearest neighbor" unless both neighbors are equidistant, in which case, round towards the even neighbor. Behaves as for `HALF_UP` if the digit to the left of the discarded fraction is odd; behaves as for `HALF_DOWN` if it's even. Note that this is the rounding mode that statistically minimizes cumulative error when applied repeatedly over a sequence of calculations.
+
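+These semantics mirror Java's `java.math.RoundingMode`. As a minimal sketch (assuming the processor delegates to `BigDecimal`, which this documentation does not state), the behavior of the `HALF_*` and `FLOOR` modes can be reproduced as follows:
+
+```java
+import java.math.BigDecimal;
+import java.math.RoundingMode;
+
+public class RoundingDemo {
+    public static void main(String[] args) {
+        BigDecimal value = new BigDecimal("2.8935");
+        // The discarded fraction is exactly 5, so HALF_UP rounds up ...
+        System.out.println(value.setScale(3, RoundingMode.HALF_UP));   // 2.894
+        // ... while HALF_DOWN rounds down on the equidistant case.
+        System.out.println(value.setScale(3, RoundingMode.HALF_DOWN)); // 2.893
+        // The digit left of the discarded fraction (3) is odd, so HALF_EVEN acts like HALF_UP.
+        System.out.println(value.setScale(3, RoundingMode.HALF_EVEN)); // 2.894
+        // FLOOR never increases the value, for negative numbers as well.
+        System.out.println(new BigDecimal("-2.8935").setScale(3, RoundingMode.FLOOR)); // -2.894
+    }
+}
+```
+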
+## Output
+
+The output of this processor is the same event, with the fields selected by the ``Fields to Be Rounded`` parameter rounded
+to ``Number of Digits`` digits.
\ No newline at end of file
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.transformation.jvm.split-array.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.transformation.jvm.split-array.md
similarity index 88%
rename from website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.transformation.jvm.split-array.md
rename to website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.transformation.jvm.split-array.md
index 864ddd673..47e0b6d08 100644
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.processors.transformation.jvm.split-array.md
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.processors.transformation.jvm.split-array.md
@@ -2,7 +2,6 @@
id: org.apache.streampipes.processors.transformation.jvm.split-array
title: Split Array
sidebar_label: Split Array
-original_id: org.apache.streampipes.processors.transformation.jvm.split-array
---
+
+
+
+
+
+
+
+***
+
+## Description
+
+Publishes events to Apache RocketMQ.
+
+***
+
+## Required input
+
+This sink does not have any requirements and works with any incoming event type.
+
+***
+
+## Configuration
+
+### RocketMQ Endpoint
+
+The endpoint used to connect to the RocketMQ broker.
+
+
+### RocketMQ Topic
+
+The topic to which events should be sent.
+
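+For illustration only (both values below are assumptions, not defaults shipped with StreamPipes), a configuration pointing at a local broker could look like this, `9876` being RocketMQ's standard name-server port:
+
+```
+RocketMQ Endpoint: localhost:9876
+RocketMQ Topic:    sensor-events
+```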
+
+## Output
+
+(not applicable for data sinks)
\ No newline at end of file
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.sinks.databases.jvm.mysql.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.sinks.brokers.jvm.tubemq.md
similarity index 63%
rename from website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.sinks.databases.jvm.mysql.md
rename to website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.sinks.brokers.jvm.tubemq.md
index 28d208658..34dc0222f 100644
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.sinks.databases.jvm.mysql.md
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.sinks.brokers.jvm.tubemq.md
@@ -1,8 +1,7 @@
---
-id: org.apache.streampipes.sinks.databases.jvm.mysql
-title: MySQL Database
-sidebar_label: MySQL Database
-original_id: org.apache.streampipes.sinks.databases.jvm.mysql
+id: org.apache.streampipes.sinks.brokers.jvm.tubemq
+title: TubeMQ (InLong) Publisher
+sidebar_label: TubeMQ (InLong) Publisher
---
+***
-
+
-***
-
## Description
-This sink visualizes data streams in the StreamPipes dashboard.
-Visualizations can be configured in Live Dashboard of StreamPipes after the pipeline has been started.
+Sends a message to a connected WebSocket client.
***
@@ -46,7 +43,9 @@ This sink does not have any requirements and works with any incoming event type.
## Configuration
-No further configuration necessary, individual visualizations can be configured in the Dashboard itself.
+### Port
+
+The port on which the WebSocket server listens for connections.
## Output
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.sinks.databases.ditto.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.sinks.databases.ditto.md
similarity index 94%
rename from website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.sinks.databases.ditto.md
rename to website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.sinks.databases.ditto.md
index 8a7089a16..0726af127 100644
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.sinks.databases.ditto.md
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.sinks.databases.ditto.md
@@ -2,7 +2,6 @@
id: org.apache.streampipes.sinks.databases.ditto
title: Eclipse Ditto
sidebar_label: Eclipse Ditto
-original_id: org.apache.streampipes.sinks.databases.ditto
---
+
+
+
+
+
+
+
+***
+
+## Description
+
+Stores events in a Redis key-value store.
+
+***
+
+## Required input
+
+This sink does not have any requirements and works with any incoming event type.
+
+***
+
+## Configuration
+
+### Hostname
+The hostname of the Redis instance
+
+### Port
+The port of the Redis instance (default 6379)
+
+### Key Field
+Runtime field to be used as the key when storing the event. If auto-increment is enabled, this setting will be ignored.
+
+### Auto Increment
+Enabling this generates a sequential numeric key for every record inserted. Note: when enabled, the Key Field setting is ignored.
+
+### Expiration Time (Optional)
+The expiration time for a persisted event.
+
+### Password (Optional)
+The password for the Redis instance.
+
+### Connection Name (Optional)
+A connection name to assign for the current connection.
+
+### DB Index (Optional)
+Zero-based numeric index of the Redis database to use.
+
+### Max Active (Redis Pool) (Optional)
+The maximum number of connections that can be allocated from the pool.
+
+### Max Idle (Redis Pool) (Optional)
+The maximum number of connections that can remain idle in the pool.
+
+### Max Wait (Redis Pool) (Optional)
+The maximum number of milliseconds the caller will wait for a connection when none is available.
+
+### Max Timeout (Redis Pool) (Optional)
+The maximum time allowed for establishing a connection and for read/write operations.
+
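+The pool-related options correspond to standard connection-pool settings. As a minimal sketch of how they could map onto a Jedis pool (assuming the sink is built on Jedis, which this documentation does not state; all values are illustrative):
+
+```java
+import redis.clients.jedis.Jedis;
+import redis.clients.jedis.JedisPool;
+import redis.clients.jedis.JedisPoolConfig;
+
+public class RedisSinkSketch {
+    public static void main(String[] args) {
+        JedisPoolConfig poolConfig = new JedisPoolConfig();
+        poolConfig.setMaxTotal(8);           // Max Active
+        poolConfig.setMaxIdle(8);            // Max Idle
+        poolConfig.setMaxWaitMillis(2000);   // Max Wait
+
+        // Hostname, Port, Max Timeout, Password
+        try (JedisPool pool = new JedisPool(poolConfig, "localhost", 6379, 2000, "secret");
+             Jedis jedis = pool.getResource()) {
+            jedis.select(0);                             // DB Index
+            jedis.set("event-1", "{\"reading\":25.5}");  // Key Field -> event payload
+            jedis.expire("event-1", 60);                 // Expiration Time
+        }
+    }
+}
+```
+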
+## Output
+
+(not applicable for data sinks)
\ No newline at end of file
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.sinks.internal.jvm.datalake.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.sinks.internal.jvm.datalake.md
similarity index 55%
rename from website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.sinks.internal.jvm.datalake.md
rename to website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.sinks.internal.jvm.datalake.md
index aeea4aabd..44a091864 100644
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.sinks.internal.jvm.datalake.md
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.sinks.internal.jvm.datalake.md
@@ -2,7 +2,6 @@
id: org.apache.streampipes.sinks.internal.jvm.datalake
title: Data Lake
sidebar_label: Data Lake
-original_id: org.apache.streampipes.sinks.internal.jvm.datalake
---
+
+# MS Teams Sink
+
+
+
+
+
+---
+
+
+
+The MS Teams Sink is a StreamPipes data sink that sends messages to a Microsoft Teams channel
+through a Webhook URL. Whether you need to convey simple text messages or employ more advanced formatting with [Adaptive
+Cards](https://adaptivecards.io/), this sink provides a versatile solution for integrating StreamPipes with Microsoft Teams.
+
+---
+
+## Required input
+
+The MS Teams Sink does not have any specific requirements for incoming event types. It is designed to work seamlessly
+with any type of incoming event, making it a versatile choice for various use cases.
+
+---
+
+## Configuration
+
+#### Webhook URL
+
+To configure the MS Teams Sink, you need to provide the Webhook URL that enables the sink to send messages to a specific
+MS Teams channel. If you don't have a Webhook URL, you can learn how to create
+one [here](https://learn.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/add-incoming-webhook?tabs=dotnet#create-incoming-webhooks-1).
+
+#### Message Content Options
+
+You can choose between two message content formats:
+
+- **Simple Message Content:** Supports plain text and basic markdown formatting.
+- **Advanced Message Content:** Expects JSON input directly forwarded to Teams without modification. This format is
+ highly customizable and can be used for Adaptive Cards.
+
+Choose the format that best suits your messaging needs.
+
+#### Silent Period
+
+The *Silent Period* is the duration, expressed in minutes, during which notifications are temporarily disabled after one
+has been sent. This feature is implemented to prevent overwhelming the target with frequent notifications, avoiding
+potential spam behavior.
+
+---
+
+## Usage
+
+#### Simple Message Format
+
+In the simple message format, you can send plain text messages or utilize basic markdown formatting to convey
+information. This is ideal for straightforward communication needs.
+
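+For example, a simple message with basic markdown (illustrative content) could be:
+
+```
+**Temperature alert**: sensor *Temperature* reported a reading of 25.5
+```
+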
+#### Advanced Message Format
+
+For more sophisticated messaging requirements, the advanced message format allows you to send JSON content directly to
+Microsoft Teams without modification. This feature is especially powerful when used
+with [Adaptive Cards](https://learn.microsoft.com/en-us/adaptive-cards/), enabling interactive and dynamic content in
+your Teams messages.
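+
+As a sketch, an Adaptive Card payload of the kind Teams incoming webhooks accept could look like this (all field values are illustrative):
+
+```json
+{
+  "type": "message",
+  "attachments": [
+    {
+      "contentType": "application/vnd.microsoft.card.adaptive",
+      "content": {
+        "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
+        "type": "AdaptiveCard",
+        "version": "1.4",
+        "body": [
+          { "type": "TextBlock", "text": "Sensor alert", "weight": "Bolder" },
+          { "type": "TextBlock", "text": "reading: 25.5" }
+        ]
+      }
+    }
+  ]
+}
+```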
diff --git a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.sinks.notifications.jvm.onesignal.md b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.sinks.notifications.jvm.onesignal.md
similarity index 93%
rename from website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.sinks.notifications.jvm.onesignal.md
rename to website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.sinks.notifications.jvm.onesignal.md
index 6b3a5b595..0c4d6427d 100644
--- a/website-v2/versioned_docs/version-0.70.0/pe/org.apache.streampipes.sinks.notifications.jvm.onesignal.md
+++ b/website-v2/versioned_docs/version-0.95.1/pe/org.apache.streampipes.sinks.notifications.jvm.onesignal.md
@@ -2,7 +2,6 @@
id: org.apache.streampipes.sinks.notifications.jvm.onesignal
title: OneSignal
sidebar_label: OneSignal
-original_id: org.apache.streampipes.sinks.notifications.jvm.onesignal
---