From f576432954933f894af619d18242b8088ce5a873 Mon Sep 17 00:00:00 2001
From: Jason Fox
Date: Tue, 4 Jun 2024 10:58:21 +0200
Subject: [PATCH] Update Docker

---
 FIWARE Real-time Processing (Flink).postman_collection.json | 2 +-
 README.ja.md                                                | 2 +-
 README.md                                                   | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/FIWARE Real-time Processing (Flink).postman_collection.json b/FIWARE Real-time Processing (Flink).postman_collection.json
index 4e337a1..79c67df 100644
--- a/FIWARE Real-time Processing (Flink).postman_collection.json
+++ b/FIWARE Real-time Processing (Flink).postman_collection.json
@@ -2,7 +2,7 @@
     "info": {
         "_postman_id": "97dd54fb-6a22-436d-9f28-95e1e9fdb7e4",
         "name": "FIWARE Real-time Processing (Flink)",
-        "description": "[![FIWARE Core Context Management](https://nexus.lab.fiware.org/static/badges/chapters/core.svg)](https://github.com/FIWARE/catalogue/blob/master/processing/README.md)\n\nThis tutorial is an introduction to the [FIWARE Cosmos Orion Flink Connector](http://fiware-cosmos-flink.rtfd.io), which\nfacilitates Big Data analysis of context data through an integration with [Apache Flink](https://flink.apache.org/),\none of the most popular Big Data platforms. Apache Flink is a framework and distributed processing engine for stateful\ncomputations over both unbounded and bounded data streams. Flink has been designed to run in all common cluster\nenvironments and to perform computations at in-memory speed and at any scale.\n\nThe tutorial uses [cUrl](https://ec.haxx.se/) commands throughout, but is also available as\n[Postman documentation](https://fiware.github.io/tutorials.Big-Data-Flink/).\n\nThe `docker-compose` files for this tutorial can be found on GitHub:\n\n![GitHub](https://fiware.github.io/tutorials.Big-Data-Flink/icon/GitHub-Mark-32px.png) [https://github.com/FIWARE/tutorials.Big-Data-Flink](https://github.com/FIWARE/tutorials.Big-Data-Flink)\n\n\n# Real-time Processing and Big Data Analysis\n\n> \"Who controls the past controls the future: who controls the present controls the past.\"\n>\n> — George Orwell. \"1984\" (1949)\n\nSmart solutions based on FIWARE are architecturally designed around microservices. They are therefore designed to\nscale up from simple applications (such as the Supermarket tutorial) through to city-wide installations based on a large\narray of IoT sensors and other context data providers.\n\nThe massive amount of data involved eventually becomes too much for a single machine to analyse, process and store, and\ntherefore the work must be delegated to additional distributed services. These distributed systems form the basis of\nso-called **Big Data Analysis**. The distribution of tasks allows developers to extract insights\nfrom huge data sets which would be too complex to deal with using traditional methods, and to uncover hidden patterns\nand correlations.\n\nAs we have seen, context data is core to any Smart Solution, and the Context Broker is able to monitor changes of state\nand raise [subscription events](https://github.com/Fiware/tutorials.Subscriptions) as the context changes.
For smaller\ninstallations, each subscription event can be processed one-by-one by a single receiving endpoint; however, as the system\ngrows, another technique will be required to avoid overwhelming the listener, potentially blocking resources and missing\nupdates.\n\n**Apache Flink** is a Java/Scala-based stream-processing framework which enables the delegation of data-flow processes.\nTherefore, additional computational resources can be called upon to deal with data as events arrive. The **Cosmos Flink**\nconnector allows developers to write custom business logic to listen for context data subscription events and then process\nthe flow of the context data. Flink is able to delegate these actions to other workers, where they will be acted upon\neither sequentially or in parallel as required. The data flow processing itself can be arbitrarily complex.\n\nObviously, in reality, our existing Supermarket scenario is far too small to require the use of a Big Data solution, but\nit will serve as a basis for demonstrating the type of real-time processing which may be required in a larger solution\nwhich is processing a continuous stream of context-data events.\n\n# Architecture\n\nThis application builds on the components and dummy IoT devices created in\n[previous tutorials](https://github.com/FIWARE/tutorials.IoT-Agent/). It will make use of three FIWARE components - the\n[Orion Context Broker](https://fiware-orion.readthedocs.io/en/latest/), the\n[IoT Agent for Ultralight 2.0](https://fiware-iotagent-ul.readthedocs.io/en/latest/), and the\n[Cosmos Orion Flink Connector](https://fiware-cosmos-flink.readthedocs.io/en/latest/) for connecting Orion to an\n[Apache Flink cluster](https://ci.apache.org/projects/flink/flink-docs-stable/concepts/runtime.html). The Flink cluster\nitself will consist of a single **JobManager** _master_ to coordinate execution and a single **TaskManager** _worker_ to\nexecute the tasks.\n\nBoth the Orion Context Broker and the IoT Agent rely on open source [MongoDB](https://www.mongodb.com/) technology to\npersist the information they hold.
We will also be using the dummy IoT devices created in the\n[previous tutorial](https://github.com/FIWARE/tutorials.IoT-Agent/).\n\nTherefore the overall architecture will consist of the following elements:\n\n- Two **FIWARE Generic Enablers** as independent microservices:\n    - The FIWARE [Orion Context Broker](https://fiware-orion.readthedocs.io/en/latest/) which will receive requests\n      using [NGSI](https://fiware.github.io/specifications/OpenAPI/ngsiv2)\n    - The FIWARE [IoT Agent for Ultralight 2.0](https://fiware-iotagent-ul.readthedocs.io/en/latest/) which will\n      receive northbound measurements from the dummy IoT devices in\n      [Ultralight 2.0](https://fiware-iotagent-ul.readthedocs.io/en/latest/usermanual/index.html#user-programmers-manual)\n      format and convert them to [NGSI](https://fiware.github.io/specifications/OpenAPI/ngsiv2) requests for the\n      context broker to alter the state of the context entities\n- An [Apache Flink cluster](https://ci.apache.org/projects/flink/flink-docs-stable/concepts/runtime.html) consisting of a single **JobManager** and a single **TaskManager**\n    - The FIWARE [Cosmos Orion Flink Connector](https://fiware-cosmos-flink.readthedocs.io/en/latest/) will be deployed as part of the dataflow, which will\n      subscribe to context changes and perform operations on them in real time\n- One [MongoDB](https://www.mongodb.com/) **database**:\n    - Used by the **Orion Context Broker** to hold context data information such as data entities, subscriptions\n      and registrations\n    - Used by the **IoT Agent** to hold device information such as device URLs and keys\n- Three **Context Providers**:\n    - A webserver acting as a set of [dummy IoT devices](https://github.com/FIWARE/tutorials.IoT-Sensors) using the\n      [Ultralight 2.0](https://fiware-iotagent-ul.readthedocs.io/en/latest/usermanual/index.html#user-programmers-manual)\n      protocol running over HTTP.\n    - The **Stock Management Frontend** is not used in this tutorial. It does the following:\n        - Displays store information and allows users to interact with the dummy IoT devices\n        - Shows which products can be bought at each store\n        - Allows users to \"buy\" products and reduce the stock count.\n    - The **Context Provider NGSI** proxy is not used in this tutorial.
It does the following:\n        - Receives requests using [NGSI](https://fiware.github.io/specifications/OpenAPI/ngsiv2)\n        - Makes requests to publicly available data sources using their own APIs in a proprietary format\n        - Returns context data back to the Orion Context Broker in\n          [NGSI](https://fiware.github.io/specifications/OpenAPI/ngsiv2) format.\n\nThe overall architecture can be seen below:\n\n![](https://fiware.github.io/tutorials.Big-Data-Flink/img/architecture.png)\n\nSince all interactions between the elements are initiated by HTTP requests, the entities can be containerized and run\nfrom exposed ports.\n\nThe configuration information of the Apache Flink cluster can be seen in the `jobmanager` and `taskmanager` sections of\nthe associated `docker-compose.yml` file:\n\n## Flink Cluster Configuration\n\n```yaml\njobmanager:\n    image: flink:1.9.0-scala_2.11\n    hostname: jobmanager\n    container_name: flink-jobmanager\n    expose:\n        - \"8081\"\n        - \"9001\"\n    ports:\n        - \"6123:6123\"\n        - \"8081:8081\"\n        - \"9001:9001\"\n    command: jobmanager\n    environment:\n        - JOB_MANAGER_RPC_ADDRESS=jobmanager\n```\n\n```yaml\ntaskmanager:\n    image: flink:1.9.0-scala_2.11\n    hostname: taskmanager\n    container_name: flink-taskmanager\n    ports:\n        - \"6121:6121\"\n        - \"6122:6122\"\n    depends_on:\n        - jobmanager\n    command: taskmanager\n    links:\n        - \"jobmanager:jobmanager\"\n    environment:\n        - JOB_MANAGER_RPC_ADDRESS=jobmanager\n```\n\nThe `jobmanager` container is listening on three ports:\n\n- Port `8081` is exposed so we can see the web front-end of the Apache Flink Dashboard\n- Port `9001` is exposed so that the installation can receive context data subscriptions\n- Port `6123` is the standard **JobManager** RPC port, used for internal communications\n\nThe `taskmanager` container is listening on two ports:\n\n- Ports `6121` and `6122` are used as RPC ports by the **TaskManager** for internal communications\n\nThe containers within the Flink cluster are driven by a single environment variable as shown:\n\n| Key                     | Value        | Description                                                                 |\n| ----------------------- | ------------ | --------------------------------------------------------------------------- |\n| JOB_MANAGER_RPC_ADDRESS | `jobmanager` | Hostname of the _master_ Job Manager which coordinates the task processing |\n\n# Prerequisites\n\n## Docker and Docker Compose\n\nTo keep things simple, all components will be run using [Docker](https://www.docker.com). **Docker** is a container\ntechnology which allows different components to be isolated into their respective environments.\n\n- To install Docker on Windows follow the instructions [here](https://docs.docker.com/docker-for-windows/)\n- To install Docker on Mac follow the instructions [here](https://docs.docker.com/docker-for-mac/)\n- To install Docker on Linux follow the instructions [here](https://docs.docker.com/install/)\n\n**Docker Compose** is a tool for defining and running multi-container Docker applications. A series of\n[YAML files](https://github.com/FIWARE/tutorials.Big-Data-Flink/tree/master/docker-compose) are used to configure the\nrequired services for the application. This means all container services can be brought up in a single command.
Docker\nCompose is installed by default as part of Docker for Windows and Docker for Mac; however, Linux users will need to\nfollow the instructions found [here](https://docs.docker.com/compose/install/).\n\nYou can check your current **Docker** and **Docker Compose** versions using the following commands:\n\n```console\ndocker-compose -v\ndocker version\n```\n\nPlease ensure that you are using Docker version 20.10 or higher and Docker Compose 1.29 or higher and upgrade if\nnecessary.\n\n## Maven\n\n[Apache Maven](https://maven.apache.org/download.cgi) is a software project management and comprehension tool. Based on\nthe concept of a project object model (POM), Maven can manage a project's build, reporting and documentation from a\ncentral piece of information. We will use Maven to define and download our dependencies and to build and package our\ncode into a JAR file.\n\n## WSL\n\nWe will start up our services using a simple Bash script. Windows users should download the [Windows Subsystem for Linux](https://learn.microsoft.com/en-us/windows/wsl/install)\nto provide command-line functionality similar to that of a Linux distribution on Windows.\n\n# Start Up\n\nBefore you start, you should ensure that you have obtained or built the necessary Docker images locally. Please clone\nthe repository and create the necessary images by running the commands shown below. Note that you might need to run some\nof the commands as a privileged user:\n\n```console\ngit clone https://github.com/FIWARE/tutorials.Big-Data-Flink.git\ncd tutorials.Big-Data-Flink\n./services create\n```\n\nThis command will also import seed data from the previous tutorials and provision the dummy IoT sensors on startup.\n\nTo start the system, run the following command:\n\n```console\n./services start\n```\n\n> **Note:** If you want to clean up and start over again you can do so with the following command:\n>\n> ```console\n> ./services stop\n> ```\n\n# Real-time Processing Operations\n\nDataflow within **Apache Flink** is defined in the\n[Flink documentation](https://ci.apache.org/projects/flink/flink-docs-release-1.9/concepts/programming-model.html) as\nfollows:\n\n> \"The basic building blocks of Flink programs are streams and transformations. Conceptually a stream is a (potentially\n> never-ending) flow of data records, and a transformation is an operation that takes one or more streams as input, and\n> produces one or more output streams as a result.\n>\n> When executed, Flink programs are mapped to streaming dataflows, consisting of streams and transformation operators.\n> Each dataflow starts with one or more sources and ends in one or more sinks. The dataflows resemble arbitrary directed\n> acyclic graphs (DAGs). Although special forms of cycles are permitted via iteration constructs, for the most part this\n> can be glossed over for simplicity.\"\n\n![](https://fiware.github.io/tutorials.Big-Data-Flink/img/streaming-dataflow.png)\n\nThis means that to create a streaming data flow we must supply the following:\n\n- A mechanism for reading Context data as a **Source Operator**\n- Business logic to define the transform operations\n- A mechanism for pushing Context data back to the context broker as a **Sink Operator**\n\nThe `orion-flink.connect.jar` offers both **Source** and **Sink** operations. It therefore only remains to write the\nnecessary Scala code to connect the streaming dataflow pipeline operations together. The processing code can be compiled\ninto a JAR file which can be uploaded to the Flink cluster.
Two examples will be detailed below; all the source code for\nthis tutorial can be found within the\n[cosmos-examples](https://github.com/FIWARE/tutorials.Big-Data-Flink/tree/master/cosmos-examples) directory.\n\nFurther Flink processing examples can be found on the\n[Apache Flink website](https://ci.apache.org/projects/flink/flink-docs-release-1.9/getting-started).\n\n### Compiling a JAR file for Flink\n\nAn existing `pom.xml` file has been created which holds the necessary prerequisites to build the examples JAR file.\n\nIn order to use the Orion Flink Connector we first need to manually install the connector JAR as an artifact using\nMaven:\n\n```console\ncd cosmos-examples\nmvn install:install-file \\\n  -Dfile=./orion.flink.connector-1.2.1.jar \\\n  -DgroupId=org.fiware.cosmos \\\n  -DartifactId=orion.flink.connector \\\n  -Dversion=1.2.1 \\\n  -Dpackaging=jar\n```\n\nThereafter the source code can be compiled by running the `mvn package` command within the same directory:\n\n```console\ncd cosmos-examples\nmvn package\n```\n\nA new JAR file called `cosmos-examples-1.0.jar` will be created within the `cosmos-examples/target` directory.\n\n### Generating a stream of Context Data\n\nFor the purpose of this tutorial, we need to monitor a system in which the context is periodically updated. The\ndummy IoT Sensors can be used to do this. Open the device monitor page at `http://localhost:3000/device/monitor` and\nunlock a **Smart Door** and switch on a **Smart Lamp**. This can be done by selecting the appropriate command from\nthe drop-down list and pressing the `send` button. The stream of measurements coming from the devices can then be seen\non the same page:\n\n![](https://fiware.github.io/tutorials.Big-Data-Flink/img/door-open.gif)",
+        "description": "[![FIWARE Core Context Management](https://nexus.lab.fiware.org/static/badges/chapters/core.svg)](https://github.com/FIWARE/catalogue/blob/master/processing/README.md)\n\nThis tutorial is an introduction to the [FIWARE Cosmos Orion Flink Connector](http://fiware-cosmos-flink.rtfd.io), which\nfacilitates Big Data analysis of context data through an integration with [Apache Flink](https://flink.apache.org/),\none of the most popular Big Data platforms. Apache Flink is a framework and distributed processing engine for stateful\ncomputations over both unbounded and bounded data streams. Flink has been designed to run in all common cluster\nenvironments and to perform computations at in-memory speed and at any scale.\n\nThe tutorial uses [cUrl](https://ec.haxx.se/) commands throughout, but is also available as\n[Postman documentation](https://fiware.github.io/tutorials.Big-Data-Flink/).\n\nThe `docker-compose` files for this tutorial can be found on GitHub:\n\n![GitHub](https://fiware.github.io/tutorials.Big-Data-Flink/icon/GitHub-Mark-32px.png) [https://github.com/FIWARE/tutorials.Big-Data-Flink](https://github.com/FIWARE/tutorials.Big-Data-Flink)\n\n\n# Real-time Processing and Big Data Analysis\n\n> \"Who controls the past controls the future: who controls the present controls the past.\"\n>\n> — George Orwell. \"1984\" (1949)\n\nSmart solutions based on FIWARE are architecturally designed around microservices.
They are therefore designed to\nscale up from simple applications (such as the Supermarket tutorial) through to city-wide installations based on a large\narray of IoT sensors and other context data providers.\n\nThe massive amount of data involved eventually becomes too much for a single machine to analyse, process and store, and\ntherefore the work must be delegated to additional distributed services. These distributed systems form the basis of\nso-called **Big Data Analysis**. The distribution of tasks allows developers to extract insights\nfrom huge data sets which would be too complex to deal with using traditional methods, and to uncover hidden patterns\nand correlations.\n\nAs we have seen, context data is core to any Smart Solution, and the Context Broker is able to monitor changes of state\nand raise [subscription events](https://github.com/Fiware/tutorials.Subscriptions) as the context changes. For smaller\ninstallations, each subscription event can be processed one-by-one by a single receiving endpoint; however, as the system\ngrows, another technique will be required to avoid overwhelming the listener, potentially blocking resources and missing\nupdates.\n\n**Apache Flink** is a Java/Scala-based stream-processing framework which enables the delegation of data-flow processes.\nTherefore, additional computational resources can be called upon to deal with data as events arrive. The **Cosmos Flink**\nconnector allows developers to write custom business logic to listen for context data subscription events and then process\nthe flow of the context data. Flink is able to delegate these actions to other workers, where they will be acted upon\neither sequentially or in parallel as required. The data flow processing itself can be arbitrarily complex.\n\nObviously, in reality, our existing Supermarket scenario is far too small to require the use of a Big Data solution, but\nit will serve as a basis for demonstrating the type of real-time processing which may be required in a larger solution\nwhich is processing a continuous stream of context-data events.\n\n# Architecture\n\nThis application builds on the components and dummy IoT devices created in\n[previous tutorials](https://github.com/FIWARE/tutorials.IoT-Agent/). It will make use of three FIWARE components - the\n[Orion Context Broker](https://fiware-orion.readthedocs.io/en/latest/), the\n[IoT Agent for Ultralight 2.0](https://fiware-iotagent-ul.readthedocs.io/en/latest/), and the\n[Cosmos Orion Flink Connector](https://fiware-cosmos-flink.readthedocs.io/en/latest/) for connecting Orion to an\n[Apache Flink cluster](https://ci.apache.org/projects/flink/flink-docs-stable/concepts/runtime.html). The Flink cluster\nitself will consist of a single **JobManager** _master_ to coordinate execution and a single **TaskManager** _worker_ to\nexecute the tasks.\n\nBoth the Orion Context Broker and the IoT Agent rely on open source [MongoDB](https://www.mongodb.com/) technology to\npersist the information they hold.
We will also be using the dummy IoT devices created in the\n[previous tutorial](https://github.com/FIWARE/tutorials.IoT-Agent/).\n\nTherefore the overall architecture will consist of the following elements:\n\n- Two **FIWARE Generic Enablers** as independent microservices:\n    - The FIWARE [Orion Context Broker](https://fiware-orion.readthedocs.io/en/latest/) which will receive requests\n      using [NGSI](https://fiware.github.io/specifications/OpenAPI/ngsiv2)\n    - The FIWARE [IoT Agent for Ultralight 2.0](https://fiware-iotagent-ul.readthedocs.io/en/latest/) which will\n      receive northbound measurements from the dummy IoT devices in\n      [Ultralight 2.0](https://fiware-iotagent-ul.readthedocs.io/en/latest/usermanual/index.html#user-programmers-manual)\n      format and convert them to [NGSI](https://fiware.github.io/specifications/OpenAPI/ngsiv2) requests for the\n      context broker to alter the state of the context entities\n- An [Apache Flink cluster](https://ci.apache.org/projects/flink/flink-docs-stable/concepts/runtime.html) consisting of a single **JobManager** and a single **TaskManager**\n    - The FIWARE [Cosmos Orion Flink Connector](https://fiware-cosmos-flink.readthedocs.io/en/latest/) will be deployed as part of the dataflow, which will\n      subscribe to context changes and perform operations on them in real time\n- One [MongoDB](https://www.mongodb.com/) **database**:\n    - Used by the **Orion Context Broker** to hold context data information such as data entities, subscriptions\n      and registrations\n    - Used by the **IoT Agent** to hold device information such as device URLs and keys\n- Three **Context Providers**:\n    - A webserver acting as a set of [dummy IoT devices](https://github.com/FIWARE/tutorials.IoT-Sensors) using the\n      [Ultralight 2.0](https://fiware-iotagent-ul.readthedocs.io/en/latest/usermanual/index.html#user-programmers-manual)\n      protocol running over HTTP.\n    - The **Stock Management Frontend** is not used in this tutorial. It does the following:\n        - Displays store information and allows users to interact with the dummy IoT devices\n        - Shows which products can be bought at each store\n        - Allows users to \"buy\" products and reduce the stock count.\n    - The **Context Provider NGSI** proxy is not used in this tutorial.
It does the following:\n        - Receives requests using [NGSI](https://fiware.github.io/specifications/OpenAPI/ngsiv2)\n        - Makes requests to publicly available data sources using their own APIs in a proprietary format\n        - Returns context data back to the Orion Context Broker in\n          [NGSI](https://fiware.github.io/specifications/OpenAPI/ngsiv2) format.\n\nThe overall architecture can be seen below:\n\n![](https://fiware.github.io/tutorials.Big-Data-Flink/img/architecture.png)\n\nSince all interactions between the elements are initiated by HTTP requests, the entities can be containerized and run\nfrom exposed ports.\n\nThe configuration information of the Apache Flink cluster can be seen in the `jobmanager` and `taskmanager` sections of\nthe associated `docker-compose.yml` file:\n\n## Flink Cluster Configuration\n\n```yaml\njobmanager:\n    image: flink:1.9.0-scala_2.11\n    hostname: jobmanager\n    container_name: flink-jobmanager\n    expose:\n        - \"8081\"\n        - \"9001\"\n    ports:\n        - \"6123:6123\"\n        - \"8081:8081\"\n        - \"9001:9001\"\n    command: jobmanager\n    environment:\n        - JOB_MANAGER_RPC_ADDRESS=jobmanager\n```\n\n```yaml\ntaskmanager:\n    image: flink:1.9.0-scala_2.11\n    hostname: taskmanager\n    container_name: flink-taskmanager\n    ports:\n        - \"6121:6121\"\n        - \"6122:6122\"\n    depends_on:\n        - jobmanager\n    command: taskmanager\n    links:\n        - \"jobmanager:jobmanager\"\n    environment:\n        - JOB_MANAGER_RPC_ADDRESS=jobmanager\n```\n\nThe `jobmanager` container is listening on three ports:\n\n- Port `8081` is exposed so we can see the web front-end of the Apache Flink Dashboard\n- Port `9001` is exposed so that the installation can receive context data subscriptions\n- Port `6123` is the standard **JobManager** RPC port, used for internal communications\n\nThe `taskmanager` container is listening on two ports:\n\n- Ports `6121` and `6122` are used as RPC ports by the **TaskManager** for internal communications\n\nThe containers within the Flink cluster are driven by a single environment variable as shown:\n\n| Key                     | Value        | Description                                                                 |\n| ----------------------- | ------------ | --------------------------------------------------------------------------- |\n| JOB_MANAGER_RPC_ADDRESS | `jobmanager` | Hostname of the _master_ Job Manager which coordinates the task processing |\n\n# Prerequisites\n\n## Docker and Docker Compose\n\nTo keep things simple, all components will be run using [Docker](https://www.docker.com). **Docker** is a container\ntechnology which allows different components to be isolated into their respective environments.\n\n- To install Docker on Windows follow the instructions [here](https://docs.docker.com/docker-for-windows/)\n- To install Docker on Mac follow the instructions [here](https://docs.docker.com/docker-for-mac/)\n- To install Docker on Linux follow the instructions [here](https://docs.docker.com/install/)\n\n**Docker Compose** is a tool for defining and running multi-container Docker applications. A series of\n[YAML files](https://github.com/FIWARE/tutorials.Big-Data-Flink/tree/master/docker-compose) are used to configure the\nrequired services for the application. This means all container services can be brought up in a single command.
Docker\nCompose is installed by default as part of Docker for Windows and Docker for Mac; however, Linux users will need to\nfollow the instructions found [here](https://docs.docker.com/compose/install/).\n\nYou can check your current **Docker** and **Docker Compose** versions using the following commands:\n\n```console\ndocker-compose -v\ndocker version\n```\n\nPlease ensure that you are using Docker version 24.0.x or higher and Docker Compose 2.24.x or higher and upgrade if\nnecessary.\n\n## Maven\n\n[Apache Maven](https://maven.apache.org/download.cgi) is a software project management and comprehension tool. Based on\nthe concept of a project object model (POM), Maven can manage a project's build, reporting and documentation from a\ncentral piece of information. We will use Maven to define and download our dependencies and to build and package our\ncode into a JAR file.\n\n## WSL\n\nWe will start up our services using a simple Bash script. Windows users should download the [Windows Subsystem for Linux](https://learn.microsoft.com/en-us/windows/wsl/install)\nto provide command-line functionality similar to that of a Linux distribution on Windows.\n\n# Start Up\n\nBefore you start, you should ensure that you have obtained or built the necessary Docker images locally. Please clone\nthe repository and create the necessary images by running the commands shown below. Note that you might need to run some\nof the commands as a privileged user:\n\n```console\ngit clone https://github.com/FIWARE/tutorials.Big-Data-Flink.git\ncd tutorials.Big-Data-Flink\n./services create\n```\n\nThis command will also import seed data from the previous tutorials and provision the dummy IoT sensors on startup.\n\nTo start the system, run the following command:\n\n```console\n./services start\n```\n\n> **Note:** If you want to clean up and start over again you can do so with the following command:\n>\n> ```console\n> ./services stop\n> ```\n\n# Real-time Processing Operations\n\nDataflow within **Apache Flink** is defined in the\n[Flink documentation](https://ci.apache.org/projects/flink/flink-docs-release-1.9/concepts/programming-model.html) as\nfollows:\n\n> \"The basic building blocks of Flink programs are streams and transformations. Conceptually a stream is a (potentially\n> never-ending) flow of data records, and a transformation is an operation that takes one or more streams as input, and\n> produces one or more output streams as a result.\n>\n> When executed, Flink programs are mapped to streaming dataflows, consisting of streams and transformation operators.\n> Each dataflow starts with one or more sources and ends in one or more sinks. The dataflows resemble arbitrary directed\n> acyclic graphs (DAGs). Although special forms of cycles are permitted via iteration constructs, for the most part this\n> can be glossed over for simplicity.\"\n\n![](https://fiware.github.io/tutorials.Big-Data-Flink/img/streaming-dataflow.png)\n\nThis means that to create a streaming data flow we must supply the following:\n\n- A mechanism for reading Context data as a **Source Operator**\n- Business logic to define the transform operations\n- A mechanism for pushing Context data back to the context broker as a **Sink Operator**\n\nThe `orion-flink.connect.jar` offers both **Source** and **Sink** operations. It therefore only remains to write the\nnecessary Scala code to connect the streaming dataflow pipeline operations together. The processing code can be compiled\ninto a JAR file which can be uploaded to the Flink cluster.
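\n\nBefore walking through the packaged examples, the following minimal sketch shows the shape of such a pipeline. It is an illustration only, not one of the tutorial's examples: the `OrionSource` source operator is the one documented for the connector, while the object name, job name and the printed field are hypothetical:\n\n```scala\npackage org.fiware.cosmos.tutorial\n\nimport org.apache.flink.streaming.api.scala._\nimport org.fiware.cosmos.orion.flink.connector.OrionSource\n\nobject ExampleSketch {\n    def main(args: Array[String]): Unit = {\n        val env = StreamExecutionEnvironment.getExecutionEnvironment\n        // Source Operator: listen on port 9001 for NGSI subscription notifications\n        val eventStream = env.addSource(new OrionSource(9001))\n        // Transformation: unpack each notification into its individual context entities\n        val entities = eventStream.flatMap(event => event.entities)\n        // Sink Operator: print each entity id - a real job could instead push updated\n        // context back to Orion using the Sink operations offered by the same JAR\n        entities.map(entity => entity.id).print()\n        env.execute(\"NGSI Event sketch\")\n    }\n}\n```\n\n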
Two examples will be detailed below; all the source code for\nthis tutorial can be found within the\n[cosmos-examples](https://github.com/FIWARE/tutorials.Big-Data-Flink/tree/master/cosmos-examples) directory.\n\nFurther Flink processing examples can be found on the\n[Apache Flink website](https://ci.apache.org/projects/flink/flink-docs-release-1.9/getting-started).\n\n### Compiling a JAR file for Flink\n\nAn existing `pom.xml` file has been created which holds the necessary prerequisites to build the examples JAR file.\n\nIn order to use the Orion Flink Connector we first need to manually install the connector JAR as an artifact using\nMaven:\n\n```console\ncd cosmos-examples\nmvn install:install-file \\\n  -Dfile=./orion.flink.connector-1.2.1.jar \\\n  -DgroupId=org.fiware.cosmos \\\n  -DartifactId=orion.flink.connector \\\n  -Dversion=1.2.1 \\\n  -Dpackaging=jar\n```\n\nThereafter the source code can be compiled by running the `mvn package` command within the same directory:\n\n```console\ncd cosmos-examples\nmvn package\n```\n\nA new JAR file called `cosmos-examples-1.0.jar` will be created within the `cosmos-examples/target` directory.\n\n### Generating a stream of Context Data\n\nFor the purpose of this tutorial, we need to monitor a system in which the context is periodically updated. The\ndummy IoT Sensors can be used to do this. Open the device monitor page at `http://localhost:3000/device/monitor` and\nunlock a **Smart Door** and switch on a **Smart Lamp**. This can be done by selecting the appropriate command from\nthe drop-down list and pressing the `send` button. The stream of measurements coming from the devices can then be seen\non the same page:\n\n![](https://fiware.github.io/tutorials.Big-Data-Flink/img/door-open.gif)",
         "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
     },
     "item": [
diff --git a/README.ja.md b/README.ja.md
index ebe9f56..8ee25da 100644
--- a/README.ja.md
+++ b/README.ja.md
@@ -228,7 +228,7 @@ docker-compose -v
 docker version
 ```
 
-Docker バージョン20.10 以降および Docker Compose 1.29 以降を使用していることを確認し、必要に応じてアップグレード
+Docker バージョン 24.0.x 以降および Docker Compose 2.24.x 以降を使用していることを確認し、必要に応じてアップグレード
 してください。
 
diff --git a/README.md b/README.md
index da2810a..d297c77 100644
--- a/README.md
+++ b/README.md
@@ -221,7 +221,7 @@ docker-compose -v
 docker version
 ```
 
-Please ensure that you are using Docker version 20.10 or higher and Docker Compose 1.29 or higher and upgrade if
+Please ensure that you are using Docker version 24.0.x or higher and Docker Compose 2.24.x or higher and upgrade if
 necessary.
 
 ## Maven