
Edge-Cloud Orchestration

Overview of Edge-Cloud Orchestration

In the context of the Industrial Internet, edge devices primarily process local data, and decision-makers cannot form a global view of the entire system from edge-side information alone. In practice, edge devices report data to a cloud computing platform (public or private), where data aggregation and information fusion take place, giving decision-makers comprehensive insight into the data. This edge-cloud orchestration architecture has gradually become an essential pillar supporting the development of the Industrial Internet.

Edge devices mainly monitor and alert on specific data points from the production line, such as real-time production data from a workshop, and then synchronize this data to a cloud-based big data platform. On the edge, the real-time processing requirement is high but the data volume is usually modest; a typical production workshop has a few thousand to tens of thousands of monitoring points. The central side, in contrast, usually has ample computing resources to aggregate edge data for analysis.

To support this mode of operation, the database or data storage layer must allow data to be reported hierarchically and selectively. In some scenarios the overall data volume is very large, making selective reporting necessary. For example, raw records collected once per second on the edge may be downsampled to once per minute when reported to the central side. This downsampling significantly reduces the data volume while still retaining the key information needed for long-term analysis and forecasting.
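As a rough illustration of such a rule (the power database, meters supertable, and current column below are hypothetical names used only for this sketch), a downsampling query in TDengine SQL might look like this:

    -- Sketch: reduce 1-second raw readings to 1-minute aggregates before
    -- reporting them to the central side. All object names are illustrative.
    SELECT
      _wstart       AS minute_start,   -- start of each 1-minute window
      AVG(current)  AS avg_current,    -- downsampled value
      MAX(current)  AS max_current     -- keep peaks for later analysis
    FROM power.meters
    INTERVAL(1m);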

In the traditional industrial data collection process, data is collected from Programmable Logic Controllers (PLCs) and then enters a historian (an industrial real-time database) that supports business applications. Such systems typically adopt a master-slave architecture that is difficult to scale horizontally, and they rely heavily on the Windows ecosystem, resulting in a relatively closed environment.

TDengine's Solution

TDengine Enterprise is committed to providing powerful edge-cloud orchestration capabilities, featuring the following significant characteristics:

  • Efficient Data Synchronization: Supports synchronization efficiency of millions of data points per second, ensuring rapid and stable data transmission between the edge and the cloud.
  • Multi-Data Source Integration: Compatible with various external data sources, such as AVEVA PI System, OPC-UA, OPC-DA, and MQTT, achieving broad data access and integration.
  • Flexible Configuration of Synchronization Rules: Provides configurable synchronization rules, allowing users to customize data synchronization strategies and methods based on actual needs.
  • Resumable Transmission and Re-Subscription: Supports resuming interrupted transfers and re-subscribing, ensuring the continuity and integrity of data synchronization during network instability or interruptions.
  • Historical Data Migration: Supports the migration of historical data, enabling users to seamlessly transfer historical data to a new system during upgrades or system changes.

TDengine's data subscription feature offers significant flexibility for subscribers, allowing users to configure subscription objects as needed. Users can subscribe to a database, a supertable, or even a query statement with filter conditions. This allows users to achieve selective data synchronization, transferring only the relevant data (including offline and out-of-order data) from one cluster to another to meet various complex data demands.
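A minimal sketch of how such subscription objects can be declared in TDengine SQL, assuming a hypothetical power database containing a meters supertable (the topic names are likewise illustrative):

    -- Topic over an entire database: subscribers receive all data written to it
    CREATE TOPIC IF NOT EXISTS topic_power AS DATABASE power;

    -- Topic over a single supertable (assumes the current database is power)
    CREATE TOPIC IF NOT EXISTS topic_meters AS STABLE meters;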

The following diagram illustrates the implementation of the edge-cloud orchestration architecture in TDengine Enterprise using a specific example of a production workshop. In the workshop, real-time data generated by equipment is stored in TDengine deployed on the edge. The TDengine deployed at the branch factory subscribes to data from the workshop's TDengine. To better meet business needs, data analysts can set subscription rules, such as downsampling data or only synchronizing data that exceeds a specified threshold. Similarly, TDengine deployed at the group level subscribes to data from various branch factories, achieving data aggregation at the group level for further analysis and processing.

Edge-cloud orchestration diagram
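For example, the "only synchronize data that exceeds a specified threshold" rule mentioned above can be sketched as a query topic with a filter condition, which the branch factory's TDengine then subscribes to instead of the raw stream; the table, column, and threshold below are hypothetical:

    -- Query topic exposing only readings above a threshold (selective sync)
    CREATE TOPIC IF NOT EXISTS topic_over_threshold AS
      SELECT ts, current, voltage
      FROM power.meters
      WHERE current > 25;   -- hypothetical threshold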

This implementation approach has several advantages:

  • Requires no coding; only simple configurations are needed on the edge and cloud sides.
  • Significantly increases the automation level of cross-region data synchronization, reducing error rates.
  • Data does not need to be cached and transmitted in batches, avoiding bandwidth congestion caused by peak traffic.
  • Data is synchronized through a subscription method, which is configurable, simple, flexible, and real-time.
  • Both edge and cloud use TDengine, ensuring a unified data model that reduces the difficulty of data governance.

A common pain point for manufacturing enterprises is data synchronization. Many companies still synchronize data using offline methods, whereas TDengine Enterprise enables real-time synchronization with configurable rules. This avoids the resource waste and bandwidth congestion risks caused by periodically transmitting large volumes of data.

Advantages of Edge-Cloud Orchestration

The maturity of IT and OT (Operational Technology) infrastructure varies greatly across traditional industries. Compared with the Internet sector, most of these enterprises lag significantly in their investment in digitalization. Many still process data with outdated systems that operate independently of one another, creating so-called data silos.

In this context, injecting new vitality into traditional industries with AI first requires integrating these dispersed systems and the data they collect, breaking through the limitations of data silos. This is challenging: it involves multiple systems and a variety of Industrial Internet protocols, so data aggregation is far more than a simple merge. Data from different sources must be cleaned, processed, and transformed before it can be integrated into a unified platform.

When all data is aggregated into a single system, the efficiency of accessing and processing it improves significantly. Enterprises can respond to real-time data more quickly and resolve issues more effectively, and people inside and outside the enterprise can collaborate more efficiently, enhancing overall operational efficiency.

Moreover, once data is aggregated, advanced third-party AI analysis tools can be applied for better anomaly detection, real-time alerting, and more accurate predictions of capacity, cost, and equipment maintenance. This gives decision-makers a clearer view of the overall situation, providing strong support for enterprise development and facilitating the digital transformation and intelligent upgrade of traditional industries.