Deploy a Time-Series Foundation Model
A number of research institutions and enterprises have released open-source time-series foundation models (TSFMs), greatly simplifying time-series data analysis. Beyond traditional data analysis algorithms, machine learning, and deep learning models, TSFMs offer a new and powerful option for advanced time-series analytics.
TDgpt (since version 3.3.6.4) provides native support for six TSFMs: TDtsfm v1.0, Time-MoE, Chronos, Moirai, TimesFM, and Moment. All of these models are deployed as local services to which TDgpt connects.
Deployment Details
The server scripts for all six TSFM services are located in the <TDgpt_root_directory>/lib/taosanalytics/tsfmservice/ directory.
TDgpt distinguishes between models that are configured by default and those that require manual configuration:
- Default models: TDtsfm and Time-MoE are configured by default in taosanode.ini. You only need to start their respective server scripts to use them.
- Additional models: Chronos, Moirai, TimesFM, and Moment require you to start their server scripts and add their service URLs to taosanode.ini before use.
TDgpt has been adapted to interface with specific features of these models. If a certain function is unavailable, it may be due to a limitation of the model itself or because TDgpt has not yet been adapted to support that specific feature for that model.
| Model | Server Script | Hugging Face Model | Parameters (×100M) | Model Size (MiB) | Forecast | Covariate Forecast | Multivariate Forecast | Anomaly Detection | Imputation |
|---|---|---|---|---|---|---|---|---|---|
| timemoe | timemoe-server.py | Maple728/TimeMoE-50M | 0.50 | 227 | ✔ | ✘ | ✘ | ✘ | ✘ |
| timemoe | timemoe-server.py | Maple728/TimeMoE-200M | 4.53 | 906 | ✔ | ✘ | ✘ | ✘ | ✘ |
| moirai | moirai-server.py | Salesforce/moirai-moe-1.0-R-small | 1.17 | 469 | ✔ | ✔ | ✘ | ✘ | ✘ |
| moirai | moirai-server.py | Salesforce/moirai-moe-1.0-R-base | 9.35 | 3,740 | ✔ | ✔ | ✘ | ✘ | ✘ |
| chronos | chronos-server.py | amazon/chronos-bolt-tiny | 0.09 | 35 | ✔ | ✘ | ✘ | ✘ | ✘ |
| chronos | chronos-server.py | amazon/chronos-bolt-mini | 0.21 | 85 | ✔ | ✘ | ✘ | ✘ | ✘ |
| chronos | chronos-server.py | amazon/chronos-bolt-small | 0.48 | 191 | ✔ | ✘ | ✘ | ✘ | ✘ |
| chronos | chronos-server.py | amazon/chronos-bolt-base | 2.05 | 821 | ✔ | ✘ | ✘ | ✘ | ✘ |
| timesfm | timesfm-server.py | google/timesfm-2.0-500m-pytorch | 4.99 | 2,000 | ✔ | ✘ | ✘ | ✘ | ✘ |
| moment | moment-server.py | AutonLab/MOMENT-1-small | 0.38 | 152 | ✘ | ✘ | ✘ | ✘ | ✔ |
| moment | moment-server.py | AutonLab/MOMENT-1-base | 1.13 | 454 | ✘ | ✘ | ✘ | ✘ | ✔ |
| moment | moment-server.py | AutonLab/MOMENT-1-large | 3.46 | 1,039 | ✘ | ✘ | ✘ | ✘ | ✔ |
This document describes how to integrate an independent TSFM service into TDengine, using Time-MoE as an example, and how to use the model in SQL statements for time-series forecasting.
Prepare Your Environment
Before using TSFMs, prepare your environment as follows. Install Python and use pip to install the dependencies:
pip install torch==2.4.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
pip install flask==3.0.3
pip install transformers==4.40.0
pip install accelerate
You can use the virtual Python environment installed by TDgpt or a separate environment.
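If you opt for a separate environment, a standard virtual environment keeps the TSFM dependencies isolated from the system Python; a minimal sketch (the path is illustrative):

```shell
# Create an isolated virtual environment for the TSFM services
# (the path is illustrative; any writable location works).
python3 -m venv "$HOME/tsfm-venv"

# Activate it, then run the pip install commands shown above.
. "$HOME/tsfm-venv/bin/activate"
python -V
```

Activate this environment in every shell session before installing dependencies or starting a server script.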
Configure TSFM Service Path & Port
The lib/taosanalytics/time-moe.py file in the TDgpt root directory (renamed to lib/taosanalytics/tsfmservice/timemoe-server.py since 3.3.6.4) deploys and serves the Time-MoE model. Modify this file to set an appropriate URL.
@app.route('/ds_predict', methods=['POST'])
def time_moe():
...
Change ds_predict to the URL that you want to use in your environment.
app.run(
host='0.0.0.0',
port=6062,
threaded=True,
debug=False
)
You can also update the port here if desired. After you have set your URL and port number, restart the service.
Run the Python Script (Available before 3.3.8.x)
⚠️ NOTE: The following method is only available before version 3.3.8.x. If you are using a later version, refer to [Dynamic Model Download](#dynamic-model-download).
nohup python time-moe.py > service_output.out 2>&1 &
The script automatically downloads Time-MoE-200M from Hugging Face the first time it is run. You can modify time-moe.py to use TimeMoE-50M if you prefer a smaller version.
Check the service_output.out file to confirm that the model has been loaded:
Running on all addresses (0.0.0.0)
Running on http://127.0.0.1:6062
Verify the Service
Verify that the service is running normally:
curl 127.0.0.1:6062/ds_predict
The following indicates that Time-MoE has been deployed:
<!doctype html>
<html lang=en>
<title>405 Method Not Allowed</title>
<h1>Method Not Allowed</h1>
<p>The method is not allowed for the requested URL.</p>
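The 405 response is expected: the route only accepts POST requests, so an answer at all proves the service is up. The same check can be scripted; a small sketch using only the standard library (the URL assumes the default port and route from above):

```python
import urllib.error
import urllib.request


def service_is_up(url="http://127.0.0.1:6062/ds_predict"):
    """Return True if the model service answers at the given URL.

    A GET against the predict route returns 405 (Method Not Allowed),
    which still proves the Flask service is listening and the route exists.
    """
    try:
        urllib.request.urlopen(url, timeout=5)
        return True  # unexpected for this route, but the service answered
    except urllib.error.HTTPError as e:
        return e.code == 405
    except OSError:
        return False  # connection refused, timeout, DNS failure, etc.
```

Any response other than a connection error, including 405, indicates the service is reachable.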
Load the Model into TDgpt
You can modify the timemoe.py file and use it in TDgpt. In this example, Time-MoE is adapted to provide forecasting.
class _TimeMOEService(AbstractForecastService):
# model name, user-defined, used as model key
name = 'timemoe-fc'
# description
desc = ("Time-MoE: Billion-Scale Time Series Foundation Models with Mixture of Experts; "
"Ref to https://github.com/Time-MoE/Time-MoE")
def __init__(self):
super().__init__()
# Use the default address if the service URL is not specified in the taosanode.ini configuration file.
if self.service_host is None:
self.service_host = 'http://127.0.0.1:6062/timemoe'
def execute(self):
# Verify support for past covariate analysis; raise an exception if unsupported. (Note: time-moe lacks this support and will trigger the exception.)
if len(self.past_dynamic_real):
raise ValueError("covariate forecast is not supported yet")
super().execute()
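Inside the adapter, the forecast is ultimately obtained by posting the input series to the local model service over HTTP. The sketch below illustrates that pattern; the payload fields (`input`, `next_len`) and the response shape are assumptions for illustration, not the exact TDgpt wire format:

```python
import json
import urllib.request


def remote_forecast(series, rows, url="http://127.0.0.1:6062/ds_predict"):
    """Post a time series to the model service and return its JSON reply.

    The field names below are hypothetical; check the server script
    (timemoe-server.py) for the actual request/response schema.
    """
    payload = json.dumps({"input": series, "next_len": rows}).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())
```

The adapter's `execute()` would call a helper like this and map the JSON reply back into TDgpt's forecast result structure.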
Add your code to /usr/local/taos/taosanode/lib/taosanalytics/algo/fc/. A prepared timemoe.py file is already available in that directory.
TDgpt has built-in support for Time-MoE. You can run SHOW ANODES FULL and see that forecasting based on Time-MoE is listed as timemoe-fc.
Configure TSFM Service Path
Modify the [tsfm-service] section of /etc/taos/taosanode.ini:
[tsfm-service]
timemoe-fc = http://127.0.0.1:6062/ds_predict
Add the path for your deployment. The key is the name of the model defined in your Python code, and the value is the URL of Time-MoE on your local machine.
Then restart the taosanode service and run UPDATE ALL ANODES. You can now use Time-MoE forecasting in your SQL statements.
Use a TSFM in SQL
SELECT FORECAST(i32, 'algo=timemoe-fc')
FROM foo;
Deploying Other Time-Series Foundation Models
The logic for registering a locally deployed model in TDgpt is similar for all model types: modify the class name and the model service name (key), and set the correct service address. Adaptation files for Chronos, TimesFM, and Moirai are provided by default; users of version 3.3.6.4 and later only need to start the corresponding services locally.
The deployment and startup methods are as follows:
Starting the Moirai Service
To avoid dependency conflicts, it is recommended to prepare a clean Python virtual environment and install the libraries there.
pip install torch==2.3.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
pip install uni2ts
pip install flask
Configure the service address in the moirai-server.py file (see above for the method) and set the model to be loaded (if necessary).
_model_list = [
'Salesforce/moirai-moe-1.0-R-small', # small model with 117M parameters
'Salesforce/moirai-moe-1.0-R-base', # base model with 205M parameters
]
pretrained_model = MoiraiMoEModule.from_pretrained(
_model_list[0] # Loads the 'small' model by default; change to 1 to load 'base'
).to(device)
Execute the command to start the service. The model files are downloaded automatically on first startup. If the download is too slow, you can use a mirror site (see above for setup; this is mainly useful for users in China).
nohup python moirai-server.py > service_output.out 2>&1 &
Follow the same steps as above to check the service status.
Starting the Chronos Service
Install dependencies in a clean Python virtual environment:
pip install torch==2.3.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
pip install chronos-forecasting
pip install flask
Set the service address and model in chronos-server.py. You can also use the default values.
def main():
app.run(
host='0.0.0.0',
port=6063,
threaded=True,
debug=False
)
_model_list = [
'amazon/chronos-bolt-tiny', # 9M parameters, based on t5-efficient-tiny
'amazon/chronos-bolt-mini', # 21M parameters, based on t5-efficient-mini
'amazon/chronos-bolt-small', # 48M parameters, based on t5-efficient-small
'amazon/chronos-bolt-base', # 205M parameters, based on t5-efficient-base
]
model = BaseChronosPipeline.from_pretrained(
_model_list[0], # Loads the 'tiny' model by default; modify the index to change the model
device_map=device,
torch_dtype=torch.bfloat16,
)
Execute the following in the shell to start the service:
nohup python chronos-server.py > service_output.out 2>&1 &
Starting the TimesFM Service
Install dependencies in a clean Python virtual environment:
pip install torch==2.3.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
pip install timesfm
pip install jax
pip install flask==3.0.3
Adjust the service address in timesfm-server.py if necessary, then execute the command below:
nohup python timesfm-server.py > service_output.out 2>&1 &
Starting the Moment Service
Install dependencies in a clean Python virtual environment:
pip install torch==2.3.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
pip install transformers==4.33.3
pip install numpy==1.25.2
pip install matplotlib
pip install pandas==1.5
pip install scikit-learn
pip install flask==3.0.3
pip install momentfm
Adjust the service address in moment-server.py if necessary, then execute the command below:
nohup python moment-server.py > service_output.out 2>&1 &
Service Management Scripts (Start and Stop)
To simplify management, TDgpt (v3.4.0.0+) provides unified scripts: start-model.sh and stop-model.sh. These allow you to start or stop specific or all foundation model services with a single command.
Start Script
The start-model.sh script loads the corresponding Python virtual environment and initiates the model service script based on the specified model name.
After a root installation, the script is located in <tdgpt_root>/bin/. A symbolic link is automatically created at /usr/bin/start-model for global access.
Logs are output to /var/log/taos/taosanode/taosanode_service_<model_name>.log by default.
Usage:
Usage: /usr/bin/start-model [-c config_file] [model_name|all] [other_params...]
Supported models: tdtsfm, timesfm, timemoe, moirai, chronos, moment
Options:
-c config_file: Specifies the configuration file (default: /etc/taos/taosanode.ini).
-h, --help: Displays help information.
Examples:
- Start all services in the background:
  /usr/bin/start-model all
- Start a specific service (e.g., TimesFM):
  /usr/bin/start-model timesfm
- Specify a custom config file:
  /usr/bin/start-model -c /path/to/custom_taosanode.ini
Stop Script
stop-model.sh is used to terminate specified or all model services. It automatically identifies and kills the relevant Python processes.
Examples:
- Stop the TimesFM service:
  /usr/bin/stop-model timesfm
- Stop all model services:
  /usr/bin/stop-model all
Dynamic Model Download
In versions 3.3.8.x and later, you can specify different model scales during startup. If no parameters are provided, the driver file ([xxx]-server.py) will automatically load the model with the smallest parameter scale.
Additionally, if you have manually downloaded model files, you can run them by specifying the local path.
# Run the chronos-bolt-tiny model located at /var/lib/taos/taosanode/model/chronos.
# If the directory doesn't exist, it will download automatically to that path.
# The third parameter (True) enables the mirror site for faster downloads (recommended for users in China).
python chronos-server.py /var/lib/taos/taosanode/model/chronos/ amazon/chronos-bolt-tiny True
Transformers Version Requirements
| Model Name | Transformers Version |
|---|---|
| time-moe, moirai, tdtsfm | 4.40 |
| chronos | 4.55 |
| moment | 4.33 |
| timesfm | N/A |
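Since each service may live in its own virtual environment, a quick programmatic check can catch a mismatched transformers pin early; a small sketch (the version map simply mirrors the table above):

```python
from importlib.metadata import PackageNotFoundError, version

# Required transformers major.minor version per model, from the table above.
REQUIRED = {
    "time-moe": "4.40",
    "moirai": "4.40",
    "tdtsfm": "4.40",
    "chronos": "4.55",
    "moment": "4.33",
}


def transformers_ok(model, installed=None):
    """Return True if the installed transformers version matches the
    major.minor requirement for the given model service."""
    if installed is None:
        try:
            installed = version("transformers")
        except PackageNotFoundError:
            return False
    major_minor = ".".join(installed.split(".")[:2])
    return major_minor == REQUIRED[model]
```

Run this inside the virtual environment of the service you are about to start, e.g. `transformers_ok("chronos")`.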
References
- Time-MoE: Billion-Scale Time Series Foundation Models with Mixture of Experts
Paper | GitHub Repo