Services provided by Ververica Cloud Console are subject to the deployment and network environments. Therefore, take note of the following limits when you develop Python API jobs:

  • Python is pre-installed in your Ververica Cloud Console cluster, and common Python libraries such as pandas, NumPy, and PyArrow are pre-installed in the Python environment. Therefore, develop your code with the latest Python version.

  • Java Development Kit (JDK) 11 is used in the running environment of Ververica Cloud Console. If your Python API job depends on a third-party JAR package, make sure that the JAR package is compatible with JDK 11.

  • Only open-source Scala 2.11 is supported. If your Python API job depends on a third-party JAR package, make sure that you use a JAR package that is compatible with Scala 2.11.
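Before developing against these limits, you can confirm what the pre-installed Python environment actually provides. This is a minimal sketch that checks the interpreter version and the libraries named above; it assumes nothing beyond the standard library and degrades gracefully if a library is absent:

```python
# Sketch: inspect the Python environment before developing a job.
# The library list comes from the limits above; versions shown are whatever
# the environment provides, not guaranteed values.
import sys

print("Python", sys.version_info[0], ".", sys.version_info[1])

for lib in ("pandas", "numpy", "pyarrow"):
    try:
        mod = __import__(lib)
        print(lib, getattr(mod, "__version__", "unknown"))
    except ImportError:
        print(lib, "not installed in this environment")
```

Running this inside the cluster (for example, from a small bootstrap job) tells you whether your local development environment matches the one your job will run in.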

Step 1: Upload the Python package

Before the job runs, you need to follow these steps to upload the Python package, Python job file, or Python dependency to Ververica Cloud Console.


You can upload at most 200 packages, and each package is limited to 200 MB. If a Python package exceeds 200 MB, we recommend that you upload it through S3. For more information, see the AWS documentation.

  1. Log in to Ververica Cloud.

  2. On the Dashboard page, locate the workspace that you want to manage, click the title of the workspace or the ⋮ icon, and select Open Console.

  3. In the left-side navigation pane of the Console, click Artifacts.

  4. Click Upload Artifact and select the Python package that you want to upload.

You need to upload the official JAR package of PyFlink.


We recommend that you upload Python resources through the separate Python job portal. For more information, see Deploy a Python job.

Step 2: Deploy a Python job

  1. On the Deployments page, click Create Deployment.

  2. Enter the information about the Python job deployment.



Deployment type

Select Python.

Deployment name

Enter the name of the Python job.

Engine version

Start with the latest VVR version.

Python URI

The Uniform Resource Identifier (URI) that is used to access the Python draft file that you want to upload. Python draft files can be .py files or .zip files. Note: If your job is of the Python API type, enter the download address of the official PyFlink JAR package.

Entry Module

The entry module of the program. If you select a .py Python draft file, you do not need to specify this parameter. If you select a .zip Python draft file, you must specify this parameter. For example, you can enter example.word_count in the Entry Module field.
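The layout that makes an Entry Module such as example.word_count resolvable inside a .zip draft can be sketched as follows. The file names and contents are illustrative; a real draft would contain your PyFlink program:

```python
# Sketch: build a .zip draft in which the Entry Module "example.word_count"
# resolves. All file names and contents here are illustrative examples.
import os
import zipfile

os.makedirs("example", exist_ok=True)
open("example/__init__.py", "w").close()  # makes "example" an importable package
with open("example/word_count.py", "w") as f:
    f.write("print('word_count entry module')\n")

with zipfile.ZipFile("word_count_job.zip", "w") as zf:
    zf.write("example/__init__.py")
    zf.write("example/word_count.py")

with zipfile.ZipFile("word_count_job.zip") as zf:
    print(zf.namelist())
```

The dotted Entry Module value mirrors the directory structure inside the archive: example/word_count.py becomes example.word_count.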

Entry Point Main Arguments

You can pass parameters here and read them inside the main method. (1) The parameter string must not be longer than 1,024 characters. We recommend that you do not pass complex parameters, that is, parameters that include line breaks, spaces, or other special characters; if you need to pass complex parameters, use an additional dependency file instead. (2) If your job is of the Python API type, upload your Python job file first. After the upload, the file is placed in the /flink/usrlib/ directory of the job's running node by default. Note: If your Python job file is named word_count.py, for example, fill in the Entry Point Main Arguments as -py /flink/usrlib/word_count.py. The path of the Python job file must be a full path; /flink/usrlib/ cannot be omitted or changed.
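Inside the main method, the arguments entered here arrive as ordinary command-line arguments. This is a minimal sketch of reading them with argparse; the --input and --output flag names are assumptions for illustration, not anything Ververica requires:

```python
# Sketch: read Entry Point Main Arguments inside the job's main method.
# The --input/--output flag names are illustrative examples.
import argparse
import sys

def main(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("--input", default="input.txt")
    parser.add_argument("--output", default="output.txt")
    args = parser.parse_args(argv)
    print("reading", args.input, "writing", args.output)
    return args

if __name__ == "__main__":
    main(sys.argv[1:])
```

Keeping the arguments to short, space-free tokens like these stays within the 1,024-character limit and avoids the special-character pitfalls noted above.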

Python Libraries

A third-party Python package. The third-party Python package that you upload is added to the PYTHONPATH of the Python worker process so that the package can be directly accessed in Python user-defined functions (UDFs). For more information about how to use third-party Python packages, see Use a third-party Python package.

Python Archives

Archive files. Only ZIP-format files, such as .zip, .jar, .whl, and .egg files, are supported. Archive files are decompressed into the working directory of the Python worker process. For example, if the archive is named data.zip and contains data.txt, a Python UDF can access the archive file with code such as def map(): with open("data/data.txt") as f: ...
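The mechanism can be sketched end to end. Locally we create the archive ourselves; on the worker, the platform decompresses the uploaded archive into the working directory, so the UDF body only needs the open() call. The names data.zip and data.txt are illustrative examples:

```python
# Sketch: emulate how a Python archive is used. Here we create data.zip and the
# decompressed files ourselves; on the worker, the platform decompresses the
# uploaded archive into the working directory automatically.
import os
import zipfile

os.makedirs("data", exist_ok=True)
with open("data/data.txt", "w") as f:
    f.write("hello from the archive\n")
with zipfile.ZipFile("data.zip", "w") as zf:
    zf.write("data/data.txt")

# What the UDF body does after decompression: a plain relative-path open().
with open("data/data.txt") as f:
    print(f.read().strip())
```

Note that the path is relative to the worker's working directory and includes the directory created by decompressing the archive.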

Additional Dependencies

You can attach dependency files in one of the following ways. (1) (Recommended) Select a target that you have already uploaded. You must upload the dependency files on the Artifacts page, or upload them in the S3 console, in advance. (2) Enter the S3 path of the dependency file. You must upload the file in advance to the S3 bucket that corresponds to the current instance, that is, the bucket that you selected when you activated Flink Full Hosting. (3) Enter the URL of the dependency file. Only URLs that end with the file name are supported, such as s3://xxxxxx/file. You must upload the file in advance to a publicly accessible HTTP service. Note: Only Per-Job clusters support additional dependency files; Session clusters do not. When the job runs, the dependency files uploaded in any of the three ways are loaded into the /flink/usrlib directory of the pods where the JobManager (JM) and TaskManager (TM) run.
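Because every attached dependency ends up under /flink/usrlib at run time, job code can resolve them with a small helper. This sketch assumes only the directory named above; the fallback to the bare file name is an illustrative convenience for running the same code outside the pod:

```python
# Sketch: resolve an attached dependency file at run time. /flink/usrlib is
# where additional dependencies are loaded in the JM/TM pods; the fallback to
# the current directory is only for local testing and is an assumption here.
import os

def dependency_path(name):
    candidate = os.path.join("/flink/usrlib", name)
    return candidate if os.path.exists(candidate) else name

print(dependency_path("my_config.json"))  # my_config.json is an example name
```

In the pod, the helper returns the /flink/usrlib path; on a developer machine it falls back to the working directory, so the job file needs no environment-specific branches.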

Deploy to the Session cluster

If you select Submit to Session cluster, select the target Session cluster from the drop-down list below. For more information about how to create a Session cluster, see the Session Clusters section.


Optionally, enter a description.

  3. Click Deploy.

When the deployment is complete, start the job on the Deployments page. For more information about how to start a job, see the Start Deployment section.