Step 1: Upload the JAR package

Before the job runs, follow these steps to upload the JAR package, Python job file, or Python dependency to Ververica Cloud Console.


You can upload at most 200 packages, and each package can be at most 200 MB in size. If a JAR package exceeds 200 MB, we recommend that you upload it through S3; for more information, see the AWS documentation.

  1. Log in to Ververica Cloud.

  2. On the Dashboard page, locate the workspace that you want to manage, click the workspace title or the ⋮ icon, and select Open Console.

  3. In the left-side navigation pane of the Console, click Artifacts.

  4. Click Upload Artifact and select the JAR package that you want to upload.

Step 2: Deploy a JAR job

  1. On the Deployments page, click Create Deployment.

  2. Enter the information about the JAR job deployment.



Deployment type

Select JAR.

Deployment name

Enter the name of the JAR job.

Engine version

Select the engine version; the list starts with the latest VVR version.


Select a previously uploaded file or upload a new one: drag and drop the file into this area, or click the icon on the right to select the file to upload.

Entry Point Class

The entry class of the program. If your JAR package does not specify a main class in its manifest, enter the fully qualified name of your entry point class here.
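As a sketch, the entry point class is simply the class whose `main` method Flink invokes when the job starts. The class name below is hypothetical; a real job would construct a Flink execution environment and define its pipeline inside `main`:

```java
public class MyFlinkJob {
    // Builds the startup message; separated from main so it can be exercised directly.
    static String startupMessage(String[] args) {
        return "Job started with " + args.length + " argument(s)";
    }

    // Flink invokes this method when the job starts. A real job would create a
    // StreamExecutionEnvironment here and call execute(); this sketch only logs.
    public static void main(String[] args) {
        System.out.println(startupMessage(args));
    }
}
```

If this class lived in a package such as `com.example`, you would enter `com.example.MyFlinkJob` in the Entry Point Class field.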

Entry Point Main Arguments

You can pass parameters here and read them inside the main method. The total parameter length must not exceed 1024 characters, and we recommend against passing complex parameters, that is, parameters that include line breaks, spaces, or other special characters. If you need to pass complex parameters, transfer them through an additional dependency file instead.
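A minimal sketch of reading such arguments inside `main`, assuming the common `--key value` convention (class and parameter names here are illustrative; Flink also ships `org.apache.flink.api.java.utils.ParameterTool` for the same purpose):

```java
import java.util.HashMap;
import java.util.Map;

public class MainArgs {
    // Console limit on the total length of Entry Point Main Arguments.
    static final int MAX_ARGS_LENGTH = 1024;

    // Parses "--key value" pairs from the argument array into a map.
    static Map<String, String> parse(String[] args) {
        Map<String, String> params = new HashMap<>();
        for (int i = 0; i + 1 < args.length; i += 2) {
            if (args[i].startsWith("--")) {
                params.put(args[i].substring(2), args[i + 1]);
            }
        }
        return params;
    }

    public static void main(String[] args) {
        System.out.println(parse(args));
    }
}
```

For example, entering `--input s3://bucket/in --parallelism 4` in the field would yield a map with keys `input` and `parallelism`.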

Additional Dependencies

You can specify additional dependency files in one of the following ways:

(1) (Recommended) Select a dependency file that you have already uploaded. You must upload the file on the Artifacts page or in the S3 console in advance.

(2) Enter the S3 path of the dependency file. You must upload the file in advance to the S3 bucket that corresponds to the current instance, that is, the bucket you selected when you activated Flink Full Hosting.

(3) Enter the URL of the dependency file. Only URLs that end with the file name are supported, such as s3://xxxxxx/file. You must upload the file in advance to a publicly accessible HTTP service.

Note: Only Per-Job clusters support additional dependency files; Session clusters do not.
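When a dependency file is used to carry complex parameters (as recommended above for values with line breaks or spaces), the job can read them back at runtime. A minimal sketch, assuming one parameter per line and a file path supplied to the job (class name and file format are illustrative, not a Ververica convention):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;

public class DependencyParams {
    // Reads one parameter per line from a dependency file, skipping blank
    // lines and "#" comments. Lines may freely contain spaces or other
    // characters that would be awkward in Entry Point Main Arguments.
    static List<String> readParams(Path file) throws IOException {
        return Files.readAllLines(file).stream()
                .map(String::trim)
                .filter(line -> !line.isEmpty() && !line.startsWith("#"))
                .collect(Collectors.toList());
    }
}
```

The exact path at which an additional dependency file is visible to the running job is deployment-specific, so it is simplest to pass that path to the job as a plain argument.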

Deploy to the Session cluster

If you select Submit to Session cluster, select the target Session cluster from the drop-down list below. For more information about how to create a Session cluster, see the Session Clusters section.

  3. Click Deploy.

When the deployment is complete, start the job on the Deployments page. For more information about how to start a job, see the Start Deployment section.