🧰 Upload from API/S3
This technical guide provides instructions for uploading data to VaultNode. It covers the technical background and step-by-step procedures for transferring data from AWS S3-compatible tools to the Filecoin network through VaultNode. Use this guide to start uploading your files to our storage system right away!
Link: https://s3api.vaultnode.co
Data Backup and Recovery: Covers the needs of cloud service providers and enterprises, providing data backup, recovery, and disaster recovery solutions.
Regulatory and AI Data Management: Ensures regulatory compliance in data storage, and offers efficient data backup for AI training.
Big Data and Autonomous Driving: Facilitates data security for big data scenarios and autonomous driving data storage.
Healthcare and Security Data: Offers reliable storage for medical imaging and surveillance data.
User Data and Research: Handles user behavior/transaction data and scientific research data.
Document Management and IoT: Supports digital archive management and IoT device data applications.
The VaultNode Advantage: Offers secure, reliable, and cost-effective data storage across all these scenarios to drive business innovation and digital transformation.
File Encryption: The client first encrypts the file to ensure data security. This step is performed by the client to protect the data from unauthorized access during upload and storage.
File Upload: The client uploads the encrypted file to Vaultnode object storage via an S3-compatible API, using any S3-compatible tool such as the AWS CLI or rclone.
File Management: After uploading, the client can view, manage, and operate on files in Vaultnode object storage using S3-compatible management tools (such as rclone).
File Encapsulation: After receiving the file, the platform dispatches an encapsulation task that seals the data onto a designated node. This step is completed automatically by the platform; the client does not need to do anything.
Data On-Chain: Once encapsulation is complete, the data is stored on the Filecoin network. Clients can view the corresponding on-chain information through the node, including the data's location, status, and so on.
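The client-side encryption and upload steps above can be sketched locally. This is a minimal example assuming openssl is available; the file names, passphrase handling, and the bucket "mybucket" are illustrative, and the upload command is shown commented out since it requires configured credentials:

```shell
# Create a sample file to protect (illustrative).
printf 'sensitive contents\n' > today.txt

# 1. Encrypt locally before the file leaves your machine:
#    AES-256-CBC with PBKDF2 key derivation. Keep the passphrase file
#    safe; Vaultnode only ever sees the ciphertext.
openssl rand -base64 32 > secret.pass
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in today.txt -out today.txt.enc -pass file:secret.pass

# 2. Upload the encrypted file via the S3-compatible API
#    (uncomment once your credentials are configured with `aws configure`):
# aws --endpoint-url https://s3api.vaultnode.co s3 cp today.txt.enc s3://mybucket/

# After downloading, decrypt with the same passphrase to recover the plaintext:
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in today.txt.enc -out today.decrypted.txt -pass file:secret.pass
```

Any symmetric cipher and key-management scheme can be substituted here; the essential point is that encryption happens before the upload call.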
This section describes Vaultnode API operations in detail and provides sample requests, responses, and errors for the S3 client tools.
Vaultnode supports all clients that speak the standard S3 protocol (AWS CLI, rclone, Cyberduck, S3cmd, CloudBerry Explorer, DragonDisk, s3fs). The sections below use the AWS CLI as the example for configuration and usage.
How to Use the AWS CLI to Interact with the Vaultnode S3-Compatible API
For more information on the AWS CLI, see the AWS CLI Help.
Download and install the AWS CLI tools.
Get an Access Key ID and Secret Access Key (provided separately).
Use the Vaultnode endpoint URL: https://s3api.vaultnode.co
To request credentials, contact our engineers by submitting an email application. Once your use of Vaultnode is authorized, you will receive:
Access Key ID: <Access Key ID>
Secret Access Key: <Secret Access Key>
Vaultnode endpoint URL: https://s3api.vaultnode.co
aws configure
The command will generate a series of prompts; fill in the following:
AWS Access Key ID [W76I]: <Access Key ID>
AWS Secret Access Key [G8Ms]: <Secret Access Key>
Default region name [us-east-1]:
Default output format [None]:
To create a new bucket on Vaultnode using the AWS CLI, use the following command:
aws --endpoint-url https://s3api.vaultnode.co s3 mb s3://<bucket-name>
For example, to create a new bucket named "mybucket":
aws --endpoint-url https://s3api.vaultnode.co s3 mb s3://mybucket
Bucket names must be unique across all Vaultnode users, be between 3 and 63 characters long, and may contain only lowercase letters, numbers, and dashes. The terminal should return the following line:
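The naming rule can be checked locally before calling mb. This is a small sketch using grep; the additional requirement that names start and end with a letter or digit follows the standard S3 convention and is an assumption here:

```shell
# Returns success (exit 0) if the name is 3-63 characters of lowercase
# letters, digits, and dashes. Requiring the first and last character to
# be a letter or digit follows standard S3 conventions (an assumption).
valid_bucket_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$'
}

valid_bucket_name "mybucket"  && echo "mybucket: ok"
valid_bucket_name "My_Bucket" || echo "My_Bucket: invalid (uppercase/underscore)"
valid_bucket_name "ab"        || echo "ab: invalid (too short)"
```

Uniqueness across all Vaultnode users can only be confirmed server-side; the mb command will fail if the name is already taken.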
make_bucket: mybucket
The following command will list all buckets in your Vaultnode account:
aws --endpoint-url https://s3api.vaultnode.co s3 ls
The terminal should return all currently authorized buckets:
2023-05-02 15:36:38 vault
To upload a single file, use the following command:
aws --endpoint-url https://s3api.vaultnode.co s3 cp [filename] s3://[bucket-name]
For example, to upload a file named "today.txt" to the bucket "mybucket":
aws --endpoint-url https://s3api.vaultnode.co s3 cp ~/docs/today.txt s3://mybucket
Verify that the file was uploaded by listing the contents of the bucket with the command used earlier:
aws --endpoint-url https://s3api.vaultnode.co s3 ls s3://mybucket
To upload multiple objects, use the following command:
aws --endpoint-url https://s3api.vaultnode.co s3 sync [folder-name] s3://[bucket-name]
For example, to upload the contents of a folder named "docs", use the following command:
aws --endpoint-url https://s3api.vaultnode.co s3 sync ~/docs/ s3://mybucket/docs/
To list the files in a bucket, use the following command:
aws --endpoint-url https://s3api.vaultnode.co s3 ls s3://[bucket-name]
For example, to list the files in "mybucket":
aws --endpoint-url https://s3api.vaultnode.co s3 ls s3://mybucket
To move a file from one bucket to another, use the following command:
aws --endpoint-url https://s3api.vaultnode.co s3 mv s3://[bucket-name1]/[file-name] s3://[bucket-name2]/
To copy a file from one bucket to another, use the following command:
aws --endpoint-url https://s3api.vaultnode.co s3 cp s3://[bucket-name1]/[file-name] s3://[bucket-name2]/
To download a single file, use the following command:
aws --endpoint-url https://s3api.vaultnode.co s3 cp s3://[bucket-name]/[file-name] /path/to/download/filename
For example, to download a file named "today.txt" from the bucket "mybucket":
aws --endpoint-url https://s3api.vaultnode.co s3 cp s3://mybucket/docs/today.txt ~/docs/today.txt
To download a folder, use the following command:
aws --endpoint-url https://s3api.vaultnode.co s3 cp --recursive s3://[bucket-name]/[folder-name] /path/to/download/folder
For example, to download the contents of the folder "docs" from "mybucket", use the following command:
aws --endpoint-url https://s3api.vaultnode.co s3 cp --recursive s3://mybucket/docs ~/docs
To delete a file, use the following command:
aws --endpoint-url https://s3api.vaultnode.co s3 rm s3://[bucket_name]/[file_name]
To delete all files in a bucket, use the following command:
aws --endpoint-url https://s3api.vaultnode.co s3 rm --recursive s3://[bucket_name]/
For example, to delete all files from the bucket "mybucket":
aws --endpoint-url https://s3api.vaultnode.co s3 rm --recursive s3://mybucket/
To create a presigned URL using the AWS CLI, use the following command syntax:
aws s3 --endpoint-url https://s3api.vaultnode.co presign s3://mybucket/file.name
This command returns a presigned URL. By default the URL expires after one hour; you can specify a different expiration time, in seconds, with the --expires-in flag. For example, to create a URL valid for two hours:
aws s3 --endpoint-url https://s3api.vaultnode.co presign s3://mybucket/file.name --expires-in 7200
The Vaultnode S3 Gateway supports an API that is compatible with the basic data access model of the Amazon S3 API.
2.16.1. API Endpoint
The Vaultnode S3-Compatible API endpoint URL is: https://s3api.vaultnode.co
2.16.2. Authentication
The Vaultnode S3-Compatible API supports the AWS v4 signature (AWS4-HMAC-SHA256) for authentication and also supports the AWS v2 signature; the v4 signature is recommended.
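S3 clients compute v4 signatures automatically, but for reference the v4 signing-key derivation (a chain of HMAC-SHA256 operations over the date, region, service, and the literal string "aws4_request") can be sketched with openssl. The secret key below is the well-known example key from the AWS documentation, and the date, region, and service values are illustrative:

```shell
secret='wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY'   # AWS docs example key

# HMAC-SHA256 of a message, keyed by a hex-encoded key, printed as hex.
hmac_hex() {  # usage: hmac_hex <hexkey> <message>
  printf '%s' "$2" | openssl dgst -sha256 -mac HMAC -macopt hexkey:"$1" | awk '{print $NF}'
}

# The first step keys off the raw string "AWS4" + secret, so it uses key:
# rather than hexkey:.
k_date=$(printf '%s' '20150830' | openssl dgst -sha256 -mac HMAC -macopt key:"AWS4$secret" | awk '{print $NF}')
k_region=$(hmac_hex "$k_date" 'us-east-1')
k_service=$(hmac_hex "$k_region" 's3')
k_signing=$(hmac_hex "$k_service" 'aws4_request')

echo "signing key: $k_signing"   # 64 hex characters; used to sign the string-to-sign
```

The resulting signing key is scoped to one date, region, and service, which is why v4 is preferred over the older v2 scheme.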
2.16.3. HTTPS Protocol
Vaultnode maintains strict HTTPS-only standards: objects and API calls are served only over HTTPS, on the standard port 443. There is currently no way to disable this; requests sent over HTTP are redirected to HTTPS.
2.16.4. Rate Limit
Vaultnode S3-Compatible API has an effective rate limit of 100 RPS (requests per second).
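When scripting many small requests, a simple client-side throttle helps stay under the limit. This sketch caps the rate at roughly 50 requests per second, leaving headroom; the upload function body is a placeholder to replace with your real aws command:

```shell
# Throttle to ~50 requests/second (half the 100 RPS limit, leaving
# headroom for other clients sharing the same credentials).
upload_one() {
  # Placeholder: substitute the real call, e.g.
  # aws --endpoint-url https://s3api.vaultnode.co s3 cp "$1" s3://mybucket/
  echo "would upload: $1"
}

for f in file1.txt file2.txt file3.txt; do
  upload_one "$f"
  sleep 0.02   # 1/50 s pause between requests
done
```

Bulk commands such as sync batch their own requests, so a throttle like this is mainly useful for hand-rolled loops over many objects.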
2.16.5. Supported API Methods
When a response payload is present, all responses are returned using UTF-8 encoded XML.
| Method Name | Method Description |
| --- | --- |
| CreateBucket | Creates a new bucket. |
| ListBuckets | Returns a list of all buckets owned by the authenticated sender of the request. |
| DeleteBucket | Deletes the bucket. All objects in the bucket must be deleted before the bucket itself can be deleted. |
| ListObjects | Returns some or all (up to 1,000) of the objects in a bucket. We recommend that you use the newer version, ListObjectsV2. |
| ListObjectsV2 | Returns some or all (up to 1,000) of the objects in a bucket. |
| GetObject | Retrieves objects from Vaultnode. To use GET, you must have READ access to the object. |
| HeadObject | Retrieves metadata from an object without returning the object itself. |
| PutObject | Adds an object to a bucket. You must have WRITE permissions on a bucket to add an object to it. |
| CopyObject | Creates a copy of an object that is already stored in Vaultnode. |
| DeleteObject | Removes an object from a bucket. |
| DeleteObjects | Deletes multiple objects from a bucket in a single request. |
| AbortMultipartUpload | Aborts a multipart upload. After a multipart upload is aborted, no additional parts can be uploaded using that upload ID. |
| CreateMultipartUpload | Initiates a multipart upload and returns an upload ID, which associates all of the parts in that multipart upload. |
| CompleteMultipartUpload | Completes a multipart upload by assembling previously uploaded parts. |
| UploadPart | Uploads a part in a multipart upload. |
| UploadPartCopy | Uploads a part by copying data from an existing object as the data source. |
| GetSignedUrl | Supports presigned URLs for downloading and uploading objects. |
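The multipart methods above (create, upload parts, complete) are normally driven automatically by S3 clients for large files, but the part-splitting they rely on can be illustrated locally with split. Real S3 multipart parts must generally be at least 5 MB except the last; the 1 KB parts and file names here are purely illustrative:

```shell
# Create a sample object and split it into parts, as a multipart
# upload would (tiny 1 KB parts, for illustration only).
head -c 3500 /dev/urandom > bigfile.bin
split -b 1024 bigfile.bin part.

ls part.*   # one file per UploadPart call

# CompleteMultipartUpload stitches the parts back together in
# part-number order; locally that is just concatenation:
cat part.* > reassembled.bin
cmp bigfile.bin reassembled.bin && echo "parts reassemble to the original object"
```

If an upload is interrupted, AbortMultipartUpload discards any parts already stored so they do not linger in the bucket.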