The AWS SDK for JavaScript supports both the Node.js and browser environments.
Using the AWS SDK for JavaScript in Node.js
Ensure that Node.js v13 or later is installed, along with npm and npx.
Initialize a TypeScript project.
npm i typescript@5.2.2 ts-node@10.9.1 @types/node@20.8.4 --save-dev
npx tsc --init
Install the AWS SDK for JavaScript
npm i @aws-sdk/client-s3@3.427.0 @aws-sdk/client-sts@3.427.0 @aws-sdk/lib-storage@3.427.0 @aws-sdk/s3-request-presigner@3.427.0
For each of the code examples that follow, save the code to an index.ts file, and then run
npx ts-node index.ts
Using the AWS SDK for JavaScript in the browser
The list of browsers supported by the AWS SDK for JavaScript can be found in the official documentation.
Initialize a webpack project.
npm i webpack@5.89.0 webpack-cli@5.1.4 @webpack-cli/generators@3.0.7 path-browserify@1.0.1 --save-dev
npx webpack init
Set up the tsconfig.json file, ensuring that the following options are set:
{
"compilerOptions": {
"module": "NodeNext",
"moduleResolution": "NodeNext"
}
}
Install the AWS SDK for JavaScript
npm i @aws-sdk/client-s3@3.427.0 @aws-sdk/client-sts@3.427.0 @aws-sdk/lib-storage@3.427.0 @aws-sdk/s3-request-presigner@3.427.0
Execute webpack
npm run build
Start the webpack dev server
npx webpack serve
This command automatically opens a browser, loads the bundled src/index.ts file, and executes the code.
Note: when accessing the S3 interface from a browser, you may need to adjust the bucket's cross-origin (CORS) configuration.
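If you manage the bucket, its CORS rules can also be set through the SDK. Below is a minimal sketch using PutBucketCorsCommand, assuming the service accepts the standard S3 CORS API; the allowed origin is a placeholder.
import { S3Client, PutBucketCorsCommand } from "@aws-sdk/client-s3";
const s3 = new S3Client({
  region: "ap-southeast-2", // Asia Pacific (Hanoi) RegionID
  endpoint: "https://mos.ap-southeast-2.sufybkt.com", // Asia Pacific (Hanoi) Endpoint
  credentials: {
    accessKeyId: "<AccessKey>",
    secretAccessKey: "<SecretKey>",
  },
});
// Allow browser pages served from the placeholder origin to GET and PUT objects.
s3.send(
  new PutBucketCorsCommand({
    Bucket: "<Bucket>",
    CORSConfiguration: {
      CORSRules: [
        {
          AllowedOrigins: ["https://example.com"], // placeholder origin
          AllowedMethods: ["GET", "PUT"],
          AllowedHeaders: ["*"],
          ExposeHeaders: ["ETag"],
          MaxAgeSeconds: 3600,
        },
      ],
    },
  })
)
  .then((data) => console.log(data))
  .catch((err) => console.error(err));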
UploadObject
Client-side upload
Create index.ts
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
const s3 = new S3Client({
region: "ap-southeast-2", // Asia Pacific (Hanoi) RegionID
endpoint: "https://mos.ap-southeast-2.sufybkt.com", // Asia Pacific (Hanoi) Endpoint
credentials: {
accessKeyId: "<AccessKey>",
secretAccessKey: "<SecretKey>",
},
});
getSignedUrl(s3, new PutObjectCommand({ Bucket: "<Bucket>", Key: "<Key>" }))
.then((data) => {
console.log(data);
})
.catch((err) => {
console.error(err);
});
This code generates a pre-signed client-side upload URL, valid for 900 seconds by default, which the client can use to send a PUT request and upload a file within the expiration time.
The following is an example of uploading a file using curl:
curl -X PUT --upload-file "<path/to/file>" "<presigned url>"
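The same PUT request can also be sent from code. Here is a minimal sketch using the global fetch API (available in browsers and Node.js 18+); the URL and file path are placeholders to be filled in from the steps above.
import * as fs from "fs";
// <presigned url> is the URL printed by the presigning example above.
const presignedUrl = "<presigned url>";
const body = fs.readFileSync("<path/to/file>");
fetch(presignedUrl, { method: "PUT", body })
  .then((resp) => console.log(resp.status, resp.headers.get("etag")))
  .catch((err) => console.error(err));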
You can also specify how long the presigned upload URL remains valid, for example:
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
const s3 = new S3Client({
region: "ap-southeast-2", // Asia Pacific (Hanoi) RegionID
endpoint: "https://mos.ap-southeast-2.sufybkt.com", // Asia Pacific (Hanoi) Endpoint
credentials: {
accessKeyId: "<AccessKey>",
secretAccessKey: "<SecretKey>",
},
});
getSignedUrl(s3, new PutObjectCommand({ Bucket: "<Bucket>", Key: "<Key>" }), {
expiresIn: 3600,
})
.then((data) => {
console.log(data);
})
.catch((err) => {
console.error(err);
});
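If the upload should be restricted to a specific content type, the type can be included when presigning; the uploader must then send a matching Content-Type header. This is a sketch assuming the presigner signs Content-Type the way the AWS presigner does, and image/png is a placeholder:
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
const s3 = new S3Client({
  region: "ap-southeast-2", // Asia Pacific (Hanoi) RegionID
  endpoint: "https://mos.ap-southeast-2.sufybkt.com", // Asia Pacific (Hanoi) Endpoint
  credentials: {
    accessKeyId: "<AccessKey>",
    secretAccessKey: "<SecretKey>",
  },
});
getSignedUrl(
  s3,
  new PutObjectCommand({
    Bucket: "<Bucket>",
    Key: "<Key>",
    ContentType: "image/png", // placeholder; the client must send this exact Content-Type
  }),
  { expiresIn: 3600 }
)
  .then((data) => console.log(data))
  .catch((err) => console.error(err));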
Server-side upload
PutObject(file)
This code example is not applicable to the browser scenario.
Create index.ts
import * as fs from "fs";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
const s3 = new S3Client({
region: "ap-southeast-2", // Asia Pacific (Hanoi) RegionID
endpoint: "https://mos.ap-southeast-2.sufybkt.com", // Asia Pacific (Hanoi) Endpoint
credentials: {
accessKeyId: "<AccessKey>",
secretAccessKey: "<SecretKey>",
},
});
const fileStream = fs.createReadStream("<path/to/upload>");
fileStream.on("error", (err) => console.error(err));
s3.send(
new PutObjectCommand({ Bucket: "<Bucket>", Key: "<Key>", Body: fileStream })
)
.then((data) => console.log(data))
.catch((err) => console.error(err));
PutObject(stream)
Create index.ts
import { Readable } from "stream";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
const s3 = new S3Client({
region: "ap-southeast-2", // Asia Pacific (Hanoi) RegionID
endpoint: "https://mos.ap-southeast-2.sufybkt.com", // Asia Pacific (Hanoi) Endpoint
credentials: {
accessKeyId: "<AccessKey>",
secretAccessKey: "<SecretKey>",
},
});
s3.send(
new PutObjectCommand({
Bucket: "<Bucket>",
Key: "<Key>",
Body: Readable.from("Hello, SUFY S3!"),
})
)
.then((data) => console.log(data))
.catch((err) => console.error(err));
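PutObject needs to know the request's Content-Length. For a stream whose size the SDK cannot infer, the request may be rejected; in that case you can declare the size explicitly, as in the following sketch, or use the lib-storage Upload class shown later in this section.
import { Readable } from "stream";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
const s3 = new S3Client({
  region: "ap-southeast-2", // Asia Pacific (Hanoi) RegionID
  endpoint: "https://mos.ap-southeast-2.sufybkt.com", // Asia Pacific (Hanoi) Endpoint
  credentials: {
    accessKeyId: "<AccessKey>",
    secretAccessKey: "<SecretKey>",
  },
});
const payload = "Hello, SUFY S3!";
s3.send(
  new PutObjectCommand({
    Bucket: "<Bucket>",
    Key: "<Key>",
    Body: Readable.from(payload),
    ContentLength: Buffer.byteLength(payload), // declare the stream's size up front
  })
)
  .then((data) => console.log(data))
  .catch((err) => console.error(err));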
MultipartUpload(file)
This code example is not applicable to the browser scenario.
Create index.ts
import * as fs from "fs";
import { StreamingBlobPayloadInputTypes } from "@smithy/types";
import {
S3Client,
CreateMultipartUploadCommand,
CreateMultipartUploadCommandOutput,
UploadPartCommand,
UploadPartCommandOutput,
CompletedPart,
CompleteMultipartUploadCommand,
CompleteMultipartUploadCommandOutput,
} from "@aws-sdk/client-s3";
async function createMultipartUpload(
s3: S3Client,
bucket: string,
key: string
): Promise<CreateMultipartUploadCommandOutput> {
return s3.send(
new CreateMultipartUploadCommand({
Bucket: bucket,
Key: key,
})
);
}
async function uploadPart(
s3: S3Client,
bucket: string,
key: string,
uploadId: string,
partNumber: number,
body: StreamingBlobPayloadInputTypes,
contentLength: number
): Promise<UploadPartCommandOutput> {
return s3.send(
new UploadPartCommand({
Bucket: bucket,
Key: key,
UploadId: uploadId,
PartNumber: partNumber,
Body: body,
ContentLength: contentLength,
})
);
}
async function completeMultipartUpload(
s3: S3Client,
bucket: string,
key: string,
uploadId: string,
parts: CompletedPart[]
): Promise<CompleteMultipartUploadCommandOutput> {
const cmd = new CompleteMultipartUploadCommand({
Bucket: bucket,
Key: key,
UploadId: uploadId,
MultipartUpload: {
Parts: parts,
},
});
return s3.send(cmd);
}
async function uploadParts(
s3: S3Client,
bucket: string,
key: string,
uploadId: string,
filePath: string | Buffer | URL
): Promise<CompletedPart[]> {
const PART_SIZE = 5 * 1024 * 1024; // part size is 5 MB
const { size: fileSize } = await fs.promises.stat(filePath);
const parts: CompletedPart[] = [];
// This example uploads the parts serially. To improve upload speed you can upload the parts in parallel; a sketch of that follows this example.
for (
let offset = 0, partNum = 1;
offset < fileSize;
offset += PART_SIZE, partNum++
) {
const options = {
start: offset,
end: Math.min(offset + PART_SIZE, fileSize) - 1,
};
const uploadPartCommandOutput = await uploadPart(
s3,
bucket,
key,
uploadId,
partNum,
fs.createReadStream(filePath, options),
options.end + 1 - options.start
);
parts.push({ PartNumber: partNum, ETag: uploadPartCommandOutput.ETag });
}
return parts;
}
async function uploadFile(
s3: S3Client,
bucket: string,
key: string,
filePath: string | Buffer | URL
): Promise<CompleteMultipartUploadCommandOutput> {
const createMultipartUploadCommandOutput = await createMultipartUpload(
s3,
bucket,
key
);
const completedParts = await uploadParts(
s3,
bucket,
key,
createMultipartUploadCommandOutput.UploadId!,
filePath
);
return await completeMultipartUpload(
s3,
bucket,
key,
createMultipartUploadCommandOutput.UploadId!,
completedParts
);
}
const s3 = new S3Client({
region: "ap-southeast-2", // Asia Pacific (Hanoi) RegionID
endpoint: "https://mos.ap-southeast-2.sufybkt.com", // Asia Pacific (Hanoi) Endpoint
credentials: {
accessKeyId: "<AccessKey>",
secretAccessKey: "<SecretKey>",
},
});
uploadFile(s3, "<Bucket>", "<Key>", "<path/to/upload>")
.then((data) => console.log(data))
.catch((err) => console.error(err));
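As noted in the comment inside uploadParts, the parts above are uploaded serially. The following sketch shows one way to parallelize: start every part upload at once and collect the results with Promise.all. It reuses the uploadPart helper and the imports from the example above; a production version would likely also cap the number of concurrent uploads.
async function uploadPartsParallel(
  s3: S3Client,
  bucket: string,
  key: string,
  uploadId: string,
  filePath: string
): Promise<CompletedPart[]> {
  const PART_SIZE = 5 * 1024 * 1024; // part size is 5 MB
  const { size: fileSize } = await fs.promises.stat(filePath);
  const tasks: Promise<CompletedPart>[] = [];
  for (
    let offset = 0, partNum = 1;
    offset < fileSize;
    offset += PART_SIZE, partNum++
  ) {
    const end = Math.min(offset + PART_SIZE, fileSize) - 1;
    const num = partNum;
    // Each part upload starts immediately; nothing is awaited inside the loop.
    tasks.push(
      uploadPart(
        s3,
        bucket,
        key,
        uploadId,
        num,
        fs.createReadStream(filePath, { start: offset, end }),
        end + 1 - offset
      ).then((out) => ({ PartNumber: num, ETag: out.ETag }))
    );
  }
  // The array is already ordered by PartNumber, as CompleteMultipartUpload requires.
  return Promise.all(tasks);
}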
Upload(lib-storage)
Create index.ts
import { Upload } from "@aws-sdk/lib-storage";
import { S3Client, CompleteMultipartUploadCommandOutput } from "@aws-sdk/client-s3";
import * as fs from "fs";
const s3 = new S3Client({
region: "ap-southeast-2", // Asia Pacific (Hanoi) RegionID
endpoint: "https://mos.ap-southeast-2.sufybkt.com", // Asia Pacific (Hanoi) Endpoint
credentials: {
accessKeyId: "<AccessKey>",
secretAccessKey: "<SecretKey>",
},
});
const upload = new Upload({
client: s3,
params: {
Bucket: "<Bucket>",
Key: "<Key>",
Body: fs.createReadStream("<path/to/upload>"),
},
});
upload
.done()
.then((resp) => {
if ((resp as CompleteMultipartUploadCommandOutput).ETag) {
console.log("ETag:", (resp as CompleteMultipartUploadCommandOutput).ETag);
} else {
console.log("Aborted");
}
})
.catch((err) => {
console.error("Error:", err);
});
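Upload also emits progress events, which is useful for driving a progress bar. A small sketch; attach the listener to the upload object from the example above before awaiting done():
upload.on("httpUploadProgress", (progress) => {
  // progress carries the loaded/total byte counters and the current part number
  console.log(`uploaded ${progress.loaded} of ${progress.total} bytes`);
});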
GetObject
Client-side get object
Create index.ts
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
const s3 = new S3Client({
region: "ap-southeast-2", // Asia Pacific (Hanoi) RegionID
endpoint: "https://mos.ap-southeast-2.sufybkt.com", // Asia Pacific (Hanoi) Endpoint
credentials: {
accessKeyId: "<AccessKey>",
secretAccessKey: "<SecretKey>",
},
});
getSignedUrl(s3, new GetObjectCommand({ Bucket: "<Bucket>", Key: "<Key>" }))
.then((data) => {
console.log(data);
})
.catch((err) => {
console.error(err);
});
This code generates a pre-signed client-side download URL, valid for 900 seconds by default, which the client can use to send a GET request and download the file within the expiration time.
The following is an example of downloading a file using curl:
curl -o "<path/to/download>" "<presigned url>"
You can also specify how long the presigned download URL remains valid, for example:
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
const s3 = new S3Client({
region: "ap-southeast-2", // Asia Pacific (Hanoi) RegionID
endpoint: "https://mos.ap-southeast-2.sufybkt.com", // Asia Pacific (Hanoi) Endpoint
credentials: {
accessKeyId: "<AccessKey>",
secretAccessKey: "<SecretKey>",
},
});
getSignedUrl(s3, new GetObjectCommand({ Bucket: "<Bucket>", Key: "<Key>" }), {
expiresIn: 3600,
})
.then((data) => {
console.log(data);
})
.catch((err) => {
console.error(err);
});
Server-side get object
This code example is not applicable to the browser scenario.
Create index.ts
import * as fs from "fs";
import {
S3Client,
GetObjectCommand,
GetObjectCommandOutput,
} from "@aws-sdk/client-s3";
import { Writable, Readable } from "stream";
async function getObject(
s3: S3Client,
bucket: string,
key: string,
writable: Writable
): Promise<GetObjectCommandOutput> {
const getObjectCommandOutput = await s3.send(
new GetObjectCommand({
Bucket: bucket,
Key: key,
})
);
if (getObjectCommandOutput.Body) {
(getObjectCommandOutput.Body as Readable).pipe(writable);
}
return getObjectCommandOutput;
}
const s3 = new S3Client({
region: "ap-southeast-2", // Asia Pacific (Hanoi) RegionID
endpoint: "https://mos.ap-southeast-2.sufybkt.com", // Asia Pacific (Hanoi) Endpoint
credentials: {
accessKeyId: "<AccessKey>",
secretAccessKey: "<SecretKey>",
},
});
getObject(s3, "<Bucket>", "<Key>", fs.createWriteStream("<path/to/download>"))
.then((data) => console.log(data))
.catch((err) => console.error(err));
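Note that pipe() returns before the data has finished flushing to the destination. If the caller needs to wait until the download is fully written, the helper can await Node's stream/promises pipeline instead; a sketch of that variant:
import { pipeline } from "stream/promises";
import { Readable, Writable } from "stream";
import {
  S3Client,
  GetObjectCommand,
  GetObjectCommandOutput,
} from "@aws-sdk/client-s3";
async function getObjectAwaited(
  s3: S3Client,
  bucket: string,
  key: string,
  writable: Writable
): Promise<GetObjectCommandOutput> {
  const output = await s3.send(
    new GetObjectCommand({ Bucket: bucket, Key: key })
  );
  if (output.Body) {
    // Resolves only after every byte is written; rejects if either stream errors.
    await pipeline(output.Body as Readable, writable);
  }
  return output;
}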
ObjectOperations
HeadObject
Create index.ts
import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3";
const s3 = new S3Client({
region: "ap-southeast-2", // Asia Pacific (Hanoi) RegionID
endpoint: "https://mos.ap-southeast-2.sufybkt.com", // Asia Pacific (Hanoi) Endpoint
credentials: {
accessKeyId: "<AccessKey>",
secretAccessKey: "<SecretKey>",
},
});
s3.send(
new HeadObjectCommand({
Bucket: "<Bucket>",
Key: "<Key>",
})
)
.then((data) => console.log(data))
.catch((err) => console.error(err));
ChangeStorageClass
Create index.ts
import { S3Client, CopyObjectCommand } from "@aws-sdk/client-s3";
const s3 = new S3Client({
region: "ap-southeast-2", // Asia Pacific (Hanoi) RegionID
endpoint: "https://mos.ap-southeast-2.sufybkt.com", // Asia Pacific (Hanoi) Endpoint
credentials: {
accessKeyId: "<AccessKey>",
secretAccessKey: "<SecretKey>",
},
});
s3.send(
new CopyObjectCommand({
Bucket: "<Bucket>",
Key: "<Key>",
CopySource: "/<Bucket>/<Key>",
StorageClass: "GLACIER",
MetadataDirective: "REPLACE",
})
)
.then((data) => console.log(data))
.catch((err) => console.error(err));
CopyObject
Create index.ts
import { S3Client, CopyObjectCommand } from "@aws-sdk/client-s3";
const s3 = new S3Client({
region: "ap-southeast-2", // Asia Pacific (Hanoi) RegionID
endpoint: "https://mos.ap-southeast-2.sufybkt.com", // Asia Pacific (Hanoi) Endpoint
credentials: {
accessKeyId: "<AccessKey>",
secretAccessKey: "<SecretKey>",
},
});
s3.send(
new CopyObjectCommand({
Bucket: "<ToBucket>",
Key: "<ToKey>",
CopySource: "/<FromBucket>/<FromKey>",
MetadataDirective: "COPY",
})
)
.then((data) => console.log(data))
.catch((err) => console.error(err));
CopyObject(>5GB)
Create index.ts
import {
S3Client,
CreateMultipartUploadCommand,
CreateMultipartUploadCommandOutput,
UploadPartCopyCommand,
UploadPartCopyCommandOutput,
CompletedPart,
CompleteMultipartUploadCommand,
CompleteMultipartUploadCommandOutput,
HeadObjectCommand,
HeadObjectCommandOutput,
} from "@aws-sdk/client-s3";
async function createMultipartUpload(
s3: S3Client,
bucket: string,
key: string
): Promise<CreateMultipartUploadCommandOutput> {
return s3.send(
new CreateMultipartUploadCommand({
Bucket: bucket,
Key: key,
})
);
}
async function uploadPartCopy(
s3: S3Client,
fromBucket: string,
fromKey: string,
toBucket: string,
toKey: string,
uploadId: string,
partNumber: number,
from: number,
end: number
): Promise<UploadPartCopyCommandOutput> {
return s3.send(
new UploadPartCopyCommand({
Bucket: toBucket,
Key: toKey,
UploadId: uploadId,
PartNumber: partNumber,
CopySource: "/" + fromBucket + "/" + fromKey,
CopySourceRange: "bytes=" + from + "-" + (end - 1),
})
);
}
async function completeMultipartUpload(
s3: S3Client,
bucket: string,
key: string,
uploadId: string,
parts: CompletedPart[]
): Promise<CompleteMultipartUploadCommandOutput> {
const cmd = new CompleteMultipartUploadCommand({
Bucket: bucket,
Key: key,
UploadId: uploadId,
MultipartUpload: {
Parts: parts,
},
});
return s3.send(cmd);
}
async function uploadPartsCopy(
s3: S3Client,
fromBucket: string,
fromKey: string,
toBucket: string,
toKey: string,
uploadId: string
): Promise<CompletedPart[]> {
const PART_SIZE = 5 * 1024 * 1024; // part size is 5 MB
const headObjectCommand = await s3.send(
new HeadObjectCommand({
Bucket: fromBucket,
Key: fromKey,
})
);
const contentLength: number = headObjectCommand.ContentLength!;
const parts: CompletedPart[] = [];
// The example given here is a serial multipart copy. You can modify it to perform a parallel multipart copy to further improve the copy speed.
for (
let offset = 0, partNum = 1;
offset < contentLength;
offset += PART_SIZE, partNum++
) {
const uploadPartCommandOutput = await uploadPartCopy(
s3,
fromBucket,
fromKey,
toBucket,
toKey,
uploadId,
partNum,
offset,
Math.min(offset + PART_SIZE, contentLength)
);
parts.push({
PartNumber: partNum,
ETag: uploadPartCommandOutput.CopyPartResult!.ETag,
});
}
return parts;
}
async function copyFile(
s3: S3Client,
fromBucket: string,
fromKey: string,
toBucket: string,
toKey: string
): Promise<CompleteMultipartUploadCommandOutput> {
const createMultipartUploadCommandOutput = await createMultipartUpload(
s3,
toBucket,
toKey
);
const completedParts = await uploadPartsCopy(
s3,
fromBucket,
fromKey,
toBucket,
toKey,
createMultipartUploadCommandOutput.UploadId!
);
return await completeMultipartUpload(
s3,
toBucket,
toKey,
createMultipartUploadCommandOutput.UploadId!,
completedParts
);
}
const s3 = new S3Client({
region: "ap-southeast-2", // Asia Pacific (Hanoi) RegionID
endpoint: "https://mos.ap-southeast-2.sufybkt.com", // Asia Pacific (Hanoi) Endpoint
credentials: {
accessKeyId: "<AccessKey>",
secretAccessKey: "<SecretKey>",
},
});
copyFile(s3, "<FromBucket>", "<FromKey>", "<ToBucket>", "<ToKey>")
.then((data) => console.log(data))
.catch((err) => console.error(err));
DeleteObject
Create index.ts
import { S3Client, DeleteObjectCommand } from "@aws-sdk/client-s3";
const s3 = new S3Client({
region: "ap-southeast-2", // Asia Pacific (Hanoi) RegionID
endpoint: "https://mos.ap-southeast-2.sufybkt.com", // Asia Pacific (Hanoi) Endpoint
credentials: {
accessKeyId: "<AccessKey>",
secretAccessKey: "<SecretKey>",
},
});
s3.send(
new DeleteObjectCommand({
Bucket: "<"Bucket>",
Key: "<Key>",
})
)
.then((data) => console.log(data))
.catch((err) => console.error(err));
ListObjects
Create index.ts
import { S3Client, ListObjectsCommand } from "@aws-sdk/client-s3";
const s3 = new S3Client({
region: "ap-southeast-2", // Asia Pacific (Hanoi) RegionID
endpoint: "https://mos.ap-southeast-2.sufybkt.com", // Asia Pacific (Hanoi) Endpoint
credentials: {
accessKeyId: "<AccessKey>",
secretAccessKey: "<SecretKey>",
},
});
s3.send(
new ListObjectsCommand({
Bucket: "<Bucket>",
Prefix: "<KeyPrefix>",
})
)
.then(({ Contents: contents }) => console.log(contents))
.catch((err) => console.error(err));
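ListObjects returns at most 1000 keys per request. To enumerate a larger bucket, keep requesting with Marker until IsTruncated is false. A sketch, assuming an S3Client configured as above; note that without a Delimiter the response may omit NextMarker, so the last key of the current page is used as the next marker:
import { S3Client, ListObjectsCommand, _Object } from "@aws-sdk/client-s3";
async function listAllObjects(
  s3: S3Client,
  bucket: string,
  prefix: string
): Promise<_Object[]> {
  const all: _Object[] = [];
  let marker: string | undefined;
  do {
    const page = await s3.send(
      new ListObjectsCommand({ Bucket: bucket, Prefix: prefix, Marker: marker })
    );
    const contents = page.Contents ?? [];
    all.push(...contents);
    marker = page.IsTruncated
      ? page.NextMarker ?? contents[contents.length - 1]?.Key
      : undefined;
  } while (marker);
  return all;
}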
DeleteObjects
Create index.ts
import { S3Client, DeleteObjectsCommand } from "@aws-sdk/client-s3";
const s3 = new S3Client({
region: "ap-southeast-2", // Asia Pacific (Hanoi) RegionID
endpoint: "https://mos.ap-southeast-2.sufybkt.com", // Asia Pacific (Hanoi) Endpoint
credentials: {
accessKeyId: "<AccessKey>",
secretAccessKey: "<SecretKey>",
},
});
s3.send(
new DeleteObjectsCommand({
Bucket: "<Bucket>",
Delete: {
Objects: [
{
Key: "<Key1>",
},
{
Key: "<Key2>",
},
{
Key: "<Key3>",
},
],
},
})
)
.then((data) => console.log(data))
.catch((err) => console.error(err));
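A single DeleteObjects request accepts at most 1000 keys. For larger deletions, split the key list into batches, as in this sketch; note that per-key failures come back in the Errors field rather than as a thrown error:
import { S3Client, DeleteObjectsCommand } from "@aws-sdk/client-s3";
async function deleteAllKeys(
  s3: S3Client,
  bucket: string,
  keys: string[]
): Promise<void> {
  const BATCH_SIZE = 1000; // DeleteObjects accepts at most 1000 keys per request
  for (let i = 0; i < keys.length; i += BATCH_SIZE) {
    const batch = keys.slice(i, i + BATCH_SIZE);
    const resp = await s3.send(
      new DeleteObjectsCommand({
        Bucket: bucket,
        Delete: { Objects: batch.map((key) => ({ Key: key })) },
      })
    );
    if (resp.Errors?.length) {
      console.error(resp.Errors); // keys that could not be deleted
    }
  }
}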