AWS has changed the dependency versions and naming conventions in Java SDK V2.
You can follow the AWS Developer Guide for more details.
Upasana | August 23, 2022 | 4 min read
A Spring Boot tutorial for AWS SDK V2 covering object-level operations on an S3 bucket. We will specifically cover the PutObject, GetObject and GetUrl operations on S3 objects using the AWS SDK V2 S3Client.
For AWS SDK V1.x, follow the other article on this website.
Configure Gradle Build
Creating Singleton Bean for S3 service client
Uploading file to S3 bucket
Downloading file from S3 bucket
S3Utilities to getUrl for an Object
The AWS SDK for Java V2 is a major rewrite of the V1 code base. It is built on Java 8+ and adds several frequently requested features, including support for non-blocking I/O and the ability to plug in a different HTTP implementation at run time.
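For example, the HTTP layer is chosen when the client is built. The sketch below swaps in the lightweight URLConnection-based client; it assumes the optional software.amazon.awssdk:url-connection-client module is on the classpath, which is not part of the dependencies used in this tutorial.
import software.amazon.awssdk.http.urlconnection.UrlConnectionHttpClient;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
public class HttpClientSwapExample {
    // Illustrative only: plug a different HTTP implementation into the service client builder
    public static S3Client lightweightClient() {
        return S3Client.builder()
                .region(Region.US_EAST_1) // example region
                .httpClientBuilder(UrlConnectionHttpClient.builder())
                .build();
    }
}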
To use the AWS SDK for Java in your Gradle project, import the SDK's Bill of Materials (BOM) in the dependencies block so that all SDK modules resolve to consistent versions.
group 'foo.bar'
version '1.0'
apply plugin: 'java'
sourceCompatibility = 1.8
repositories {
    mavenCentral()
}
dependencies {
    implementation platform('software.amazon.awssdk:bom:2.5.29')
    implementation 'software.amazon.awssdk:s3' (1)
}
(1) Gradle automatically resolves the correct version of your SDK dependencies using the information from the BOM, so there is no need to specify a version for the service client libraries.
To make requests to AWS, you first need to create a service client object (S3Client for example). AWS SDK V2 provides service client builders to facilitate creation of service clients.
AWS SDK V2 has changed the class naming convention and removed the AWS prefix from most classes. AmazonS3Client has been replaced with S3Client. When using Spring Boot, we can simply create a bean of S3Client and use it as and when required.
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
@Configuration
public class S3Config {
    // Property names are illustrative; align them with your own configuration
    @Value("${aws.region}")
    private String region;
    @Value("${aws.accessKey}")
    private String accessKey;
    @Value("${aws.secretKey}")
    private String secretKey;
    @Bean(destroyMethod = "close")
    public S3Client s3Client() {
        return S3Client.builder()
                .region(Region.of(region))
                .credentialsProvider(StaticCredentialsProvider.create(AwsBasicCredentials.create(accessKey, secretKey)))
                .build();
    }
}
Service clients in the SDK are thread-safe. For best performance, treat them as long-lived objects. Each client has its own connection pool resource that is released when the client is garbage collected. The clients in the AWS SDK for Java V2 implement the AutoCloseable interface, so as a best practice, explicitly close a client by calling its close method when it is no longer needed.
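Outside a Spring context (here the bean's destroyMethod takes care of it), try-with-resources is a simple way to do that; a minimal sketch, with the region picked purely for illustration:
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
public class ClientLifecycleExample {
    public static void main(String[] args) {
        // S3Client implements AutoCloseable, so try-with-resources releases its connection pool on exit
        try (S3Client s3 = S3Client.builder().region(Region.US_EAST_1).build()) {
            // use the client here, e.g. s3.listBuckets()
        }
    }
}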
Now that the service client bean is ready, we can inject it into a service and start uploading objects to the S3 bucket with a specified key name.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.http.MediaType;
import org.springframework.stereotype.Service;
import software.amazon.awssdk.core.exception.SdkClientException;
import software.amazon.awssdk.core.exception.SdkServiceException;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetUrlRequest;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.model.PutObjectResponse;
import java.net.URL;
import java.nio.ByteBuffer;
@Service
public class S3Service {
    private static final Logger logger = LoggerFactory.getLogger(S3Service.class);
    @Autowired
    private S3Client s3Client;
    @Value("${aws.s3.bucket}")
    private String bucket;
    public String upload(String keyName, byte[] attachment) {
        try {
            logger.info("Uploading a PDF to S3 - {}", keyName);
            PutObjectResponse putObjectResult = s3Client.putObject(
                    PutObjectRequest.builder()
                            .bucket(bucket)
                            .key(keyName)
                            .contentType(MediaType.APPLICATION_PDF.toString())
                            .contentLength((long) attachment.length)
                            .build(),
                    RequestBody.fromByteBuffer(ByteBuffer.wrap(attachment)));
            final URL reportUrl = s3Client.utilities().getUrl(GetUrlRequest.builder().bucket(bucket).key(keyName).build());
            logger.info("putObjectResult = {}", putObjectResult);
            logger.info("reportUrl = {}", reportUrl);
            return reportUrl.toString();
        } catch (SdkServiceException ase) {
            logger.error("Caught an SdkServiceException, which means the request made it "
                    + "to Amazon S3, but was rejected with an error response for some reason.", ase);
            logger.info("Error Message: {}", ase.getMessage());
            logger.info("Key: {}", keyName);
            throw ase;
        } catch (SdkClientException ace) {
            logger.error("Caught an SdkClientException, which means the client encountered "
                    + "an internal error while trying to communicate with S3, "
                    + "such as not being able to access the network.", ace);
            logger.error("Error Message: {}, {}", keyName, ace.getMessage());
            throw ace;
        }
    }
}
S3Client exposes an S3Utilities object that helps us get the URL for a given S3 object.
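A minimal sketch of that call in isolation, reusing the same bucket and key values as in the service above; note that the URL is built locally, so no request is sent to S3:
import java.net.URL;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.S3Utilities;
import software.amazon.awssdk.services.s3.model.GetUrlRequest;
public class ObjectUrlExample {
    // Builds the object URL from bucket, key and region; the object's existence is not checked
    public static URL objectUrl(S3Client s3Client, String bucket, String keyName) {
        S3Utilities utilities = s3Client.utilities();
        return utilities.getUrl(GetUrlRequest.builder().bucket(bucket).key(keyName).build());
    }
}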
We can compose a GetObjectRequest using the builder pattern, specifying the bucket name and key, and then use the S3 service client to get the object and save it into a byte array or a file.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import software.amazon.awssdk.core.ResponseBytes;
import software.amazon.awssdk.core.exception.SdkClientException;
import software.amazon.awssdk.core.exception.SdkServiceException;
import software.amazon.awssdk.core.sync.ResponseTransformer;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.GetObjectResponse;
@Service
public class S3Service {
    private static final Logger logger = LoggerFactory.getLogger(S3Service.class);
    @Autowired
    private S3Client s3Client;
    @Value("${aws.s3.bucket}")
    private String bucket;
    public byte[] getObject(String keyName) {
        try {
            logger.info("Retrieving file from S3 for key: {}/{}", bucket, keyName);
            ResponseBytes<GetObjectResponse> s3Object = s3Client.getObject(
                    GetObjectRequest.builder().bucket(bucket).key(keyName).build(),
                    ResponseTransformer.toBytes());
            return s3Object.asByteArray();
        } catch (SdkServiceException ase) {
            logger.error("Caught an SdkServiceException, which means the request made it "
                    + "to Amazon S3, but was rejected with an error response for some reason: " + keyName, ase);
            throw ase;
        } catch (SdkClientException ace) {
            logger.error("Caught an SdkClientException, which means the client encountered "
                    + "an internal error while trying to communicate with S3, "
                    + "such as not being able to access the network: " + keyName, ace);
            throw ace;
        }
    }
}
If the object content is too big, you can stream it directly into a file without loading it in memory. This won't cause an OOME (OutOfMemoryError), because the full object content is never loaded into memory.
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import software.amazon.awssdk.core.sync.ResponseTransformer;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.*;
import java.nio.file.Path;
@Slf4j
@Service
public class S3Service {
    @Autowired
    private S3Client s3Client;
    @Value("${aws.s3.bucket}")
    private String bucket;
    public void transferTo(String keyName, Path destination) {
        log.info("Downloading file from S3 for key: {}/{}", bucket, keyName);
        GetObjectRequest getRequest = GetObjectRequest.builder().bucket(bucket).key(keyName).build();
        s3Client.getObject(getRequest, ResponseTransformer.toFile(destination));
    }
}
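A hypothetical caller could then stream a large object straight to disk like this; the key name and destination path are placeholders only:
import java.nio.file.Path;
import java.nio.file.Paths;
public class TransferToUsageExample {
    // Illustrative usage: the S3Service bean would normally be injected by Spring
    public static void download(S3Service s3Service) {
        // The destination must not exist yet; ResponseTransformer.toFile will not overwrite an existing file
        Path target = Paths.get("downloads", "sample.pdf");
        s3Service.transferTo("reports/sample.pdf", target);
    }
}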
That covers the PutObject, GetObject and GetUrl operations on S3 using the AWS SDK V2 S3Client.