We currently upload images (multipart files, average size ~15 MB) through an API, and uploads are taking significantly long (avg 30 sec, max ~7 minutes), so I am looking for an optimization. I found a lot of articles about direct upload to S3 with a presigned URL, but that approach has downsides, such as losing observability and server-side validation (file size, file type). So I am leaning towards keeping the API upload but streaming the multipart file to S3 instead of buffering it.

Question: which is the better approach, presigned URL vs. upload through the application server, and which is more commonly used in industry these days, considering file size, scalability, and robustness?
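For context, this is roughly what the presigned-URL approach from those articles would look like (a minimal sketch assuming the AWS SDK for Java v2; the bucket name and key layout are placeholders):

import java.time.Duration;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.PutObjectPresignRequest;

public class PresignedUrlSketch {
    // Returns a URL the client can PUT the file to directly, bypassing the application server.
    public static String presignUpload(String photoId) {
        try (S3Presigner presigner = S3Presigner.create()) {
            PutObjectRequest put = PutObjectRequest.builder()
                    .bucket("my-photo-bucket")   // placeholder bucket
                    .key("photos/" + photoId)    // placeholder key layout
                    .contentType("image/jpeg")   // content type is pinned into the signature
                    .build();
            PutObjectPresignRequest presignRequest = PutObjectPresignRequest.builder()
                    .signatureDuration(Duration.ofMinutes(10)) // URL expires after 10 minutes
                    .putObjectRequest(put)
                    .build();
            return presigner.presignPutObject(presignRequest).url().toString();
        }
    }
}

As I understand it, some validation can still be enforced at signing time (the signed content type here, and a size range if you use a presigned POST policy instead of a presigned PUT), but the server never sees the bytes, which is the observability concern I mentioned.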
Current implementation is like this (use case: save a photo uploaded by the user):
@RequestMapping(value = {"/photos/{photoId}"}, method = RequestMethod.POST, consumes = "multipart/*")
public ResponseEntity<?> savePhotos(
        @PathVariable String photoId,
        @RequestParam(required = false) String caption,
        @RequestParam("file") MultipartFile photo
) {
    log.info("Start upload to S3");
    if (photo.isEmpty()) {
        // Reject empty uploads up front; previously a null byte[] fell through to the service.
        return new ResponseEntity<>(new ErrorForm("Photo bytes empty in request"), HttpStatus.BAD_REQUEST);
    }
    byte[] photoBytes;
    try {
        // Buffers the entire file (~15 MB) into heap memory; this is the suspected bottleneck.
        photoBytes = photo.getBytes();
    } catch (IOException e) {
        log.error("Unable to get photo bytes: {}", e.getMessage());
        return new ResponseEntity<>(new ErrorForm(e.getMessage()), HttpStatus.INTERNAL_SERVER_ERROR);
    }
    try {
        final Photo photoResponse = photoService.savePhoto(photoId, caption, photoBytes);
        log.info("Uploaded successfully");
        return new ResponseEntity<>(photoResponse, HttpStatus.OK);
    } catch (Exception e) {
        log.error("Error while persisting the photo={}", photoId, e);
        return new ResponseEntity<>(new ErrorForm(e.getMessage()), HttpStatus.INTERNAL_SERVER_ERROR);
    }
}
Expectation: improve upload latency and minimize failures.
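What I am thinking of instead, staying on the API path: replace photo.getBytes() with a stream so the server never holds the full ~15 MB in a byte[]. A minimal sketch of that, again assuming the AWS SDK for Java v2 (s3Client and the bucket name would be configured elsewhere; here they are placeholders):

import java.io.IOException;
import java.io.InputStream;
import org.springframework.web.multipart.MultipartFile;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class StreamingUploadSketch {
    private final S3Client s3Client = S3Client.create(); // would be injected in practice

    // Streams the multipart body to S3 without materializing it as a byte[].
    public void upload(String photoId, MultipartFile photo) throws IOException {
        PutObjectRequest put = PutObjectRequest.builder()
                .bucket("my-photo-bucket")    // placeholder bucket
                .key("photos/" + photoId)     // placeholder key layout
                .contentType(photo.getContentType())
                .contentLength(photo.getSize())
                .build();
        try (InputStream body = photo.getInputStream()) {
            // A known content length lets the SDK do a single streaming PUT without buffering.
            s3Client.putObject(put, RequestBody.fromInputStream(body, photo.getSize()));
        }
    }
}

One caveat I am aware of: the servlet multipart machinery may still spool the upload to a temp file on disk before the controller runs, so this mainly saves heap and GC pressure rather than the first network hop.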