My app encodes a PCM file to an .m4a file using MediaMuxer, MediaFormat, and MediaCodec. I came across code that configures the output format like this:
MediaFormat outputFormat = MediaFormat.createAudioFormat("audio/mp4a-latm", SampleRate, 1);
outputFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
outputFormat.setInteger(MediaFormat.KEY_BIT_RATE, 96000);
outputFormat.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 16384);
I searched for MediaFormat.KEY_MAX_INPUT_SIZE, but it is not clear to me why it needs to be set at all. I have read that some Samsung devices crash without it, but I don't know whether that is true.
Is it necessary/good/advisable to set this? If so, to which value(s)?
As the documentation states, MediaFormat.KEY_MAX_INPUT_SIZE describes the maximum expected size (in bytes) of a buffer of input data; it hints to the codec how large its input buffers should be.
A smaller buffer increases disk I/O and other stream operations, because data is flushed (for output) or read (for input) in smaller chunks and therefore more frequently. A larger buffer decreases I/O frequency but consumes more memory and may increase non-I/O resource usage during the larger transfers.
Roughly, the considerations are similar to those for the BufferedInputStream/BufferedOutputStream buffer size: a larger buffer usually means better performance and less I/O, but at the cost of more CPU or DMA usage at transfer time and larger, longer-lived memory allocations. Too big is therefore not always good; there is an optimal buffer/IO balance that is specific to the use case, the data, and the hardware.
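To make the analogy concrete, here is a small runnable sketch (plain Java, no Android dependencies) that counts how many underlying read() calls a BufferedInputStream issues for different buffer sizes. The CountingInputStream wrapper is my own illustration, not part of any library:

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

public class BufferSizeDemo {
    // Hypothetical wrapper that counts how many underlying read() calls occur.
    static class CountingInputStream extends FilterInputStream {
        int readCalls = 0;
        CountingInputStream(InputStream in) { super(in); }
        @Override public int read(byte[] b, int off, int len) throws IOException {
            readCalls++;
            return super.read(b, off, len);
        }
        @Override public int read() throws IOException {
            readCalls++;
            return super.read();
        }
    }

    // Consume a byte array byte-by-byte through a BufferedInputStream
    // and report how many reads reached the underlying stream.
    static int readsNeeded(byte[] data, int bufferSize) throws IOException {
        CountingInputStream counter =
                new CountingInputStream(new ByteArrayInputStream(data));
        BufferedInputStream in = new BufferedInputStream(counter, bufferSize);
        while (in.read() != -1) { /* consume */ }
        return counter.readCalls;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[64 * 1024];
        // A 1 KB buffer needs roughly 64 underlying reads for 64 KB of data;
        // a 16 KB buffer needs only about 4.
        System.out.println(readsNeeded(data, 1024));
        System.out.println(readsNeeded(data, 16384));
    }
}
```

The same trade-off applies to codec input buffers, just with codec-specific constraints layered on top.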
For a media format there are additional considerations: too small a buffer may trigger various bugs in the vendor or platform code (the whole MediaXX framework is extremely buggy on many devices), or be too slow in the case of a live stream. Depending on the platform and the use case, the default may be inappropriate.
The buffer size makes the most prominent difference for video decoders: a too-small buffer for a large, complex video file (and the firmware default is frequently inappropriate for such a case) can increase the number of I/O operations by tens or even hundreds of thousands, degrade performance many-fold, hog resources, and trigger sporadic, very hard to track vendor bugs and failures.
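Applied to the AAC case in the question: AAC-LC consumes 1024 PCM samples per channel per frame, so one rough sizing strategy is to make the buffer hold a few frames' worth of 16-bit PCM. The helper below is only a sketch under that assumption; the method name and the frames-per-buffer choice are mine, not an Android API. Note that 16384, the value from the question, corresponds to eight frames of mono 16-bit PCM:

```java
public class AacBufferSize {
    // AAC-LC consumes 1024 PCM samples per channel per access unit.
    static final int SAMPLES_PER_FRAME = 1024;
    static final int BYTES_PER_SAMPLE = 2; // 16-bit PCM

    // Hypothetical helper: size an input buffer to hold a given number
    // of AAC frames of raw PCM for the given channel count.
    static int suggestedMaxInputSize(int channelCount, int framesPerBuffer) {
        return SAMPLES_PER_FRAME * channelCount * BYTES_PER_SAMPLE * framesPerBuffer;
    }

    public static void main(String[] args) {
        // Mono, 8 frames per buffer: 1024 * 1 * 2 * 8 = 16384 bytes.
        System.out.println(suggestedMaxInputSize(1, 8));
        // On Android, the result would then be applied like this:
        // outputFormat.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE,
        //         suggestedMaxInputSize(1, 8));
    }
}
```

Whatever value you pick, it is a hint about the maximum buffer you will queue, not a performance knob by itself, so erring on the generous side is usually the safer choice on quirky devices.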