I'm streaming H.264 NALs from a server, wrapping them as FLV tags, and passing them into a NetStream with appendBytes() (Data Generation Mode). The video plays normally, but the stream is delayed by around a second.
I've tried setting bufferTime and bufferTimeMax, but neither prevents the buffering.
I've also tried various combinations of NetStream.seek() and NetStream.appendBytesAction() with RESET_SEEK and END_SEQUENCE, again to no avail.
Is there a trick I'm missing here? Is there a way to prevent that delay?
Interestingly, I don't see the delay on the audio I'm passing in (PCMU), so I end up with lip-sync issues.
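For reference, here is roughly how the NetStream gets put into data generation mode before any tags are appended. This is a minimal sketch rather than my exact setup; the buffer setting is one of the values I tried:

import flash.net.NetConnection;
import flash.net.NetStream;
import flash.net.NetStreamAppendBytesAction;
import flash.utils.ByteArray;

var connection : NetConnection = new NetConnection();
connection.connect(null); // no server; data is generated locally
var stream : NetStream = new NetStream(connection);
stream.bufferTime = 0; // one of the low-latency settings tried above
stream.play(null); // a null URL puts the NetStream into data generation mode
stream.appendBytesAction(NetStreamAppendBytesAction.RESET_BEGIN);

// a bare FLV file header (version 1, flags 0x05 = audio + video present),
// followed by the first PreviousTagSize field (always 0)
var flvHeader : ByteArray = new ByteArray();
flvHeader.writeByte(0x46); // 'F'
flvHeader.writeByte(0x4C); // 'L'
flvHeader.writeByte(0x56); // 'V'
flvHeader.writeByte(0x01); // version
flvHeader.writeByte(0x05); // audio + video
flvHeader.writeUnsignedInt(9); // header length
flvHeader.writeUnsignedInt(0); // PreviousTagSize0
stream.appendBytes(flvHeader);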
Updated: Still stuck, so posting the code I'm using:
var timestamp : uint = networkPayload.readUnsignedInt();
if (videoTimestampBase == 0) {
    videoTimestampBase = timestamp;
}
timestamp = timestamp - videoTimestampBase;
timestamp = timestamp / 90.0; // 90 kHz clock ticks -> milliseconds

// peek at the NAL type: the byte after the 4-byte timestamp and 3-byte Annex B start code
networkPayload.position = 7;
var nalType : int = networkPayload.readByte();
nalType &= 0x1F;
networkPayload.position = 7;

// convert from Annex B to MP4-style (AVCC) framing: drop the timestamp and the
// 3-byte start code, and prefix the NAL with its 4-byte length instead
var mp4Payload : ByteArray = new ByteArray();
var mp4PayloadLength : int = networkPayload.bytesAvailable;
mp4Payload.writeUnsignedInt(mp4PayloadLength);
mp4Payload.writeBytes(networkPayload, 7, mp4PayloadLength);
mp4Payload.position = 0;

if (nalType == 8) {
    // PPS - special case for PPS/SPS: don't length-encode (skip the 4-byte prefix)
    ppsNAL = new ByteArray();
    ppsLength = mp4Payload.bytesAvailable - 4;
    ppsNAL.writeBytes(mp4Payload, 4, mp4Payload.bytesAvailable - 4);
    if (spsNAL == null) {
        return;
    }
} else if (nalType == 7) {
    // SPS - special case for PPS/SPS: don't length-encode (skip the 4-byte prefix)
    spsNAL = new ByteArray();
    spsLength = mp4Payload.bytesAvailable - 4;
    spsNAL.writeBytes(mp4Payload, 4, mp4Payload.bytesAvailable - 4);
    if (ppsNAL == null) {
        return;
    }
}

if ((spsNAL != null) && (ppsNAL != null)) {
    Log.debug(TAG, "Writing sequence header: " + spsLength + "," + ppsLength + "," + timestamp);
    var sequenceHeaderTag : FLVTagVideo = new FLVTagVideo();
    sequenceHeaderTag.codecID = FLVTagVideo.CODEC_ID_AVC;
    sequenceHeaderTag.frameType = FLVTagVideo.FRAME_TYPE_KEYFRAME;
    sequenceHeaderTag.timestamp = timestamp;
    sequenceHeaderTag.avcPacketType = FLVTagVideo.AVC_PACKET_TYPE_SEQUENCE_HEADER;

    spsNAL.position = 1;
    var profile : int = spsNAL.readByte();
    var compatibility : int = spsNAL.readByte();
    var level : int = spsNAL.readByte();
    Log.debug(TAG, profile + "," + compatibility + "," + level + "," + spsLength);

    // build the AVCDecoderConfigurationRecord (avcC) by hand
    var avcc : ByteArray = new ByteArray();
    avcc.writeByte(0x01); // avcC version 1
    avcc.writeByte(profile);
    avcc.writeByte(compatibility);
    avcc.writeByte(0x20); // level hardcoded to 3.2 (instead of the parsed 'level')
    avcc.writeByte(0xff); // 111111 + 2-bit (NAL length size - 1), i.e. 4-byte lengths
    avcc.writeByte(0xe1); // 111 + 5-bit number of SPS (1)
    avcc.writeByte(spsLength >> 8); // 16-bit SPS byte count
    avcc.writeByte(spsLength);
    avcc.writeBytes(spsNAL, 0, spsLength); // the SPS itself
    avcc.writeByte(0x01); // number of PPS
    avcc.writeByte(ppsLength >> 8); // 16-bit PPS byte count
    avcc.writeByte(ppsLength);
    avcc.writeBytes(ppsNAL, 0, ppsLength); // the PPS itself
    sequenceHeaderTag.data = avcc;

    var bytes : ByteArray = new ByteArray();
    sequenceHeaderTag.write(bytes);
    stream.appendBytes(bytes);

    // clear the PPS/SPS until the next pair arrives
    ppsNAL = null;
    spsNAL = null;
} else {
    if ((timestamp != currentTimestamp) || (currentVideoTag == null)) {
        // a NAL with a new timestamp has arrived - flush the buffered tag
        if (currentVideoTag != null) {
            currentVideoTag.data = currentSegment;
            var tagData : ByteArray = new ByteArray();
            currentVideoTag.write(tagData);
            stream.appendBytes(tagData);
        }
        currentVideoTag = new FLVTagVideo();
        currentVideoTag.codecID = FLVTagVideo.CODEC_ID_AVC;
        currentVideoTag.frameType = FLVTagVideo.FRAME_TYPE_INTER;
        if (nalType == 5) {
            // IDR slice
            currentVideoTag.frameType = FLVTagVideo.FRAME_TYPE_KEYFRAME;
        }
        lastNalType = nalType;
        currentVideoTag.avcPacketType = FLVTagVideo.AVC_PACKET_TYPE_NALU;
        currentVideoTag.timestamp = timestamp;
        currentVideoTag.avcCompositionTimeOffset = 0;
        currentSegment = new ByteArray();
        currentTimestamp = timestamp;
    }
    // accumulate length-prefixed NALs for this timestamp
    mp4Payload.position = 0;
    currentSegment.writeBytes(mp4Payload);
}
Update: a bit more detail. Here are the timestamps being passed:
DEBUG: StreamPlayback: 66,-32,20,19
DEBUG: StreamPlayback: Timestamp: 0
DEBUG: StreamPlayback: Timestamp: 63
DEBUG: StreamPlayback: stream status update: netStatus NetStream.Buffer.Full
DEBUG: StreamPlayback: Timestamp: 137
DEBUG: StreamPlayback: Timestamp: 200
DEBUG: StreamPlayback: Timestamp: 264
DEBUG: StreamPlayback: Timestamp: 328
DEBUG: StreamPlayback: Timestamp: 403
DEBUG: StreamPlayback: Timestamp: 467
DEBUG: StreamPlayback: Timestamp: 531
DEBUG: StreamPlayback: Timestamp: 595
DEBUG: StreamPlayback: Timestamp: 659
DEBUG: StreamPlayback: Timestamp: 723
DEBUG: StreamPlayback: Timestamp: 830
DEBUG: StreamPlayback: Timestamp: 894
DEBUG: StreamPlayback: Timestamp: 958
DEBUG: StreamPlayback: Timestamp: 1021
DEBUG: StreamPlayback: Timestamp: 1086
DEBUG: StreamPlayback: Timestamp: 1161
DEBUG: StreamPlayback: Timestamp: 1225
DEBUG: StreamPlayback: Timestamp: 1289
DEBUG: StreamPlayback: Timestamp: 1353
DEBUG: StreamPlayback: Timestamp: 1418
DEBUG: StreamPlayback: Timestamp: 1491
DEBUG: StreamPlayback: Timestamp: 1556
DEBUG: StreamPlayback: Timestamp: 1633
DEBUG: StreamPlayback: Timestamp: 1684
DEBUG: StreamPlayback: Timestamp: 1747
DEBUG: StreamPlayback: stream status update: netStatus NetStream.Video.DimensionChange
DEBUG: StreamPlayback: Timestamp: 1811
Cheers,
Kev
Solution one:
Why not pause the NetStream before starting any appends? You then append tags until the NetStream reports a dimension change; in the net status handler for that event, you un-pause (resume) the NetStream.
Hopefully it will then play synchronised, since neither the audio nor the video playhead has moved while paused.
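A rough sketch of that idea, reusing the question's 'stream' NetStream (the handler wiring is illustrative):

import flash.events.NetStatusEvent;

stream.pause(); // hold the playhead before appending anything
stream.addEventListener(NetStatusEvent.NET_STATUS, onNetStatus);

function onNetStatus(event : NetStatusEvent) : void {
    if (event.info.code == "NetStream.Video.DimensionChange") {
        // the decoder has seen the first video frame; release the playhead
        stream.resume();
    }
}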
If that doesn't work, then you could try...
Solution two:
Use bitmap data to create a dynamic video frame made of just a simple colour block, at a resolution different from your video stream's. You append that block first, and the size difference with your first real video frame will trigger the dimension change.
Note: if your video triggers it too late (i.e. the A/V is not synchronised), it means you are sending too many audio tags at first (possibly with an incorrect timestamp, later than the video's time?). Try checking the timestamps: audio always comes before the video, and must not exceed the related video tag's timestamp.
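For example, a hypothetical interleaving loop over two pending queues ('audioTags' and 'videoTags' are placeholder arrays of OSMF FLVTag objects, each sorted by timestamp):

import flash.utils.ByteArray;
import org.osmf.net.httpstreaming.flv.FLVTag;

// always append whichever pending tag has the lower timestamp,
// so audio never runs ahead of the video tag it accompanies
while (audioTags.length > 0 && videoTags.length > 0) {
    var next : FLVTag = (audioTags[0].timestamp <= videoTags[0].timestamp)
        ? audioTags.shift() : videoTags.shift();
    var tagBytes : ByteArray = new ByteArray();
    next.write(tagBytes);
    stream.appendBytes(tagBytes);
}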
The force_Dimension_Adjust() example below makes a 100 x 50 video frame (the bitmap data is encoded to Screen Video format and appended as a video tag).
Here is the related code for force_Dimension_Adjust():
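A minimal sketch of what such a function can look like, assuming OSMF's FLVTagVideo (which defines CODEC_ID_SCREEN) and a single Screen Video block spanning the whole frame; the function name, colour, and block sizes are illustrative:

import flash.utils.ByteArray;
import org.osmf.net.httpstreaming.flv.FLVTagVideo;

private function force_Dimension_Adjust() : void {
    var w : int = 100;
    var h : int = 50;

    // Screen Video payload header: 4-bit (blockWidth/16 - 1) packed with the
    // 12-bit image width, then the same pair for the height
    var blockW : int = 112; // >= 100, so one block column covers the frame
    var blockH : int = 64;  // >= 50, so one block row covers the frame
    var payload : ByteArray = new ByteArray();
    payload.writeShort((((blockW / 16) - 1) << 12) | w);
    payload.writeShort((((blockH / 16) - 1) << 12) | h);

    // a single block of solid colour: BGR pixels, zlib-compressed,
    // preceded by a 16-bit compressed-size field
    var block : ByteArray = new ByteArray();
    for (var i : int = 0; i < w * h; i++) {
        block.writeByte(0x00); // B
        block.writeByte(0x00); // G
        block.writeByte(0xFF); // R - a solid red frame
    }
    block.compress(); // zlib, as Screen Video expects
    payload.writeShort(block.length);
    payload.writeBytes(block);

    // wrap it in a keyframe video tag at timestamp 0 and append it
    // (assumes FLVTagVideo writes the one-byte frame-type/codec header
    // for non-AVC codecs; set codecID before data, as in the question)
    var tag : FLVTagVideo = new FLVTagVideo();
    tag.codecID = FLVTagVideo.CODEC_ID_SCREEN;
    tag.frameType = FLVTagVideo.FRAME_TYPE_KEYFRAME;
    tag.timestamp = 0;
    tag.data = payload;
    var bytes : ByteArray = new ByteArray();
    tag.write(bytes);
    stream.appendBytes(bytes);
}

The block dimensions only need to be multiples of 16 that are at least the frame size, so a single block covers the whole image and only one zlib chunk is needed.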