Implementing Local Call Recording for Hyphenate (环信) Real-Time Voice/Video Calls on Android
Background
A recent project required recording a Hyphenate real-time call and saving it as an audio/video file. After going through the Hyphenate API, I found that the makeVideoCall method only offers a recordOnServer parameter, which saves the recording on the Hyphenate server; no API is provided for recording audio and video locally.
Approach
Find the places where the Hyphenate SDK calls the Android system APIs to play audio and render video frames, intercept the audio and video streams there, and hand them to Android's MediaCodec and MediaMuxer for encoding and muxing.
Getting the video stream from EMCallSurfaceView
Open hyphenatechat_3.6.1.jar in Android Studio and find EMCallSurfaceView in the com.hyphenate.media package. This is the view Hyphenate uses to display video; it extends SurfaceViewRenderer, which defines the following method:
public void onFrame(VideoFrame frame)
This method is called whenever a video frame arrives, receiving it as a VideoFrame. We only need to subclass EMCallSurfaceView and override onFrame; calling VideoFrame.getBuffer().toI420() then yields the frame data in I420 format, which can be passed on to MediaCodec to encode the video. A minimal sketch of such a subclass follows; the class name RecordableCallSurfaceView and the FrameListener interface are my own, and the VideoFrame import is assumed to match the WebRTC classes repackaged inside the Hyphenate jar (verify the actual package name against the jar):
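import android.content.Context;
import android.util.AttributeSet;

import com.hyphenate.media.EMCallSurfaceView;
import com.superrtc.VideoFrame; // assumed package; check your jar

public class RecordableCallSurfaceView extends EMCallSurfaceView {

    // Hypothetical callback for handing I420 frames to the recorder.
    public interface FrameListener {
        void onI420Frame(VideoFrame.I420Buffer i420, int rotation, long timestampNs);
    }

    private volatile FrameListener frameListener;

    public RecordableCallSurfaceView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    public void setFrameListener(FrameListener listener) {
        this.frameListener = listener;
    }

    @Override
    public void onFrame(VideoFrame frame) {
        FrameListener listener = frameListener;
        if (listener != null) {
            // toI420() yields a planar I420 view/copy of the frame buffer.
            VideoFrame.I420Buffer i420 = frame.getBuffer().toI420();
            // The listener must copy the planes synchronously; the buffer is released below.
            listener.onI420Frame(i420, frame.getRotation(), frame.getTimestampNs());
            i420.release(); // release the reference created by toI420()
        }
        super.onFrame(frame); // keep the normal rendering path
    }
}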
WebRtcAudioRecord
Both classes live in the com.superrtc.audio package. WebRtcAudioTrack handles real-time audio playback, delegating to the AudioTrack class; WebRtcAudioRecord handles real-time audio capture, delegating to the AudioRecord class.
Both are created in JavaAudioDeviceModule.Builder's createAudioDeviceModule method. To get hold of their instances, you can modify the two classes inside the jar, turning them into rough singletons whose instances are exposed through a getInstance() method. Taking WebRtcAudioRecord as an example:
// Add a static field
private static WebRtcAudioRecord mWebRtcAudioRecord;

// Add a static accessor
public static WebRtcAudioRecord getInstance() {
    return mWebRtcAudioRecord;
}

// In the constructor, record the instance
public WebRtcAudioRecord(...) {
    ...
    mWebRtcAudioRecord = this;
}
The inner class AudioRecordThread of WebRtcAudioRecord does the actual capture work. Its run method looks like this:
public void run() {
    Process.setThreadPriority(-19);
    Logging.d("WebRtcAudioRecordExternal", "AudioRecordThread" + WebRtcAudioUtils.getThreadInfo());
    WebRtcAudioRecord.assertTrue(WebRtcAudioRecord.this.audioRecord.getRecordingState() == 3);
    long var1 = System.nanoTime();

    while (this.keepAlive) {
        int bytesRead = WebRtcAudioRecord.this.audioRecord.read(WebRtcAudioRecord.this.byteBuffer, WebRtcAudioRecord.this.byteBuffer.capacity());
        if (bytesRead == WebRtcAudioRecord.this.byteBuffer.capacity()) {
            // Added code: hand the captured buffer to our listener
            if (WebRtcAudioRecord.this.listener != null) {
                WebRtcAudioRecord.this.listener.onAudioRecordReceived(WebRtcAudioRecord.this.byteBuffer, bytesRead);
            }
            if (WebRtcAudioRecord.this.microphoneMute) {
                WebRtcAudioRecord.this.byteBuffer.clear();
                WebRtcAudioRecord.this.byteBuffer.put(WebRtcAudioRecord.this.emptyBytes);
            }
            if (this.keepAlive) {
                WebRtcAudioRecord.this.nativeDataIsRecorded(WebRtcAudioRecord.this.nativeAudioRecord, bytesRead);
            }
            if (WebRtcAudioRecord.this.audioSamplesReadyCallback != null) {
                byte[] data = Arrays.copyOfRange(WebRtcAudioRecord.this.byteBuffer.array(), WebRtcAudioRecord.this.byteBuffer.arrayOffset(), WebRtcAudioRecord.this.byteBuffer.capacity() + WebRtcAudioRecord.this.byteBuffer.arrayOffset());
                WebRtcAudioRecord.this.audioSamplesReadyCallback.onWebRtcAudioRecordSamplesReady(new AudioSamples(WebRtcAudioRecord.this.audioRecord.getAudioFormat(), WebRtcAudioRecord.this.audioRecord.getChannelCount(), WebRtcAudioRecord.this.audioRecord.getSampleRate(), data));
            }
        } else {
            String errorMessage = "AudioRecord.read failed: " + bytesRead;
            Logging.e("WebRtcAudioRecordExternal", errorMessage);
            if (bytesRead == -3) {
                this.keepAlive = false;
                WebRtcAudioRecord.this.reportWebRtcAudioRecordError(errorMessage);
            }
        }
    }

    try {
        if (WebRtcAudioRecord.this.audioRecord != null) {
            WebRtcAudioRecord.this.audioRecord.stop();
        }
    } catch (IllegalStateException var5) {
        Logging.e("WebRtcAudioRecordExternal", "AudioRecord.stop failed: " + var5.getMessage());
    }
}
As you can see, AudioRecordThread reads audio data by calling AudioRecord's read method in a loop. Add the following code to the WebRtcAudioRecord class:
public interface IWebRtcAudioRecordListener {
    void onAudioRecordReceived(ByteBuffer var1, Integer var2);
}

protected WebRtcAudioRecord.IWebRtcAudioRecordListener listener;

public void setListener(WebRtcAudioRecord.IWebRtcAudioRecordListener listener) {
    this.listener = listener;
}
Then add the following code to AudioRecordThread's run method:
if (bytesRead == WebRtcAudioRecord.this.byteBuffer.capacity()) {
    // Added code
    if (WebRtcAudioRecord.this.listener != null) {
        WebRtcAudioRecord.this.listener.onAudioRecordReceived(WebRtcAudioRecord.this.byteBuffer, bytesRead);
    }
    ...
Finally, make the WebRtcAudioRecord class public. You can then register a listener via WebRtcAudioRecord.getInstance().setListener(), hook into the capture process, and receive the audio data. A usage sketch follows.
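Assuming the modified jar from the previous step, registering the listener (once a call is active, so that getInstance() returns a non-null instance) might look like this sketch:

import java.nio.ByteBuffer;

import com.superrtc.audio.WebRtcAudioRecord;

public class LocalAudioCapture {

    public static void attachRecordListener() {
        WebRtcAudioRecord record = WebRtcAudioRecord.getInstance();
        if (record == null) {
            return; // no active call yet, the instance has not been created
        }
        record.setListener(new WebRtcAudioRecord.IWebRtcAudioRecordListener() {
            @Override
            public void onAudioRecordReceived(ByteBuffer buffer, Integer bytesRead) {
                // Copy synchronously: the SDK reuses this ByteBuffer on the next read.
                byte[] pcm = new byte[bytesRead];
                ByteBuffer dup = buffer.duplicate(); // duplicate() leaves the original position untouched
                dup.position(0);
                dup.get(pcm);
                // hand pcm (local microphone PCM data) to the encoder/mixer here
            }
        });
    }
}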
WebRtcAudioTrack
When recording a real-time call, the audio of both parties must be captured: besides the local audio from WebRtcAudioRecord, we also need the remote party's audio from WebRtcAudioTrack, which Hyphenate uses for audio playback. Again locate the inner class AudioTrackThread; its run method is as follows:
public void run() {
    Process.setThreadPriority(-19);
    Logging.d("WebRtcAudioTrackExternal", "AudioTrackThread" + WebRtcAudioUtils.getThreadInfo());
    WebRtcAudioTrack.assertTrue(WebRtcAudioTrack.this.audioTrack.getPlayState() == 3);

    for (int sizeInBytes = WebRtcAudioTrack.this.byteBuffer.capacity(); this.keepAlive; WebRtcAudioTrack.this.byteBuffer.rewind()) {
        WebRtcAudioTrack.nativeGetPlayoutData(WebRtcAudioTrack.this.nativeAudioTrack, sizeInBytes);
        WebRtcAudioTrack.assertTrue(sizeInBytes <= WebRtcAudioTrack.this.byteBuffer.remaining());
        if (WebRtcAudioTrack.this.speakerMute) {
            WebRtcAudioTrack.this.byteBuffer.clear();
            WebRtcAudioTrack.this.byteBuffer.put(WebRtcAudioTrack.this.emptyBytes);
            WebRtcAudioTrack.this.byteBuffer.position(0);
        }
        // Added code: hand the playout buffer to our listener
        if (WebRtcAudioTrack.this.webRtcAudioTrackListener != null) {
            WebRtcAudioTrack.this.webRtcAudioTrackListener.onWebRtcAudioTrackReceived(WebRtcAudioTrack.this.byteBuffer, sizeInBytes);
        }
        int bytesWritten = this.writeBytes(WebRtcAudioTrack.this.audioTrack, WebRtcAudioTrack.this.byteBuffer, sizeInBytes);
        if (bytesWritten != sizeInBytes) {
            Logging.e("WebRtcAudioTrackExternal", "AudioTrack.write played invalid number of bytes: " + bytesWritten);
            if (bytesWritten < 0) {
                this.keepAlive = false;
                WebRtcAudioTrack.this.reportWebRtcAudioTrackError("AudioTrack.write failed: " + bytesWritten);
            }
        }
    }

    if (WebRtcAudioTrack.this.audioTrack != null) {
        Logging.d("WebRtcAudioTrackExternal", "Calling AudioTrack.stop...");
        try {
            WebRtcAudioTrack.this.audioTrack.stop();
            Logging.d("WebRtcAudioTrackExternal", "AudioTrack.stop is done.");
        } catch (IllegalStateException var3) {
            Logging.e("WebRtcAudioTrackExternal", "AudioTrack.stop failed: " + var3.getMessage());
        }
    }
}
nativeGetPlayoutData fetches the audio data, and writeBytes then writes it to the AudioTrack for playback. As with WebRtcAudioRecord, we only need to define a listener and invoke it from the run method to obtain WebRtcAudioTrack's audio data; a sketch of the additions follows.
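A minimal sketch of the additions to WebRtcAudioTrack, mirroring the WebRtcAudioRecord changes (the interface and method names below are my own choice, matching the webRtcAudioTrackListener field referenced in the run method above):

// Added inside WebRtcAudioTrack
public interface IWebRtcAudioTrackListener {
    void onWebRtcAudioTrackReceived(ByteBuffer buffer, int sizeInBytes);
}

protected IWebRtcAudioTrackListener webRtcAudioTrackListener;

public void setListener(IWebRtcAudioTrackListener listener) {
    this.webRtcAudioTrackListener = listener;
}

// Same singleton trick as in WebRtcAudioRecord
private static WebRtcAudioTrack mWebRtcAudioTrack;

public static WebRtcAudioTrack getInstance() {
    return mWebRtcAudioTrack;
}

// ...and in the constructor: mWebRtcAudioTrack = this;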
Encoding and muxing the audio and video
The video data obtained above is in planar I420 format and must be converted to the semi-planar layout expected by most encoders (libyuv is a good reference for the conversion); the audio data is raw PCM. Pass both to MediaCodec for encoding, then feed the encoded streams to MediaMuxer to produce an MP4 file. The encoding process is covered in the references below; a rough sketch follows.
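Here is a rough sketch of the video side of this step (H.264 into an MP4 via MediaCodec and MediaMuxer). The resolution, bitrate, and frame rate are placeholder values, the audio track would be handled the same way with an AAC encoder, and the plane-interleaving helper assumes tightly packed planes with no row stride; some devices expect NV21 rather than NV12 for COLOR_FormatYUV420SemiPlanar:

import java.nio.ByteBuffer;

import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.media.MediaMuxer;

public class CallVideoEncoder {
    private MediaCodec videoEncoder;
    private MediaMuxer muxer;
    private int videoTrack = -1;
    private boolean muxerStarted;

    public void start(String outputPath, int width, int height) throws Exception {
        MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000); // placeholder
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 20);      // placeholder
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        videoEncoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
        videoEncoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        videoEncoder.start();

        muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    }

    // Interleave tightly packed I420 U/V planes into the NV12 semi-planar layout.
    public static byte[] i420ToNv12(byte[] y, byte[] u, byte[] v, int width, int height) {
        byte[] nv12 = new byte[width * height * 3 / 2];
        System.arraycopy(y, 0, nv12, 0, width * height);
        for (int i = 0; i < width * height / 4; i++) {
            nv12[width * height + 2 * i] = u[i];
            nv12[width * height + 2 * i + 1] = v[i];
        }
        return nv12;
    }

    // Feed one NV12 frame to the encoder, then drain any ready output.
    public void encodeFrame(byte[] nv12, long ptsUs) {
        int inIndex = videoEncoder.dequeueInputBuffer(10_000);
        if (inIndex >= 0) {
            ByteBuffer in = videoEncoder.getInputBuffer(inIndex);
            in.clear();
            in.put(nv12);
            videoEncoder.queueInputBuffer(inIndex, 0, nv12.length, ptsUs, 0);
        }
        drain();
    }

    private void drain() {
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        while (true) {
            int outIndex = videoEncoder.dequeueOutputBuffer(info, 0);
            if (outIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                videoTrack = muxer.addTrack(videoEncoder.getOutputFormat());
                muxer.start(); // in a full recorder, start only after the audio track is added too
                muxerStarted = true;
            } else if (outIndex >= 0) {
                ByteBuffer out = videoEncoder.getOutputBuffer(outIndex);
                if (muxerStarted && info.size > 0
                        && (info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) == 0) {
                    muxer.writeSampleData(videoTrack, out, info);
                }
                videoEncoder.releaseOutputBuffer(outIndex, false);
            } else {
                break; // INFO_TRY_AGAIN_LATER: nothing more to drain for now
            }
        }
    }
}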
Thanks for reading.
References
Audio/video encoding and muxing
[1]: https://blog.csdn.net/luoyouren/article/details/52135476
Modifying classes inside a third-party jar
[2]: https://www.jianshu.com/p/b54368815d45