iOS 11 Screen Recording (iOS Screen Recording Development Guide)
1. Overview
During a live broadcast, sharing the screen involves two steps: capturing the screen data and pushing it as streaming media. On iOS, screen capture requires system-level permission; third-party apps cannot record the screen directly and must rely on functionality provided by the system.
This article describes the applications, implementations, limitations, and implementation details of screen sharing on iOS. (Note: iOS 10 and earlier only support in-app recording, so those versions are only briefly introduced.)
2. Applications
Screen sharing first appeared in video conferencing and later became widespread in game live streaming. Apple did not support screen sharing early on, but as live streaming grew in popularity, Apple added support and introduced the ReplayKit framework for this scenario.
The screen sharing scenarios on the market fall roughly into the following categories:
1. Remote screen operation: assisting others in operating their phone. For example, young people can remotely help the elderly configure their devices, or support staff can help customers troubleshoot software faults; this is far more efficient than describing the problem verbally.
2. Game live streaming: well-known game streamers can broadcast the games they play on their phones, teaching and commentating so that viewers can learn game skills more effectively.
3. Video conferencing: in a meeting, displaying content from the phone, such as emails, pictures, or documents, to other participants for explanation, so that information is shared quickly and communication is more efficient.
3. Screen sharing on each system version
The technology for screen sharing on iOS differs mainly between system versions, so here we compare the implementation and limitations of each version. Note that because the phone's camera, microphone, and other hardware are required, this cannot be debugged or run on the simulator. First, let's look at the current coverage of each version.
System coverage
According to data on Apple's official website, as of June 2021 the share of iOS versions is roughly as shown in the figure below: versions below iOS 13 account for less than 2% of users, iOS 14 for about 90%, and iOS 13 for about 8%. To accommodate older versions, applications on the market are generally compatible down to iOS 9.
iOS 8 and earlier
On iOS 8 and earlier, the system provides no corresponding functionality, and screen recording could only be achieved by calling private APIs to crack into system features. Because iOS 8 is so old and devices running it generally cannot support live streaming anyway, it is not discussed in detail here; those interested can study it on their own.
iOS 9
Apple introduced the ReplayKit framework in iOS 9, which provides screen recording but is limited to recording content inside the app itself. Recording produces a video file that can only be previewed, edited, and shared through RPPreviewViewController. No data is available during recording; only the final mp4 file is handed to the developer, so this is not true live screen sharing and real-time performance cannot be guaranteed.
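For reference, a minimal sketch (inside a view controller) of in-app recording with the ReplayKit calls available since iOS 9; startRecordingWithMicrophoneEnabled:handler: was later deprecated in favor of startRecordingWithHandler:.

#import <ReplayKit/ReplayKit.h>

- (void)startInAppRecording {
    RPScreenRecorder *recorder = [RPScreenRecorder sharedRecorder];
    [recorder startRecordingWithMicrophoneEnabled:YES handler:^(NSError * _Nullable error) {
        if (error) {
            NSLog(@"start recording failed: %@", error);
        }
    }];
}

- (void)stopInAppRecording {
    [[RPScreenRecorder sharedRecorder] stopRecordingWithHandler:^(RPPreviewViewController * _Nullable previewViewController, NSError * _Nullable error) {
        if (previewViewController) {
            // The recording can only be previewed, edited, and shared through this controller
            [self presentViewController:previewViewController animated:YES completion:nil];
        }
    }];
}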
iOS 10
In iOS 10, Apple introduced the Broadcast Upload Extension and the Broadcast Setup UI Extension to address screen recording.
First, a brief introduction to App Extensions (see Apple's official App Extension documentation). An extension augments an app and, to a certain extent, breaks the sandbox restriction, making communication between applications possible. An extension runs as an independent process with its own life cycle.
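For orientation, here is a minimal sketch of the Broadcast Upload Extension entry point that Xcode generates when the extension target is added; the class name SampleHandler is the template default.

#import <ReplayKit/ReplayKit.h>

@interface SampleHandler : RPBroadcastSampleHandler
@end

@implementation SampleHandler

- (void)broadcastStartedWithSetupInfo:(NSDictionary<NSString *, NSObject *> *)setupInfo {
    // Called when the user starts the broadcast; set up your uploader or SDK here
}

- (void)broadcastFinished {
    // Called when the broadcast ends; release resources here
}

- (void)processSampleBuffer:(CMSampleBufferRef)sampleBuffer
                   withType:(RPSampleBufferType)sampleBufferType {
    // Video and audio sample buffers arrive here; forward them to your streaming code
}

@end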
Although iOS 10 resolves a series of shortcomings of the previous versions, it still can only record the content of the current app, which limits its use in some scenarios.
iOS 11
With iOS 11, official live screen broadcasting arrived: to meet market demand, Apple added cross-app screen recording (ReplayKit 2), making it possible to capture the entire screen. Although ReplayKit 2 already covers most developers' needs, to start a broadcast the user must first add the screen recording control in the phone's settings so that the recording button appears in Control Center, then open Control Center and long-press the round recording button to start recording and broadcasting. This complicated procedure raised the barrier for users, so screen sharing saw limited adoption on iOS 11.
iOS 12
iOS 12 builds on iOS 11 and adds RPSystemBroadcastPickerView, which solves the launch problem: the app can present a system picker to start the broadcast, so users no longer need to start it manually from Control Center.
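A minimal sketch of adding the picker inside a view controller; the extension bundle identifier com.example.app.BroadcastExtension is a placeholder, not from the original article.

#import <ReplayKit/ReplayKit.h>

- (void)addBroadcastPicker {
    if (@available(iOS 12.0, *)) {
        RPSystemBroadcastPickerView *pickerView =
            [[RPSystemBroadcastPickerView alloc] initWithFrame:CGRectMake(0, 0, 60, 60)];
        // Limit the picker to our own broadcast upload extension (placeholder bundle id)
        pickerView.preferredExtension = @"com.example.app.BroadcastExtension";
        pickerView.showsMicrophoneButton = YES;
        [self.view addSubview:pickerView];
    }
}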
Summary
Combining the above analysis of the screen recording limitations of each iOS version, from the perspective of stability and reliability, full-screen recording should be offered from iOS 12 onward, without compatibility for earlier versions. If only the app's own pages need to be recorded for live streaming, the app can be compatible down to iOS 9.
4. Precautions for screen sharing
Because iOS phones have high screen resolutions, the image capture and processing pipeline needs to be optimized for memory usage and transmission efficiency; the resolution is generally limited to 720p.
The broadcast extension process has a memory limit of 50 MB; when the extension's memory usage exceeds 50 MB, the process is killed. Because of this limitation, typical industry solutions limit the video quality to at most 720p, keep the frame rate within 30 fps on high-end devices, and within 10 fps on low-end devices.
If the extension process crashes, the system's recording prompt may keep appearing, and the user may have to restart the phone to clear it.
When the extension communicates with the host app, choose the mechanism according to what is being transmitted:
1. App Groups / shared files or user defaults: for sharing files and configuration.
2. Inter-process notifications (CFNotificationCenter): for simple signals such as start and stop (a minimal sketch follows this list).
3. Local sockets: better suited to high-volume data such as screen-sharing frames.
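A minimal sketch of option 2 using the Darwin notify center; the notification name is a placeholder, and note that Darwin notifications cannot carry a payload.

#import <Foundation/Foundation.h>
#import <CoreFoundation/CoreFoundation.h>

// Placeholder notification name, shared between the extension and the host app
static NSString * const kBroadcastStartedNotification = @"com.example.app.broadcastStarted";

// Post from the broadcast extension
static void PostBroadcastStarted(void) {
    CFNotificationCenterPostNotification(CFNotificationCenterGetDarwinNotifyCenter(),
                                         (__bridge CFStringRef)kBroadcastStartedNotification,
                                         NULL, NULL, YES);
}

// Callback invoked in the host app when the notification arrives
static void BroadcastStartedCallback(CFNotificationCenterRef center, void *observer,
                                     CFNotificationName name, const void *object,
                                     CFDictionaryRef userInfo) {
    NSLog(@"broadcast started");
}

// Observe in the host app
static void ObserveBroadcastStarted(void) {
    CFNotificationCenterAddObserver(CFNotificationCenterGetDarwinNotifyCenter(),
                                    NULL,
                                    BroadcastStartedCallback,
                                    (__bridge CFStringRef)kBroadcastStartedNotification,
                                    NULL,
                                    CFNotificationSuspensionBehaviorDeliverImmediately);
}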
5. anyRTC screen sharing implementation
anyRTC screen sharing can be implemented in two ways:
One way: the broadcast extension sends the screen-sharing video data to the host app over a local socket, and the host app pushes the stream into the SDK as a self-captured (external) source. Only one video stream can be transmitted at a time, either the screen share or the camera stream.
The other way: initialize the SDK inside the broadcast extension, disable subscribing to other users' audio and video, and act purely as a sender. With this approach a client joins the same channel with two uids, so it can send both its own camera stream and the screen-sharing stream.
Local socket forwarding to the host app
Idea source: blog post.
The general idea: open a local socket in the extension, send the captured data to the host app over TCP, and perform the heavy processing in the host app, effectively working around the extension's 50 MB memory limit.
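A minimal sketch of the extension side under these assumptions; the port number 8999 and the length-prefixed framing are illustrative, not from the original blog.

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <string.h>

// Connect to a local TCP port that the host app is listening on
static int ConnectToHostApp(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8999);                  // assumed port
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        close(fd);
        return -1;
    }
    return fd;
}

// Send one captured frame with a simple length prefix so the host app can split frames
static void SendFrame(int fd, const void *bytes, uint32_t length) {
    uint32_t header = htonl(length);
    send(fd, &header, sizeof(header), 0);
    send(fd, bytes, length, 0);
}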
Using the SDK directly in the extension
Idea: use the SDK directly in the extension, only sending streams and never receiving them. The extension's 50 MB limit still needs attention: 1. apps locked to a single orientation (portrait or landscape) are fine, but switching between portrait and landscape during the broadcast is likely to cause a sudden spike in memory; 2. limit the video frame rate on low-performance devices (1–10 fps).
1. Initialization
Set the channel profile to live-broadcasting mode, set the client role to broadcaster, and enable the video module.
// Instantiate the rtc engine
rtcKit = [ARtcEngineKit sharedEngineWithAppId:appId delegate:self];
// Live-broadcasting channel profile
[rtcKit setChannelProfile:ARChannelProfileLiveBroadcasting];
// Broadcaster role
[rtcKit setClientRole:ARClientRoleBroadcaster];
// Enable the video module
[rtcKit enableVideo];
2. Set the resolution of screen sharing.
Because of the 50 MB limit in the extension, for stability it is recommended that the resolution not exceed 720p.
Based on the screen's width and height, scale the dimensions to compute a suitable output resolution.
It is recommended to set the video frame rate to about 5 fps on low-end devices, and not to exceed 30 fps on high-end devices.
For screen-sharing resolutions, adjust the bitrate appropriately; it is recommended not to exceed 1800 Kbps.
// Compute a suitable output resolution by fitting the current screen into 720x1280
CGSize screenSize = [[UIScreen mainScreen] currentMode].size;
CGSize boundingSize = CGSizeMake(720, 1280);
CGFloat mW = boundingSize.width / screenSize.width;
CGFloat mH = boundingSize.height / screenSize.height;
// The original snippet is truncated here; the standard aspect-fit completion is assumed
if (mH < mW) {
    boundingSize.width = boundingSize.height / screenSize.height * screenSize.width;
} else if (mW < mH) {
    boundingSize.height = boundingSize.width / screenSize.width * screenSize.height;
}
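Assuming the SDK exposes an Agora-style video encoder configuration class (this is an assumption; check the anyRTC documentation for the exact class and property names), the computed size and the recommendations above could be applied roughly like this:

// Class and property names are assumed to mirror the Agora-style API; verify against the anyRTC docs
ARVideoEncoderConfiguration *config = [[ARVideoEncoderConfiguration alloc] init];
config.dimensions = boundingSize;   // the aspect-fit size computed above
config.frameRate = 10;              // 5-10 fps on low-end devices, up to 30 on high-end
config.bitrate = 1800;              // Kbps, as recommended above
[rtcKit setVideoEncoderConfiguration:config];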
3. Set the external audio and video sources.
Enable the external video source for capture; the SDK's internal video capture is automatically disabled once it is enabled.
Enable the external audio source for capture; the SDK's internal audio capture is automatically disabled once it is enabled.
// Configure the external video source
[rtcKit setExternalVideoSource:YES useTexture:YES pushMode:YES];
// Enable pushing external audio frames (48 kHz, 2 channels)
[rtcKit enableExternalAudioSourceWithSampleRate:48000 channelsPerFrame:2];
4. Disable receiving audio and video.
As the screen-sharing end, you only need to send streams, not receive them.
// Stop receiving all remote audio and video streams
[rtcKit muteAllRemoteVideoStreams:YES];
[rtcKit muteAllRemoteAudioStreams:YES];
5. Join the channel
Take the user ID used in the host app, wrap it (for example by appending a suffix), and mark it as that user's auxiliary stream.
Take the channel ID currently used in the host app, and join that channel as the user's auxiliary stream when screen sharing starts.
// Take the user ID from the host app, wrap it, and mark it as that user's auxiliary stream
NSString *uid = [NSString stringWithFormat:@"%@_sub", self.userId];
// Join the channel
[rtcKit joinChannelByToken:nil
                 channelId:self.channelId
                       uid:uid
               joinSuccess:^(NSString * _Nonnull channel, NSString * _Nonnull uid, NSInteger elapsed) {
    NSLog(@"joinSuccess");
}];
6. Pushing the stream
RPSampleBufferTypeVideo: video data, sent out through the external video push interface.
RPSampleBufferTypeAudioApp: audio produced inside the app, sent out through the external audio push interface.
RPSampleBufferTypeAudioMic: audio from the microphone, sent out through the external audio push interface.
Pushing video requires assembling the video frame data, such as the video format, timestamp, and rotation angle.
- (void)processSampleBuffer:(CMSampleBufferRef)sampleBuffer withType:(RPSampleBufferType)sampleBufferType {
    switch (sampleBufferType) {
        case RPSampleBufferTypeVideo: {
            // Process video data
            CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            if (pixelBuffer) {
                CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
                ARVideoFrame *videoFrame = [[ARVideoFrame alloc] init];
                videoFrame.format = 12;
                videoFrame.time = timestamp;
                videoFrame.textureBuf = pixelBuffer;
                videoFrame.rotation = [self getRotateByBuffer:sampleBuffer];
                [rtcKit pushExternalVideoFrame:videoFrame];
            }
            break;
        }
        case RPSampleBufferTypeAudioApp:
            // Process audio data produced by the app
            [rtcKit pushExternalAudioFrameSampleBuffer:sampleBuffer type:ARAudioTypeApp];
            break;
        case RPSampleBufferTypeAudioMic:
            // Process audio data from the microphone
            [rtcKit pushExternalAudioFrameSampleBuffer:sampleBuffer type:ARAudioTypeMic];
            break;
        default:
            break;
    }
}
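The helper getRotateByBuffer: used above is not shown in the article. One possible implementation (an assumption, not necessarily the author's code) reads the orientation attachment that ReplayKit adds to each video sample buffer on iOS 11+; the orientation-to-degrees mapping may need adjusting for your SDK.

#import <ReplayKit/ReplayKit.h>
#import <CoreMedia/CoreMedia.h>
#import <ImageIO/ImageIO.h>

- (NSInteger)getRotateByBuffer:(CMSampleBufferRef)sampleBuffer {
    NSInteger rotation = 0;
    if (@available(iOS 11.0, *)) {
        // ReplayKit attaches the device orientation to each video sample buffer
        NSNumber *orientationValue =
            (__bridge NSNumber *)CMGetAttachment(sampleBuffer,
                                                 (__bridge CFStringRef)RPVideoSampleOrientationKey,
                                                 NULL);
        switch (orientationValue.unsignedIntValue) {
            case kCGImagePropertyOrientationUp:    rotation = 0;   break;
            case kCGImagePropertyOrientationDown:  rotation = 180; break;
            case kCGImagePropertyOrientationLeft:  rotation = 90;  break;  // mapping assumed
            case kCGImagePropertyOrientationRight: rotation = 270; break;  // mapping assumed
            default: break;
        }
    }
    return rotation;
}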
Screen sharing can be implemented through the steps above. To help developers get started faster, you can refer to the demos below.
iOS screen sharing
Android screen sharing