
Building a Video-Editing Project on iOS with AVFoundation

2022-01-24 14:40 · bqiss · iOS

This article introduces a project that uses AVFoundation to build a video editor, including the ability to extend or roll back clips. If that interests you, follow along.

I recently built a small video-editing project. I hit a few minor pitfalls along the way, but got the features working without major incident.

Apple does provide UIVideoEditController for video processing, but it is hard to extend or customize, so instead we'll use Apple's AVFoundation framework to build custom video processing.

I also found that there is little systematic material on this topic online, so I wrote this article in the hope that it helps other newcomers to video processing (like me).

 

Project Demo

The project supports undo, split, and delete operations on the video track, plus dragging video blocks to extend or roll back the video.

(Screenshot: the video-editing interface)

 

Implementation

 

1. Selecting and Playing a Video

Use UIImagePickerController to pick a video, then present a custom editor view controller.

There isn't much to say about this part.

Example:

    // Select a video
    @objc func selectVideo() {
        if UIImagePickerController.isSourceTypeAvailable(.photoLibrary) {
            // Initialize the picker controller
            let imagePicker = UIImagePickerController()
            // Set the delegate
            imagePicker.delegate = self
            // Use the photo library as the source
            imagePicker.sourceType = .photoLibrary
            // Only show video files (kUTTypeMovie requires `import MobileCoreServices`)
            imagePicker.mediaTypes = [kUTTypeMovie as String]
            // Present the picker
            self.present(imagePicker, animated: true, completion: nil)
        }
        else {
            print("Failed to read the photo library")
        }
    }

    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
        // Get the video URL (after picking, the video is copied into the app's temporary directory)
        guard let videoURL = info[UIImagePickerController.InfoKey.mediaURL] as? URL else {
            return
        }
        let pathString = videoURL.relativePath
        print("Video path: \(pathString)")

        // Dismiss the picker, then present the editor
        self.dismiss(animated: true, completion: {
            let editorVC = EditorVideoViewController.init(with: videoURL)

            editorVC.modalPresentationStyle = UIModalPresentationStyle.fullScreen
            self.present(editorVC, animated: true) {

            }
        })
    }

 

2. Getting Thumbnails by Frame to Initialize the Video Track

CMTime

Before getting into the implementation, a quick introduction to CMTime. CMTime describes time more precisely. Suppose we want to refer to a moment in a video, say 1:01. Most of the time NSTimeInterval t = 61.0 works fine, but floating point has a serious weakness: it cannot exactly represent values like 10^-7. Add 0.0000001 together ten million times and the result may come out as 1.0000000000079181 instead of exactly 1. Streaming and processing video involves a huge amount of arithmetic on timestamps, so these errors accumulate. That is why we need another way to express time: CMTime.
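To see that drift concretely, here is a tiny sketch you can run in a Swift playground (the exact printed value may vary by platform):

    var sum: Double = 0.0
    for _ in 0..<10_000_000 {
        sum += 0.0000001      // ten million additions of 1e-7
    }
    print(sum)                // prints something like 1.0000000000079181, not exactly 1.0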

CMTime is a C struct with four members:

typedef struct {
    CMTimeValue value;     // the current CMTimeValue
    CMTimeScale timescale; // the reference scale for the CMTimeValue (e.g. 1000)
    CMTimeFlags flags;
    CMTimeEpoch epoch;
} CMTime;

For example, if timescale = 1000, then one second corresponds to value = 1000 × 1 = 1000.

CMTimeScale timescale: the reference scale for the current CMTimeValue; it says how many parts one second is divided into. It is especially important because it controls the precision of the whole CMTime. When timescale is 1, a CMTime cannot represent times below one second or increments within a second. Likewise, when timescale is 1000, each second is divided into 1000 parts and the value counts milliseconds.
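As a minimal illustration of value and timescale (these are all standard CoreMedia calls):

    import CoreMedia

    // 1.5 seconds at millisecond precision: 1500 parts of 1/1000 s
    let t = CMTimeMake(value: 1500, timescale: 1000)
    print(CMTimeGetSeconds(t))        // 1.5

    // Arithmetic stays exact because it works on integers
    let doubled = CMTimeAdd(t, t)
    print(CMTimeGetSeconds(doubled))  // 3.0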

Implementation approach

Call the method generateCGImagesAsynchronously(forTimes requestedTimes: [NSValue], completionHandler handler: @escaping AVAssetImageGeneratorCompletionHandler)

 /**
    	@method			generateCGImagesAsynchronouslyForTimes:completionHandler:
    	@abstract		Returns a series of CGImageRefs for an asset at or near the specified times.
    	@param			requestedTimes
    					An NSArray of NSValues, each containing a CMTime, specifying the asset times at which an image is requested.
    	@param			handler
    					A block that will be called when an image request is complete.
    	@discussion		Employs an efficient "batch mode" for getting images in time order.
    					The client will receive exactly one handler callback for each requested time in requestedTimes.
    					Changes to generator properties (snap behavior, maximum size, etc...) will not affect outstanding asynchronous image generation requests.
    					The generated image is not retained.  Clients should retain the image if they wish it to persist after the completion handler returns.
    */
    open func generateCGImagesAsynchronously(forTimes requestedTimes: [NSValue], completionHandler handler: @escaping AVAssetImageGeneratorCompletionHandler)

From the official comments, the method takes two parameters:

requestedTimes: [NSValue]: an array of requested times (as NSValue), each element wrapping a CMTime that specifies an asset time at which an image is requested.

completionHandler handler: @escaping AVAssetImageGeneratorCompletionHandler: a block called when an image request completes. Since the method runs asynchronously, you must return to the main thread to update the UI.

Example:

func splitVideoFileUrlFps(splitFileUrl: URL, fps: Float, splitCompleteClosure: @escaping (Bool, [UIImage]) -> Void) {
    var splitImages = [UIImage]()

    // Initialize the asset
    let optDict = NSDictionary(object: NSNumber(value: false), forKey: AVURLAssetPreferPreciseDurationAndTimingKey as NSCopying)
    let urlAsset = AVURLAsset(url: splitFileUrl, options: optDict as? [String: Any])

    let cmTime = urlAsset.duration
    let durationSeconds: Float64 = CMTimeGetSeconds(cmTime)

    var times = [NSValue]()
    let totalFrames: Float64 = durationSeconds * Float64(fps)
    var timeFrame: CMTime

    // Build the CMTime array, i.e. the interval at which thumbnails are requested
    for i in 0...Int(totalFrames) {
        timeFrame = CMTimeMake(value: Int64(i), timescale: Int32(fps))
        let timeValue = NSValue(time: timeFrame)
        times.append(timeValue)
    }

    let imageGenerator = AVAssetImageGenerator(asset: urlAsset)
    imageGenerator.requestedTimeToleranceBefore = CMTime.zero
    imageGenerator.requestedTimeToleranceAfter = CMTime.zero

    let timesCount = times.count

    // Kick off the thumbnail requests
    imageGenerator.generateCGImagesAsynchronously(forTimes: times) { (requestedTime, image, actualTime, result, error) in
        var isSuccess = false
        switch result {
        case AVAssetImageGenerator.Result.cancelled:
            print("cancelled------")
        case AVAssetImageGenerator.Result.failed:
            print("failed++++++")
        case AVAssetImageGenerator.Result.succeeded:
            let frameImg = UIImage(cgImage: image!)
            splitImages.append(self.flipImage(image: frameImg, orientaion: 1))
            if (Int(requestedTime.value) == (timesCount - 1)) { // Last frame: deliver the result
                isSuccess = true
                splitCompleteClosure(isSuccess, splitImages)
                print("completed")
            }
        @unknown default:
            break
        }
    }
}

// At the call site, update the UI in the callback
self.splitVideoFileUrlFps(splitFileUrl: url, fps: 1) { [weak self] (isSuccess, splitImgs) in
    if isSuccess {
        // The method is asynchronous, so return to the main thread to update the UI
        DispatchQueue.main.async {

        }
        print("Total image count: \(String(describing: self?.imageArr.count))")
    }
}
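A side note: if the user leaves the editor while thumbnails are still generating, AVAssetImageGenerator's cancelAllCGImageGeneration() cancels the outstanding requests. Holding the generator in a property, as sketched below, is my assumption; the code above creates it locally:

    // Keep the generator alive while requests are in flight (hypothetical property)
    var imageGenerator: AVAssetImageGenerator?

    // On teardown, cancel pending requests; each one then calls back with .cancelled
    func cancelThumbnailGeneration() {
        imageGenerator?.cancelAllCGImageGeneration()
    }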

 

3. Seeking to a Specified Time

 /**
     @method			seekToTime:toleranceBefore:toleranceAfter:
     @abstract			Moves the playback cursor within a specified time bound.
     @param				time
     @param				toleranceBefore
     @param				toleranceAfter
     @discussion		Use this method to seek to a specified time for the current player item.
    					The time seeked to will be within the range [time-toleranceBefore, time+toleranceAfter] and may differ from the specified time for efficiency.
    					Pass kCMTimeZero for both toleranceBefore and toleranceAfter to request sample accurate seeking which may incur additional decoding delay. 
    					Messaging this method with beforeTolerance:kCMTimePositiveInfinity and afterTolerance:kCMTimePositiveInfinity is the same as messaging seekToTime: directly.
     */
    open func seek(to time: CMTime, toleranceBefore: CMTime, toleranceAfter: CMTime)

The method takes three parameters: time: CMTime, toleranceBefore: CMTime, and toleranceAfter: CMTime. The time parameter is easy to understand: the time to seek to. As the official comments explain, the other two are, simply put, the tolerated margin of error: the player will land somewhere within the range [time-toleranceBefore, time+toleranceAfter]. If you pass kCMTimeZero for both (in my current SDK version this constant has been renamed CMTime.zero), you get sample-accurate seeking, at the cost of additional decoding time.

Example:

    let duration = self.avPlayer.currentItem?.duration ?? CMTime.zero
    let totalTime = CMTimeGetSeconds(duration)
    let scale = duration.timescale

    // width: how far along the video track we want to jump; videoWidth: total track length
    let progress = Double(width / videoWidth)

    // Seek (frame-accurate, because both tolerances are zero)
    self.avPlayer.seek(to: CMTimeMake(value: Int64(totalTime * progress * Double(scale)), timescale: scale),
                       toleranceBefore: CMTime.zero, toleranceAfter: CMTime.zero)

 

4. Observing the Player

By observing the player we can drive the movement of the video track, keeping the player and the track in sync.

/**
    	@method			addPeriodicTimeObserverForInterval:queue:usingBlock:
    	@abstract		Requests invocation of a block during playback to report changing time.
    	@param			interval
    	  The interval of invocation of the block during normal playback, according to progress of the current time of the player.
    	@param			queue
    	  The serial queue onto which block should be enqueued.  If you pass NULL, the main queue (obtained using dispatch_get_main_queue()) will be used.  Passing a
    	  concurrent queue to this method will result in undefined behavior.
    	@param			block
    	  The block to be invoked periodically.
    	@result
    	  An object conforming to the NSObject protocol.  You must retain this returned value as long as you want the time observer to be invoked by the player.
    	  Pass this object to -removeTimeObserver: to cancel time observation.
    	@discussion		The block is invoked periodically at the interval specified, interpreted according to the timeline of the current item.
    					The block is also invoked whenever time jumps and whenever playback starts or stops.
    					If the interval corresponds to a very short interval in real time, the player may invoke the block less frequently
    					than requested. Even so, the player will invoke the block sufficiently often for the client to update indications
    					of the current time appropriately in its end-user interface.
    					Each call to -addPeriodicTimeObserverForInterval:queue:usingBlock: should be paired with a corresponding call to -removeTimeObserver:.
    					Releasing the observer object without a call to -removeTimeObserver: will result in undefined behavior.
    */
    open func addPeriodicTimeObserver(forInterval interval: CMTime, queue: DispatchQueue?, using block: @escaping (CMTime) -> Void) -> Any

The important parameter is interval: CMTime. It determines how often the block is invoked, and if you move the video track's frame inside this callback, it also determines how smoothly the track moves.

Example:

// Observe the player
self.avPlayer.addPeriodicTimeObserver(forInterval: CMTimeMake(value: 1, timescale: 120), queue: DispatchQueue.main) { [weak self] (time) in
    // Keep the track in sync with playback here
}

Conflict with the seek method

This observer and the seek method from part 3 create a problem: seeking triggered by dragging the video track also fires this callback, producing an infinite loop: drag the track (change its frame) -> seek -> callback fires -> change the frame again. You need a guard condition so the callback does nothing in that case, as sketched below.
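One way to break the loop is a simple flag. This is a sketch of the idea; the isDraggingTrack property is a hypothetical name, not from the original project:

    var isDraggingTrack = false   // hypothetical flag, set to true while the user drags the track

    self.avPlayer.addPeriodicTimeObserver(forInterval: CMTimeMake(value: 1, timescale: 120),
                                          queue: DispatchQueue.main) { [weak self] time in
        guard let self = self, !self.isDraggingTrack else { return }
        // This tick came from normal playback, not from a seek we triggered
        // ourselves, so it is safe to move the track's frame here
    }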

Timing issues from linking seeks to the player

Playback is asynchronous, and the seek method needs time to decode, so a time lag appears while the two are linked. Worse, just when you believe the seek has finished and try to reposition the video track, decoding latency means the callback delivers a few stale times, making the track jitter back and forth. The approach taken in this project is to check, inside the callback, whether the frame about to be applied is valid (not too large or too small).
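A sketch of that validity check; the bounds and names here are hypothetical placeholders, not the project's actual values:

    // Hypothetical bounds derived from the track layout
    let minTrackX: CGFloat = 0
    let maxTrackX: CGFloat = videoWidth

    // Apply a callback-driven frame change only when the proposed value is sane;
    // stale times delivered right after a seek fall outside the range and are dropped
    func applyIfValid(_ newX: CGFloat, to trackView: UIView) {
        guard newX >= minTrackX, newX <= maxTrackX else { return }
        trackView.frame.origin.x = newX
    }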

PS: If you have better solutions to either of these problems, I'd love to discuss them!

 

5. Exporting the Video

 /**
        @method         insertTimeRange:ofTrack:atTime:error:
        @abstract       Inserts a timeRange of a source track into a track of a composition.
        @param          timeRange
                        Specifies the timeRange of the track to be inserted.
        @param          track
                        Specifies the source track to be inserted. Only AVAssetTracks of AVURLAssets and AVCompositions are supported (AVCompositions starting in MacOS X 10.10 and iOS 8.0).
        @param          startTime
                        Specifies the time at which the inserted track is to be presented by the composition track. You may pass kCMTimeInvalid for startTime to indicate that the timeRange should be appended to the end of the track.
        @param          error
                        Describes failures that may be reported to the user, e.g. the asset that was selected for insertion in the composition is restricted by copy-protection.
        @result         A BOOL value indicating the success of the insertion.
        @discussion
          You provide a reference to an AVAssetTrack and the timeRange within it that you want to insert. You specify the start time in the target composition track at which the timeRange should be inserted.
    
          Note that the inserted track timeRange will be presented at its natural duration and rate. It can be scaled to a different duration (and presented at a different rate) via -scaleTimeRange:toDuration:.
    */
    open func insertTimeRange(_ timeRange: CMTimeRange, of track: AVAssetTrack, at startTime: CMTime) throws

The method takes three parameters:

timeRange: CMTimeRange: the time range of the source track to insert.

track: AVAssetTrack: the source track to insert. Only AVAssetTracks of AVURLAssets and AVCompositions are supported (AVCompositions starting in Mac OS X 10.10 and iOS 8.0).

startTime: CMTime: the time in the composition track at which the inserted track should be presented. You may pass kCMTimeInvalid to append the time range to the end of the track.

Example:

let composition = AVMutableComposition()

// Add video and audio tracks to the composition
let videoTrack = composition.addMutableTrack(
                    withMediaType: AVMediaType.video, preferredTrackID: CMPersistentTrackID())
let audioTrack = composition.addMutableTrack(
                    withMediaType: AVMediaType.audio, preferredTrackID: CMPersistentTrackID())

let asset = AVAsset.init(url: self.url)

var insertTime: CMTime = CMTime.zero

let timeScale = self.avPlayer.currentItem?.duration.timescale

// Walk through the info for each clip
for clipsInfo in self.clipsInfoArr {

    // Total duration of the clip
    let clipsDuration = Double(Float(clipsInfo.width) / self.videoWidth) * self.totalTime

    // Start time of the clip
    let startDuration = -Float(clipsInfo.offset) / self.perSecondLength

    do {
        try videoTrack?.insertTimeRange(CMTimeRangeMake(start: CMTimeMake(value: Int64(startDuration * Float(timeScale!)), timescale: timeScale!), duration: CMTimeMake(value: Int64(clipsDuration * Double(timeScale!)), timescale: timeScale!)), of: asset.tracks(withMediaType: AVMediaType.video)[0], at: insertTime)
    } catch _ {}

    do {
        try audioTrack?.insertTimeRange(CMTimeRangeMake(start: CMTimeMake(value: Int64(startDuration * Float(timeScale!)), timescale: timeScale!), duration: CMTimeMake(value: Int64(clipsDuration * Double(timeScale!)), timescale: timeScale!)), of: asset.tracks(withMediaType: AVMediaType.audio)[0], at: insertTime)
    } catch _ {}

    insertTime = CMTimeAdd(insertTime, CMTimeMake(value: Int64(clipsDuration * Double(timeScale!)), timescale: timeScale!))
}

videoTrack?.preferredTransform = CGAffineTransform(rotationAngle: CGFloat.pi / 2)

// Build the output path for the merged video
let documentsPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]

let destinationPath = documentsPath + "/mergeVideo-\(arc4random() % 1000).mov"
print("Merged video path: \(destinationPath)")

End: with these few APIs plus the interaction logic, you can build the complete editing feature! If anything in this article falls short, feel free to point it out!


Original link: https://blog.csdn.net/bqiss/article/details/121772575
