

Implementing WeChat Moments-Style Video Trimming on iOS

2020-07-26 02:25:26

Preface

WeChat is everywhere these days and its features keep getting more powerful. You may have noticed the video-trimming step when posting a video to Moments, or the similar editing feature in Apple's own camera app (my guess is that WeChat modeled its version on Apple's). It is a genuinely handy feature, and since I have been digging into audio/video work lately, I decided to implement it myself.

The feature looks simple enough, but the implementation had plenty of pitfalls. This post is partly a record for myself and partly another pass over the implementation, which should also make the code easier to follow.

Result

Let's first look at the result I ended up with:

 

Implementation

Breaking down the implementation

The whole feature breaks down into three parts:

  • Video playback

For this part we just wrap a standalone video player.

  • The scrubbing view at the bottom

This part is the most involved and is itself split into four pieces: the gray masks, the left and right handle sliders, the top and bottom lines between the handles, and the thumbnail strip view.

  • Assembling the view logic in the controller and implementing the feature

Encapsulating the video player

Playback is implemented with three classes: AVPlayer, AVPlayerLayer, and AVPlayerItem. Since all of the player's events are observed through KVO, block properties are exposed so callers outside the class can listen to them.

#import "FOFMoviePlayer.h"@interface FOFMoviePlayer(){  AVPlayerLooper *_playerLooper;  AVPlayerItem *_playItem;  BOOL _loop;}@property(nonatomic,strong)NSURL *url;@property(nonatomic,strong)AVPlayer *player;@property(nonatomic,strong)AVPlayerLayer *playerLayer;@property(nonatomic,strong)AVPlayerItem *playItem;@property (nonatomic,assign) CMTime duration;@end@implementation FOFMoviePlayer-(instancetype)initWithFrame:(CGRect)frame url:(NSURL *)url superLayer:(CALayer *)superLayer{  self = [super init];  if (self) {    [self initplayers:superLayer];    _playerLayer.frame = frame;    self.url = url;  }  return self;}-(instancetype)initWithFrame:(CGRect)frame url:(NSURL *)url superLayer:(CALayer *)superLayer loop:(BOOL)loop{  self = [self initWithFrame:frame url:url superLayer:superLayer];  if (self) {    _loop = loop;  }  return self;}- (void)initplayers:(CALayer *)superLayer{  self.player = [[AVPlayer alloc] init];  self.playerLayer = [AVPlayerLayer playerLayerWithPlayer:self.player];  self.playerLayer.videoGravity = AVLayerVideoGravityResize;  [superLayer addSublayer:self.playerLayer];}- (void)initLoopPlayers:(CALayer *)superLayer{  self.player = [[AVQueuePlayer alloc] init];  self.playerLayer = [AVPlayerLayer playerLayerWithPlayer:self.player];  self.playerLayer.videoGravity = AVLayerVideoGravityResize;  [superLayer addSublayer:self.playerLayer];}-(void)fof_play{  [self.player play];}-(void)fof_pause{  [self.player pause];}#pragma mark - Observe-(void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary*)change context:(void *)context{  if ([keyPath isEqualToString:@"status"]) {    AVPlayerItem *item = (AVPlayerItem *)object;    AVPlayerItemStatus status = [[change objectForKey:@"new"] intValue]; // 獲取更改后的狀態    if (status == AVPlayerItemStatusReadyToPlay) {      _duration = item.duration;//只有在此狀態下才能獲取,不能在AVPlayerItem初始化后馬上獲取      NSLog(@"準備播放");      if (self.blockStatusReadyPlay) {        self.blockStatusReadyPlay(item);      }    } else if (status == AVPlayerItemStatusFailed) {      if (self.blockStatusFailed) {        self.blockStatusFailed();      }      AVPlayerItem *item = (AVPlayerItem *)object;      NSLog(@"%@",item.error);      NSLog(@"AVPlayerStatusFailed");    } else {      self.blockStatusUnknown();      NSLog(@"%@",item.error);      NSLog(@"AVPlayerStatusUnknown");    }  }else if ([keyPath isEqualToString:@"tracking"]){    NSInteger status = [change[@"new"] integerValue];    if (self.blockTracking) {      self.blockTracking(status);    }    if (status) {//正在拖動      [self.player pause];    }else{//停止拖動    }  }else if ([keyPath isEqualToString:@"loadedTimeRanges"]){    NSArray *array = _playItem.loadedTimeRanges;    CMTimeRange timeRange = [array.firstObject CMTimeRangeValue];//本次緩沖時間范圍    CGFloat startSeconds = CMTimeGetSeconds(timeRange.start);    CGFloat durationSeconds = CMTimeGetSeconds(timeRange.duration);    NSTimeInterval totalBuffer = startSeconds + durationSeconds;//緩沖總長度    double progress = totalBuffer/CMTimeGetSeconds(_duration);    if (self.blockLoadedTimeRanges) {      self.blockLoadedTimeRanges(progress);    }    NSLog(@"當前緩沖時間:%f",totalBuffer);  }else if ([keyPath isEqualToString:@"playbackBufferEmpty"]){    NSLog(@"緩存不夠,不能播放!");  }else if ([keyPath isEqualToString:@"playbackLikelyToKeepUp"]){    if (self.blockPlaybackLikelyToKeepUp) {      self.blockPlaybackLikelyToKeepUp([change[@"new"] boolValue]);    }  }}-(void)setUrl:(NSURL *)url{  _url = url;  [self.player replaceCurrentItemWithPlayerItem:self.playItem];}-(AVPlayerItem 
*)playItem{  _playItem = [[AVPlayerItem alloc] initWithURL:_url];  //監聽播放器的狀態,準備好播放、失敗、未知錯誤  [_playItem addObserver:self forKeyPath:@"status" options:NSKeyValueObservingOptionNew context:nil];  //  監聽緩存的時間  [_playItem addObserver:self forKeyPath:@"loadedTimeRanges" options:NSKeyValueObservingOptionNew context:nil];  //  監聽獲取當緩存不夠,視頻加載不出來的情況:  [_playItem addObserver:self forKeyPath:@"playbackBufferEmpty" options:NSKeyValueObservingOptionNew context:nil];  //  用于監聽緩存足夠播放的狀態  [_playItem addObserver:self forKeyPath:@"playbackLikelyToKeepUp" options:NSKeyValueObservingOptionNew context:nil];  [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(private_playerMovieFinish) name:AVPlayerItemDidPlayToEndTimeNotification object:nil];  return _playItem;}- (void)private_playerMovieFinish{  NSLog(@"播放結束");  if (self.blockPlayToEndTime) {    self.blockPlayToEndTime();  }  if (_loop) {//默認提供一個循環播放的功能    [self.player pause];    CMTime time = CMTimeMake(1, 1);    __weak typeof(self)this = self;    [self.player seekToTime:time completionHandler:^(BOOL finished) {      [this.player play];    }];  }}-(void)dealloc{  NSLog(@"-----銷毀-----");}@end

I won't go into the player in depth here; I plan to write a separate post just about the video player.

The scrubbing view at the bottom

The gray masks

The gray masks are simple; plain UIViews are enough:

self.leftMaskView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 0, height)];
self.leftMaskView.backgroundColor = [UIColor grayColor];
self.leftMaskView.alpha = 0.8;
[self addSubview:self.leftMaskView];

self.rightMaskView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 0, height)];
self.rightMaskView.backgroundColor = [UIColor grayColor];
self.rightMaskView.alpha = 0.8;

The top and bottom lines between the handles

The two lines are wrapped in their own view, Line. At first I thought a plain UIView would do, but I ran into a problem: while dragging, the lines did not move at the same speed as the handles; they lagged behind.

@implementation Line

- (void)setBeginPoint:(CGPoint)beginPoint {
    _beginPoint = beginPoint;
    [self setNeedsDisplay];
}

- (void)setEndPoint:(CGPoint)endPoint {
    _endPoint = endPoint;
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(context, 3);
    CGContextSetStrokeColorWithColor(context, [UIColor colorWithWhite:0.9 alpha:1].CGColor);
    CGContextMoveToPoint(context, self.beginPoint.x, self.beginPoint.y);
    CGContextAddLineToPoint(context, self.endPoint.x, self.endPoint.y);
    CGContextStrokePath(context);
}

@end

The thumbnail strip view

Here a VideoPieces view is introduced to assemble the handles, lines, and masks, and to display the thumbnails. Since there are only 10 images, this is just a for loop that adds 10 UIImageViews.

@interface VideoPieces () {
    CGPoint _beginPoint;
}
@property (nonatomic, strong) Haft *leftHaft;
@property (nonatomic, strong) Haft *rightHaft;
@property (nonatomic, strong) Line *topLine;
@property (nonatomic, strong) Line *bottomLine;
@property (nonatomic, strong) UIView *leftMaskView;
@property (nonatomic, strong) UIView *rightMaskView;
@end

@implementation VideoPieces

- (instancetype)initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        [self initSubViews:frame];
    }
    return self;
}

- (void)initSubViews:(CGRect)frame {
    CGFloat height = CGRectGetHeight(frame);
    CGFloat width = CGRectGetWidth(frame);
    CGFloat minGap = 30;
    CGFloat widthHaft = 10;
    CGFloat heightLine = 3;

    _leftHaft = [[Haft alloc] initWithFrame:CGRectMake(0, 0, widthHaft, height)];
    _leftHaft.alpha = 0.8;
    _leftHaft.backgroundColor = [UIColor colorWithWhite:0.9 alpha:1];
    _leftHaft.rightEdgeInset = 20;
    _leftHaft.lefEdgeInset = 5;

    __weak typeof(self) this = self;
    // NOTE: the original page clipped part of this method (an unescaped "<" swallowed the
    // left-handle branch and the right handle's setup). Those lines are reconstructed here
    // to mirror the surviving right-handle block; only that block is verbatim from the post.
    [_leftHaft setBlockMove:^(CGPoint point) {
        CGFloat maxX = this.rightHaft.frame.origin.x - minGap;
        if (point.x >= 0 && point.x < maxX) {
            this.topLine.beginPoint = CGPointMake(point.x, heightLine / 2.0);
            this.bottomLine.beginPoint = CGPointMake(point.x, heightLine / 2.0);
            this.leftHaft.frame = CGRectMake(point.x, 0, widthHaft, height);
            this.leftMaskView.frame = CGRectMake(0, 0, point.x, height);
            if (this.blockSeekOffLeft) {
                this.blockSeekOffLeft(point.x);
            }
        }
    }];
    [_leftHaft setBlockMoveEnd:^{
        if (this.blockMoveEnd) {
            this.blockMoveEnd();
        }
    }];

    _rightHaft = [[Haft alloc] initWithFrame:CGRectMake(width - widthHaft, 0, widthHaft, height)];
    _rightHaft.alpha = 0.8;
    _rightHaft.backgroundColor = [UIColor colorWithWhite:0.9 alpha:1];
    _rightHaft.lefEdgeInset = 20;
    _rightHaft.rightEdgeInset = 5;
    [_rightHaft setBlockMove:^(CGPoint point) {
        CGFloat minX = this.leftHaft.frame.origin.x + widthHaft + minGap;
        if (point.x <= width - widthHaft && point.x >= minX) {
            this.topLine.endPoint = CGPointMake(point.x - widthHaft, heightLine / 2.0);
            this.bottomLine.endPoint = CGPointMake(point.x - widthHaft, heightLine / 2.0);
            this.rightHaft.frame = CGRectMake(point.x, 0, widthHaft, height);
            this.rightMaskView.frame = CGRectMake(point.x + widthHaft, 0, width - point.x - widthHaft, height);
            if (this.blockSeekOffRight) {
                this.blockSeekOffRight(point.x);
            }
        }
    }];
    [_rightHaft setBlockMoveEnd:^{
        if (this.blockMoveEnd) {
            this.blockMoveEnd();
        }
    }];

    _topLine = [[Line alloc] init];
    _topLine.alpha = 0.8;
    _topLine.frame = CGRectMake(widthHaft, 0, width - 2 * widthHaft, heightLine);
    _topLine.beginPoint = CGPointMake(0, heightLine / 2.0);
    _topLine.endPoint = CGPointMake(CGRectGetWidth(_topLine.bounds), heightLine / 2.0);
    _topLine.backgroundColor = [UIColor clearColor];
    [self addSubview:_topLine];

    _bottomLine = [[Line alloc] init];
    _bottomLine.alpha = 0.8;
    _bottomLine.frame = CGRectMake(widthHaft, height - heightLine, width - 2 * widthHaft, heightLine);
    _bottomLine.beginPoint = CGPointMake(0, heightLine / 2.0);
    _bottomLine.endPoint = CGPointMake(CGRectGetWidth(_bottomLine.bounds), heightLine / 2.0);
    _bottomLine.backgroundColor = [UIColor clearColor];
    [self addSubview:_bottomLine];

    [self addSubview:_leftHaft];
    [self addSubview:_rightHaft];

    self.leftMaskView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 0, height)];
    self.leftMaskView.backgroundColor = [UIColor grayColor];
    self.leftMaskView.alpha = 0.8;
    [self addSubview:self.leftMaskView];

    self.rightMaskView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 0, height)];
    self.rightMaskView.backgroundColor = [UIColor grayColor];
    self.rightMaskView.alpha = 0.8;
    [self addSubview:self.rightMaskView];
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = touches.anyObject;
    _beginPoint = [touch locationInView:self];
}

@end
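The for loop that actually lays out the 10 thumbnails is not included in the excerpt above. A minimal sketch, assuming a hypothetical -setImages: method on VideoPieces, could look like this:

// Hypothetical helper, not part of the original excerpt: lay the thumbnails out
// edge to edge and keep the handles, lines and masks above them.
- (void)setImages:(NSArray<UIImage *> *)images {
    CGFloat height = CGRectGetHeight(self.bounds);
    CGFloat imageWidth = CGRectGetWidth(self.bounds) / images.count;
    for (NSInteger i = 0; i < images.count; i++) {
        UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(i * imageWidth, 0, imageWidth, height)];
        imageView.image = images[i];
        imageView.contentMode = UIViewContentModeScaleAspectFill;
        imageView.clipsToBounds = YES;
        [self insertSubview:imageView atIndex:0]; // below the handles, lines and masks
    }
}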

Implementing the handles

The handles include one small optimization for drag responsiveness. Initially dragging with a finger was not very responsive: the finger would often move while the handle stayed put.

The fix is simply to enlarge the area that receives touch events by overriding -(BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event:

@implementation Haft

- (instancetype)initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        self.userInteractionEnabled = true;
    }
    return self;
}

// Enlarge the touch area by the edge insets so the handle is easier to grab.
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    CGRect rect = CGRectMake(self.bounds.origin.x - self.lefEdgeInset,
                             self.bounds.origin.y - self.topEdgeInset,
                             CGRectGetWidth(self.bounds) + self.lefEdgeInset + self.rightEdgeInset,
                             CGRectGetHeight(self.bounds) + self.bottomEdgeInset + self.topEdgeInset);
    if (CGRectContainsPoint(rect, point)) {
        return YES;
    }
    return NO;
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    NSLog(@"touches began");
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    NSLog(@"move");
    UITouch *touch = touches.anyObject;
    CGPoint point = [touch locationInView:self.superview];
    CGFloat maxX = CGRectGetWidth(self.superview.bounds) - CGRectGetWidth(self.bounds);
    if (point.x > maxX) {
        point.x = maxX;
    }
    if (point.x >= 0 && point.x <= (CGRectGetWidth(self.superview.bounds) - CGRectGetWidth(self.bounds)) && self.blockMove) {
        self.blockMove(point);
    }
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    if (self.blockMoveEnd) {
        self.blockMoveEnd();
    }
}

// Draw the two short vertical grip lines in the middle of the handle.
- (void)drawRect:(CGRect)rect {
    CGFloat width = CGRectGetWidth(self.bounds);
    CGFloat height = CGRectGetHeight(self.bounds);
    CGFloat lineWidth = 1.5;
    CGFloat lineHeight = 12;
    CGFloat gap = (width - lineWidth * 2) / 3.0;
    CGFloat lineY = (height - lineHeight) / 2.0;
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(context, lineWidth);
    CGContextSetStrokeColorWithColor(context, [[UIColor grayColor] colorWithAlphaComponent:0.8].CGColor);
    CGContextMoveToPoint(context, gap + lineWidth / 2, lineY);
    CGContextAddLineToPoint(context, gap + lineWidth / 2, lineY + lineHeight);
    CGContextStrokePath(context);
    CGContextSetLineWidth(context, lineWidth);
    CGContextSetStrokeColorWithColor(context, [[UIColor grayColor] colorWithAlphaComponent:0.8].CGColor);
    CGContextMoveToPoint(context, gap * 2 + lineWidth + lineWidth / 2, lineY);
    CGContextAddLineToPoint(context, gap * 2 + lineWidth + lineWidth / 2, lineY + lineHeight);
    CGContextStrokePath(context);
}

@end

Assembling the view logic in the controller and implementing the feature

This is the most important and most complex part of the logic.

Getting the 10 thumbnails

- (NSArray *)getVideoThumbnail:(NSString *)path count:(NSInteger)count splitCompleteBlock:(void (^)(BOOL success, NSMutableArray *splitimgs))splitCompleteBlock {
    AVAsset *asset = [AVAsset assetWithURL:[NSURL fileURLWithPath:path]];
    NSMutableArray *arrayImages = [NSMutableArray array];
    [asset loadValuesAsynchronouslyForKeys:@[@"duration"] completionHandler:^{
        AVAssetImageGenerator *generator = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
//      generator.maximumSize = CGSizeMake(480, 136); // with CGSizeMake(480,136) the generated image comes back as {240, 136}, scaled in proportion to the real size
        generator.appliesPreferredTrackTransform = YES; // makes sure the generated images have the correct orientation (some videos are only upright after rotating the device)
        // Both tolerances need to be set; without them the timing error is noticeable, with them it is tiny.
        generator.requestedTimeToleranceAfter = kCMTimeZero;
        generator.requestedTimeToleranceBefore = kCMTimeZero;
        Float64 seconds = CMTimeGetSeconds(asset.duration);
        NSMutableArray *array = [NSMutableArray array];
        for (int i = 0; i < count; i++) {
            CMTime time = CMTimeMakeWithSeconds(i * (seconds / 10.0), 1); // the time at which we want a frame
            [array addObject:[NSValue valueWithCMTime:time]];
        }
        __block int i = 0;
        [generator generateCGImagesAsynchronouslyForTimes:array completionHandler:^(CMTime requestedTime, CGImageRef _Nullable imageRef, CMTime actualTime, AVAssetImageGeneratorResult result, NSError * _Nullable error) {
            i++;
            if (result == AVAssetImageGeneratorSucceeded) {
                UIImage *image = [UIImage imageWithCGImage:imageRef];
                [arrayImages addObject:image];
            } else {
                NSLog(@"failed to generate the thumbnail!");
            }
            if (i == count) {
                dispatch_async(dispatch_get_main_queue(), ^{
                    splitCompleteBlock(YES, arrayImages);
                });
            }
        }];
    }];
    return arrayImages;
}

The 10 thumbnails are easy to get, but note one thing: dispatch the callback asynchronously onto the main queue, otherwise the images show up with a noticeable delay.

Listening to the left and right handle events

[_videoPieces setBlockSeekOffLeft:^(CGFloat offX) {
    this.seeking = true;
    [this.moviePlayer fof_pause];
    this.lastStartSeconds = this.totalSeconds * offX / CGRectGetWidth(this.videoPieces.bounds);
    [this.moviePlayer.player seekToTime:CMTimeMakeWithSeconds(this.lastStartSeconds, 1) toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];
}];
[_videoPieces setBlockSeekOffRight:^(CGFloat offX) {
    this.seeking = true;
    [this.moviePlayer fof_pause];
    this.lastEndSeconds = this.totalSeconds * offX / CGRectGetWidth(this.videoPieces.bounds);
    [this.moviePlayer.player seekToTime:CMTimeMakeWithSeconds(this.lastEndSeconds, 1) toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];
}];

By listening to the left and right handle events, the drag offset is converted into a time, which sets the player's start time and end time.

Looping playback

self.timeObserverToken = [self.moviePlayer.player addPeriodicTimeObserverForInterval:CMTimeMakeWithSeconds(0.5, NSEC_PER_SEC) queue:dispatch_get_main_queue() usingBlock:^(CMTime time) {
    if (!this.seeking) {
        if (fabs(CMTimeGetSeconds(time) - this.lastEndSeconds) <= 0.02) {
            [this.moviePlayer fof_pause];
            [this private_replayAtBeginTime:this.lastStartSeconds];
        }
    }
}];
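The private_replayAtBeginTime: helper called above is not shown in the post. A minimal sketch, assuming it only needs to seek back to the left handle's time with zero tolerance and resume playback, could be:

// Hypothetical implementation of the replay helper (assumption, not from the post):
// seek back to the start time chosen with the left handle and resume playback.
- (void)private_replayAtBeginTime:(CGFloat)beginTime {
    __weak typeof(self) this = self;
    [self.moviePlayer.player seekToTime:CMTimeMakeWithSeconds(beginTime, 1)
                        toleranceBefore:kCMTimeZero
                         toleranceAfter:kCMTimeZero
                      completionHandler:^(BOOL finished) {
        [this.moviePlayer fof_play];
    }];
}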

Two things to note here:

1. The observer added with addPeriodicTimeObserverForInterval: must be removed, otherwise it leaks memory.

- (void)dealloc {
    [self.moviePlayer.player removeTimeObserver:self.timeObserverToken];
}

2. We observe the playback time and check whether it has reached the time set by the right handle; if it has, playback restarts from the start time. I thought about this for quite a while: how do you trim while playing? I almost went down the wrong path of actually cutting the video on every change. In fact nothing needs to be cut during preview; just control the start and end times of playback, and perform the actual trim only once at the end.
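That final one-off trim is not shown in the post either. A minimal sketch using AVAssetExportSession, assuming the start/end seconds picked with the handles and a caller-supplied output URL, might look like this:

// Hypothetical export step (assumption, not from the post): once the user confirms,
// cut [startSeconds, endSeconds] out of the source asset in a single pass.
- (void)exportClipFromAsset:(AVAsset *)asset
                      start:(CGFloat)startSeconds
                        end:(CGFloat)endSeconds
                  outputURL:(NSURL *)outputURL
                 completion:(void (^)(BOOL success))completion {
    AVAssetExportSession *session = [[AVAssetExportSession alloc] initWithAsset:asset
                                                                     presetName:AVAssetExportPresetHighestQuality];
    session.outputURL = outputURL;
    session.outputFileType = AVFileTypeMPEG4;
    CMTime start = CMTimeMakeWithSeconds(startSeconds, 600);
    CMTime duration = CMTimeMakeWithSeconds(endSeconds - startSeconds, 600);
    session.timeRange = CMTimeRangeMake(start, duration); // only this range ends up in the output file
    [session exportAsynchronouslyWithCompletionHandler:^{
        dispatch_async(dispatch_get_main_queue(), ^{
            completion(session.status == AVAssetExportSessionStatusCompleted);
        });
    }];
}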

Summary

Implementing this WeChat-style short-video editing really did throw up quite a few small problems, but after careful digging it all came together in the end, which was a real relief. Haha.

Source code

Source code on GitHub

