Example scenario
The controller's root view contains two views, one red and one blue, where the blue view is a subview of the red view. The goal is to enlarge the blue view's tap area so that taps landing in the red view's non-blue region are also handled by the blue view.
Design and implementation
How can this be done? Essentially, we need to enlarge the blue view's responsive area. This involves responder objects, so first: what is a responder?
Responder objects
In iOS, not every object can respond to events. Only objects that inherit from UIResponder can receive and respond to events; these are called "responder objects". UIApplication, UIWindow, UIViewController, UIView, and every UIKit class that inherits from UIView all inherit, directly or indirectly, from UIResponder, so they are all responder objects and can all receive and respond to events.
Touch-handling methods in UIResponder
Any subclass of UIResponder can override the following four methods to handle the different phases of a touch:
1. One or more fingers touch down on the screen:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
2. One or more fingers move across the screen (called repeatedly as the fingers move):
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
3. One or more fingers lift off the screen:
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
4. A system event (such as an incoming phone call) interrupts the touch before it ends:
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
The UITouch object
In the methods above, each element of the touches parameter is a UITouch, which is declared as follows:
UIKIT_EXTERN API_AVAILABLE(ios(2.0)) @interface UITouch : NSObject
// The time, in seconds, when the touch originated or was last changed
@property(nonatomic,readonly) NSTimeInterval timestamp;
// A touch goes through a life cycle on screen: it begins, moves, ends, or is cancelled midway. phase tells you where in that cycle the touch currently is.
@property(nonatomic,readonly) UITouchPhase phase;
// Number of taps (a first tap counts as 1, a second quick tap as 2)
@property(nonatomic,readonly) NSUInteger tapCount;
@property(nonatomic,readonly) UITouchType type API_AVAILABLE(ios(9.0));
// majorRadius and majorRadiusTolerance are in points
// The majorRadius will be accurate +/- the majorRadiusTolerance
@property(nonatomic,readonly) CGFloat majorRadius API_AVAILABLE(ios(8.0));
@property(nonatomic,readonly) CGFloat majorRadiusTolerance API_AVAILABLE(ios(8.0));
@property(nullable,nonatomic,readonly,strong) UIWindow *window;
// The view the touch landed in
@property(nullable,nonatomic,readonly,strong) UIView *view;
@property(nullable,nonatomic,readonly,copy) NSArray <UIGestureRecognizer *> *gestureRecognizers API_AVAILABLE(ios(3.2));
// The touch's current location
- (CGPoint)locationInView:(nullable UIView *)view;
// The touch's previous location
- (CGPoint)previousLocationInView:(nullable UIView *)view;
// Use these methods to gain additional precision that may be available from touches.
// Do not use precise locations for hit testing. A touch may hit test inside a view, yet have a precise location that lies just outside.
- (CGPoint)preciseLocationInView:(nullable UIView *)view API_AVAILABLE(ios(9.1));
- (CGPoint)precisePreviousLocationInView:(nullable UIView *)view API_AVAILABLE(ios(9.1));
// Force of the touch, where 1.0 represents the force of an average touch
@property(nonatomic,readonly) CGFloat force API_AVAILABLE(ios(9.0));
// Maximum possible force with this input mechanism
@property(nonatomic,readonly) CGFloat maximumPossibleForce API_AVAILABLE(ios(9.0));
// Azimuth angle. Valid only for stylus touch types. Zero radians points along the positive X axis.
// Passing a nil for the view parameter will return the azimuth relative to the touch's window.
- (CGFloat)azimuthAngleInView:(nullable UIView *)view API_AVAILABLE(ios(9.1));
// A unit vector that points in the direction of the azimuth angle. Valid only for stylus touch types.
// Passing nil for the view parameter will return a unit vector relative to the touch's window.
- (CGVector)azimuthUnitVectorInView:(nullable UIView *)view API_AVAILABLE(ios(9.1));
// Altitude angle. Valid only for stylus touch types.
// Zero radians indicates that the stylus is parallel to the screen surface,
// while M_PI/2 radians indicates that it is normal to the screen surface.
@property(nonatomic,readonly) CGFloat altitudeAngle API_AVAILABLE(ios(9.1));
// An index which allows you to correlate updates with the original touch.
// Is only guaranteed non-nil if this UITouch expects or is an update.
@property(nonatomic,readonly) NSNumber * _Nullable estimationUpdateIndex API_AVAILABLE(ios(9.1));
// A set of properties that has estimated values
// Only denoting properties that are currently estimated
@property(nonatomic,readonly) UITouchProperties estimatedProperties API_AVAILABLE(ios(9.1));
// A set of properties that expect to have incoming updates in the future.
// If no updates are expected for an estimated property the current value is our final estimate.
// This happens e.g. for azimuth/altitude values when entering from the edges
@property(nonatomic,readonly) UITouchProperties estimatedPropertiesExpectingUpdates API_AVAILABLE(ios(9.1));
@end
Exercising the touch-handling methods
Below is a simple demo in which a view follows the finger as it moves. Single touch:
//
// CPViewController.m
#import "CPViewController.h"
@interface CPViewController ()
@property (nonatomic, strong) UIImageView *myView;
@end
@implementation CPViewController
// NSSet exercise
- (void)demoSet
{
    // NSSet: also stores a group of objects, but with no defined order
    // To access an object in an NSSet, use anyObject
    // Sets are useful e.g. for reusable cells: grab any one from the pool
    // NSArray: stores objects in order; the order is the insertion order, and objects are accessed by index
    NSSet *set = [NSSet setWithObjects:@1, @2, @3, @4, nil];
    NSLog(@"%@", set.anyObject);
}
- (UIView *)myView
{
    if (!_myView) {
        _myView = [[UIImageView alloc] initWithFrame:CGRectMake(110, 100, 100, 100)];
        _myView.image = [UIImage imageNamed:@"hero_fly_1"];
        [self.view addSubview:_myView];
    }
    return _myView;
}
- (void)viewDidLoad
{
    [super viewDidLoad];
    [self myView];
}
// 1. Finger(s) touched down
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Take a UITouch object out of the set
    UITouch *touch = touches.anyObject;
    // Uncomment this line (and comment out the body of touchesMoved) to make myView jump to the finger
    // [self moveView1:touch];
    NSLog(@"%lu", (unsigned long)touch.tapCount);
    NSLog(@"%s", __func__);
}
// 2. Finger(s) moved
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    NSLog(@"%s", __func__);
    // Move the view along with the finger
    // 1. Take out the touch object
    UITouch *touch = touches.anyObject;
    // 2. The finger's current location
    CGPoint location = [touch locationInView:self.view];
    // 3. The finger's previous location
    CGPoint pLocation = [touch previousLocationInView:self.view];
    // 4. Compute the offset between the two points
    CGPoint offset = CGPointMake(location.x - pLocation.x, location.y - pLocation.y);
    // 5. Set the view's position directly:
    // self.myView.center = CGPointMake(self.myView.center.x + offset.x, self.myView.center.y + offset.y);
    // 6. Or use transform; when repositioning a view, transform is usually preferable
    //    (note: only the horizontal offset is applied here)
    self.myView.transform = CGAffineTransformTranslate(self.myView.transform, offset.x, 0);
}
- (void)moveView1:(UITouch *)touch
{
    // Move the view along with the finger
    // 1. Get the touch location
    CGPoint location = [touch locationInView:self.view];
    // 2. Center the view on the touch; on the first move, the view jumps to the finger
    self.myView.center = location;
}
// 3. Finger(s) lifted
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    NSLog(@"%s", __func__);
}
// 4. Touch cancelled (interrupted), e.g. by an incoming phone call
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
    NSLog(@"%s", __func__);
}
@end
Multi-touch:
//
// CPViewController.m
// 02-MultiTouch
#import "CPViewController.h"
@interface CPViewController ()
/** Image array */
@property (nonatomic, strong) NSArray *images;
@end
@implementation CPViewController
- (NSArray *)images
{
    if (!_images) {
        _images = @[[UIImage imageNamed:@"spark_blue"], [UIImage imageNamed:@"spark_red"]];
    }
    return _images;
}
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Enable multi-touch
    self.view.multipleTouchEnabled = YES;
}
// Finger(s) touched down
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
//    // Iterate over every touch in the set
//    int i = 0;
//    for (UITouch *touch in touches) {
//        // Get the touch location
//        CGPoint location = [touch locationInView:self.view];
//
//        // Add an image at the touch location
//        UIImageView *imageView = [[UIImageView alloc] initWithImage:self.images[i]];
//
//        imageView.center = location;
//
//        [self.view addSubview:imageView];
//
//        i++;
//    }
}
// Finger(s) moved
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Iterate over every touch in the set
    int i = 0;
    for (UITouch *touch in touches) {
        // Get the touch location
        CGPoint location = [touch locationInView:self.view];
        // Add an image at the touch location
        // (index modulo the array count, so more than two touches cannot go out of bounds)
        UIImageView *imageView = [[UIImageView alloc] initWithImage:self.images[i % self.images.count]];
        imageView.center = location;
        [self.view addSubview:imageView];
        i++;
        // These image views must be removed! Fade each one out, then remove it
        [UIView animateWithDuration:2.0f animations:^{
            imageView.alpha = 0.3;
        } completion:^(BOOL finished) {
            // Remove from the view hierarchy
            [imageView removeFromSuperview];
        }];
    }
}
// Finger(s) lifted
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    NSLog(@"%lu", (unsigned long)self.view.subviews.count);
}
@end
Any UIResponder subclass that overrides these four methods can receive touch events and respond to them. For example, a UIView that overrides them can check in each handler whether the touch point falls inside its own frame and react accordingly.
The view a touch landed in is available through the touch's view property (UITouch.view). But this raises a question: how is that view determined? Is it necessarily the smallest view containing the touch point? In the scenario above, if the touch point is inside the blue region, is the responding view necessarily the blue view, or can it be changed to some other view?
The view under a touch point is determined by the event responder chain.
The responder chain works roughly as follows:
(1) The run loop detects that the screen was touched and notifies UIApplication to find out what was hit;
(2) UIApplication asks the UIWindow to find out what was hit;
(3) The UIWindow asks the view controller to find out what was hit;
(4) The controller asks its view to find out what was hit;
(5) The view asks its internal button (btn) whether it was hit;
(6) The btn discovers it was hit and tells the view;
(7) The view tells the controller that the btn was hit;
(8) The controller tells the UIWindow that the btn was hit;
(9) The UIWindow tells UIApplication that the btn was hit;
(10) UIApplication tells the run loop that the btn was hit;
(11) The run loop finally has the controller execute the tap action.
If, when reached, the btn finds it was not hit, the news propagates back up the chain all the way to the run loop, and the event is discarded.
Throughout this flow, one phrase keeps recurring: "find out what was hit". Which brings us back to the question: how is the view under the touch point actually determined?
The responder chain uses hitTest. What is hitTest? Every UIResponder subclass inherits a hitTest:withEvent: method, and this method is exactly what determines the view under a touch point. The search resembles a depth-first traversal: it starts from the window under UIApplication; if the touch point lies inside the window, the search descends into the window's view hierarchy, checking each subview in turn (topmost first) and recursing into any subview that contains the point, until the deepest view x containing the point is found; view x is then returned back up the chain. Although the implementation of this API is not visible, it presumably looks something like this:
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
{
    // Bail out if interaction is disabled, alpha is below ~0.05, or the view is hidden
    if (self.userInteractionEnabled == NO || self.alpha < 0.05 || self.hidden == YES)
    {
        return nil;
    }
    // If the touch point lies within self's bounds
    if ([self pointInside:point withEvent:event])
    {
        NSInteger count = self.subviews.count;
        // Iterate the subviews from topmost to bottommost
        for (int i = 0; i < count; i++)
        {
            UIView *subView = self.subviews[count - 1 - i];
            // Convert the point into the subview's coordinate system
            CGPoint convertedPoint = [subView convertPoint:point fromView:self];
            // Recurse into the subview; if it finds a hit view, return it, otherwise self handles the touch
            UIView *hitTestView = [subView hitTest:convertedPoint withEvent:event];
            if (hitTestView)
            {
                return hitTestView;
            }
        }
        return self;
    }
    return nil;
}
If some view overrides hitTest and blocks the search from descending further (for example, by simply returning self), the view that is ultimately found may not be the smallest view containing the touch point.
In the scenario above, if the red view overrides hitTest and directly returns self, then in the controller's touchesBegan: the view obtained from touch.view will always be the red view, regardless of whether the tap landed in the white, red, or blue region.
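This kind of override also solves the original scenario: instead of returning self, the red view can return its blue subview whenever the search resolved to the red view itself. A minimal sketch, assuming a hypothetical RedView class with a blueView property (neither is from the original code):

```objc
#import <UIKit/UIKit.h>

// Hypothetical red view that forwards its own hits to its blue subview
@interface RedView : UIView
@property (nonatomic, weak) UIView *blueView; // the blue subview
@end

@implementation RedView

- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
{
    UIView *hit = [super hitTest:point withEvent:event];
    // If the normal search resolved to the red view itself (the touch landed in the
    // red, non-blue region), redirect it to the blue subview
    if (hit == self && self.blueView != nil) {
        return self.blueView;
    }
    return hit;
}

@end
```

With this in place, a tap anywhere inside the red view reaches the blue view's touch handlers, while taps directly on the blue region still resolve to the blue view through the normal search.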
Enlarging a button's tap area
We now know that the system walks the responder hierarchy via hitTest, and that pointInside decides whether a point counts as inside a view, so overriding pointInside can change the tap area.
// Returns the deepest view in the hierarchy that can respond to the touch point
- (nullable UIView *)hitTest:(CGPoint)point withEvent:(nullable UIEvent *)event;
// Returns whether the view contains the given point
- (BOOL)pointInside:(CGPoint)point withEvent:(nullable UIEvent *)event; // default returns YES if point is in bounds
So how do we enlarge a button's tap area? Two approaches follow. First approach:
Subclass UIButton and override - (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event:
#import <UIKit/UIKit.h>
@interface MyBigButton : UIButton
@end

#import "MyBigButton.h"
@implementation MyBigButton
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
    CGRect bounds = self.bounds;
    // If the tap area is smaller than 44x44 points, grow it to 44x44; otherwise leave it unchanged
    CGFloat widthDelta = MAX(44.0 - bounds.size.width, 0);
    CGFloat heightDelta = MAX(44.0 - bounds.size.height, 0);
    bounds = CGRectInset(bounds, -0.5 * widthDelta, -0.5 * heightDelta);
    return CGRectContainsPoint(bounds, point);
}
@end
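Usage is unchanged from a plain UIButton: any button smaller than 44x44 points now responds within a 44x44-point area. A quick sketch (the frame values and the buttonTapped: selector are illustrative, not from the original code):

```objc
// A 20x20 button: visually tiny, but taps within the surrounding
// 44x44-point region (12 points beyond each edge) still hit it
MyBigButton *button = [[MyBigButton alloc] initWithFrame:CGRectMake(100, 100, 20, 20)];
[button addTarget:self action:@selector(buttonTapped:) forControlEvents:UIControlEventTouchUpInside];
[self.view addSubview:button];
```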
Second approach:
Add a category on UIButton, then set the desired insets on a button (negative values grow the tap area):
[button setHitTestEdgeInsets:UIEdgeInsetsMake(-50, -50, -50, -50)];
#import <UIKit/UIKit.h>
@interface UIButton (BigFream)
@property(nonatomic, assign) UIEdgeInsets hitTestEdgeInsets;
@end
#import "UIButton+BigFream.h"
#import <objc/runtime.h>
@implementation UIButton (BigFream)
@dynamic hitTestEdgeInsets;
static const NSString *KEY_HIT_TEST_EDGE_INSETS = @"HitTestEdgeInsets";

- (void)setHitTestEdgeInsets:(UIEdgeInsets)hitTestEdgeInsets {
    NSValue *value = [NSValue value:&hitTestEdgeInsets withObjCType:@encode(UIEdgeInsets)];
    objc_setAssociatedObject(self, &KEY_HIT_TEST_EDGE_INSETS, value, OBJC_ASSOCIATION_RETAIN_NONATOMIC);
}

- (UIEdgeInsets)hitTestEdgeInsets {
    NSValue *value = objc_getAssociatedObject(self, &KEY_HIT_TEST_EDGE_INSETS);
    if (value) {
        UIEdgeInsets edgeInsets;
        [value getValue:&edgeInsets];
        return edgeInsets;
    } else {
        return UIEdgeInsetsZero;
    }
}

- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    // If the insets are unchanged, or the button is disabled, hidden, or fully transparent, fall back to the default behavior
    if (UIEdgeInsetsEqualToEdgeInsets(self.hitTestEdgeInsets, UIEdgeInsetsZero) || !self.enabled || self.hidden || self.alpha == 0) {
        return [super pointInside:point withEvent:event];
    }
    CGRect relativeFrame = self.bounds;
    // UIEdgeInsetsInsetRect shrinks a rect by the given edge insets (or grows it, with negative insets)
    CGRect hitFrame = UIEdgeInsetsInsetRect(relativeFrame, self.hitTestEdgeInsets);
    return CGRectContainsPoint(hitFrame, point);
}
@end
Further notes
Are there situations in which a view will not receive or respond to touch events at all?
Four cases in which a UIView does not receive touches
1. The view or one of its superviews has user interaction disabled: userInteractionEnabled = NO. Note: UIImageView's userInteractionEnabled defaults to NO, so a UIImageView and its subviews cannot receive touches by default.
2. The view is hidden: hidden = YES.
3. The view is (nearly) transparent: alpha in the range 0.0 to 0.01.
4. The view is added to a superview but positioned outside it, i.e. the subview lies beyond the superview's bounds.
For example: a yellow view is added to a green view but offset outside the green view's bounds. The yellow view is still drawn, but tapping it is handled by the big white view behind it instead.
Note: if you set greenView.clipsToBounds = YES, anything outside the green view's bounds is clipped, and the yellow view will not be drawn at all.
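Case 4 can be worked around in the superview: the search stops descending because the parent's pointInside returns NO for the point, so the green view can override hitTest to also check its out-of-bounds subviews manually. A minimal sketch, assuming a hypothetical GreenView class:

```objc
#import <UIKit/UIKit.h>

// Hypothetical green view that lets touches reach a subview lying outside its bounds
@interface GreenView : UIView
@end

@implementation GreenView

- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
{
    UIView *hit = [super hitTest:point withEvent:event];
    if (hit) {
        return hit;
    }
    // The default search failed (the point is outside our bounds),
    // so check each subview manually, topmost first
    for (UIView *subview in [self.subviews reverseObjectEnumerator]) {
        CGPoint converted = [subview convertPoint:point fromView:self];
        UIView *subHit = [subview hitTest:converted withEvent:event];
        if (subHit) {
            return subHit;
        }
    }
    return nil;
}

@end
```

Note that this only helps when the touch point still lies inside the green view's own ancestors (e.g. the white root view), since otherwise the search never reaches the green view's hitTest in the first place; and it is incompatible with clipsToBounds = YES, which hides the out-of-bounds subview entirely.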