Window Functions
1. Window Aggregation
Prepare the data:
cookie1,2015-04-10,1
cookie1,2015-04-13,3
cookie1,2015-04-11,5
cookie1,2015-04-12,2
cookie1,2015-04-15,4
cookie1,2015-04-16,4
cookie1,2015-04-14,4
cookie2,2015-04-14,3
cookie2,2015-04-15,9
cookie2,2015-04-16,7
cookie2,2015-04-10,2
cookie2,2015-04-11,3
cookie2,2015-04-12,5
cookie2,2015-04-13,6
Create the table:
create table cookie1(cookieid string, createtime string, pv int) row format delimited fields terminated by ',';
Load the data:
load data local inpath "/usr/datadir/cookie1.txt" into table cookie1;
sum(pv) over()
We partition by cookieid, order by createtime, and sum pv.
Sum from the first row of the partition up to the current row. This is also what you get when order by is given without an explicit frame (strictly, the default frame is range-based, which produces the same result here because the createtime values are distinct).
sum(pv) over(partition by cookieid order by createtime rows between unbounded preceding and current row) as pv1,
sum(pv) over (partition by cookieid order by createtime) as pv2,
With only partition by and no order by, every row gets the sum of the whole partition.
sum(pv) over (partition by cookieid) as pv3,
Sum of the current row and the 3 preceding rows.
sum(pv) over (partition by cookieid order by createtime rows between 3 preceding and current row) as pv4,```
From 3 rows before the current row to 2 rows after it.
sum(pv) over (partition by cookieid order by createtime rows between 3 preceding and 2 following) as pv5,
From the current row to the last row of the partition.
sum(pv) over (partition by cookieid order by createtime rows between current row and unbounded following) as pv6,
The full query:
select cookieid,createtime,
pv,
sum(pv) over (partition by cookieid order by createtime rows between unbounded preceding and current row) as pv1,
sum(pv) over (partition by cookieid order by createtime) as pv2,
sum(pv) over (partition by cookieid) as pv3,
sum(pv) over (partition by cookieid order by createtime rows between 3 preceding and current row) as pv4,
sum(pv) over (partition by cookieid order by createtime rows between 3 preceding and 2 following) as pv5,
sum(pv) over (partition by cookieid order by createtime rows between current row and unbounded following) as pv6
from cookie1;
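You do not need a Hive cluster to experiment with these frame clauses: SQLite (3.25+) implements the same standard window-frame syntax. A minimal sketch checking four of the sums above against the cookie1 sample rows:

```python
import sqlite3

# SQLite 3.25+ supports the standard window-frame syntax used above,
# so the frame semantics can be verified locally.
conn = sqlite3.connect(":memory:")
conn.execute("create table cookie1(cookieid text, createtime text, pv int)")
conn.executemany(
    "insert into cookie1 values (?, ?, ?)",
    [("cookie1", f"2015-04-{d}", pv)
     for d, pv in zip(range(10, 17), [1, 5, 2, 3, 4, 4, 4])],
)

rows = conn.execute("""
    select createtime, pv,
      -- running total: start of partition up to the current row
      sum(pv) over (partition by cookieid order by createtime
                    rows between unbounded preceding and current row) as pv1,
      -- whole-partition total (no order by, no frame)
      sum(pv) over (partition by cookieid) as pv3,
      -- current row plus the 3 preceding rows
      sum(pv) over (partition by cookieid order by createtime
                    rows between 3 preceding and current row) as pv4,
      -- current row through the end of the partition
      sum(pv) over (partition by cookieid order by createtime
                    rows between current row and unbounded following) as pv6
    from cookie1
    order by createtime
""").fetchall()
for r in rows:
    print(r)
```

For cookie1 this prints the running total 1, 6, 8, 11, 15, 19, 23 in pv1 and the constant 23 in pv3, matching what Hive returns for the same data.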
avg(pv) over()
min(pv) over()
max(pv) over()
These aggregates accept the same partition by, order by, and rows between clauses as sum().
2. Window Slicing
Prepare the data:
cookie1,2015-04-10,1
cookie1,2015-04-11,5
cookie1,2015-04-12,7
cookie1,2015-04-13,3
cookie1,2015-04-14,2
cookie1,2015-04-15,4
cookie1,2015-04-16,4
cookie2,2015-04-10,2
cookie2,2015-04-11,3
cookie2,2015-04-12,5
cookie2,2015-04-13,6
cookie2,2015-04-14,3
cookie2,2015-04-15,9
cookie2,2015-04-16,7
Create the table:
create table cookie2(cookieid string, createtime string, pv int)
row format delimited
fields terminated by ',';
Load the data:
load data local inpath "/usr/datadir/cookie2.txt" into table cookie2;
The following functions do not support rows between.
ntile(n) over()
Splits the ordered rows of each partition into n roughly equal buckets, numbered 1 to n; commonly used to take the top 1/n of the rows.
ntile(2) over (partition by cookieid order by createtime) as rn1
Without partition by, all rows are treated as one partition and split into buckets.
ntile(4) over (order by createtime) as rn3
select cookieid,createtime,
pv,
ntile(2) over (partition by cookieid order by createtime) as rn1,
ntile(3) over (partition by cookieid order by createtime) as rn2,
ntile(4) over (order by createtime) as rn3
from cookie2
order by cookieid,createtime;
Example: for each cookie, find the top third of days by pv.
create table cookie_temp
as
select
cookieid,
createtime,
pv,
ntile(3) over (partition by cookieid order by pv desc) as rn
from cookie2;
The rows with rn = 1 are the top third by pv: select * from cookie_temp where rn = 1;
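The same top-third selection can be checked with SQLite's ntile (a sketch using just the cookie1 rows of this section's sample data; with 7 rows, ntile(3) makes buckets of sizes 3, 2, 2):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table cookie2(cookieid text, createtime text, pv int)")
conn.executemany(
    "insert into cookie2 values (?, ?, ?)",
    [("cookie1", "2015-04-10", 1), ("cookie1", "2015-04-11", 5),
     ("cookie1", "2015-04-12", 7), ("cookie1", "2015-04-13", 3),
     ("cookie1", "2015-04-14", 2), ("cookie1", "2015-04-15", 4),
     ("cookie1", "2015-04-16", 4)],
)

# Bucket the 7 days into thirds by pv descending; rn = 1 is the top third.
top = conn.execute("""
    select createtime, pv from (
      select createtime, pv,
             ntile(3) over (partition by cookieid order by pv desc) as rn
      from cookie2)
    where rn = 1
""").fetchall()
for r in top:
    print(r)
```

The first bucket holds the three highest-pv days (pv 7, 5, and one of the pv-4 days; which pv-4 day wins the tie is arbitrary, since order by pv desc does not break ties).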
3. Window Ranking
row_number() over()
Ranks rows within each partition; commonly used to take the top n.
row_number() over (partition by cookieid order by pv desc) as rn1
For values 100 99 98 98 97 96, row_number gives 1, 2, 3, 4, 5, 6 (no repeats, ties broken arbitrarily).
rank() over()
rank() over(partition by cookieid order by pv desc) as rn2
For values 100 99 98 98 97 96, rank gives 1, 2, 3, 3, 5, 6 (ties share a rank, and the next rank is skipped).
dense_rank() over()
dense_rank() over(partition by cookieid order by pv desc) as rn3
For values 100 99 98 98 97 96, dense_rank gives 1, 2, 3, 3, 4, 5 (ties share a rank, with no gaps).
select
cookieid,
createtime,
pv,
rank() over (partition by cookieid order by pv desc) as rn1,
dense_rank() over (partition by cookieid order by pv desc) as rn2,
row_number() over (partition by cookieid order by pv desc) as rn3
from cookie2
where cookieid='cookie1';
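The three ranking functions differ only in how they treat ties; a quick SQLite check of the sequences shown above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table t(pv int)")
conn.executemany("insert into t values (?)",
                 [(100,), (99,), (98,), (98,), (97,), (96,)])

rows = conn.execute("""
    select pv,
           row_number() over (order by pv desc) as rn,   -- no repeats, no gaps
           rank()       over (order by pv desc) as rk,   -- ties repeat, then a gap
           dense_rank() over (order by pv desc) as drk   -- ties repeat, no gaps
    from t
    order by pv desc
""").fetchall()
for r in rows:
    print(r)
```

The two rows with pv = 98 get distinct row numbers (in arbitrary order) but share rank 3 and dense rank 3.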
4. First and Last Values
FIRST_VALUE(url) over ()
The first value from the start of the partition up to the current row, i.e. the partition's first value under the given ordering.
FIRST_VALUE(url) over (partition by cookieid order by createtime desc) as last1
LAST_VALUE(url) over ()
The last value from the start of the partition up to the current row. Note that with the default frame this is the current row itself; to get the partition-wide last value, add rows between unbounded preceding and unbounded following.
LAST_VALUE(url) over (partition by cookieid order by createtime desc) as last2
The query below assumes a table cookie3 with cookieid, createtime, and url columns:
select cookieid,createtime,url,
row_number() over (partition by cookieid order by createtime) as rn,
FIRST_VALUE(url) over (partition by cookieid order by createtime desc) as last1,
LAST_VALUE(url) over (partition by cookieid order by createtime desc) as last2
from cookie3;
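One pitfall worth demonstrating: with the default frame, last_value returns the current row, not the partition's last row. A small SQLite sketch using a hypothetical table t (columns grp, ts, v):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table t(grp text, ts int, v text)")
conn.executemany("insert into t values (?, ?, ?)",
                 [("a", 1, "x"), ("a", 2, "y"), ("a", 3, "z")])

rows = conn.execute("""
    select ts,
           -- first value of the frame: the same for every row
           first_value(v) over (partition by grp order by ts) as fv,
           -- default frame ends at the current row, so this IS the current row
           last_value(v) over (partition by grp order by ts) as lv,
           -- widen the frame to get the true partition-wide last value
           last_value(v) over (partition by grp order by ts
                               rows between unbounded preceding
                               and unbounded following) as lv_full
    from t
    order by ts
""").fetchall()
for r in rows:
    print(r)
```

fv is "x" on every row, lv walks through "x", "y", "z", and only lv_full is "z" everywhere; the same frame rule applies in Hive.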
Custom Functions
When Hive's built-in functions cannot meet a business need, you can write user-defined functions.
UDF (user-defined function): operates on a single input row and produces a single output row.
UDAF (user-defined aggregation function): takes multiple input rows and produces one output row, like max and min.
UDTF (user-defined table-generating function): takes one input row and produces multiple output rows, like explode.
1. Writing a UDF
1. Create a Maven project and add the dependency below (in Eclipse you can instead copy the hive-exec jar into the project and add it to the build path).
<dependency>
<groupId>org.apache.hive</groupId>
<artifactId>hive-exec</artifactId>
<version>2.3.3</version>
<exclusions>
<exclusion>
<groupId>jdk.tools</groupId>
<artifactId>jdk.tools</artifactId>
</exclusion>
</exclusions>
</dependency>
2. Write a Java class that extends UDF and defines one or more evaluate methods (Hive resolves them by reflection).
import org.apache.hadoop.hive.ql.exec.UDF;

public class ToLower extends UDF {
    // Called once per input row; Hive finds evaluate() by reflection.
    public String evaluate(String field) {
        if (field == null) {
            return null; // propagate NULL instead of throwing NPE
        }
        return field.toLowerCase();
    }
}
3. Package it as a jar and upload it to the server.
4. Add the jar to Hive, e.g. add jar /usr/datadir/udf.jar; (substitute the path to your own jar).
5. Create a temporary function linked to the compiled class:
create temporary function tolowercase as 'com.udf.ToLower';
6. Use it in HQL, e.g. select tolowercase('HELLO');
Note: a temporary function created this way only lasts for the current Hive session; once the session restarts, it is gone.
7. Making it permanent.
If the function will be used often, create a permanent function instead:
Copy the jar into Hive's lib directory.
Create the permanent function:
create function tolowercase as 'cn.jixiang.udf.ToLower';
8. Dropping functions.
Drop a temporary function:
drop temporary function tolowercase;
Drop a permanent function:
drop function tolowercase;
Hive Shell Operations
hive -e runs the given HQL from the command line.
Example: hive -e "select * from student"
hive -f runs an HQL script file.
Example: echo "select * from student" > hive.sql
hive -f hive.sql