
Hive study notes, part 2: complex data types


Welcome to my GitHub

https://github.com/zq2599/blog_demos

Content: a categorized index of all my original articles, with companion source code, covering Java, Docker, Kubernetes, DevOps and more;

Overview
  • As the second installment of the Hive study notes, following the previous post on primitive types, this one covers complex data types;
  • There are four complex data types in total:
  1. ARRAY: an ordered list of elements
  2. MAP: key-value pairs
  3. STRUCT: a named collection of fields
  4. UNIONTYPE: holds exactly one value chosen from several declared data types; the UNION value must exactly match one of those types;
  • Let's go through them one by one;
Preparing the environment
  1. Make sure Hadoop is running;
  2. Enter the interactive mode of the Hive CLI;
  3. Run the following command so that query results include column headers:

set hive.cli.print.header=true;

ARRAY

  1. Create a table named t2 with only two columns, person and friends: person is a string and friends is an array of strings. When loading data from a text file, the separator between person and friends is a pipe ('|'), and the separator between the elements inside friends is a comma (','). Note the syntax for declaring these separators:

create table if not exists t2(
  person string,
  friends array<string>
)
row format delimited
fields terminated by '|'
collection items terminated by ',';

  2. Create a text file named 002.txt with the following content. There are only two records; in the first one the person field is tom and the friends field contains three comma-separated elements:

tom|tom_friend_0,tom_friend_1,tom_friend_2
jerry|jerry_friend_0,jerry_friend_1,jerry_friend_2,jerry_friend_3,jerry_friend_4,jerry_friend_5

  3. Run the following statement to load the data from the local file 002.txt into table t2:

load data local inpath '/home/hadoop/temp/202010/25/002.txt' into table t2;

  4. View all the data:

hive> select * from t2;
OK
t2.person  t2.friends
tom  ["tom_friend_0","tom_friend_1","tom_friend_2"]
jerry  ["jerry_friend_0","jerry_friend_1","jerry_friend_2","jerry_friend_3","jerry_friend_4","jerry_friend_5"]
Time taken: 0.052 seconds, Fetched: 2 row(s)

  5. SQL for reading specific elements of friends:

select person, friends[0], friends[3] from t2;

The result is as follows; the first record has no friends[3], so NULL is shown:

hive> select person, friends[0], friends[3] from t2;
OK
person  _c1  _c2
tom  tom_friend_0  NULL
jerry  jerry_friend_0  jerry_friend_3
Time taken: 0.052 seconds, Fetched: 2 row(s)
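
As a quick aside, the built-in size function also works on arrays, which shows at a glance why friends[3] exists only for jerry; a minimal sketch against the t2 table above:

-- sketch: size() returns the number of elements in the friends array
-- expected: tom -> 3, jerry -> 6
select person, size(friends) from t2;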

  6. SQL for checking whether the array contains a given value:

select person, array_contains(friends, 'tom_friend_0') from t2;

The result is as follows; the first record's friends array contains tom_friend_0, so true is shown, while the second record does not, so false is shown:

hive> select person, array_contains(friends, 'tom_friend_0') from t2;
OK
person  _c1
tom  true
jerry  false
Time taken: 0.061 seconds, Fetched: 2 row(s)
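
array_contains can also serve as a row filter; a minimal sketch against the same t2 table, which should return only tom:

-- sketch: keep only rows whose friends array contains the given value
select person from t2 where array_contains(friends, 'tom_friend_0');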

  7. The first record's friends array has three elements; the LATERAL VIEW syntax can split those three elements into three rows. The SQL is as follows:

select t.person, single_friend
from (
  select person, friends from t2 where person='tom'
) t LATERAL VIEW explode(t.friends) v as single_friend;

The result is as follows; each element of the array is split into its own row:

OK
t.person  single_friend
tom  tom_friend_0
tom  tom_friend_1
tom  tom_friend_2
Time taken: 0.058 seconds, Fetched: 3 row(s)
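
The subquery above restricts the expansion to tom; as a minimal sketch, the same LATERAL VIEW can also be applied to the whole table directly, producing one row per (person, friend) pair:

-- sketch: explode every row's friends array while keeping the person column
select person, single_friend
from t2 LATERAL VIEW explode(friends) v as single_friend;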

  • That covers the basic operations on arrays; next up are key-value pairs;
MAP: creating the table and loading data
  • Next we create a table named t3 with only two columns, person and address: person is a string and address is a MAP. When loading data from a text file, the separators are defined as follows:
  1. the separator between person and address is a pipe ('|');
  2. address contains multiple key-value pairs, separated from each other by commas;
  3. within each key-value pair, the key and the value are separated by a colon;
  • The table-creation statement that satisfies these requirements is shown below:

create table if not exists t3(
  person string,
  address map<string, string>
)
row format delimited
fields terminated by '|'
collection items terminated by ','
map keys terminated by ':';

  • Create a text file named 003.txt; note that three different separators are used: between the fields, between the elements of the MAP, and between each element's key and value:

tom|province:guangdong,city:shenzhen
jerry|province:jiangsu,city:nanjing

  • Load the data from 003.txt into table t3:

load data local inpath '/home/hadoop/temp/202010/25/003.txt' into table t3;

MAP: queries

  1. View all the data:

hive> select * from t3;
OK
t3.person  t3.address
tom  {"province":"guangdong","city":"shenzhen"}
jerry  {"province":"jiangsu","city":"nanjing"}
Time taken: 0.075 seconds, Fetched: 2 row(s)

  2. Look up a particular key in the MAP; the syntax is field["xxx"]:

hive> select person, address["province"] from t3;
OK
person  _c1
tom  guangdong
jerry  jiangsu
Time taken: 0.075 seconds, Fetched: 2 row(s)
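
Several keys can be read in one statement, and the same bracket syntax works in a WHERE clause; a minimal sketch against the t3 table, which should match only tom:

-- sketch: read two map values at once and filter on one of them
select person, address['province'], address['city']
from t3
where address['city'] = 'shenzhen';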

  3. Use the if function; the SQL below checks whether the address field contains a "street" key: if it does, the corresponding value is shown, otherwise the string 'filed street not exists' is shown:

select person, if(address['street'] is null, "filed street not exists", address['street']) from t3;

The output is as follows; since the address field only has the keys province and city, 'filed street not exists' is shown for both rows:

OK
tom  filed street not exists
jerry  filed street not exists
Time taken: 0.087 seconds, Fetched: 2 row(s)
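
A similar effect can be had with coalesce, which returns its first non-NULL argument; a minimal sketch against the same t3 table:

-- sketch: fall back to a default string when the 'street' key is absent
select person, coalesce(address['street'], 'filed street not exists') from t3;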

  4. Use explode to display each key-value pair of the address field on its own row:

hive> select explode(address) from t3;
OK
province  guangdong
city  shenzhen
province  jiangsu
city  nanjing
Time taken: 0.081 seconds, Fetched: 4 row(s)

  5. The explode function above can only show the address field; to show other columns at the same time we again need the LATERAL VIEW syntax, as shown below. Note that whereas the earlier array expanded into a single column, the MAP expands into two columns, the key and the value:

select t.person, address_key, address_value
from (
  select person, address from t3 where person='tom'
) t LATERAL VIEW explode(t.address) v as address_key, address_value;

The result is as follows:

OK
tom  province  guangdong
tom  city  shenzhen
Time taken: 0.118 seconds, Fetched: 2 row(s)
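
When only the keys or only the values are needed, the map_keys and map_values functions return them as arrays without any LATERAL VIEW; a minimal sketch against the t3 table:

-- sketch: map_keys/map_values return a MAP's keys and values as arrays
select person, map_keys(address), map_values(address) from t3;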

  6. The size function returns the number of key-value pairs in the MAP:

hive> select person, size(address) from t3;
OK
tom  2
jerry  2
Time taken: 0.082 seconds, Fetched: 2 row(s)

STRUCT

  1. STRUCT is a record type that encapsulates a named collection of fields. Create a new table named t4 whose info column is a STRUCT with two fields, age and city; the separator between person and info is a pipe ('|'), and the separator between the elements inside info is a comma (','). Note the syntax for declaring these separators:

create table if not exists t4(
  person string,
  info struct<age:int, city:string>
)
row format delimited
fields terminated by '|'
collection items terminated by ',';

  2. Prepare a text file named 004.txt with the following content:

tom|11,shenzhen
jerry|12,nanjing

  3. Load the data from 004.txt into table t4:

load data local inpath '/home/hadoop/temp/202010/25/004.txt' into table t4;

  4. View all the data in t4:

hive> select * from t4;
OK
tom  {"age":11,"city":"shenzhen"}
jerry  {"age":12,"city":"nanjing"}
Time taken: 0.063 seconds, Fetched: 2 row(s)

  5. Access a specific field with the fieldname.xxx syntax:

hive> select person, info.city from t4;
OK
tom  shenzhen
jerry  nanjing
Time taken: 0.141 seconds, Fetched: 2 row(s)
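
STRUCT fields can also be used in filter conditions; a minimal sketch against the t4 table, which should return only jerry:

-- sketch: filter rows by a field inside the info struct
select person, info.age, info.city from t4 where info.age > 11;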

UNION

  • The last one is UNIONTYPE, which holds one value chosen from several declared data types. Since creating UNIONTYPE data involves a UDF (create_union), we won't go into full detail here (a rough sketch follows the query result below); for now, here is the table-creation statement:

CREATE TABLE union_test(foo UNIONTYPE<int, double, array<string>, struct<a:int, b:string>>);

  • Query result:

SELECT foo FROM union_test;

{0:1}
{1:2.0}
{2:["three","four"]}
{3:{"a":5,"b":"five"}}
{2:["six","seven"]}
{3:{"a":8,"b":"eight"}}
{0:9}
{1:10.0}
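
As a rough, untested sketch of how such rows could be produced: the create_union UDF takes a tag as its first argument, followed by one candidate value per declared type, and the tag selects which branch is kept. The statement below borrows t2 from earlier purely as a dummy source table:

-- sketch: tag 0 selects the int branch, so this should insert a row like {0:1}
INSERT INTO TABLE union_test
SELECT create_union(0, 1, 2.0, array('three', 'four'), named_struct('a', 5, 'b', 'five'))
FROM t2 LIMIT 1;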

  • At this point we have hands-on experience with both Hive's primitive and complex data types; upcoming articles will cover more Hive topics, and I look forward to making progress together with you;
You are welcome to follow my WeChat official account: 程序员欣宸

