Kafka 0.10.0 SASL/PLAIN Authentication and Authorization
This post walks through how to set up the SASL/PLAIN authentication mechanism and access control with the official Kafka 0.10.0 release.
Kafka Security Mechanisms
Kafka's security support falls into two parts:
- Authentication: verifying the identity of clients (and brokers) connecting to the server.
- Authorization: resource-level (topic, consumer group, cluster) access control over operations such as reads and writes.
In release 0.9.0.0, the Kafka community added a number of features that, used either separately or together, increases security in a Kafka cluster.
These features are considered to be of beta quality. The following security measures are currently supported:
- Authentication of connections to brokers from clients (producers and consumers), other brokers and tools, using either SSL or SASL (Kerberos). SASL/PLAIN can also be used from release 0.10.0.0 onwards.
- Authentication of connections from brokers to ZooKeeper
- Encryption of data transferred between brokers and clients, between brokers, or between brokers and tools using SSL (Note that there is a performance degradation when SSL is enabled, the magnitude of which depends on the CPU type and the JVM implementation.)
- Authorization of read / write operations by clients
- Authorization is pluggable and integration with external authorization services is supported
In short, this means:
- Connections from clients (producers and consumers), other brokers, and tools to brokers can be authenticated with SSL or SASL; SASL/PLAIN is supported from 0.10.0 onwards.
- Connections from brokers to ZooKeeper can be authenticated.
- Data in transit can be encrypted with SSL, at some cost in performance.
- Read/write operations by clients can be authorized.
- Authorization is pluggable and can integrate with external authorization services.
Kafka Authentication
Kafka currently supports three authentication mechanisms: SSL, SASL/Kerberos, and SASL/PLAIN. For an introduction to these mechanisms, see the three articles listed in the references at the end of this post.
SASL/PLAIN Authentication
You can refer to kafka使用SASL验证, a Chinese translation of the official documentation.
Kafka Server 端配置
需要在 Kafka 安装目录下的 config/server.properties
文件中配置以下信息
1 | listeners=SASL_PLAINTEXT://ip:pot |
To configure more than one superuser, separate them with semicolons:

```properties
super.users=User:admin;User:alice
```
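For completeness, the broker-side settings that usually accompany the two properties above when enabling SASL/PLAIN with the built-in authorizer on 0.10.0 look roughly like this (a sketch based on the standard Kafka configuration, not taken from the original post; adjust to your environment):

```properties
# Use the SASL listener for inter-broker communication as well
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
# SASL mechanisms enabled on the broker
sasl.enabled.mechanisms=PLAIN
# Built-in authorizer that stores ACLs in ZooKeeper
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
```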
You also need a JAAS file named kafka_server_jaas.conf, placed under the config directory:
```
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin"
    user_admin="admin"
    user_alice="alice";
};
```
Here we configure two users, admin and alice, with passwords admin and alice respectively. The username/password pair is the identity the broker itself uses for inter-broker connections, and each user_<name> entry declares a user that is allowed to authenticate.
Finally, Kafka needs the java.security.auth.login.config system property. Add the following to bin/kafka-run-class.sh:

```shell
KAFKA_SASL_OPTS='-Djava.security.auth.login.config=/opt/meituan/kafka_2.10-0.10.0.0/config/kafka_server_jaas.conf'
```

Note: in practice we only added this first line, and then appended $KAFKA_SASL_OPTS to the java launch commands on lines 4 and 6 of the script snippet.
KafkaClient Configuration
First, create a kafka_client_jaas.conf file on the client side (here authenticating as the alice user defined on the broker):

```
KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="alice"
    password="alice";
};
```
Then, in the producer and consumer programs, add the system property and the client configuration, as follows:

```java
// Point the JVM at the client JAAS file (pass the real path to the config file)
System.setProperty("java.security.auth.login.config", ".../kafka_client_jaas.conf");
```
Once all of the above is configured, producers and consumers run normally. If the username or password is wrong, however, the program cannot make progress but prints no hint at all; this is something we will improve later.
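To make the client-side wiring concrete, here is a minimal sketch in plain Java. The class name SaslClientConfig, the file path, and localhost:9092 are illustrative, not from the original post; it only builds the configuration, and the resulting Properties can be passed to a KafkaProducer or KafkaConsumer as usual:

```java
import java.util.Properties;

public class SaslClientConfig {
    // Builds the extra client properties a SASL/PLAIN client needs.
    public static Properties saslProps(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        // These two settings make the client authenticate over SASL_PLAINTEXT
        // with the PLAIN mechanism, matching the broker listener configuration.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        return props;
    }

    public static void main(String[] args) {
        // Set the JAAS system property before creating any Kafka clients.
        System.setProperty("java.security.auth.login.config",
                "/path/to/kafka_client_jaas.conf"); // illustrative path
        Properties props = saslProps("localhost:9092");
        System.out.println(props.getProperty("sasl.mechanism")); // prints PLAIN
    }
}
```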
Kafka Authorization
This section introduces Kafka's ACLs.
Available Permissions
Permission | Description |
---|---|
READ | read a topic |
WRITE | write to a topic |
DELETE | delete a topic |
CREATE | create a topic |
ALTER | alter a topic |
DESCRIBE | fetch metadata about a topic |
ClusterAction | cluster-level actions (e.g., requests between brokers) |
ALL | all of the above |
The access control lists (ACLs) are stored in ZooKeeper under the path /kafka-acl.
Configuring Permissions
The options of the command-line tool Kafka provides are shown in the table below:
Option | Description | Default | Option type |
---|---|---|---|
--add | Indicates to the script that the user is trying to add an ACL. | | Action |
--remove | Indicates to the script that the user is trying to remove an ACL. | | Action |
--list | Indicates to the script that the user is trying to list ACLs. | | Action |
--authorizer | Fully qualified class name of the authorizer. | kafka.security.auth.SimpleAclAuthorizer | Configuration |
--authorizer-properties | key=val pairs passed to the authorizer for initialization. For the default authorizer an example value is zookeeper.connect=localhost:2181. | | Configuration |
--cluster | Specifies the cluster as resource. | | Resource |
--topic [topic-name] | Specifies the topic as resource. | | Resource |
--group [group-name] | Specifies the consumer group as resource. | | Resource |
--allow-principal | Principal in PrincipalType:name format to be added to the ACL with Allow permission. Multiple --allow-principal options may be given in a single command. | | Principal |
--deny-principal | Principal in PrincipalType:name format to be added to the ACL with Deny permission. Multiple --deny-principal options may be given in a single command. | | Principal |
--allow-host | IP address from which the principals listed in --allow-principal will have access. | If --allow-principal is specified, defaults to *, meaning "all hosts". | Host |
--deny-host | IP address from which the principals listed in --deny-principal will be denied access. | If --deny-principal is specified, defaults to *, meaning "all hosts". | Host |
--operation | Operation to allow or deny. Valid values are: Read, Write, Create, Delete, Alter, Describe, ClusterAction, All. | All | Operation |
--producer | Convenience option to add/remove ACLs for the producer role: WRITE and DESCRIBE on the topic, plus CREATE on the cluster. | | Convenience |
--consumer | Convenience option to add/remove ACLs for the consumer role: READ and DESCRIBE on the topic, plus READ on the consumer group. | | Convenience |
Setting Permissions
A few examples show how to set permissions.
Adding ACLs

```shell
# Grant user alice read and write permission on topic test
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
    --add --allow-principal User:alice --operation Read --operation Write --topic test
```
Listing ACLs

```shell
# List all ACLs on topic test
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list --topic test
```

The output is:

```
Current ACLs for resource `Topic:test`:
```
Removing ACLs

```shell
# Remove the ACLs
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
    --remove --allow-principal User:alice --operation Read --operation Write --topic test
```
Producer and Consumer Operations

```shell
# producer
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
    --add --allow-principal User:alice --producer --topic test
# consumer (the group name here is just an example)
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
    --add --allow-principal User:alice --consumer --topic test --group test-group
```
Pitfalls
This section records some pitfalls hit while using SASL/PLAIN.
The controller fails to connect to brokers
The error message was:

```
[2016-07-27 17:45:46,047] WARN [Controller-1-to-broker-1-send-thread], Controller 1's connection to broker XXXX:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
```
I spent a long time hunting for the cause. At first I suspected the format of the kafka_server_jaas.conf file, but after changing it Kafka would sometimes start normally and sometimes not. The file before the fix was:
```
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin"
    user_alice="alice";
};
```
In the end, the likely cause was that the admin account had no user_ entry: since brokers also authenticate to each other, the broker's own identity must be declared as a user too. The file after the fix:
```
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin"
    user_admin="admin"
    user_alice="alice";
};
```
After this change, Kafka ran normally.
References:
- Confluent blog: Apache Kafka Security 101
- Kafka website: KIP-12 - Kafka Sasl/Kerberos and SSL implementation
- Kafka Security
- Kafka website: Kafka security
- Chinese translation of the Kafka docs: kafka使用SASL验证