Trying the kafka-connect-jdbc sink
After the source, let's try the sink too.
Database preparation
```
mysql> create database myjdbc2;
Query OK, 1 row affected (0.00 sec)

mysql> use myjdbc2;
Database changed
mysql> show tables;
Empty set (0.00 sec)
```
Checking messages with a consumer
```
>bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic myjdbctopic-authors --from-beginning
{"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":true,"field":"name"}],"optional":false,"name":"authors"},"payload":{"id":1,"name":"qwer"}}
{"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":true,"field":"name"}],"optional":false,"name":"authors"},"payload":{"id":2,"name":"asdf"}}
{"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":true,"field":"name"}],"optional":false,"name":"authors"},"payload":{"id":3,"name":"zxcv"}}
{"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":true,"field":"name"}],"optional":false,"name":"authors"},"payload":{"id":4,"name":"aaaaa"}}
{"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":true,"field":"name"}],"optional":false,"name":"authors"},"payload":{"id":5,"name":"bbbb"}}
Processed a total of 5 messages
```
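Each message above is a JsonConverter envelope: with schemas.enable=true the record is wrapped as `{"schema": ..., "payload": ...}` so a sink can rebuild a typed record. A quick Python sketch pulling one apart (the message text is copied from the output above):

```python
import json

# One message from the console consumer above. With schemas.enable=true the
# JsonConverter wraps each record in a {"schema": ..., "payload": ...} envelope.
raw = ('{"schema":{"type":"struct","fields":['
       '{"type":"int32","optional":false,"field":"id"},'
       '{"type":"string","optional":true,"field":"name"}],'
       '"optional":false,"name":"authors"},'
       '"payload":{"id":1,"name":"qwer"}}')

msg = json.loads(raw)
fields = {f["field"]: f["type"] for f in msg["schema"]["fields"]}
print(fields)          # {'id': 'int32', 'name': 'string'}
print(msg["payload"])  # {'id': 1, 'name': 'qwer'}
```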
Starting the Connect JDBC sink
connect-standalone-plugin.properties
```
bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
# plugin.path=C:\opt\kafka2\plugins
```
connect-jdbc-sink.properties
```
name=myjdbcconnect-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topic=myjdbctopic-authors
topics.regex=myjdbctopic-(.*)
connection.url=jdbc:mysql://localhost:3306/myjdbc2
connection.user=
connection.password=
auto.create=true
```
Partway through it complained, telling me to add topics.regex:

```
org.apache.kafka.common.config.ConfigException: Must configure one of topics or topics.regex
```

It also told me to set topics.regex or table.name.format.
Starting the connector

```
cd C:\opt\kafka2
bin\windows\connect-standalone.bat config\connect-standalone-plugin.properties config\connect-jdbc-sink.properties
```
Checking the DB
```
mysql> show tables;
+---------------------+
| Tables_in_myjdbc2   |
+---------------------+
| myjdbctopic-authors |
+---------------------+
1 row in set (0.00 sec)

mysql> desc `myjdbctopic-authors`;
+-------+--------------+------+-----+---------+-------+
| Field | Type         | Null | Key | Default | Extra |
+-------+--------------+------+-----+---------+-------+
| name  | varchar(256) | YES  |     | NULL    |       |
| id    | int(11)      | NO   |     | NULL    |       |
+-------+--------------+------+-----+---------+-------+
2 rows in set (0.01 sec)

mysql> show index from `myjdbctopic-authors`;
Empty set (0.02 sec)

mysql> select * from `myjdbctopic-authors`;
+-------+----+
| name  | id |
+-------+----+
| qwer  |  1 |
| asdf  |  2 |
| zxcv  |  3 |
| aaaaa |  4 |
| bbbb  |  5 |
+-------+----+
5 rows in set (0.00 sec)
```
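With auto.create=true the sink derives the DDL from the record schema, which is presumably why name became a nullable varchar(256) and id a non-null int. A rough, hypothetical sketch of that mapping (TYPE_MAP and create_table_sql are illustrations of mine, not the connector's actual code):

```python
# Hypothetical sketch of what auto.create=true does: derive CREATE TABLE DDL
# from the record schema. TYPE_MAP and the VARCHAR(256) default mirror the
# table the connector created above; this is not the connector's actual code.
TYPE_MAP = {"int32": "INT", "string": "VARCHAR(256)"}

def create_table_sql(table, fields):
    cols = []
    for f in fields:
        null = "NULL" if f["optional"] else "NOT NULL"
        cols.append(f'`{f["field"]}` {TYPE_MAP[f["type"]]} {null}')
    return f"CREATE TABLE `{table}` ({', '.join(cols)})"

# The schema fields from the messages on the topic:
fields = [{"type": "int32", "optional": False, "field": "id"},
          {"type": "string", "optional": True, "field": "name"}]
print(create_table_sql("myjdbctopic-authors", fields))
```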
References
The Stack Overflow post above apparently uses transforms to produce nicer table names.
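One common transform for this is RegexRouter, which rewrites the topic name before the sink uses it as the table name. A sketch of the idea in Python (regex_router is an illustrative stand-in, not the SMT itself; the pattern reuses this post's topics.regex):

```python
import re

# Sketch of the RegexRouter idea: rewrite the topic name with a regex before
# the sink uses it as a table name. regex_router is an illustrative stand-in.
def regex_router(topic, pattern, replacement):
    return re.sub(pattern, replacement, topic)

# With this post's topics.regex, the table would become just "authors":
print(regex_router("myjdbctopic-authors", r"myjdbctopic-(.*)", r"\1"))  # authors
```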
Trying the kafka-connect-jdbc source
Plugin preparation
- Download from https://www.confluent.io/hub/confluentinc/kafka-connect-jdbc
- Place kafka-connect-jdbc-5.3.1.jar in C:\opt\kafka2\libs
Preparing the MySQL connector
- Download from https://mvnrepository.com/artifact/mysql/mysql-connector-java/5.1.47
- Place mysql-connector-java-5.1.48.jar in C:\opt\kafka2\libs
Preparing MySQL
Create the database, table, and records
```
create database myjdbc;
use myjdbc;
create table authors (
  id int(8) not null auto_increment,
  name varchar(20),
  primary key (id)
);
insert into authors (name) values ('qwer'),('asdf'),('zxcv');
```
Check
```
mysql> select * from authors;
+----+------+
| id | name |
+----+------+
|  1 | qwer |
|  2 | asdf |
|  3 | zxcv |
+----+------+
3 rows in set (0.00 sec)
```
Start ZooKeeper

```
cd C:\opt\kafka2
bin\windows\zookeeper-server-start.bat config\zookeeper.properties
```
Start Kafka

```
cd C:\opt\kafka2
bin\windows\kafka-server-start.bat config\server.properties
```
Starting the Connect JDBC source
connect-standalone-plugin.properties
```
bootstrap.servers=localhost:9092
offset.storage.file.filename=/tmp/connect.offsets
key.converter.schemas.enable=true
value.converter.schemas.enable=true
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
#plugin.path=C:\opt\kafka2\plugins
```
connect-jdbc-source.properties
```
name=myjdbcconnect
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
topic.prefix=myjdbctopic
connection.url=jdbc:mysql://localhost:3306/myjdbc
connection.user=
connection.password=
mode=incrementing
incrementing.column.name=id
table.whitelist=authors
```
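mode=incrementing means the source remembers the largest value of incrementing.column.name it has seen and only fetches newer rows on each poll. A sketch of that loop, using sqlite3 in place of MySQL (poll and last_id are illustrative names, not the connector's internals):

```python
import sqlite3

# Sketch of mode=incrementing: remember the largest id seen so far and fetch
# only newer rows on each poll (sqlite3 stands in for MySQL here).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE authors (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
db.executemany("INSERT INTO authors (name) VALUES (?)",
               [("qwer",), ("asdf",), ("zxcv",)])

last_id = 0  # plays the role of the offset Connect persists

def poll():
    global last_id
    rows = db.execute("SELECT id, name FROM authors WHERE id > ? ORDER BY id",
                      (last_id,)).fetchall()
    if rows:
        last_id = rows[-1][0]
    return rows

first = poll()
print(first)   # [(1, 'qwer'), (2, 'asdf'), (3, 'zxcv')]
db.execute("INSERT INTO authors (name) VALUES ('aaaaa')")
second = poll()
print(second)  # [(4, 'aaaaa')]
```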
Starting the connector

```
cd C:\opt\kafka2
bin\windows\connect-standalone.bat config\connect-standalone-plugin.properties config\connect-jdbc-source.properties
```
Checking topics

```
> bin\windows\kafka-topics.bat --list --zookeeper=localhost:2181
__consumer_offsets
connect-test
myjdbctopic-authors
```
Starting a consumer

```
> bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic myjdbctopic-authors --from-beginning
{"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":true,"field":"name"}],"optional":false,"name":"authors"},"payload":{"id":1,"name":"qwer"}}
{"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":true,"field":"name"}],"optional":false,"name":"authors"},"payload":{"id":2,"name":"asdf"}}
{"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":true,"field":"name"}],"optional":false,"name":"authors"},"payload":{"id":3,"name":"zxcv"}}
```
Adding a record

```
insert into authors (name) values ('aaaaa');
```
Checking the consumer

```
{"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":true,"field":"name"}],"optional":false,"name":"authors"},"payload":{"id":4,"name":"aaaaa"}}
```
Checking C:\tmp\connect.offsets
Part of the binary file:

```
["myjdbcconnect",{"protocol":"1","table":"myjdbc.authors"}]uq........{"incrementing":4}
```
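The fragments make sense: the standalone worker stores offsets as a map whose keys and values are JSON blobs, and the surrounding serialization is the binary part. A sketch of the logical content (the dict layout is my reading of the fragment above, not the exact file format):

```python
import json

# Logical content of the offsets file: a map from source partition to offset.
# The real file wraps these JSON blobs in binary serialization, which is why
# it looks binary with JSON fragments showing through.
offsets = {
    json.dumps(["myjdbcconnect", {"protocol": "1", "table": "myjdbc.authors"}]):
        json.dumps({"incrementing": 4}),
}

for key, value in offsets.items():
    print(key, "->", value)
```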
References
https://docs.confluent.io/current/connect/kafka-connect-jdbc/index.html
https://docs.confluent.io/current/connect/kafka-connect-jdbc/source-connector/index.html
Trying Kafka Connect
Tried Kafka Connect's FileStreamSource.
properties
Check the properties files under config that we're about to use. The topic is connect-test; the input is test.txt and the output is test.sink.txt. Offsets are presumably recorded to /tmp/connect.offsets.
connect-standalone.properties
```
bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
```
connect-file-source.properties
```
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=test.txt
topic=connect-test
```
connect-file-sink.properties
```
name=local-file-sink
connector.class=FileStreamSink
tasks.max=1
file=test.sink.txt
topics=connect-test
```
file-source
Check topics; none yet.

```
cd C:\opt\kafka2
bin\windows\kafka-topics.bat --list --zookeeper=localhost:2181
```
Start the standalone connector. Since the file doesn't exist yet, it WARNs and keeps printing "sleeping".

```
bin\windows\connect-standalone.bat config\connect-standalone.properties config\connect-file-source.properties
↓↓↓
WARN Couldn't find file test.txt for FileStreamSourceTask, sleeping to wait for it to be created (org.apache.kafka.connect.file.FileStreamSourceTask)
```
Check topics; still not created.

```
bin\windows\kafka-topics.bat --list --zookeeper=localhost:2181
```
Create the text file; the "sleeping" messages stop.

```
echo asdf>test.txt
echo qwer>>test.txt
echo zxcv>>test.txt
```
Check topics: created. /tmp/connect.offsets had been created too.

```
bin\windows\kafka-topics.bat --list --zookeeper=localhost:2181
↓↓↓
connect-test
```
Start a consumer

```
bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic connect-test --from-beginning
↓↓↓
{"schema":{"type":"string","optional":false},"payload":"asdf"}
{"schema":{"type":"string","optional":false},"payload":"qwer"}
{"schema":{"type":"string","optional":false},"payload":"zxcv"}
```
Append more text

```
echo a>>test.txt
echo b>>test.txt
echo c>>test.txt
echo d>>test.txt
echo e>>test.txt
```
The consumer printed:

```
{"schema":{"type":"string","optional":false},"payload":"a"}
{"schema":{"type":"string","optional":false},"payload":"b"}
{"schema":{"type":"string","optional":false},"payload":"c"}
{"schema":{"type":"string","optional":false},"payload":"d"}
{"schema":{"type":"string","optional":false},"payload":"e"}
```
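FileStreamSource behaves like a tail: it keeps the byte position it has read up to and emits each newly appended line as a record, which is why the appended lines showed up as new messages. A minimal Python sketch of that behaviour (poll and position are illustrative names, not Connect internals):

```python
import os
import tempfile

# Sketch of FileStreamSource behaviour: remember how far into the file we have
# read (a byte offset) and emit each newly appended line as a record.
path = os.path.join(tempfile.mkdtemp(), "test.txt")
position = 0

def poll():
    global position
    with open(path) as f:
        f.seek(position)
        lines = [line.rstrip("\n") for line in f.readlines()]
        position = f.tell()  # the offset Connect would persist
    return lines

with open(path, "w") as f:
    f.write("asdf\nqwer\nzxcv\n")
first = poll()
print(first)   # ['asdf', 'qwer', 'zxcv']

with open(path, "a") as f:
    f.write("a\nb\n")
second = poll()
print(second)  # ['a', 'b']
```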
file-sink
Start the standalone sink connector. It fails.

```
bin\windows\connect-standalone.bat config\connect-standalone.properties config\connect-file-sink.properties
↓↓↓
ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectStandalone)
org.apache.kafka.connect.errors.ConnectException: Unable to initialize REST server
    at org.apache.kafka.connect.runtime.rest.RestServer.initializeServer(RestServer.java:177)
    at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:85)
Caused by: java.io.IOException: Failed to bind to 0.0.0.0/0.0.0.0:8083
```
Port 8083 was already open (the source connector's REST server was holding it):

```
curl http://localhost:8083/connector-plugins | jq .
↓↓↓
[
  {
    "class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
    "type": "sink",
    "version": "2.2.1"
  },
  {
    "class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "type": "source",
    "version": "2.2.1"
  }
]
```
Stopping the standalone source connector makes the curl fail to connect.

Starting the sink again now succeeds, and test.sink.txt is created.

```
bin\windows\connect-standalone.bat config\connect-standalone.properties config\connect-file-sink.properties
```
test.sink.txt
```
asdf
qwer
zxcv
a
b
c
d
e
```
Kafka + Windows + PHP
Notes from trying kafka + php + rdkafka on Windows 10.
What I used
- jdk 11.0.2
- kafka 2.2.1
- https://kafka.apache.org/downloads#2.2.1
- Placed in C:\opt\kafka22
- PHP 7.1.33 VC14 x64 Thread Safe (2019-Oct-23 12:30:06)
- https://pecl.php.net/package/rdkafka/3.1.2/windows
- Placed in C:\opt\php\php71
- php_rdkafka-3.1.2
Kafka smoke test
Start ZooKeeper

```
cd C:\opt\kafka22\bin\windows
zookeeper-server-start.bat ..\..\config\zookeeper.properties
```
Kafka preparation
I believe I modified advertised.listeners in server.properties:

```
#advertised.listeners=PLAINTEXT://your.host.name:9092
advertised.listeners=PLAINTEXT://myhostname:9092
```
Start Kafka

```
kafka-server-start.bat ..\..\config\server.properties
```
Check topics

```
kafka-topics.bat --list --zookeeper=localhost:2181
```
Create a topic

```
kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic hello-kafka
↓↓↓
Created topic "hello-kafka".
```
Check topics

```
kafka-topics.bat --list --zookeeper=localhost:2181
↓↓↓
hello-kafka
```
Start a producer and send messages

```
kafka-console-producer.bat --broker-list localhost:9092 --topic hello-kafka
>Hello kafka!
>...
```
Start a consumer and receive messages

```
kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic hello-kafka --from-beginning
```
PHP
Just run the examples bundled with rdkafka. No composer needed.
```
php producer.php
php producer.php
```

```
php consumer.php
↓↓↓
Message 0
Message 1
Message 2
Message 3
Message 4
Message 5
Message 6
Message 7
Message 8
Message 9
Message 0
Message 1
Message 2
Message 3
Message 4
Message 5
Message 6
Message 7
Message 8
Message 9
Broker: No more messages
```
Trying Kafka casually, with nothing left behind
Rebuilding a VM is a hassle; I want to handle this with Docker. Using http://wurstmeister.github.io/kafka-docker/ seems to be the well-known approach, but in practice it's tedious. The approach below looks easier.
Casual trial
Start Kafka and ZooKeeper

```
$ curl -sSL https://raw.githubusercontent.com/bitnami/bitnami-docker-kafka/master/docker-compose.yml > docker-compose.yml
$ docker-compose up
```
Check

```
$ docker-compose ps
              Name                         Command           State            Ports
-----------------------------------------------------------------------------------------------------------------
bitnami_kafka_1_b4e7444b5ac1       /entrypoint.sh /run.sh    Up     0.0.0.0:9092->9092/tcp
bitnami_zookeeper_1_44054781d94b   /entrypoint.sh /run.sh    Up     0.0.0.0:2181->2181/tcp, 2888/tcp, 3888/tcp,

$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
fa44c4fa3457        bitnami_default     bridge              local
5ee57849102a        bridge              bridge              local
123b1da9fc93        host                host                local
55a73359e1e6        none                null                local
```
Check topics. The container banner is pretty chatty.

```
$ docker run -it --rm \
    --network bitnami_default \
    -e KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181 \
    bitnami/kafka:latest kafka-topics.sh --list --zookeeper=zookeeper:2181
11:13:26.34
11:13:26.34 Welcome to the Bitnami kafka container
11:13:26.34 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-
11:13:26.34 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-kaf
11:13:26.35 Send us your feedback at containers@bitnami.com
11:13:26.35
```
Create a topic

```
$ docker run -it --rm \
    --network bitnami_default \
    -e KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181 \
    bitnami/kafka:latest \
    kafka-topics.sh --create --topic test --partitions 1 --zookeeper zookeeper:2181 --replication-factor 1
```
Check topics

```
$ docker run -it --rm \
    --network bitnami_default \
    -e KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181 \
    bitnami/kafka:latest \
    kafka-topics.sh --list --zookeeper zookeeper:2181
test
```
Console producer

```
$ docker run -it --rm \
    --network bitnami_default \
    -e KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181 \
    bitnami/kafka:latest \
    kafka-console-producer.sh --topic=test --broker-list=kafka:9092
>asdf
>qwer
>zxcv
>tyui
>f
>h
>j
>k
>l
>ggggggg
>^C
```
Console consumer. With the producer left running, you can watch messages arrive at the consumer.

```
$ docker run -it --rm \
    --network bitnami_default \
    -e KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181 \
    bitnami/kafka:latest \
    kafka-console-consumer.sh --bootstrap-server=kafka:9092 --topic test --from-beginning
asdf
qwer
zxcv
tyui
f
h
j
k
l
ggggggg
^CProcessed a total of 10 messages
```
Wrap-up
wurstmeister/kafka-docker is interesting (it plays tricks with sockets, for instance), but having to look up IPs and ports is a hassle.
It doesn't have to be bitnami; docker-compose.yml files bundling kafka and zookeeper are everywhere, so just use any image you trust. Running docker run for every kafka operation may look clumsy, but in the end you only change the kafka-*.sh line.
Tried fluent-plugin-grafana-loki again
Setup
loki
```
docker run -d -p 3100:3100 \
    --net=loki \
    --name=loki \
    grafana/loki
```
grafana
```
docker run -d -p 3000:3000 \
    --net=loki \
    --name=grafana \
    grafana/grafana
```
fluent-plugin-grafana-loki
```
docker run -d -p 24224:24224 \
    -e LOKI_URL=http://loki:3100 \
    -v ".../lokiprac/step3_retry/conf/:/fluentd/etc/loki/" \
    --net=loki \
    --name=fluentd \
    grafana/fluent-plugin-grafana-loki
```
loki.conf
```
<source>
  @type forward
</source>

<match loki.dev.qwer.**>
  @type loki
  url "#{ENV['LOKI_URL']}"
  username "#{ENV['LOKI_USERNAME']}"
  password "#{ENV['LOKI_PASSWORD']}"
  extra_labels {"env":"dev", "app":"qwer"}
</match>

<match loki.dev.asdf.**>
  @type loki
  url "#{ENV['LOKI_URL']}"
  username "#{ENV['LOKI_USERNAME']}"
  password "#{ENV['LOKI_PASSWORD']}"
  extra_labels {"env":"dev", "app":"asdf"}
</match>

<match loki.prod.qwer.**>
  @type loki
  url "#{ENV['LOKI_URL']}"
  username "#{ENV['LOKI_USERNAME']}"
  password "#{ENV['LOKI_PASSWORD']}"
  extra_labels {"env":"prod", "app":"qwer"}
</match>

<match loki.prod.asdf.**>
  @type loki
  url "#{ENV['LOKI_URL']}"
  username "#{ENV['LOKI_USERNAME']}"
  password "#{ENV['LOKI_PASSWORD']}"
  extra_labels {"env":"prod", "app":"asdf"}
</match>
```
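The four <match> blocks differ only in env/app, so a throwaway generator keeps them in sync. A sketch (the template is just this post's config shape with the username/password lines omitted for brevity; nothing here is required by the plugin):

```python
# The four <match> blocks above differ only in env/app, so they can be
# generated. The template mirrors this post's config shape (username/password
# lines omitted for brevity); it is not anything the loki plugin requires.
TEMPLATE = """<match loki.{env}.{app}.**>
  @type loki
  url "#{{ENV['LOKI_URL']}}"
  extra_labels {{"env":"{env}", "app":"{app}"}}
</match>"""

conf = "\n\n".join(TEMPLATE.format(env=env, app=app)
                   for env in ("dev", "prod")
                   for app in ("qwer", "asdf"))
print(conf)
```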
Check
fluent-cat
```
$ echo '{"message":"qwer"}' | ./fluent-cat -p 24224 loki.dev.qwer
$ echo '{"message":"asdf"}' | ./fluent-cat -p 24224 loki.dev.asdf
```
Run ping in a container and send its output through the fluentd log driver → fluent-plugin-grafana-loki → Loki.
```
docker run --log-driver=fluentd \
    --log-opt fluentd-address=localhost:24224 \
    --log-opt tag=loki.prod.qwer \
    alpine:3.9 ping localhost
PING localhost (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: seq=0 ttl=64 time=0.116 ms
64 bytes from 127.0.0.1: seq=1 ttl=64 time=0.103 ms
64 bytes from 127.0.0.1: seq=2 ttl=64 time=0.086 ms
64 bytes from 127.0.0.1: seq=3 ttl=64 time=0.108 ms
64 bytes from 127.0.0.1: seq=4 ttl=64 time=0.110 ms
64 bytes from 127.0.0.1: seq=5 ttl=64 time=0.096 ms
64 bytes from 127.0.0.1: seq=6 ttl=64 time=0.115 ms
^C
--- localhost ping statistics ---
7 packets transmitted, 7 packets received, 0% packet loss
round-trip min/avg/max = 0.086/0.104/0.116 ms
```
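The --log-opt tag=loki.prod.qwer is what routes these lines to the <match loki.prod.qwer.**> block: in fluentd patterns, ** matches zero or more dot-separated tag parts. A sketch of that matching (tag_matches is a simplified stand-in, not fluentd's actual matcher):

```python
import re

# Simplified sketch of fluentd tag matching: '**' matches zero or more
# dot-separated parts, so 'loki.prod.qwer.**' also matches the bare tag
# 'loki.prod.qwer'. This is an illustration, not fluentd's real matcher.
def tag_matches(pattern, tag):
    regex = re.escape(pattern).replace(r"\.\*\*", r"(\..+)?").replace(r"\*", r"[^.]+")
    return re.fullmatch(regex, tag) is not None

print(tag_matches("loki.prod.qwer.**", "loki.prod.qwer"))       # True
print(tag_matches("loki.prod.qwer.**", "loki.prod.qwer.ping"))  # True
print(tag_matches("loki.dev.qwer.**", "loki.prod.qwer"))        # False
```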
This time it worked
https://github.com/grafana/loki/issues/271 still occurs, though; maybe a retention-related issue?
Kong plugin distribution, installation, and uninstallation
Last time I came away feeling that kong bundles its own development environment. This time: distributing a plugin you have developed.
Packaging sources
Move to the directory containing the .rockspec and package it. luarocks pack fails. The cause: zip is missing?!
```
$ cd /kong-plugin
$ luarocks make
kong-plugin-myplugin 0.1.0-1 is now installed in /usr/local (license: Apache 2.0)

$ ll
total 26
drwxrwxrwx  1 vagrant vagrant  4096 Oct  2 10:58 ./
drwxr-xr-x 26 root    root     4096 Oct  8 10:25 ../
drwxrwxrwx  1 vagrant vagrant     0 Oct  2 10:58 .git/
-rwxrwxrwx  1 vagrant vagrant     9 Oct  2 10:58 .gitignore*
drwxrwxrwx  1 vagrant vagrant     0 Oct  2 10:58 kong/
-rwxrwxrwx  1 vagrant vagrant  1402 Oct  2 10:58 kong-plugin-myplugin-0.1.0-1.rockspec*
-rwxrwxrwx  1 vagrant vagrant 11357 Oct  2 10:58 LICENSE*
-rwxrwxrwx  1 vagrant vagrant   574 Oct  2 10:58 .luacheckrc*
-rwxrwxrwx  1 vagrant vagrant   360 Oct  2 10:58 README.md*
drwxrwxrwx  1 vagrant vagrant     0 Oct  2 10:58 spec/

$ luarocks pack kong-plugin-myplugin 0.1.0-1
Error: Failed packing /kong-plugin/kong-plugin-myplugin-0.1.0-1.all.rock

$ luarocks pack kong-plugin-myplugin-0.1.0-1.rockspec
Error: Failed packing /kong-plugin/kong-plugin-myplugin-0.1.0-1.src.rock - 'zip' program not found. Make sure zip is installed and is available in your PATH (or you may want to edit the 'variables.ZIP' value in file '/usr/local/etc/luarocks/config-5.1.lua')

$ zip
Command 'zip' not found, but can be installed with:
apt install zip
Please ask your administrator.
```
apt install zip, then luarocks pack succeeds. Check the created .rock.
```
$ sudo apt install zip
$ luarocks pack kong-plugin-myplugin 0.1.0-1
Packed: /kong-plugin/kong-plugin-myplugin-0.1.0-1.all.rock

$ ll *.rock
-rwxrwxrwx 1 vagrant vagrant 7456 Oct  8 10:54 kong-plugin-myplugin-0.1.0-1.all.rock*

$ zipinfo kong-plugin-myplugin-0.1.0-1.all.rock
Archive:  kong-plugin-myplugin-0.1.0-1.all.rock
Zip file size: 7456 bytes, number of entries: 11
drwxr-xr-x  3.0 unx        0 bx stor 19-Oct-08 10:54 doc/
-rw-r--r--  3.0 unx      360 tx defN 19-Oct-08 10:54 doc/README.md
-rw-r--r--  3.0 unx    11357 tx defN 19-Oct-08 10:54 doc/LICENSE
-rw-r--r--  3.0 unx     1402 tx defN 19-Oct-08 10:54 kong-plugin-myplugin-0.1.0-1.rockspec
-rw-r--r--  3.0 unx      475 tx defN 19-Oct-08 10:54 rock_manifest
drwxr-xr-x  3.0 unx        0 bx stor 19-Oct-08 10:54 lua/
drwxr-xr-x  3.0 unx        0 bx stor 19-Oct-08 10:54 lua/kong/
drwxr-xr-x  3.0 unx        0 bx stor 19-Oct-08 10:54 lua/kong/plugins/
drwxr-xr-x  3.0 unx        0 bx stor 19-Oct-08 10:54 lua/kong/plugins/myplugin/
-rw-r--r--  3.0 unx      298 tx defN 19-Oct-08 10:54 lua/kong/plugins/myplugin/schema.lua
-rw-r--r--  3.0 unx      575 tx defN 19-Oct-08 10:54 lua/kong/plugins/myplugin/handler.lua
11 files, 14467 bytes uncompressed, 5612 bytes compressed: 61.2%
```
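As the error hinted, a .rock is just a zip archive, which is why luarocks pack shells out to zip. A sketch building and listing a minimal archive with the same layout via Python's zipfile (the file contents are dummies):

```python
import io
import zipfile

# A .rock is a plain zip archive (hence the "'zip' program not found" error).
# Build a minimal in-memory archive with the same layout as the zipinfo
# listing above; the file contents are dummies.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as rock:
    rock.writestr("kong-plugin-myplugin-0.1.0-1.rockspec", "-- rockspec --")
    rock.writestr("lua/kong/plugins/myplugin/schema.lua", "-- schema --")
    rock.writestr("lua/kong/plugins/myplugin/handler.lua", "-- handler --")

with zipfile.ZipFile(buf) as rock:
    names = rock.namelist()
print(names)
```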
Installing the plugin
Create another VM

```
$ git clone https://github.com/Kong/kong-vagrant.git kong-vagrant2
$ cd kong-vagrant2
$ git clone https://github.com/Kong/kong
$ cd kong
$ git checkout 1.3.0
$ cd ..
$ vi Vagrantfile
-- vb.name = "vagrant_kong" -> vagrant_kong2
$ vagrant up
$ vagrant ssh-config --host mykong2 >> ~/.ssh/config
$ ssh mykong2
```
Install the plugin

```
$ luarocks install kong-plugin-myplugin-0.1.0-1.all.rock
kong-plugin-myplugin 0.1.0-1 is now installed in /usr/local (license: Apache 2.0)
```
Start kong and check, but myplugin is nowhere to be seen?

```
$ cd /kong
$ bin/kong migrations bootstrap
$ bin/kong start
$ curl http://localhost:8001 | jq . | grep myplugin
```
It does show up in luarocks list, though.

```
$ luarocks list
:
kong-plugin-myplugin
   0.1.0-1 (installed) - /home/vagrant/.luarocks/lib/luarocks/rocks-5.1
   0.1.0-1 (installed) - /usr/local/lib/luarocks/rocks-5.1
:
```
Ah, I hadn't set KONG_PLUGINS. Redo it, and this time the install takes effect.

```
$ export KONG_PLUGINS=bundled,myplugin
$ kong restart
Kong stopped
Kong started
$ curl -sS http://localhost:8001 | jq . | grep myplugin
    "myplugin": true,
      "myplugin"
    "myplugin": true,
```
Using the plugin
Create a service and a route

```
$ curl -i -X POST \
    --url http://localhost:8001/services/ \
    --data 'name=mockbin' \
    --data 'url=http://mockbin.org/request'

$ curl -i -X POST \
    --url http://localhost:8001/services/mockbin/routes \
    --data 'paths=/'
```
Before applying the plugin

```
$ curl -i http://localhost:8000
HTTP/1.1 200 OK
```
Apply the plugin

```
$ curl -i -X POST \
    --url http://localhost:8001/services/mockbin/plugins \
    --data 'name=myplugin'
HTTP/1.1 201 Created

{"created_at":1570616021,
 "config":{},
 "id":"508d4234-217c-4e0c-8f43-8e51145fda66",
 "service":{"id":"a861ee21-52b0-4638-8633-5312e4ae2379"},
 "name":"myplugin",
 "protocols":["grpc","grpcs","http","https"],
 "enabled":true,
 "run_on":"first",
 "consumer":null,
 "route":null,
 "tags":null
}
```
After applying the plugin

```
$ curl -i http://localhost:8000
HTTP/1.1 400 Bad Request
:
{"message":"asdf is empty"}

$ curl -i -H "asdf: qwer" http://localhost:8000
HTTP/1.1 401 Unauthorized
:
{"message":"asdf is not asdf"}

$ curl -i -H "asdf: asdf" http://localhost:8000
HTTP/1.1 200 OK
```
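Judging from these responses, the plugin's access phase checks an asdf header: missing → 400, wrong value → 401, asdf → 200. Restating that observed logic in Python (the real handler is Lua; access here is just an illustration):

```python
# Restating the myplugin behaviour seen above: the handler evidently rejects
# a missing 'asdf' header with 400, a wrong value with 401, and lets the
# value 'asdf' through. Python illustration of the Lua handler's logic.
def access(headers):
    value = headers.get("asdf")
    if not value:
        return 400, "asdf is empty"
    if value != "asdf":
        return 401, "asdf is not asdf"
    return 200, "OK"

print(access({}))                # (400, 'asdf is empty')
print(access({"asdf": "qwer"}))  # (401, 'asdf is not asdf')
print(access({"asdf": "asdf"}))  # (200, 'OK')
```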
Removing a plugin
There are three steps.
Check and delete the plugin instance
```
$ curl http://localhost:8001/services/mockbin/plugins
{
  "next": null,
  "data": [
    {
      "created_at": 1570616021,
      "config": {},
      "id": "508d4234-217c-4e0c-8f43-8e51145fda66",
      "service": {
        "id": "a861ee21-52b0-4638-8633-5312e4ae2379"
      },
      "name": "myplugin",
      "protocols": [
        "grpc",
        "grpcs",
        "http",
        "https"
      ],
      "enabled": true,
      "run_on": "first",
      "consumer": null,
      "route": null,
      "tags": null
    }
  ]
}

$ curl -i -X DELETE http://localhost:8001/plugins/508d4234-217c-4e0c-8f43-8e51145fda66
HTTP/1.1 204 No Content

$ curl http://localhost:8001/services/mockbin/plugins
{"next":null,"data":[]}
```
Remove the plugin from the plugins directive, restart kong, and verify:

```
$ export KONG_PLUGINS=bundled
$ kong restart
Kong stopped
Kong started
$ curl -sS http://localhost:8001 | jq . | grep myplugin
```
Completely remove the plugin

```
$ luarocks list | grep myplugin
kong-plugin-myplugin
$ luarocks remove kong-plugin-myplugin
Checking stability of dependencies in the absence of kong-plugin-myplugin 0.1.0-1...
Removing kong-plugin-myplugin 0.1.0-1...
Removal successful.
$ luarocks list | grep myplugin
```
Wrap-up
While I was fiddling with all this, 1.4.0rc1 arrived.
https://github.com/Kong/kong/releases/tag/1.4.0rc1
https://github.com/Kong/kong/blob/1.4.0rc1/CHANGELOG.md#140rc1