envoy learn/on-your-laptop
Working from these pages:
https://www.envoyproxy.io/learn/on-your-laptop
https://www.envoyproxy.io/docs/envoy/latest/start/sandboxes/front_proxy.html
Running Envoy
The learning environment for this walkthrough lives in examples/front-proxy of https://github.com/envoyproxy/envoy. Running docker-compose up starts three containers.
```shell
$ git clone https://github.com/envoyproxy/envoy
$ cd envoy/examples/front-proxy
$ docker-compose up --build -d
:
$ docker-compose ps
           Name                         Command               State                            Ports
----------------------------------------------------------------------------------------------------------------------------
front-proxy_front-envoy_1   /docker-entrypoint.sh /bin ...   Up      10000/tcp, 0.0.0.0:8000->80/tcp, 0.0.0.0:8001->8001/tcp
front-proxy_service1_1      /bin/sh -c /usr/local/bin/ ...   Up      10000/tcp, 80/tcp
front-proxy_service2_1      /bin/sh -c /usr/local/bin/ ...   Up      10000/tcp, 80/tcp
```
The setup looks like this diagram:
https://www.envoyproxy.io/docs/envoy/latest/_images/docker_compose_v0.1.svg
tcp:8000 -> [front-envoy_1 (envoy)] -> [service1_1 (envoy + service.py)]
                                    -> [service2_1 (envoy + service.py)]
service.py
Before curling, look at service.py and predict the responses. There are /service and /trace endpoints. Both receive a service_number path parameter, but it does not appear to actually be used?
https://github.com/envoyproxy/envoy/blob/master/examples/front-proxy/service.py
```python
@app.route('/service/<service_number>')
def hello(service_number):
    return ('Hello from behind Envoy (service {})! hostname: {} resolved'
            'hostname: {}\n'.format(os.environ['SERVICE_NAME'],
                                    socket.gethostname(),
                                    socket.gethostbyname(socket.gethostname())))


@app.route('/trace/<service_number>')
def trace(service_number):
    headers = {}
    # call service 2 from service 1
    if int(os.environ['SERVICE_NAME']) == 1:
        for header in TRACE_HEADERS_TO_PROPAGATE:
            if header in request.headers:
                headers[header] = request.headers[header]
        ret = requests.get("http://localhost:9000/trace/2", headers=headers)
    return ('Hello from behind Envoy (service {})! hostname: {} resolved'
            'hostname: {}\n'.format(os.environ['SERVICE_NAME'],
                                    socket.gethostname(),
                                    socket.gethostbyname(socket.gethostname())))
```
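TRACE_HEADERS_TO_PROPAGATE is referenced above, but its definition is outside this excerpt. A minimal sketch of the propagation pattern it implements, using a hypothetical header list (typical Zipkin/B3-style names, not necessarily the exact list in the repo):

```python
# Sketch of trace-header propagation: copy only whitelisted tracing
# headers from the incoming request onto the outgoing one, so the
# tracing backend can stitch both hops into a single trace.
TRACE_HEADERS_TO_PROPAGATE = [  # hypothetical list, for illustration only
    'X-Request-Id',
    'X-B3-TraceId',
    'X-B3-SpanId',
    'X-B3-ParentSpanId',
    'X-B3-Sampled',
]


def headers_to_propagate(incoming_headers):
    """Return the subset of incoming_headers that should be forwarded."""
    return {h: incoming_headers[h]
            for h in TRACE_HEADERS_TO_PROPAGATE
            if h in incoming_headers}
```

Headers not on the whitelist (Accept, User-Agent, ...) are deliberately dropped, so only tracing context crosses the hop.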
Sending Traffic
curl the services. Which response you get is fixed by the path parameter; there is no round-robin between the two services. /service/3 returned a 404.
```shell
$ curl localhost:8000/service/1
Hello from behind Envoy (service 1)! hostname: 0cb6d4920789 resolvedhostname: 172.18.0.2
$ curl localhost:8000/service/2
Hello from behind Envoy (service 2)! hostname: 2ccf243aa95e resolvedhostname: 172.18.0.4
$ curl localhost:8000/service/3
[root@env1 front-proxy]# curl -v localhost:8000/service/3
* About to connect() to localhost port 8000 (#0)
*   Trying ::1...
* Connected to localhost (::1) port 8000 (#0)
> GET /service/3 HTTP/1.1
> User-Agent: curl/7.29.0
> Host: localhost:8000
> Accept: */*
>
< HTTP/1.1 404 Not Found
< date: Thu, 30 May 2019 10:30:57 GMT
< server: envoy
< content-length: 0
<
* Connection #0 to host localhost left intact
```
Check the container hostnames with docker ps.
```shell
# docker ps
CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS              PORTS                                                     NAMES
2ccf243aa95e        front-proxy_service2      "/bin/sh -c /usr/loc…"   47 seconds ago      Up 44 seconds       80/tcp, 10000/tcp                                         front-proxy_service2_1
7d80c76c6736        front-proxy_front-envoy   "/docker-entrypoint.…"   47 seconds ago      Up 44 seconds       0.0.0.0:8001->8001/tcp, 10000/tcp, 0.0.0.0:8000->80/tcp   front-proxy_front-envoy_1
0cb6d4920789        front-proxy_service1      "/bin/sh -c /usr/loc…"   47 seconds ago      Up 45 seconds       80/tcp, 10000/tcp                                         front-proxy_service1_1
```
/etc/hosts of service1:
```shell
# docker exec -it 0c sh
/ # cat /etc/hosts
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.18.0.2	0cb6d4920789
```
/etc/hosts of service2:
```shell
# docker exec -it 2c sh
/ # cat /etc/hosts
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.18.0.4	2ccf243aa95e
```
Configuring Envoy
docker-compose.yaml is unremarkable; let's look at front-envoy.yaml.
https://github.com/envoyproxy/envoy/blob/master/examples/front-proxy/docker-compose.yaml
https://github.com/envoyproxy/envoy/blob/master/examples/front-proxy/front-envoy.yaml
At the top level there are admin and static_resources. admin is the administration side; the part that matters is static_resources. The previous exercise was also based on static_resources.
This static_resources block contains the definitions of the clusters and listeners that are not dynamically managed. A cluster is a named group of hosts/ports over which Envoy load-balances traffic, and a listener is a named network location that clients can connect to. The admin block configures the admin server.
Our front proxy has a single listener, configured to listen on port 80, with a filter chain that configures Envoy to manage HTTP traffic.
The YAML in the docs text differs slightly from the source in the repo, so the source version is shown here.
```yaml
listeners:
- address:
    socket_address:
      address: 0.0.0.0
      port_value: 80
  filter_chains:
  - filters:
    - name: envoy.http_connection_manager
      typed_config:
        "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
        codec_type: auto
        stat_prefix: ingress_http
        route_config:
          name: local_route
```
The HTTP connection manager filter configuration contains the definition of a single virtual host, configured to accept traffic for all domains.
You can see that /service/1 points at the cluster service1.
```yaml
virtual_hosts:
- name: backend
  domains:
  - "*"
  routes:
  - match:
      prefix: "/service/1"
    route:
      cluster: service1
  - match:
      prefix: "/service/2"
    route:
      cluster: service2
```
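This routing is first-match prefix routing: Envoy walks the routes in order and uses the first one whose prefix matches the request path; with no match, Envoy itself answers 404, which explains the /service/3 result earlier. A toy model of that behavior (not Envoy code):

```python
# Toy model of the virtual_hosts routing above: the first route whose
# prefix matches the request path wins; no match means a 404 from Envoy.
ROUTES = [
    ("/service/1", "service1"),
    ("/service/2", "service2"),
]


def pick_cluster(path):
    for prefix, cluster in ROUTES:
        if path.startswith(prefix):
            return cluster
    return None  # Envoy answers 404 Not Found itself
```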
Here the routes are configured, mapping traffic for /service/1 and /service/2 to the appropriate clusters.
Next come the static cluster definitions.
```yaml
clusters:
- name: service1
  connect_timeout: 0.25s
  type: strict_dns
  lb_policy: round_robin
  http2_protocol_options: {}
  load_assignment:
    cluster_name: service1
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: service1
              port_value: 80
- name: service2
  connect_timeout: 0.25s
  type: strict_dns
  lb_policy: round_robin
  http2_protocol_options: {}
  load_assignment:
    cluster_name: service2
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: service2
              port_value: 80
```
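lb_policy: round_robin means requests cycle through a cluster's endpoints in turn. A toy sketch of that policy:

```python
from itertools import cycle


# Toy round-robin load balancer over a cluster's endpoints, mirroring
# lb_policy: round_robin in the cluster definitions above.
class RoundRobin:
    def __init__(self, endpoints):
        self._it = cycle(endpoints)

    def next_endpoint(self):
        # Each call returns the next endpoint, wrapping around at the end.
        return next(self._it)
```

With only one endpoint per cluster (as above) this is invisible; it becomes observable once a cluster is scaled out, as in Step 4.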
Step 4: Test Envoy’s load balancing capabilities
Now trying this one: https://www.envoyproxy.io/docs/envoy/latest/start/sandboxes/front_proxy.html
Scale out the service1 cluster.
```shell
# docker-compose scale service1=3
WARNING: The scale command is deprecated. Use the up command with the --scale flag instead.
Starting front-proxy_service1_1 ... done
Creating front-proxy_service1_2 ... done
Creating front-proxy_service1_3 ... done
```
docker-compose ps and docker ps:
```shell
# docker-compose ps
           Name                         Command               State                            Ports
----------------------------------------------------------------------------------------------------------------------------
front-proxy_front-envoy_1   /docker-entrypoint.sh /bin ...   Up      10000/tcp, 0.0.0.0:8000->80/tcp, 0.0.0.0:8001->8001/tcp
front-proxy_service1_1      /bin/sh -c /usr/local/bin/ ...   Up      10000/tcp, 80/tcp
front-proxy_service1_2      /bin/sh -c /usr/local/bin/ ...   Up      10000/tcp, 80/tcp
front-proxy_service1_3      /bin/sh -c /usr/local/bin/ ...   Up      10000/tcp, 80/tcp
front-proxy_service2_1      /bin/sh -c /usr/local/bin/ ...   Up      10000/tcp, 80/tcp
# docker ps
CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS              PORTS                                                     NAMES
280801f8ad72        front-proxy_service1      "/bin/sh -c /usr/loc…"   3 minutes ago       Up 3 minutes        80/tcp, 10000/tcp                                         front-proxy_service1_3
0d1c580854e8        front-proxy_service1      "/bin/sh -c /usr/loc…"   3 minutes ago       Up 3 minutes        80/tcp, 10000/tcp                                         front-proxy_service1_2
2ccf243aa95e        front-proxy_service2      "/bin/sh -c /usr/loc…"   About an hour ago   Up About an hour    80/tcp, 10000/tcp                                         front-proxy_service2_1
7d80c76c6736        front-proxy_front-envoy   "/docker-entrypoint.…"   About an hour ago   Up About an hour    0.0.0.0:8001->8001/tcp, 10000/tcp, 0.0.0.0:8000->80/tcp   front-proxy_front-envoy_1
0cb6d4920789        front-proxy_service1      "/bin/sh -c /usr/loc…"   About an hour ago   Up About an hour    80/tcp, 10000/tcp                                         front-proxy_service1_1
```
curl. The service1 cluster has scaled up, and responses now rotate across the three containers:
```shell
# curl localhost:8000/service/1
Hello from behind Envoy (service 1)! hostname: 280801f8ad72 resolvedhostname: 172.18.0.6
# curl localhost:8000/service/1
Hello from behind Envoy (service 1)! hostname: 0cb6d4920789 resolvedhostname: 172.18.0.2
# curl localhost:8000/service/1
Hello from behind Envoy (service 1)! hostname: 0d1c580854e8 resolvedhostname: 172.18.0.5
```
The service2 cluster is unchanged:
```shell
# curl localhost:8000/service/2
Hello from behind Envoy (service 2)! hostname: 2ccf243aa95e resolvedhostname: 172.18.0.4
```
Step 5: enter containers and curl services
So far the requests came from outside the cluster, through the front proxy; now send requests from inside the cluster.
```shell
# docker exec -it front-proxy_front-envoy_1 sh
# curl localhost:80/service/1
Hello from behind Envoy (service 1)! hostname: 0cb6d4920789 resolvedhostname: 172.18.0.2
# curl localhost:80/service/1
Hello from behind Envoy (service 1)! hostname: 0d1c580854e8 resolvedhostname: 172.18.0.5
# curl localhost:80/service/1
Hello from behind Envoy (service 1)! hostname: 280801f8ad72 resolvedhostname: 172.18.0.6
# curl localhost:80/service/2
Hello from behind Envoy (service 2)! hostname: 2ccf243aa95e resolvedhostname: 172.18.0.4
```
curl from the front container to the cluster names directly. This confirms that routing is not based on the service_number path parameter:
```shell
# curl service1/service/2
Hello from behind Envoy (service 1)! hostname: 280801f8ad72 resolvedhostname: 172.18.0.6
# curl service1/service/1
Hello from behind Envoy (service 1)! hostname: 0d1c580854e8 resolvedhostname: 172.18.0.5
# curl service1/service/1
Hello from behind Envoy (service 1)! hostname: 0cb6d4920789 resolvedhostname: 172.18.0.2
# curl service2/service/1
Hello from behind Envoy (service 2)! hostname: 2ccf243aa95e resolvedhostname: 172.18.0.4
```
curl to localhost from a container in the service1 cluster. The response never changes no matter how many times you try, which is what you would expect. The prompt shows a leading /, which is just the shell displaying its working directory (the container's root).
```shell
# docker exec -it front-proxy_service1_1 sh
/ # curl localhost:80/service/1
Hello from behind Envoy (service 1)! hostname: 0cb6d4920789 resolvedhostname: 172.18.0.2
/ # curl localhost:80/service/1
Hello from behind Envoy (service 1)! hostname: 0cb6d4920789 resolvedhostname: 172.18.0.2
```
curl to the cluster names from a container in the service1 cluster:
```shell
/ # curl service1/service/2
Hello from behind Envoy (service 1)! hostname: 280801f8ad72 resolvedhostname: 172.18.0.6
/ # curl service1/service/1
Hello from behind Envoy (service 1)! hostname: 280801f8ad72 resolvedhostname: 172.18.0.6
/ # curl service1/service/1
Hello from behind Envoy (service 1)! hostname: 0cb6d4920789 resolvedhostname: 172.18.0.2
/ # curl service1/service/1
Hello from behind Envoy (service 1)! hostname: 0d1c580854e8 resolvedhostname: 172.18.0.5
/ # curl service2/service/2
Hello from behind Envoy (service 2)! hostname: 2ccf243aa95e resolvedhostname: 172.18.0.4
```
Step 6: enter containers and curl admin
curl the admin interface from the front container. server_info:
```shell
# docker exec -it front-proxy_front-envoy_1 sh
# curl localhost:8001/server_info
{
 "version": "4dafba65baaf9769723f895761268eed31af629b/1.11.0-dev/Clean/RELEASE/BoringSSL",
 "state": "LIVE",
 "command_line_options": {
  "base_id": "0",
  "concurrency": 2,
  "config_path": "/etc/front-envoy.yaml",
  "config_yaml": "",
  "allow_unknown_fields": false,
  "admin_address_path": "",
  "local_address_ip_version": "v4",
  "log_level": "info",
  "component_log_level": "",
  "log_format": "[%Y-%m-%d %T.%e][%t][%l][%n] %v",
  "log_path": "",
  "hot_restart_version": false,
  "service_cluster": "front-proxy",
  "service_node": "",
  "service_zone": "",
  "mode": "Serve",
  "max_stats": "0",
  "max_obj_name_len": "0",
  "disable_hot_restart": false,
  "enable_mutex_tracing": false,
  "restart_epoch": 0,
  "cpuset_threads": false,
  "file_flush_interval": "10s",
  "drain_time": "600s",
  "parent_shutdown_time": "900s"
 },
 "uptime_current_epoch": "5116s",
 "uptime_all_epochs": "5116s"
}
```
stats
```shell
# curl localhost:8001/stats
access_log_file.flushed_by_timer: 269
access_log_file.reopen_failed: 0
access_log_file.write_buffered: 3
access_log_file.write_completed: 3
access_log_file.write_total_buffered: 0
cluster.service1.assignment_stale: 0
cluster.service1.assignment_timeout_received: 0
cluster.service1.bind_errors: 0
cluster.service1.circuit_breakers.default.cx_open: 0
cluster.service1.circuit_breakers.default.cx_pool_open: 0
:
```
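The /stats output is plain name-colon-value lines, so it is easy to post-process. A small sketch of a parser for the simple counter/gauge lines (histogram lines carry extra structure and are skipped here):

```python
def parse_stats(text):
    """Parse Envoy /stats plain-text output into a dict of ints.

    Only handles simple 'name: value' counter/gauge lines; any line
    whose value is not an integer (e.g. histograms) is skipped.
    """
    stats = {}
    for line in text.splitlines():
        name, sep, value = line.partition(": ")
        if sep and value.lstrip("-").isdigit():
            stats[name] = int(value)
    return stats
```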
What happens if a container is stopped?
I tried stopping one of the containers. Exactly one request failed, heh. After that, everything was fine.
```shell
# docker ps
CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS              PORTS                                                     NAMES
280801f8ad72        front-proxy_service1      "/bin/sh -c /usr/loc…"   30 minutes ago      Up 30 minutes       80/tcp, 10000/tcp                                         front-proxy_service1_3
0d1c580854e8        front-proxy_service1      "/bin/sh -c /usr/loc…"   30 minutes ago      Up 30 minutes       80/tcp, 10000/tcp                                         front-proxy_service1_2
2ccf243aa95e        front-proxy_service2      "/bin/sh -c /usr/loc…"   2 hours ago         Up 2 hours          80/tcp, 10000/tcp                                         front-proxy_service2_1
7d80c76c6736        front-proxy_front-envoy   "/docker-entrypoint.…"   2 hours ago         Up 2 hours          0.0.0.0:8001->8001/tcp, 10000/tcp, 0.0.0.0:8000->80/tcp   front-proxy_front-envoy_1
0cb6d4920789        front-proxy_service1      "/bin/sh -c /usr/loc…"   2 hours ago         Up 2 hours          80/tcp, 10000/tcp                                         front-proxy_service1_1
# curl localhost:8000/service/1
Hello from behind Envoy (service 1)! hostname: 0d1c580854e8 resolvedhostname: 172.18.0.5
# curl localhost:8000/service/1
Hello from behind Envoy (service 1)! hostname: 0cb6d4920789 resolvedhostname: 172.18.0.2
# curl localhost:8000/service/1
Hello from behind Envoy (service 1)! hostname: 0d1c580854e8 resolvedhostname: 172.18.0.5
# curl localhost:8000/service/1
Hello from behind Envoy (service 1)! hostname: 280801f8ad72 resolvedhostname: 172.18.0.6
# curl localhost:8000/service/1
Hello from behind Envoy (service 1)! hostname: 0cb6d4920789 resolvedhostname: 172.18.0.2
# docker stop 0c
0c
# curl localhost:8000/service/1
Hello from behind Envoy (service 1)! hostname: 0d1c580854e8 resolvedhostname: 172.18.0.5
# curl localhost:8000/service/1
Hello from behind Envoy (service 1)! hostname: 280801f8ad72 resolvedhostname: 172.18.0.6
# curl localhost:8000/service/1
upstream connect error or disconnect/reset before headers. reset reason: connection failure
# curl localhost:8000/service/1
Hello from behind Envoy (service 1)! hostname: 280801f8ad72 resolvedhostname: 172.18.0.6
# curl localhost:8000/service/1
Hello from behind Envoy (service 1)! hostname: 0d1c580854e8 resolvedhostname: 172.18.0.5
# curl localhost:8000/service/1
Hello from behind Envoy (service 1)! hostname: 280801f8ad72 resolvedhostname: 172.18.0.6
# curl localhost:8000/service/1
Hello from behind Envoy (service 1)! hostname: 0d1c580854e8 resolvedhostname: 172.18.0.5
# curl localhost:8000/service/1
Hello from behind Envoy (service 1)! hostname: 0d1c580854e8 resolvedhostname: 172.18.0.5
# curl localhost:8000/service/1
Hello from behind Envoy (service 1)! hostname: 280801f8ad72 resolvedhostname: 172.18.0.6
# curl localhost:8000/service/1
Hello from behind Envoy (service 1)! hostname: 280801f8ad72 resolvedhostname: 172.18.0.6
```
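The single "upstream connect error" is consistent with how a strict_dns cluster behaves: Envoy round-robins over the addresses from its last DNS resolution, so a request can still be sent to the stopped container until Docker's DNS drops that address and Envoy re-resolves. A toy model of that timing (an assumption about the mechanism, not Envoy internals):

```python
from itertools import cycle


# Toy model of a strict_dns cluster: requests round-robin over the
# endpoints from the last DNS resolution; a stopped container keeps
# receiving (and failing) requests until a refresh removes its address.
class ToyStrictDnsCluster:
    def __init__(self, endpoints):
        self.refresh(endpoints)

    def refresh(self, endpoints):
        # Simulates periodic DNS re-resolution replacing the endpoint set.
        self._rr = cycle(list(endpoints))

    def request(self, alive):
        ep = next(self._rr)
        return ("ok", ep) if ep in alive else ("connect error", ep)
```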