container-guide.md
------------------

```
@version 180521:1
@author zhangxuhong <zhangxuhong@xitu.io>
```

Name
----

container-guide - a reference guide for containerizing our services.

Table of Contents
-----------------

* [Name](#name)
* [Prepare For Container](#prepare-for-container)
* [Load Repo Into Container](#load-repo-into-container)
* [Configure CI/CD](#configure-cicd)
* [Release !](#release-)
* [Panic](#panic)
* [Remove](#remove)
* [Tips & Reference](#tips--reference)

Prepare For Container
---------------------

- Getting started

First, read this gitbook to fill in the relevant kubernetes background: [https://jimmysong.io/kubernetes-handbook/](https://jimmysong.io/kubernetes-handbook/)

- Get a basic picture of the kubernetes architecture

- Requirements

Suppose we receive a requirement for a counter service, access-counter: the caller passes in their suid, and the service increments that suid's counter in redis.

The JSON output looks like: {"suid":"ZFnUF6YraFRqRbY7izMm", "count":12}.

If the caller's suid is empty, the service calls the suid-generator API inside the kubernetes cluster to generate one, then responds in the same format as above.

Note: this example only exists to demonstrate kubernetes usage and has obvious problems; for instance there is no authentication, and generating suids should not be the counter's responsibility. A sketch of the handler follows.

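To make the requirement concrete, here is a minimal PHP sketch of that handler. It is only an illustration: the redis host and the suid-generator URL come from the config example later in this guide, while the counter key scheme and the shape of suid-generator's response are assumptions.

```
<?php
// Minimal sketch of the access-counter handler; names are illustrative.
$redis = new Redis();
$redis->connect('ac-counter-rds', 6379); // service name, resolved by cluster DNS

$suid = $_GET['suid'] ?? '';
if ($suid === '') {
    // Ask the in-cluster suid-generator service for a fresh suid.
    $resp = file_get_contents('http://suid-generator/v1/gen_suid?src=access-counter');
    $suid = json_decode($resp, true)['suid'] ?? ''; // assumed response shape
}

$count = $redis->incr('access_counter:' . $suid); // assumed key scheme

header('Content-Type: application/json');
echo json_encode(['suid' => $suid, 'count' => $count]);
```
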
- Adaptation

Our containerization stack is docker + kubernetes, so the first step is to load our existing services into docker.

Before we do that, we need to make some small changes to the program to fit the container environment.

- Logging

By default, containers have no persistent storage mapped in, which means everything inside a container is lost when it is destroyed, so writing log files inside the container is pointless.

Instead, print info-level logs directly to stdout and error-level logs directly to stderr, and let a dedicated log collector process them centrally.

- For PHP

```
// print to stdout
error_log($log, 3, "php://stdout");

// print to stderr
error_log($log, 3, "php://stderr");

// Note that error_log() is not binary-safe: if $log contains a "\0" character,
// the log line is truncated and everything after it is lost.
// Either filter "\0" out of the text, escape the text (not recommended, it makes
// logs unreadable to humans), or guarantee the text contains none (e.g. the log
// message is a literal you wrote yourself).
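
// For example, a minimal filter (sketch):
$log = str_replace("\0", '', $log);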
```

- For Lua

```
-- print to stdout
-- note the ':' method-call syntax; io.stdout.write(log) would drop the
-- file handle that write() expects as its first argument
io.stdout:write(log)

-- print to stderr
io.stderr:write(log)
```

- For Go

```
// print to stdout
fmt.Fprintln(os.Stdout, msg)

// print to stderr
fmt.Fprintln(os.Stderr, msg)
os.Stderr.WriteString(msg)
logInstance := log.New(os.Stderr, "", 0)
logInstance.Println(msg)

// (the message variable is named msg here so it does not shadow the log package)
// In short, Go offers plenty of ways to print.
```

- For Node.js

```
// print to stdout
process.stdout.write(log);

// print to stderr
process.stderr.write(log);
```

Note that the above only covers logs you write yourself: your runtime (e.g. php-fpm, luajit, node.js) and the framework you use will also report errors.

So, depending on your stack, you also need to route the runtime's and the framework's error logs to stderr. Otherwise, when something breaks in production, the only way to read the logs is to attach to the container and dig through it.

And if the fault crashes the container, kubernetes restarts it automatically and your crash logs vanish with it.

System logs are not a concern: systemd takes them over, and you can view them with journalctl -u {systemctlUnitName} -f.

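For the php-fpm case, one way to do this is sketched below. Both directives are stock php-fpm configuration; treat the exact file locations as an assumption, since they depend on your image.

```
; php-fpm.conf (global section): send the master process's log to stderr
error_log = /proc/self/fd/2

; www.conf (pool section): forward the workers' stdout/stderr instead of discarding it
catch_workers_output = yes
```
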
- 容器互相调用问题 |
|
|
|
|
|
|
|
既然容器隔离开了, 那么容器间怎么通信呢? 其实很简单, 直接调用容器的 service name, kubernetes 的内置 dns 就可以解析了. |
|
|
|
|
|
|
|
- nginx configuration sample

```
location ~ \.php$ {
    if ( $fastcgi_script_name ~ \..*\/.*php ) {
        return 403;
    }
    include fastcgi.conf;
    fastcgi_pass ac-counter:9000;
    fastcgi_index index.php;
}
```

- Calling other in-cluster services from PHP code

Same as above: just use the service name.

- Example

```
/**
 * config here
 */
$conf = array(
    'global' => array(
        'global_id' => 'access-counter',
        'folder' => '/data/repo/access-counter/',
    ),
    'log' => array(
        'open' => true,
        'address' => '/data/repo/access-counter/logs/', // the '/' at end of line is necessary
        'split' => 'day', // options: month, day, hour, minute
    ),
    'cache' => array(
        'access_counter_cache' => array(
            'host' => 'ac-counter-rds',
            // 'host' => '127.0.0.1',
            'port' => 6379,
            'pass' => null,
            'database' => 0,
            'timeout' => 0,
        ),
    ),
    'suid_generator_api' => 'http://suid-generator/v1/gen_suid?src=%',
    // 'suid_generator_api' => 'http://suid-generator-api-ms.juejin.im/v1/gen_suid?src=%',
);
```

- Calling services outside the cluster from PHP code

Again, use a service name, except that this time we have to create a proxy service for the external endpoint; a later section covers this.

- Production environments

We try to follow the principle of "no environment information embedded in the code", so environment-specific settings have to come from external configuration.

Here are the resources we need to switch per environment:

- APIs
  - Internal APIs
    Easy: within a given environment, the in-cluster API is already the one you want, so just call the service name.
  - External APIs
    Use a proxy service, and control the concrete mapping through the Jenkinsfile.

- Databases
  Same as APIs.

- Logic

This is the hardest one. Say we want an API to report whether the current environment is test, beta, or prod; the code (the logic) then has to be aware of its environment.

Avoid this whenever possible: it breaks the deployability of the code. Imagine an extra environment named beta2 appearing one day, and you may find yourself painfully patching 1000+ repos.

The sound design is to fetch the current environment's name and print it; then the logic is just "read the environment parameter and print it", which is itself environment-independent.

For php-fpm, there are a few ways to obtain the current environment:

- Write the environment into nginx and read it via a fastcgi param.
- Uncomment clear_env = no in the www.conf pool config, then declare the environment variables, e.g.

```
env["SERVER_ENV"] = $SERVER_ENV
```

Then set the environment variable in the deployment file:

```
# ac-counter-deployment.yaml
#
# @version 180806:2
# @author zhangxuhong <zhangxuhong@xitu.io>

kind: Deployment
apiVersion: apps/v1
metadata:
  name: ac-counter
  labels:
    name: ac-counter
    role: backend
    pl: php
    application: php
    version: 7.2.9
    division: infrastructure
spec:
  replicas: 3
  selector:
    matchLabels:
      name: ac-counter
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: ac-counter
    spec:
      containers:
      - name: ac-counter
        image: __IMAGE__
        imagePullPolicy: Always
        ports:
        - name: ac-counter
          containerPort: 9000
          protocol: TCP
        env:
        - name: SERVER_ENV
          value: "test"
```

- For php-cli, simply use getenv(), or read command-line arguments; see the sketch below.

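A minimal sketch of reading the environment name from PHP; it assumes SERVER_ENV is injected as in the deployment file above (for the fpm case, the fastcgi param name is likewise an assumption):

```
<?php
// php-cli: read the variable injected via the Deployment's env section.
$env = getenv('SERVER_ENV') ?: 'unknown';

// php-fpm alternative: if the value was passed in as a fastcgi param,
// it shows up in $_SERVER instead (hypothetical param name).
// $env = $_SERVER['SERVER_ENV'] ?? 'unknown';

echo $env, "\n"; // e.g. "test", "beta" or "prod"
```
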
Load Repo Into Container
------------------------

- Dockerfile

With the repo adapted, we can load it into a container. Let's write the Dockerfile.

See [build-a-docker-image.md](./build-a-docker-image.md) for the details of building an image.

- nginx dockerfile

```
# ac-counter-ngx.dockerfile
# Dockerfile for demo ac-counter
# This Dockerfile is based on harbor02.juejin.id/infrastructure/nginx-1.14.0-centos:latest
# @version 180719:2
# @author zhangxuhong <zhangxuhong@xitu.io>
#

# base info
FROM harbor02.juejin.id/infrastructure/nginx-1.14.0-centos:latest
MAINTAINER zhangxuhong <zhangxuhong@xitu.io>
USER root

# copy config to /data/apps/nginx/conf/vhost/
COPY ./config/nginx/ /data/apps/nginx/conf/vhost/

# define health check
HEALTHCHECK --interval=5s --timeout=3s CMD curl -fs http://127.0.0.1:80/status?src=docker_health_check -H"Host:access-counter-api.juejin.im" || exit 1

# run nginx
EXPOSE 80
ENTRYPOINT ["/data/apps/nginx/sbin/nginx", "-g", "daemon off;"]
```

- access-counter dockerfile

```
# ac-counter.dockerfile
# Dockerfile for demo access-counter
# This Dockerfile is based on harbor02.juejin.id/lib/php:7.2.9-fpm-alpine3.8
# @version 180719:2
# @author zhangxuhong <zhangxuhong@xitu.io>
#

# base info
FROM harbor02.juejin.id/lib/php:7.2.9-fpm-alpine3.8
MAINTAINER zhangxuhong <zhangxuhong@xitu.io>
USER root

# init extensions
RUN apk add --update --no-cache --virtual .build-deps \
        curl \
        g++ \
        gcc \
        gnupg \
        libgcc \
        make \
        alpine-sdk \
        autoconf
RUN pecl install redis-4.1.1 && docker-php-ext-enable redis

# copy repo to /data/repo
COPY . /data/repo/access-counter/

# define health check
HEALTHCHECK --interval=5s --timeout=3s CMD netstat -an | grep 9000 > /dev/null; if [ 0 != $? ]; then exit 1; fi;

# run php-fpm
EXPOSE 9000
ENTRYPOINT ["php-fpm"]
```

Once the repo is loaded, build the image for local testing.

```
docker build ./ -t access-counter
```

Then run the image and test it.

```
docker run access-counter

docker exec -i -t {docker id} /bin/sh
curl http://{docker-port-ip}/status?src=tester -H"Host: access-counter-api.juejin.im"
```

If curl returns a normal response, the test passed.

- Prepare the kubernetes config files

- Deployment

The Deployment file describes the Pods and ReplicaSets of the whole deployment (see the ac-counter-deployment.yaml example above).

- Service

A Service maps a service name to concrete Pods and their exposed ports; this mapping is called endpoints. A minimal sketch follows.

For mapping a Service to an external IP or domain name, see [mapping-external-services.md](./mapping-external-services.md).

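For orientation, a minimal Service sketch matching the Deployment above; the port name is an assumption:

```
kind: Service
apiVersion: v1
metadata:
  name: ac-counter
spec:
  selector:
    name: ac-counter   # matches the pod labels in the Deployment
  ports:
  - name: ac-counter
    port: 9000
    targetPort: 9000
    protocol: TCP
```
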
- Ingress

The Ingress configures load balancing: it routes traffic to the specified service based on the host name and path. A sketch follows.

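A minimal Ingress sketch in the same spirit; the backing service name ac-counter-ngx (a Service in front of the nginx container) is hypothetical, and the apiVersion is the one current for kubernetes of this guide's era:

```
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: ac-counter
spec:
  rules:
  - host: access-counter-api.juejin.im
    http:
      paths:
      - path: /
        backend:
          serviceName: ac-counter-ngx  # hypothetical nginx-fronting Service
          servicePort: 80
```
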
- Endpoints

An Endpoints object describes the traffic targets behind a service. See the sketch below.

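This is also the mechanism behind the proxy service for external resources mentioned earlier: a Service without a selector plus a manually maintained Endpoints object of the same name. A sketch, with a hypothetical external redis as the target:

```
kind: Service
apiVersion: v1
metadata:
  name: external-redis
spec:
  ports:
  - port: 6379
---
kind: Endpoints
apiVersion: v1
metadata:
  name: external-redis   # must match the Service name
subsets:
- addresses:
  - ip: 192.168.0.50     # hypothetical external IP
  ports:
  - port: 6379
```
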
- For the concrete config files, study them alongside the access-counter project.

Configure CI/CD
---------------

- Jenkinsfile

A Jenkinsfile is really a Groovy script; it describes the deployment process and its configuration.

See [jenkins-pipline-usage.md](./jenkins-pipline-usage.md) for how to write one and what to watch out for.

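To give a feel for the shape of such a script, here is a minimal declarative-pipeline sketch. The stage layout, the image tag scheme, and the sed-over-__IMAGE__ trick are assumptions; the real pipeline is described in jenkins-pipline-usage.md.

```
// Hypothetical minimal Jenkinsfile sketch.
pipeline {
    agent any
    stages {
        stage('Build & Push') {
            steps {
                sh "docker build ./ -t harbor02.juejin.id/infrastructure/access-counter:${env.BUILD_NUMBER}"
                sh "docker push harbor02.juejin.id/infrastructure/access-counter:${env.BUILD_NUMBER}"
            }
        }
        stage('Deploy') {
            steps {
                // Substitute the freshly pushed image for the __IMAGE__ placeholder, then apply.
                sh "sed 's|__IMAGE__|harbor02.juejin.id/infrastructure/access-counter:${env.BUILD_NUMBER}|' ac-counter-deployment.yaml | kubectl apply --namespace=test -f -"
            }
        }
    }
}
```
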
- Create the pipeline jenkins job

It is best to name the jenkins job after our naming convention, {repoName}.{clusterNamespace}.{clusterName}.

- Check "Build when a change is pushed to GitLab"; note that the webhook URL follows that sentence.
- Click the Advanced button, tick Allowed branches below it, choose "Filter branches by regex", and enter ^{yourBranchName}$.
- Click the Generate button below to generate the webhook token.
- Pipeline -> Definition -> Pipeline script from SCM.
- Choose Git for SCM.
- Fill the git address into Repository URL. Caveat: you have to replace our gitlab server's domain name with its IP; as to why, I spent 12 hours on it and still have no idea...
- For Credentials, pick the pre-configured jenkins01.
- Change "Branches to build" to match what you filled in under Build Triggers: \*/{yourBranchName}.
- Set Script Path to config/jenkins/{yourJenkinsFile}.
- Finally, click save at the bottom left.

- Configure the gitlab trigger

- In the repo, open Settings -> Integrations in the left sidebar.
- Fill in the webhook URL from above and the generated token.
- Click Add webhook.

- The multiple-environments problem

- That's right: we have three environments, test, beta, and prod, so you get to repeat the tedious work above three times for every repo. (No better solution yet.)

Release !
---------

- Push to the branch you want to release from to fire the webhook, and experience the CI/CD flow right away.

Panic
-----

- How to debug?
  - Bugs inside a container
    - View logs directly with kubectl
      - Run kubectl logs {podName} --namespace={yourDeploymentNamespace} to read the log; the -f flag follows it.
    - Locate the node and read the docker logs there
      - On the kubernetes master, run kubectl get pods --namespace={yourDeploymentNamespace} to find which machine the pod lives on.
      - kubectl describe pod {podName} shows the pod's machine (node).
      - On the target machine, run docker ps -a to find the container name.
      - Finally, docker logs {process_name} shows that container's log.
    - Debug inside the container
      - Log in to the node the container runs on.
      - Run docker exec -i -t {CONTAINER_ID} /bin/bash
  - Cluster bugs
    - Use journalctl -u {unit_name} -f to follow the logs of the cluster process you care about, e.g. docker or kubelet.

- How to scale?

  - Edit the replica count in the Deployment file, then redeploy through the CI flow.

  - Or use kubectl:

```
kubectl autoscale deployment {deploymentName} --min=2 --max=10
kubectl scale --replicas=3 -f {deploymentFile}
```

- Outside of testing, prefer the first method; otherwise the live state diverges from the deployment file in git, and the next release may carry risk.

Remove
------

- How to delete?

We deployed three resources in total: a deployment, a service, and an ingress.

Just run kubectl delete {resourceName} {repoName} --namespace={yourDeploymentNamespace} for each of them.

For example:

```
kubectl delete deployment {repoName} --namespace=test
kubectl delete service {repoName} --namespace=test
kubectl delete ingress {repoName} --namespace=test
```

Tips & Reference
----------------

If you are interested, here is some further reading (ordered by how strongly it is recommended):

- [Kubernetes Handbook](https://jimmysong.io/kubernetes-handbook/)
- Docker 容器与容器云(第2版)
- Cloud Native Go - 基于Go和React的web云原生应用构建指南
- [Kubernetes official documentation](https://kubernetes.io/docs/home/?path=users&persona=app-developer&level=foundational)
- [Traefik official documentation](https://docs.traefik.io/)
- Cloud Native Java
- Cloud Native Python

### docker-client machines

- Used for building and testing docker images.

| name                            | ip address    | location | description           |
|---------------------------------|---------------|----------|-----------------------|
| docker-client01v.lobj.juejin.id | 192.168.0.233 | lobj     | local test machine 01 |

### kubernetes cluster list

| name                         | location | description                                      |
|------------------------------|----------|--------------------------------------------------|
| test.kube01.lobj.juejin.id   | lobj     | local test cluster 01                            |
| beta.kube01.lobj.juejin.id   | lobj     | local beta cluster 01                            |
| prod.kube01.qcbj3b.juejin.id | qcbj3b   | QingCloud Beijing zone 3B production cluster 01  |

### ingress egress settings

Note: strictly follow the IP-to-PORT pairing in the tables below when calling a PORT; otherwise ingress traffic scheduling may stop working, or a large share of traffic may land on a single IP, and the service becomes unreachable.

- test

| type | ip            | port | cluster                    | instance |
|------|---------------|------|----------------------------|----------|
| test | 192.168.0.159 | 80   | test.kube01.lobj.juejin.id | traefik  |
| test | 192.168.0.158 | 8000 | test.kube01.lobj.juejin.id | traefik  |
| test | 192.168.0.157 | 8080 | test.kube01.lobj.juejin.id | traefik  |

- beta

| type | ip            | port | cluster                    | instance |
|------|---------------|------|----------------------------|----------|
| beta | 192.168.0.99  | 80   | beta.kube01.lobj.juejin.id | traefik  |
| beta | 192.168.0.98  | 8000 | beta.kube01.lobj.juejin.id | traefik  |
| beta | 192.168.0.97  | 8080 | beta.kube01.lobj.juejin.id | traefik  |

- prod

| type | ip             | port   | cluster                      | instance | comment                |
|------|----------------|--------|------------------------------|----------|------------------------|
| prod | 172.16.0.199   | 80     | prod.kube01.qcbj3b.juejin.id | traefik  |                        |
| prod | 172.16.0.198   | 8000   | prod.kube01.qcbj3b.juejin.id | traefik  |                        |
| prod | 172.16.0.197   | 8080   | prod.kube01.qcbj3b.juejin.id | traefik  |                        |
| prod | 139.198.15.232 | 80/443 | prod.kube01.qcbj3b.juejin.id | traefik  | public internet egress |
| prod | 139.198.14.107 | 80/443 | prod.kube01.qcbj3b.juejin.id | traefik  | public internet egress |

### test.kube01.lobj.juejin.id cluster layout

| ip            | hostname                                | role              | disk        |
|---------------|-----------------------------------------|-------------------|-------------|
| 192.168.0.157 | ingress-8080.test.kube01.lobj.juejin.id | ingress-vip       |             |
| 192.168.0.158 | ingress-8000.test.kube01.lobj.juejin.id | ingress-vip       |             |
| 192.168.0.159 | ingress-80.test.kube01.lobj.juejin.id   | ingress-vip       |             |
| 192.168.0.160 | test.kube01.lobj.juejin.id              | master-vip        |             |
| 192.168.0.171 | etcd01v.lobj.juejin.id                  | etcd              | 40GB iSCSI  |
| 192.168.0.172 | etcd02v.lobj.juejin.id                  | etcd              | 40GB iSCSI  |
| 192.168.0.173 | etcd03v.lobj.juejin.id                  | etcd              | 40GB iSCSI  |
| 192.168.0.161 | kubernetes-master01v.lobj.juejin.id     | kubernetes-master | 100GB iSCSI |
| 192.168.0.162 | kubernetes-master02v.lobj.juejin.id     | kubernetes-master | 100GB iSCSI |
| 192.168.0.163 | kubernetes-master03v.lobj.juejin.id     | kubernetes-master | 100GB iSCSI |
| 192.168.0.164 | kubernetes-node01v.lobj.juejin.id       | kubernetes-node   | 100GB iSCSI |
| 192.168.0.165 | kubernetes-node02v.lobj.juejin.id       | kubernetes-node   | 100GB iSCSI |
| 192.168.0.166 | kubernetes-node03v.lobj.juejin.id       | kubernetes-node   | 100GB iSCSI |
| 192.168.0.167 | kubernetes-node04v.lobj.juejin.id       | kubernetes-node   | 100GB iSCSI |
| 192.168.0.168 | kubernetes-node05v.lobj.juejin.id       | kubernetes-node   | 100GB iSCSI |

### beta.kube01.lobj.juejin.id cluster layout

| ip            | hostname                                | role              | disk        |
|---------------|-----------------------------------------|-------------------|-------------|
| 192.168.0.97  | ingress-8080.beta.kube01.lobj.juejin.id | ingress-vip       |             |
| 192.168.0.98  | ingress-8000.beta.kube01.lobj.juejin.id | ingress-vip       |             |
| 192.168.0.99  | ingress-80.beta.kube01.lobj.juejin.id   | ingress-vip       |             |
| 192.168.0.100 | beta.kube01.lobj.juejin.id              | master-vip        |             |
| 192.168.0.121 | etcd04v.lobj.juejin.id                  | etcd              | 40GB iSCSI  |
| 192.168.0.122 | etcd05v.lobj.juejin.id                  | etcd              | 40GB iSCSI  |
| 192.168.0.123 | etcd06v.lobj.juejin.id                  | etcd              | 40GB iSCSI  |
| 192.168.0.101 | kubernetes-master04v.lobj.juejin.id     | kubernetes-master | 100GB iSCSI |
| 192.168.0.102 | kubernetes-master05v.lobj.juejin.id     | kubernetes-master | 100GB iSCSI |
| 192.168.0.103 | kubernetes-master06v.lobj.juejin.id     | kubernetes-master | 100GB iSCSI |
| 192.168.0.104 | kubernetes-node06v.lobj.juejin.id       | kubernetes-node   | 100GB iSCSI |
| 192.168.0.105 | kubernetes-node07v.lobj.juejin.id       | kubernetes-node   | 100GB iSCSI |
| 192.168.0.106 | kubernetes-node08v.lobj.juejin.id       | kubernetes-node   | 100GB iSCSI |
| 192.168.0.107 | kubernetes-node09v.lobj.juejin.id       | kubernetes-node   | 100GB iSCSI |
| 192.168.0.108 | kubernetes-node10v.lobj.juejin.id       | kubernetes-node   | 100GB iSCSI |

### prod.kube01.qcbj3b.juejin.id cluster layout

| ip            | hostname                                  | role              | disk      |
|---------------|-------------------------------------------|-------------------|-----------|
| 172.16.0.197  | ingress-8080.prod.kube01.qcbj3b.juejin.id | ingress-vip       |           |
| 172.16.0.198  | ingress-8000.prod.kube01.qcbj3b.juejin.id | ingress-vip       |           |
| 172.16.0.199  | ingress-80.prod.kube01.qcbj3b.juejin.id   | ingress-vip       |           |
| 172.16.0.200  | prod.kube01.qcbj3b.juejin.id              | master-vip        |           |
| 172.16.0.11   | etcd01v.qcbj3b.juejin.id                  | etcd              | 40GB SSD  |
| 172.16.0.12   | etcd02v.qcbj3b.juejin.id                  | etcd              | 40GB SSD  |
| 172.16.0.13   | etcd03v.qcbj3b.juejin.id                  | etcd              | 40GB SSD  |
| 172.16.0.14   | kubernetes-master01v.qcbj3b.juejin.id     | kubernetes-master | 100GB SSD |
| 172.16.0.15   | kubernetes-master02v.qcbj3b.juejin.id     | kubernetes-master | 100GB SSD |
| 172.16.0.16   | kubernetes-master03v.qcbj3b.juejin.id     | kubernetes-master | 100GB SSD |
| 172.16.0.17   | kubernetes-node01v.qcbj3b.juejin.id       | kubernetes-node   | 100GB SSD |
| 172.16.0.18   | kubernetes-node02v.qcbj3b.juejin.id       | kubernetes-node   | 100GB SSD |
| 172.16.0.19   | kubernetes-node03v.qcbj3b.juejin.id       | kubernetes-node   | 100GB SSD |
| 172.16.0.20   | kubernetes-node04v.qcbj3b.juejin.id       | kubernetes-node   | 100GB SSD |
| 172.16.0.21   | kubernetes-node05v.qcbj3b.juejin.id       | kubernetes-node   | 100GB SSD |