This is Kohama from Systems Department, Section 1.
This time, let's try building a single-node Kubernetes cluster on Windows.
Kubernetes is an orchestration tool for Docker container environments.
Picturing the conductor of an orchestra may help.
Once close to a hundred Docker containers are running, they become impossible to manage by hand.
Kubernetes is a tool that helps with that container management.
References
- Kubernetes 1.10をスクラッチから全手動で構築 | To Be Decided
  https://www.kaitoy.xyz/2018/04/17/kubernetes110-from-scratch/
  (This article basically follows the content of this post.)
- Kubernetes1.8のクラスタを構築する。kubeadmで。 | To Be Decided
  https://www.kaitoy.xyz/2017/10/21/build-kubernetes-cluster-by-kubeadm/
- kubeadmが何をやっているのかみてみた – Qiita
  https://qiita.com/helix_kaz/items/9c4a83532f949d8a94ef
- kubeadm で kubernetes v1.8 + Flannel をインストール – Qiita
  https://qiita.com/hichihara/items/79ef6613026f8c13eb99
- ansibleでkubernetes環境の構築 1 – Qiita
  https://qiita.com/tsukasa1301/items/56516f4cf7855308d259
- ansibleでkubernetes環境の構築 2 – Qiita
  https://qiita.com/tsukasa1301/items/d0eae29be0a4cd34a740
- VagrantでVirtualBoxのデフォルトNATネットワークのアドレスを変更する – 雑記帳(2015-11-02)
  http://muramasa64.fprog.org/diary/?date=20151102#p01
- Get Docker CE for CentOS #INSTALL DOCKER CE
  https://docs.docker.com/install/linux/docker-ce/centos/#install-docker-ce-1
Assumed environment
- Windows 10 Pro version 1803 (Hyper-V is not used)
- Vagrant 2.1.2
- VirtualBox version 5.2.18 r124319 (Qt5.6.2)
- The Windows machine is assumed to have at least 6 GB of memory.
What we will build
- Docker version 18.03.1-ce
- Kubernetes version 1.10.6
- The VirtualBox VM gets two network adapters: the first is an ordinary NAT adapter, and the second is a host-only adapter, so that the host PC can reach the VM at the address 192.168.33.10. (With NAT alone, you would need port-forwarding rules to reach services running inside the VM.) A simple connectivity check from the host is sketched below.
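Once the VM is up (after the vagrant up step later in this article), the host-only address should answer from the Windows side. A minimal check, nothing more than a ping to the fixed address:
|
ping 192.168.33.10
|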
Setup preparation
We will use chocolatey to install vagrant on Windows.
Chocolatey – The package manager for Windows
https://chocolatey.org/
chocolatey is a package manager for Windows.
It is roughly the equivalent of Homebrew on Mac OS X: it is not provided by Microsoft, but it lets you install a wide variety of software from the command line.
Install chocolatey by entering the following in a Windows PowerShell started with administrator privileges.
|
Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
|
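If the installation succeeded, the choco command is now available in this PowerShell session. A quick sanity check:
|
choco --version
|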
Next, install VirtualBox.
VirtualBox is an OS virtualization application provided by Oracle.
Windows' Hyper-V is a hypervisor, so even the host Windows ends up virtualized internally; VirtualBox, by contrast, is a virtualization application that runs on top of the host OS.
Install vagrant as well.
vagrant is a command-line wrapper for driving virtualization applications such as VirtualBox.
The latest version, 2.1.5, appears to have problems, so here we pin the installation to version 2.1.2.
Enter the following in a Windows PowerShell started with administrator privileges.
|
choco install -y virtualbox
choco install -y vagrant --version 2.1.2 --force
|
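Before moving on, it is worth confirming that the pinned version was actually installed; for example:
|
vagrant version
|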
At this point, close the Windows PowerShell that was started with administrator privileges, and start a new Windows PowerShell.
Use vagrant to fetch a CentOS 7 image.
Reference: Documentation – Vagrant by HashiCorp
https://www.vagrantup.com/docs/index.html
When using vagrant + VirtualBox, work from a PowerShell started with normal user privileges.
|
# fetch the centos7 image
vagrant box add centos/7 --provider virtualbox
# generate the initial Vagrantfile
vagrant init centos/7
|
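To confirm that the box was registered and that a Vagrantfile now exists in the current directory, you can check with:
|
vagrant box list
vagrant status
|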
Edit the generated Vagrantfile so that it reads as follows.
Use an editor such as Sakura Editor and save the file with CRLF line endings.
|
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  # master node
  config.vm.define "master" do |machine|
    machine.vm.hostname = "master"
    # machine.vm.network "forwarded_port", id: "ssh", guest: 22, host: 2222
    # machine.vm.network "forwarded_port", guest: 6443, host: 6443, host_ip: "127.0.0.1"
    machine.vm.network :private_network, ip: "192.168.33.10", private_network: "intnet"
    machine.vm.provider "virtualbox" do |vb|
      vb.memory = "2800"
      vb.cpus = "2"
      vb.customize ["modifyvm", :id, "--natnet1", "172.16.1.0/24"]
    end
    machine.vm.provision "shell", path: "setupscripts.sh"
  end
end
|
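Before the first boot, Vagrant can syntax-check the edited Vagrantfile. Run this in the directory that contains it:
|
vagrant validate
|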
Create a new file named setupscripts.sh with the following contents.
Use an editor such as Sakura Editor and save it with LF-only line endings and UTF-8 (without BOM) encoding.
If you run this behind a proxy, set PROXY_SERV_PORT in setupscripts.sh to the proxy's "server:port".
If the proxy requires authentication, also set PROXY_USER and PROXY_PASS in setupscripts.sh to the proxy user name and password.
Download: setupscripts.sh
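For illustration only (the host name, port, and credentials below are placeholders, not real values), a proxied environment would fill in the top of setupscripts.sh roughly like this:
|
PROXY_SERV_PORT=proxy.example.com:8080
PROXY_USER=alice
PROXY_PASS=secret
|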
|
#!/bin/bash
#
# setupscript for kubernetes 1.10 1 node cluster
# see https://www.kaitoy.xyz/2018/04/17/kubernetes110-from-scratch/ Kubernetes 1.10をスクラッチから全手動で構築 | To Be Decided
# 2018.08.21
#
function f_log() {
echo "■" "$@"
}
#
# proxy setting
#
PROXY_SERV_PORT=
PROXY_USER=
PROXY_PASS=
if [ ! -z "$PROXY_USER" ]; then
PROXY_USER_PASS="${PROXY_USER}:${PROXY_PASS}@"
fi
PROXY_SKIP_PROXY="localhost,master,127.0.0.1,192.168.33.10"
# setup proxy , if needed
if [ ! -z "${PROXY_SERV_PORT}" ]; then
f_log "setup proxy"
# edit /etc/profile
cat >> /etc/profile << EOF
PROXY=http://${PROXY_USER_PASS}${PROXY_SERV_PORT}
export http_proxy=$PROXY
export https_proxy=$PROXY
export no_proxy=${PROXY_SKIP_PROXY}
EOF
. /etc/profile
# edit /etc/bashrc
cat >> /etc/bashrc << EOF
PROXY=http://${PROXY_USER_PASS}${PROXY_SERV_PORT}
export http_proxy=$PROXY
export https_proxy=$PROXY
export no_proxy=${PROXY_SKIP_PROXY}
EOF
. /etc/bashrc
# edit /etc/yum.conf
cat >> /etc/yum.conf << EOF
proxy=http://${PROXY_SERV_PORT}
proxy_username=${PROXY_USER}
proxy_password=${PROXY_PASS}
EOF
fi
# Japanese locale setup
f_log "configure Japanese locale"
sudo yum install -y glibc-common
sudo localedef -f UTF-8 -i ja_JP ja_JP.UTF-8
sudo bash -c "echo 'export LANG=ja_JP.UTF-8' >> /etc/bashrc"
sudo bash -c "echo 'export LANG=ja_JP.UTF-8' >> /etc/profile"
. /etc/bashrc
# kubelet requires swap to be off, so disable it
f_log "disable swap (required by kubelet)"
sed -i -e 's/^.*swap.*$//g' /etc/fstab
swapoff -a
# disable SELinux
f_log "disable SELinux"
sed -i -e 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
getenforce
setenforce 0
getenforce
# disable firewalld
f_log "disable firewalld"
systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld
# load the bridge netfilter module
f_log "load the bridge netfilter module"
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
# enable bridge netfilter and IP forwarding
f_log "enable bridge netfilter and IP forwarding"
cat > /etc/sysctl.d/kubernetes.conf << EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/kubernetes.conf
# verify
lsmod |grep br_netfilter
sysctl -a | grep -E "net.bridge.bridge-nf-call-|net.ipv4.ip_forward"
ls /proc/sys/net/bridge
### # generate x509 certificates
f_log "generate x509 certificates"
#### # create the openssl configuration
f_log "create the openssl config file"
mkdir -p /etc/kubernetes/pki
HOSTNAME=master
K8S_SERVICE_IP=10.0.0.1
MASTER_IP=192.168.33.10
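# Note: 10.0.0.1 is the first address of the service cluster IP range (10.0.0.0/16)
# configured later; the API server is reachable at that cluster IP, so both it and
# the host-only address are included in the certificate SANs below.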
cat > /etc/kubernetes/pki/openssl.cnf << EOF
[ req ]
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_ca ]
basicConstraints = critical, CA:TRUE
keyUsage = critical, digitalSignature, keyEncipherment, keyCertSign
[ v3_req_client ]
basicConstraints = CA:FALSE
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
[ v3_req_apiserver ]
basicConstraints = CA:FALSE
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names_cluster
[ v3_req_etcd ]
basicConstraints = CA:FALSE
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names_etcd
[ alt_names_cluster ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = ${HOSTNAME}
IP.1 = ${MASTER_IP}
IP.2 = ${K8S_SERVICE_IP}
[ alt_names_etcd ]
DNS.1 = ${HOSTNAME}
IP.1 = ${MASTER_IP}
EOF
#### # generate the Kubernetes CA certificate
f_log "generate the Kubernetes CA certificate"
groupadd -r kubernetes
adduser -r -g kubernetes -M -s /sbin/nologin kubernetes
CA_DAYS=5475
openssl ecparam -name secp521r1 -genkey -noout -out /etc/kubernetes/pki/ca.key
chown kubernetes:kubernetes /etc/kubernetes/pki/ca.key
chmod 0600 /etc/kubernetes/pki/ca.key
openssl req -x509 -new -sha256 -nodes -key /etc/kubernetes/pki/ca.key -days $CA_DAYS -out /etc/kubernetes/pki/ca.crt -subj "/CN=kubernetes-ca" -extensions v3_ca -config /etc/kubernetes/pki/openssl.cnf
#### # generate the kube-apiserver certificate
APISERVER_DAYS=5475
openssl ecparam -name secp521r1 -genkey -noout -out /etc/kubernetes/pki/kube-apiserver.key
chown kubernetes:kubernetes /etc/kubernetes/pki/kube-apiserver.key
chmod 0600 /etc/kubernetes/pki/kube-apiserver.key
openssl req -new -sha256 -key /etc/kubernetes/pki/kube-apiserver.key -subj "/CN=kube-apiserver" | openssl x509 -req -sha256 -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out /etc/kubernetes/pki/kube-apiserver.crt -days $APISERVER_DAYS -extensions v3_req_apiserver -extfile /etc/kubernetes/pki/openssl.cnf
#### # generate the kube-apiserver kubelet client certificate
APISERVER_KUBELET_CLIENT_DAYS=5475
openssl ecparam -name secp521r1 -genkey -noout -out /etc/kubernetes/pki/apiserver-kubelet-client.key
chown kubernetes:kubernetes /etc/kubernetes/pki/apiserver-kubelet-client.key
chmod 0600 /etc/kubernetes/pki/apiserver-kubelet-client.key
openssl req -new -key /etc/kubernetes/pki/apiserver-kubelet-client.key -subj "/CN=kube-apiserver-kubelet-client/O=system:masters" | openssl x509 -req -sha256 -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out /etc/kubernetes/pki/apiserver-kubelet-client.crt -days $APISERVER_KUBELET_CLIENT_DAYS -extensions v3_req_client -extfile /etc/kubernetes/pki/openssl.cnf
#### # generate the admin client certificate
groupadd -r kube-admin
adduser -r -g kube-admin -M -s /sbin/nologin kube-admin
ADMIN_DAYS=5475
openssl ecparam -name secp521r1 -genkey -noout -out /etc/kubernetes/pki/admin.key
chown kube-admin:kube-admin /etc/kubernetes/pki/admin.key
chmod 0600 /etc/kubernetes/pki/admin.key
openssl req -new -key /etc/kubernetes/pki/admin.key -subj "/CN=kubernetes-admin/O=system:masters" | openssl x509 -req -sha256 -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out /etc/kubernetes/pki/admin.crt -days $ADMIN_DAYS -extensions v3_req_client -extfile /etc/kubernetes/pki/openssl.cnf
#### # generate the kube-controller-manager client certificate
CONTROLLER_MANAGER_DAYS=5475
openssl ecparam -name secp521r1 -genkey -noout -out /etc/kubernetes/pki/kube-controller-manager.key
openssl ec -in /etc/kubernetes/pki/kube-controller-manager.key -outform PEM -pubout -out /etc/kubernetes/pki/kube-controller-manager.pub
chown kubernetes:kubernetes /etc/kubernetes/pki/kube-controller-manager.key
chmod 0600 /etc/kubernetes/pki/kube-controller-manager.key
openssl req -new -sha256 -key /etc/kubernetes/pki/kube-controller-manager.key -subj "/CN=system:kube-controller-manager" | openssl x509 -req -sha256 -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out /etc/kubernetes/pki/kube-controller-manager.crt -days $CONTROLLER_MANAGER_DAYS -extensions v3_req_client -extfile /etc/kubernetes/pki/openssl.cnf
#### # generate the kube-scheduler client certificate
SCHEDULER_DAYS=5475
openssl ecparam -name secp521r1 -genkey -noout -out /etc/kubernetes/pki/kube-scheduler.key
chown kubernetes:kubernetes /etc/kubernetes/pki/kube-scheduler.key
chmod 0600 /etc/kubernetes/pki/kube-scheduler.key
openssl req -new -sha256 -key /etc/kubernetes/pki/kube-scheduler.key -subj "/CN=system:kube-scheduler" | openssl x509 -req -sha256 -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out /etc/kubernetes/pki/kube-scheduler.crt -days $SCHEDULER_DAYS -extensions v3_req_client -extfile /etc/kubernetes/pki/openssl.cnf
#### # generate the kube-proxy client certificate
PROXY_DAYS=5475
openssl ecparam -name secp521r1 -genkey -noout -out /etc/kubernetes/pki/kube-proxy.key
chown kubernetes:kubernetes /etc/kubernetes/pki/kube-proxy.key
chmod 0600 /etc/kubernetes/pki/kube-proxy.key
openssl req -new -sha256 -key /etc/kubernetes/pki/kube-proxy.key -subj "/CN=system:kube-proxy" | openssl x509 -req -sha256 -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out /etc/kubernetes/pki/kube-proxy.crt -days $PROXY_DAYS -extensions v3_req_client -extfile /etc/kubernetes/pki/openssl.cnf
#### # generate the front proxy CA certificate
FRONT_PROXY_CA_DAYS=5475
openssl ecparam -name secp521r1 -genkey -noout -out /etc/kubernetes/pki/front-proxy-ca.key
chown kubernetes:kubernetes /etc/kubernetes/pki/front-proxy-ca.key
chmod 0600 /etc/kubernetes/pki/front-proxy-ca.key
openssl req -x509 -new -sha256 -nodes -key /etc/kubernetes/pki/front-proxy-ca.key -days $FRONT_PROXY_CA_DAYS -out /etc/kubernetes/pki/front-proxy-ca.crt -subj "/CN=front-proxy-ca" -extensions v3_ca -config /etc/kubernetes/pki/openssl.cnf
#### # front proxy client certificate
FRONT_PROXY_CLIENT_DAYS=5475
openssl ecparam -name secp521r1 -genkey -noout -out /etc/kubernetes/pki/front-proxy-client.key
chown kubernetes:kubernetes /etc/kubernetes/pki/front-proxy-client.key
chmod 0600 /etc/kubernetes/pki/front-proxy-client.key
openssl req -new -sha256 -key /etc/kubernetes/pki/front-proxy-client.key -subj "/CN=front-proxy-client" | openssl x509 -req -sha256 -CA /etc/kubernetes/pki/front-proxy-ca.crt -CAkey /etc/kubernetes/pki/front-proxy-ca.key -CAcreateserial -out /etc/kubernetes/pki/front-proxy-client.crt -days $FRONT_PROXY_CLIENT_DAYS -extensions v3_req_client -extfile /etc/kubernetes/pki/openssl.cnf
#### # etcd CA certificate
groupadd -r etcd
adduser -r -g etcd -M -s /sbin/nologin etcd
ETCD_CA_DAYS=5475
openssl ecparam -name secp521r1 -genkey -noout -out /etc/kubernetes/pki/etcd-ca.key
chown etcd:etcd /etc/kubernetes/pki/etcd-ca.key
chmod 0600 /etc/kubernetes/pki/etcd-ca.key
openssl req -x509 -new -sha256 -nodes -key /etc/kubernetes/pki/etcd-ca.key -days $ETCD_CA_DAYS -out /etc/kubernetes/pki/etcd-ca.crt -subj "/CN=etcd-ca" -extensions v3_ca -config /etc/kubernetes/pki/openssl.cnf
#### # etcd server certificate
ETCD_DAYS=5475
openssl ecparam -name secp521r1 -genkey -noout -out /etc/kubernetes/pki/etcd.key
chown etcd:etcd /etc/kubernetes/pki/etcd.key
chmod 0600 /etc/kubernetes/pki/etcd.key
openssl req -new -sha256 -key /etc/kubernetes/pki/etcd.key -subj "/CN=etcd" | openssl x509 -req -sha256 -CA /etc/kubernetes/pki/etcd-ca.crt -CAkey /etc/kubernetes/pki/etcd-ca.key -CAcreateserial -out /etc/kubernetes/pki/etcd.crt -days $ETCD_DAYS -extensions v3_req_etcd -extfile /etc/kubernetes/pki/openssl.cnf
#### # etcd client certificate
ETCD_CLIENT_DAYS=5475
openssl ecparam -name secp521r1 -genkey -noout -out /etc/kubernetes/pki/etcd-client.key
chown kubernetes:kubernetes /etc/kubernetes/pki/etcd-client.key
chmod 0600 /etc/kubernetes/pki/etcd-client.key
openssl req -new -sha256 -key /etc/kubernetes/pki/etcd-client.key -subj "/CN=kube-apiserver" | openssl x509 -req -sha256 -CA /etc/kubernetes/pki/etcd-ca.crt -CAkey /etc/kubernetes/pki/etcd-ca.key -CAcreateserial -out /etc/kubernetes/pki/etcd-client.crt -days $ETCD_CLIENT_DAYS -extensions v3_req_client -extfile /etc/kubernetes/pki/openssl.cnf
#### # etcd peer certificate
ETCD_PEER_DAYS=5475
openssl ecparam -name secp521r1 -genkey -noout -out /etc/kubernetes/pki/etcd-peer.key
chown etcd:etcd /etc/kubernetes/pki/etcd-peer.key
chmod 0600 /etc/kubernetes/pki/etcd-peer.key
openssl req -new -sha256 -key /etc/kubernetes/pki/etcd-peer.key -subj "/CN=etcd-peer" | openssl x509 -req -sha256 -CA /etc/kubernetes/pki/etcd-ca.crt -CAkey /etc/kubernetes/pki/etcd-ca.key -CAcreateserial -out /etc/kubernetes/pki/etcd-peer.crt -days $ETCD_PEER_DAYS -extensions v3_req_etcd -extfile /etc/kubernetes/pki/openssl.cnf
#### # verify the certificates
for i in /etc/kubernetes/pki/*crt; do
echo $i:;
openssl x509 -subject -issuer -noout -in $i;
echo;
done
### # install the Kubernetes binaries
if [ -r /vagrant/kubernetes-server-linux-amd64.tar.gz ]; then
/bin/cp /vagrant/kubernetes-server-linux-amd64.tar.gz .
else
curl -L -O https://dl.k8s.io/v1.10.6/kubernetes-server-linux-amd64.tar.gz
fi
tar xvzf kubernetes-server-linux-amd64.tar.gz kubernetes/server/bin/hyperkube
mv kubernetes/server/bin/hyperkube /usr/bin
chmod +x /usr/bin/hyperkube
ln -s /usr/bin/hyperkube /usr/bin/kube-apiserver
ln -s /usr/bin/hyperkube /usr/bin/kube-controller-manager
ln -s /usr/bin/hyperkube /usr/bin/kube-scheduler
ln -s /usr/bin/hyperkube /usr/bin/kube-proxy
ln -s /usr/bin/hyperkube /usr/bin/kubelet
ln -s /usr/bin/hyperkube /usr/bin/kubectl
mkdir -p /var/lib/{kubelet,kube-proxy}
### # generate kubeconfig files for connecting to the cluster
#### # kubeconfig for kube-controller-manager
MASTER_IP=192.168.33.10
KUBERNETES_PUBLIC_ADDRESS=$MASTER_IP
CLUSTER_NAME="k8s"
KCONFIG=/etc/kubernetes/kube-controller-manager.kubeconfig
KUSER="system:kube-controller-manager"
kubectl config set-cluster ${CLUSTER_NAME} --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 --kubeconfig=${KCONFIG}
kubectl config set-credentials ${KUSER} --client-certificate=/etc/kubernetes/pki/kube-controller-manager.crt --client-key=/etc/kubernetes/pki/kube-controller-manager.key --embed-certs=true --kubeconfig=${KCONFIG}
kubectl config set-context ${KUSER}@${CLUSTER_NAME} --cluster=${CLUSTER_NAME} --user=${KUSER} --kubeconfig=${KCONFIG}
kubectl config use-context ${KUSER}@${CLUSTER_NAME} --kubeconfig=${KCONFIG}
chown kubernetes:kubernetes ${KCONFIG}
chmod 0600 ${KCONFIG}
kubectl config view --kubeconfig=${KCONFIG}
#### # kubeconfig for kube-scheduler
MASTER_IP=192.168.33.10
KUBERNETES_PUBLIC_ADDRESS=$MASTER_IP
CLUSTER_NAME="k8s"
KCONFIG=/etc/kubernetes/kube-scheduler.kubeconfig
KUSER="system:kube-scheduler"
kubectl config set-cluster ${CLUSTER_NAME} --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 --kubeconfig=${KCONFIG}
kubectl config set-credentials ${KUSER} --client-certificate=/etc/kubernetes/pki/kube-scheduler.crt --client-key=/etc/kubernetes/pki/kube-scheduler.key --embed-certs=true --kubeconfig=${KCONFIG}
kubectl config set-context ${KUSER}@${CLUSTER_NAME} --cluster=${CLUSTER_NAME} --user=${KUSER} --kubeconfig=${KCONFIG}
kubectl config use-context ${KUSER}@${CLUSTER_NAME} --kubeconfig=${KCONFIG}
chown kubernetes:kubernetes ${KCONFIG}
chmod 0600 ${KCONFIG}
kubectl config view --kubeconfig=${KCONFIG}
#### # kubeconfig for admin (for the kubectl command used by operators)
MASTER_IP=192.168.33.10
KUBERNETES_PUBLIC_ADDRESS=$MASTER_IP
CLUSTER_NAME="k8s"
KCONFIG=/etc/kubernetes/admin.kubeconfig
KUSER="kubernetes-admin"
kubectl config set-cluster ${CLUSTER_NAME} --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 --kubeconfig=${KCONFIG}
kubectl config set-credentials ${KUSER} --client-certificate=/etc/kubernetes/pki/admin.crt --client-key=/etc/kubernetes/pki/admin.key --embed-certs=true --kubeconfig=${KCONFIG}
kubectl config set-context ${KUSER}@${CLUSTER_NAME} --cluster=${CLUSTER_NAME} --user=${KUSER} --kubeconfig=${KCONFIG}
kubectl config use-context ${KUSER}@${CLUSTER_NAME} --kubeconfig=${KCONFIG}
chown kube-admin:kube-admin ${KCONFIG}
chmod 0600 ${KCONFIG}
mkdir -p ~/.kube
ln -s ${KCONFIG} ~/.kube/config
kubectl config view --kubeconfig=${KCONFIG}
### # deploy etcd
if [ -r /vagrant/etcd-v3.1.12-linux-amd64.tar.gz ]; then
/bin/cp /vagrant/etcd-v3.1.12-linux-amd64.tar.gz .
else
curl -L -O https://github.com/coreos/etcd/releases/download/v3.1.12/etcd-v3.1.12-linux-amd64.tar.gz
fi
tar xvzf etcd-v3.1.12-linux-amd64.tar.gz etcd-v3.1.12-linux-amd64/etcd etcd-v3.1.12-linux-amd64/etcdctl
mv etcd-v3.1.12-linux-amd64/etcd /usr/bin
mv etcd-v3.1.12-linux-amd64/etcdctl /usr/bin
chown root:root /usr/bin/etcd*
chmod 0755 /usr/bin/etcd*
mkdir -p /var/lib/etcd
chown etcd:etcd /var/lib/etcd
MASTER_IP=192.168.33.10
ETCD_MEMBER_NAME=etcd1
CLUSTER_NAME="k8s"
ETCD_TOKEN=$(openssl rand -hex 5)
ETCD_CLUSTER_TOKEN=$CLUSTER_NAME-$ETCD_TOKEN
cat > /etc/systemd/system/etcd.service << EOF
[Unit]
Description=etcd
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
[Service]
Type=notify
NotifyAccess=all
User=etcd
Group=etcd
ExecStart=/usr/bin/etcd \\
--name ${ETCD_MEMBER_NAME} \\
--listen-client-urls https://${MASTER_IP}:2379 \\
--advertise-client-urls https://${MASTER_IP}:2379 \\
--data-dir=/var/lib/etcd \\
--cert-file=/etc/kubernetes/pki/etcd.crt \\
--key-file=/etc/kubernetes/pki/etcd.key \\
--peer-cert-file=/etc/kubernetes/pki/etcd-peer.crt \\
--peer-key-file=/etc/kubernetes/pki/etcd-peer.key \\
--trusted-ca-file=/etc/kubernetes/pki/etcd-ca.crt \\
--peer-trusted-ca-file=/etc/kubernetes/pki/etcd-ca.crt \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${MASTER_IP}:2380 \\
--listen-peer-urls https://${MASTER_IP}:2380 \\
--initial-cluster-token ${ETCD_CLUSTER_TOKEN} \\
--initial-cluster ${ETCD_MEMBER_NAME}=https://${MASTER_IP}:2380 \\
--initial-cluster-state new
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd -l
MASTER_IP=192.168.33.10
# API version 2
unset ETCDCTL_API
etcdctl --version
# API version 3
env ETCDCTL_API=3 etcdctl version
env ETCDCTL_API=3 etcdctl --endpoints ${MASTER_IP}:2379 --cacert=/etc/kubernetes/pki/etcd-ca.crt --cert=/etc/kubernetes/pki/etcd-client.crt --key=/etc/kubernetes/pki/etcd-client.key member list
env ETCDCTL_API=3 etcdctl --endpoints ${MASTER_IP}:2379 --cacert=/etc/kubernetes/pki/etcd-ca.crt --cert=/etc/kubernetes/pki/etcd-client.crt --key=/etc/kubernetes/pki/etcd-client.key endpoint health
### # deploy the master node components
#### # kube-apiserver
f_log kube-apiserver
mkdir -p /var/log/kubernetes
chown kubernetes:kubernetes /var/log/kubernetes
chmod 0700 /var/log/kubernetes
MASTER_IP=192.168.33.10
SERVICE_CLUSTER_IP_RANGE="10.0.0.0/16"
SECRET_ENC_KEY=$(echo -n 'your_32_bytes_secure_private_key' | base64)
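# Note: the aescbc provider expects an AES key (32 bytes is the usual choice);
# the placeholder string above is exactly 32 bytes. Replace it with your own
# secret before using this for anything real.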
cat > /etc/kubernetes/encryption.conf << EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
    - secrets
    providers:
    - aescbc:
        keys:
        - name: key1
          secret: ${SECRET_ENC_KEY}
    - identity: {}
EOF
cat > /etc/kubernetes/audit-policy.conf << EOF
apiVersion: audit.k8s.io/v1beta1
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]
  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]
  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"
  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]
  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.
  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
EOF
cat > /etc/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
User=kubernetes
Group=kubernetes
ExecStart=/usr/bin/kube-apiserver \\
--feature-gates=RotateKubeletServerCertificate=true \\
--apiserver-count=1 \\
--allow-privileged=true \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,DenyEscalatingExec,StorageObjectInUseProtection \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--advertise-address=${MASTER_IP} \\
--client-ca-file=/etc/kubernetes/pki/ca.crt \\
--etcd-cafile=/etc/kubernetes/pki/etcd-ca.crt \\
--etcd-certfile=/etc/kubernetes/pki/etcd-client.crt \\
--etcd-keyfile=/etc/kubernetes/pki/etcd-client.key \\
--etcd-servers=https://${MASTER_IP}:2379 \\
--service-account-key-file=/etc/kubernetes/pki/kube-controller-manager.pub \\
--service-cluster-ip-range=${SERVICE_CLUSTER_IP_RANGE} \\
--tls-cert-file=/etc/kubernetes/pki/kube-apiserver.crt \\
--tls-private-key-file=/etc/kubernetes/pki/kube-apiserver.key \\
--kubelet-certificate-authority=/etc/kubernetes/pki/ca.crt \\
--enable-bootstrap-token-auth=true \\
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt \\
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key \\
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt \\
--requestheader-username-headers=X-Remote-User \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-allowed-names=front-proxy-client \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt \\
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key \\
--experimental-encryption-provider-config=/etc/kubernetes/encryption.conf \\
--v=2 \\
--tls-min-version=VersionTLS12 \\
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 \\
--anonymous-auth=false \\
--audit-log-format=json \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/kubernetes/kube-audit.log \\
--audit-policy-file=/etc/kubernetes/audit-policy.conf
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver -l
journalctl -u kube-apiserver
kubectl cluster-info
kubectl cluster-info dump
#### # kube-controller-manager
f_log kube-controller-manager
CLUSTER_CIDR="10.244.0.0/16"
SERVICE_CLUSTER_IP_RANGE="10.0.0.0/16"
CLUSTER_NAME="k8s"
cat > /etc/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
User=kubernetes
Group=kubernetes
ExecStart=/usr/bin/kube-controller-manager \\
--feature-gates=RotateKubeletServerCertificate=true \\
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
--bind-address=0.0.0.0 \\
--controllers=*,bootstrapsigner,tokencleaner \\
--service-account-private-key-file=/etc/kubernetes/pki/kube-controller-manager.key \\
--allocate-node-cidrs=true \\
--cluster-cidr=${CLUSTER_CIDR} \\
--node-cidr-mask-size=24 \\
--cluster-name=${CLUSTER_NAME} \\
--service-cluster-ip-range=${SERVICE_CLUSTER_IP_RANGE} \\
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt \\
--cluster-signing-key-file=/etc/kubernetes/pki/ca.key \\
--root-ca-file=/etc/kubernetes/pki/ca.crt \\
--use-service-account-credentials=true \\
--v=2 \\
--experimental-cluster-signing-duration=8760h0m0s
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager -l
kubectl cluster-info
kubectl cluster-info dump
#### # kube-scheduler
f_log kube-scheduler
cat > /etc/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
User=kubernetes
Group=kubernetes
ExecStart=/usr/bin/kube-scheduler \\
--feature-gates=RotateKubeletServerCertificate=true \\
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
--address=0.0.0.0 \\
--v=2
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler -l
kubectl cluster-info
kubectl cluster-info dump
#### # check master component status
f_log "check master component status"
kubectl version
kubectl get componentstatuses
### # configure TLS bootstrapping
f_log "configure TLS bootstrapping"
#### # create the bootstrap token secret
TOKEN_PUB=$(openssl rand -hex 3)
TOKEN_SECRET=$(openssl rand -hex 8)
BOOTSTRAP_TOKEN="${TOKEN_PUB}.${TOKEN_SECRET}"
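# The token has the standard bootstrap token form <token-id>.<token-secret>
# (6 hex chars + 16 hex chars); the kubelet presents it during TLS bootstrapping.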
kubectl -n kube-system create secret generic bootstrap-token-${TOKEN_PUB} --type 'bootstrap.kubernetes.io/token' --from-literal description="cluster bootstrap token" --from-literal token-id=${TOKEN_PUB} --from-literal token-secret=${TOKEN_SECRET} --from-literal usage-bootstrap-authentication=true --from-literal usage-bootstrap-signing=true --from-literal auth-extra-groups=system:bootstrappers:worker,system:bootstrappers:ingress
TOKEN_PUB=$(echo $BOOTSTRAP_TOKEN | sed -e s/\\..*//)
kubectl -n kube-system get secret/bootstrap-token-${TOKEN_PUB} -o yaml
#### # create the bootstrap kubeconfig
mkdir -p /etc/kubernetes/manifests
MASTER_IP=192.168.33.10
KUBERNETES_PUBLIC_ADDRESS=$MASTER_IP
CLUSTER_NAME="k8s"
KCONFIG="/etc/kubernetes/bootstrap.kubeconfig"
KUSER="kubelet-bootstrap"
kubectl config set-cluster ${CLUSTER_NAME} --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 --kubeconfig=${KCONFIG}
kubectl config set-context ${KUSER}@${CLUSTER_NAME} --cluster=${CLUSTER_NAME} --user=${KUSER} --kubeconfig=${KCONFIG}
kubectl config use-context ${KUSER}@${CLUSTER_NAME} --kubeconfig=${KCONFIG}
chown kubernetes:kubernetes ${KCONFIG}
chmod 0600 ${KCONFIG}
kubectl config view --kubeconfig=${KCONFIG}
#### # publish the CA certificate and bootstrap kubeconfig in the cluster-info ConfigMap
# kubelet reads this ConfigMap when joining the cluster.
kubectl -n kube-public create configmap cluster-info --from-file /etc/kubernetes/pki/ca.crt --from-file /etc/kubernetes/bootstrap.kubeconfig
# allow the anonymous user to read cluster-info.
kubectl -n kube-public create role system:bootstrap-signer-clusterinfo --verb get --resource configmaps
kubectl -n kube-public create rolebinding kubeadm:bootstrap-signer-clusterinfo --role system:bootstrap-signer-clusterinfo --user system:anonymous
# bind the system:node-bootstrapper role to the system:bootstrappers group.
kubectl create clusterrolebinding kubeadm:kubelet-bootstrap --clusterrole system:node-bootstrapper --group system:bootstrappers
#### # append the token to bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=/etc/kubernetes/bootstrap.kubeconfig
### # yum update
f_log "yum update"
yum -y update
### # install Docker
f_log "install Docker"
yum install -y yum-utils
# the following are required when using the devicemapper storage driver
yum install -y device-mapper-persistent-data lvm2
# add the Docker repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# also list older versions
yum list docker-ce --showduplicates | sort -r
# install 18.03 explicitly
yum install -y docker-ce-18.03.1.ce-1.el7.centos
# check where the docker-ce rpm put its systemd unit file (should be /usr/lib/systemd/system/docker.service)
rpm -ql docker-ce | grep docker.service
# edit the docker service systemd unit file
cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
# add proxy settings if configured
if [ ! -z "$PROXY_SERV_PORT" ]; then
f_log "add proxy settings to /usr/lib/systemd/system/docker.service"
awk -v envline="Environment=\"http_proxy=http://${PROXY_USER_PASS}${PROXY_SERV_PORT}\" \"https_proxy=http://${PROXY_USER_PASS}${PROXY_SERV_PORT}\" \"no_proxy=${PROXY_SKIP_PROXY}\"" '{ if ( index($0, "[Service]") == 1 ) { print $0 ; print envline } else { print $0 } }' < /usr/lib/systemd/system/docker.service > /usr/lib/systemd/system/docker.service.new
/bin/mv /usr/lib/systemd/system/docker.service.new /usr/lib/systemd/system/docker.service
fi
systemctl daemon-reload
systemctl show docker --property Environment
systemctl enable docker
systemctl start docker
cat /proc/$(pidof dockerd)/environ
systemctl status docker -l
docker version
kubectl version
docker run hello-world
### # install CNI
f_log "install CNI"
mkdir -p /etc/cni/net.d /opt/cni/bin/
cd /tmp
if [ -r /vagrant/cni-amd64-v0.6.0.tgz ]; then
cp /vagrant/cni-amd64-v0.6.0.tgz .
else
curl -OL https://github.com/containernetworking/cni/releases/download/v0.6.0/cni-amd64-v0.6.0.tgz
fi
if [ -r /vagrant/cni-plugins-amd64-v0.7.1.tgz ]; then
cp /vagrant/cni-plugins-amd64-v0.7.1.tgz .
else
curl -OL https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz
fi
cd /opt/cni/bin
tar zxf /tmp/cni-amd64-v0.6.0.tgz
tar zxf /tmp/cni-plugins-amd64-v0.7.1.tgz
chmod +x /opt/cni/bin/*
cat >/etc/cni/net.d/99-loopback.conf <<EOF
{
"type": "loopback"
}
EOF
### # auto-approve node CSRs
f_log "auto-approve node CSRs"
cat <<EOF | kubectl create -f -
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
EOF
HOSTNAME=master
cat <<EOF | kubectl create -f -
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ${HOSTNAME}-node-client-cert-renewal
subjects:
- kind: User
  name: system:node:${HOSTNAME}
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
EOF
cat <<EOF | kubectl create -f -
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
EOF
HOSTNAME=master
cat <<EOF | kubectl create -f -
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ${HOSTNAME}-server-client-cert-renewal
subjects:
- kind: User
  name: system:node:${HOSTNAME}
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF
### # install kubelet
f_log "install kubelet"
yum -y install conntrack-tools
DNS_SERVER_IP=10.0.0.10
PAUSE_IMAGE=k8s.gcr.io/pause-amd64:3.1
DNS_DOMAIN="cluster.local"
cat > /etc/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
User=root
Group=root
ExecStart=/usr/bin/kubelet \\
--feature-gates=RotateKubeletServerCertificate=true \\
--address=0.0.0.0 \\
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \\
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
--pod-manifest-path=/etc/kubernetes/manifests \\
--network-plugin=cni \\
--cni-conf-dir=/etc/cni/net.d \\
--cni-bin-dir=/opt/cni/bin \\
--cluster-dns=${DNS_SERVER_IP} \\
--cluster-domain=${DNS_DOMAIN} \\
--authorization-mode=Webhook \\
--client-ca-file=/etc/kubernetes/pki/ca.crt \\
--cert-dir=/etc/kubernetes/pki \\
--rotate-certificates=true \\
--v=2 \\
--cgroup-driver=cgroupfs \\
--pod-infra-container-image=${PAUSE_IMAGE} \\
--tls-min-version=VersionTLS12 \\
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 \\
--allow-privileged=true \\
--anonymous-auth=false
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet -l
kubectl get node
### # kube-proxy
f_log "kube-proxy "
MASTER_IP=192.168.33.10
KUBERNETES_PUBLIC_ADDRESS=$MASTER_IP
CLUSTER_NAME="k8s"
KCONFIG="/etc/kubernetes/kube-proxy.kubeconfig"
KUSER="system:kube-proxy"
kubectl config set-cluster ${CLUSTER_NAME} --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 --kubeconfig=${KCONFIG}
kubectl config set-credentials ${KUSER} --client-certificate=/etc/kubernetes/pki/kube-proxy.crt --client-key=/etc/kubernetes/pki/kube-proxy.key --embed-certs=true --kubeconfig=${KCONFIG}
kubectl config set-context ${KUSER}@${CLUSTER_NAME} --cluster=${CLUSTER_NAME} --user=${KUSER} --kubeconfig=${KCONFIG}
kubectl config use-context ${KUSER}@${CLUSTER_NAME} --kubeconfig=${KCONFIG}
chown kubernetes:kubernetes ${KCONFIG}
chmod 0600 ${KCONFIG}
kubectl config view --kubeconfig=${KCONFIG}
# give the kube-proxy service account the system:node-proxier ClusterRole.
kubectl create clusterrolebinding kubeadm:node-proxier --clusterrole system:node-proxier --serviceaccount kube-system:kube-proxy
CLUSTER_CIDR="10.244.0.0/16"
cat > /etc/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
User=root
Group=root
ExecStart=/usr/bin/kube-proxy \\
--feature-gates=RotateKubeletServerCertificate=true \\
--bind-address 0.0.0.0 \\
--cluster-cidr=${CLUSTER_CIDR} \\
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \\
--v=2
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy -l
### # network provider (flannel)
f_log "network provider (flannel)"
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl -n kube-system get po
### # CoreDNS
f_log "CoreDNS"
f_log "From Kubernetes 1.10, CoreDNS (instead of kube-dns) is the standard for service discovery."
f_log "install the jq command"
yum -y install epel-release
yum -y install jq
cd /tmp
curl -LO https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed
curl -LO https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/deploy.sh
chmod +x deploy.sh
DNS_SERVER_IP="10.0.0.10"
SERVICE_CLUSTER_IP_RANGE="10.0.0.0/16"
DNS_DOMAIN="cluster.local"
./deploy.sh -r $SERVICE_CLUSTER_IP_RANGE -i $DNS_SERVER_IP -d $DNS_DOMAIN > coredns.yaml
kubectl apply -f coredns.yaml
kubectl -n kube-system get svc,pod
f_log "setup completed"
|
Running the setup
Start vagrant with the following command; the setup script runs automatically.
A large amount of data is downloaded, so this takes a while…
|
vagrant up --provider=virtualbox
|
When output like the following appears, the setup is complete.
|
master: ■ setup completed
|
Now let's try using it a little.
Work from a PowerShell with normal user privileges.
|
# change the PowerShell code page to UTF-8
chcp 65001
|
SSH into the vagrant machine.
|
vagrant ssh
|
With vagrant, you initially log in as the user vagrant.
Operate kubernetes and docker as root.
|
sudo bash
|
Enter the following command to check whether the Kubernetes system components are running.
|
kubectl -n kube-system get svc,pod
|
The output looks like the following.
|
[root@master vagrant]# kubectl -n kube-system get svc,pod
NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/kube-dns   ClusterIP   10.0.0.10    <none>        53/UDP,53/TCP   1m
NAME                              READY     STATUS    RESTARTS   AGE
pod/coredns-794b5f8589-tj6hl      1/1       Running   0          1m
pod/coredns-794b5f8589-vqr4x      1/1       Running   0          1m
pod/kube-flannel-ds-amd64-mp8vc   1/1       Running   0          22m
[root@master vagrant]#
|
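As an extra smoke test that is not part of the original procedure, you could try scheduling a pod yourself. A sketch, run as root inside the VM and assuming the node can pull the nginx image from the Internet:
|
# create a deployment and watch a pod land on the node
kubectl run nginx --image=nginx --port=80
kubectl get deploy,pod -o wide
# expose it as a NodePort service and check the assigned port
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc nginx
|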
That's all.