- Ingress
- Ingress Controller (Edge Proxy, Pods): for the Ingress resource to work, the cluster must have an ingress controller running. Ingress controllers are not started automatically with a cluster.
- AWS: Ingress = Application Load Balancer (L7 HTTP), Service of Type=LoadBalancer = Network Load Balancer (L4). The AWS ingress controller itself runs as a k8s Deployment of Pods.
- Nginx: it also runs as Pods. In AWS we use a Network Load Balancer (NLB) to expose the NGINX ingress controller behind a Service of Type=LoadBalancer (see the Service sketch after this list).
- Use Service.Type=LoadBalancer
- The big downside is that each service you expose with a LoadBalancer will get its own IP address, and you have to pay for a LoadBalancer per exposed service, which can get expensive!
- Use Service.Type=NodePort
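As a rough illustration of the LoadBalancer option above, here is a minimal sketch using the Go types from k8s.io/api (the name, namespace, and NLB annotation value are assumptions modeled on a stock ingress-nginx install, not a definitive manifest). Switching Type to NodePort gives the cheaper alternative that avoids one cloud load balancer per exposed Service.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// A sketch of the Service that exposes the NGINX ingress controller pods.
// With Type=LoadBalancer on AWS, the cloud controller provisions a single
// load balancer for this one Service; HTTP routing to backends then happens
// inside the ingress controller, so each backend does not need its own LB.
var nginxIngressService = &corev1.Service{
	ObjectMeta: metav1.ObjectMeta{
		Name:      "ingress-nginx-controller", // assumed name
		Namespace: "ingress-nginx",            // assumed namespace
		Annotations: map[string]string{
			// Ask AWS for an NLB (L4) instead of the legacy Classic ELB.
			"service.beta.kubernetes.io/aws-load-balancer-type": "nlb",
		},
	},
	Spec: corev1.ServiceSpec{
		Type:     corev1.ServiceTypeLoadBalancer, // ServiceTypeNodePort is the cheaper option
		Selector: map[string]string{"app.kubernetes.io/name": "ingress-nginx"},
		Ports: []corev1.ServicePort{
			{Name: "http", Port: 80, TargetPort: intstr.FromInt(80)},
			{Name: "https", Port: 443, TargetPort: intstr.FromInt(443)},
		},
	},
}

func main() {} // declaration-only sketch; in practice the equivalent YAML is applied with kubectl
```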
Networking
CNI primarily works at L3/L4, whereas a service mesh works at L7.
There are lots of different kinds of CNI plugins (a minimal plugin skeleton is sketched after this list), but the two main ones are:
- Network plugins, which are responsible for connecting pods to the network
- IPAM (IP Address Management) plugins, which are responsible for allocating pod IP addresses
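To make that split concrete, here is a minimal sketch of a CNI network-plugin entry point built on the containernetworking/cni skeleton library; the stubbed-out bodies and comments are assumptions about what a real plugin would do, not a working implementation.

```go
package main

import (
	"github.com/containernetworking/cni/pkg/skel"
	"github.com/containernetworking/cni/pkg/types"
	types100 "github.com/containernetworking/cni/pkg/types/100"
	"github.com/containernetworking/cni/pkg/version"
)

// cmdAdd is invoked by the container runtime when a pod sandbox is created.
// A real network plugin would create a veth pair, move one end into the pod's
// network namespace (args.Netns), and delegate to an IPAM plugin for the pod
// IP; this stub just prints an empty result on stdout, as the CNI spec requires.
func cmdAdd(args *skel.CmdArgs) error {
	result := &types100.Result{CNIVersion: types100.ImplementedSpecVersion}
	return types.PrintResult(result, result.CNIVersion)
}

// cmdDel tears down whatever cmdAdd created (interfaces, IP allocations).
func cmdDel(args *skel.CmdArgs) error { return nil }

// cmdCheck verifies that the networking set up by cmdAdd is still in place.
func cmdCheck(args *skel.CmdArgs) error { return nil }

func main() {
	// The runtime talks to the plugin over stdin/stdout and environment
	// variables; skel handles that protocol and dispatches to the callbacks.
	skel.PluginMain(cmdAdd, cmdCheck, cmdDel, version.All, "example network plugin")
}
```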
Service Mesh
Service Mesh implementation: before the sidecar proxy container and the application container are started, an init container runs first. The init container sets up iptables rules (the default traffic-interception method in Istio; eBPF, IPVS, etc. can also be used) to intercept traffic entering the pod and redirect it to the Envoy sidecar proxy. All TCP traffic is intercepted by the sidecar (Envoy currently only supports TCP traffic); traffic of other protocols passes through as originally requested.
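A minimal sketch of what such an init container does, assuming Istio's default REDIRECT mode and its usual sidecar ports (15006 inbound, 15001 outbound); the real istio-iptables program builds dedicated ISTIO_* chains and has many exclusions, so this only shows the core idea.

```go
package main

import (
	"log"
	"os/exec"
)

// Runs inside the pod's network namespace with NET_ADMIN, before the app and
// the sidecar start. Because every container in the pod shares this netns,
// the rules below also apply to traffic from the application container.
func main() {
	rules := [][]string{
		// Inbound: TCP arriving at the pod is redirected to Envoy's inbound listener.
		{"-t", "nat", "-A", "PREROUTING", "-p", "tcp", "-j", "REDIRECT", "--to-ports", "15006"},
		// Outbound: TCP the application sends is redirected to Envoy's outbound listener.
		{"-t", "nat", "-A", "OUTPUT", "-p", "tcp", "-j", "REDIRECT", "--to-ports", "15001"},
	}
	for _, r := range rules {
		if out, err := exec.Command("iptables", r...).CombinedOutput(); err != nil {
			log.Fatalf("iptables %v failed: %v\n%s", r, err, out)
		}
	}
}
```

In the real implementation, traffic originating from the sidecar's own UID is excluded from the OUTPUT rule so that Envoy's upstream connections are not re-intercepted in a loop.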
Comparison: sidecar proxy vs. per-node proxy vs. per-service-account-per-node proxy vs. shared remote proxy with micro-proxy: https://www.solo.io/blog/ebpf-for-service-mesh/
- consider resource overhead / feature isolation / security granularity / upgrade impact
- For Linkerd: Per-host proxies are significantly worse than sidecars https://buoyant.io/2022/06/07/ebpf-sidecars-and-the-future-of-the-service-mesh
kube-proxy
kube-proxy is responsible for updating the iptables rules on each node of the cluster, so that traffic to a Service's virtual IP is DNATed to one of its backend pod IPs. https://betterprogramming.pub/k8s-a-closer-look-at-kube-proxy-372c4e8b090
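As an illustration of what those rules look like in iptables mode (the chain suffixes, ClusterIP, and endpoint IPs below are made up), this sketch prints the per-endpoint rules kube-proxy would install for a Service with three endpoints: each endpoint rule uses the statistic match, and the i-th of n rules gets probability 1/(n-i) so the overall selection is uniform.

```go
package main

import "fmt"

// Prints the shape of the NAT rules kube-proxy (iptables mode) installs for a
// Service 10.96.0.10:80 with three endpoints. Chain names and addresses are
// illustrative, not real kube-proxy hashes.
func main() {
	endpoints := []string{"10.244.1.5", "10.244.2.7", "10.244.3.9"}

	// ClusterIP traffic jumps to the per-Service chain.
	fmt.Println(`-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp --dport 80 -j KUBE-SVC-EXAMPLE`)

	n := len(endpoints)
	for i, ep := range endpoints {
		sep := fmt.Sprintf("KUBE-SEP-%d", i)
		if i < n-1 {
			// Probability 1/(n-i): 1/3 for the first rule, 1/2 for the second...
			p := 1.0 / float64(n-i)
			fmt.Printf("-A KUBE-SVC-EXAMPLE -m statistic --mode random --probability %.5f -j %s\n", p, sep)
		} else {
			// ...and the last rule is unconditional, catching the remainder.
			fmt.Printf("-A KUBE-SVC-EXAMPLE -j %s\n", sep)
		}
		// Each endpoint chain DNATs to its pod IP:port.
		fmt.Printf("-A %s -p tcp -j DNAT --to-destination %s:80\n", sep, ep)
	}
}
```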
eBPF & io_uring
eBPF is a virtual machine embedded within the Linux kernel. It allows small programs to be loaded into the kernel and attached to hooks, which are triggered when some event occurs. This allows the behaviour of the kernel to be (sometimes heavily) customised. While the eBPF virtual machine is the same for each type of hook, the capabilities of the hooks vary considerably. Since loading programs into the kernel could be dangerous, the kernel runs every program through a very strict static verifier; the verifier sandboxes the program, ensuring it can only access allowed parts of memory and that it terminates quickly. https://projectcalico.docs.tigera.io/about/about-ebpf
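A minimal sketch of loading an eBPF program from Go with the cilium/ebpf library, mirroring the library's own trivial example: a socket-filter program whose bytecode just returns 0. If the instructions violated the verifier's rules, NewProgram would fail here instead of letting an unsafe program into the kernel.

```go
package main

import (
	"log"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/asm"
)

func main() {
	// Hand-assembled eBPF bytecode: set R0 (the return value) to 0 and exit.
	// Real programs are usually written in C and compiled with clang, but the
	// kernel only ever sees bytecode like this.
	spec := &ebpf.ProgramSpec{
		Type: ebpf.SocketFilter, // the hook type determines what the program is allowed to do
		Instructions: asm.Instructions{
			asm.LoadImm(asm.R0, 0, asm.DWord),
			asm.Return(),
		},
		License: "GPL",
	}

	// Loading triggers the in-kernel verifier; it rejects programs that touch
	// disallowed memory or cannot be shown to terminate.
	prog, err := ebpf.NewProgram(spec)
	if err != nil {
		log.Fatalf("verifier rejected program: %v", err)
	}
	defer prog.Close()

	log.Println("program loaded; attach it to a socket or hook to run it on events")
}
```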
io_uring supports linking operations, but there is no way to generically pass the result of one system call to the next. With a simple BPF program, the application can tell the kernel how the result of open is to be passed to read (including the error handling), which then allocates its own buffers and keeps reading until the entire file is consumed and finally closed: we can checksum, compress, or search an entire file with a single system call.
Routing
- Cloudflare --proxied--> AWS Route 53 --> ELB (Ingress-managed HA cloud LoadBalancer) --> EC2 instances (Target Group nodes) --> Ingress Controller Pods (can be a Deployment or a DaemonSet) --> actual backend Pods
- Don't use a DaemonSet when the cluster is very large - it adds extra burden, since each DaemonSet pod needs to connect to the k8s API server