
.NET 6 Ocelot and Kubernetes
Preface
This thing is a real trap. It cost me an entire day.
First, here is the flow we want to achieve:
A request hits the svc/ocelot service and is forwarded to our ocelot Pod; ocelot then forwards it on to the svc/ocelotapi service, which load-balances across the ocelotapi replicas.
Deploying the API project
Create the dev namespace:
kubectl create ns dev
The container here just exposes the default weather-forecast endpoint on port 80, and I run 2 replicas for load balancing. Deploy it to the k8s cluster with kubectl apply -f api.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ocelotapi
  namespace: dev
  labels:
    name: ocelotapi
spec:
  replicas: 2
  selector:
    matchLabels:
      name: ocelotapi
  template:
    metadata:
      labels:
        name: ocelotapi
    spec:
      containers:
      - name: ocelotapi
        image: aidasi/ocelotapi:v1
        ports:
        - containerPort: 80
        imagePullPolicy: Always
---
kind: Service
apiVersion: v1
metadata:
  name: ocelotapi
  namespace: dev
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    name: ocelotapi
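After applying the manifest, a quick sanity check with standard kubectl commands (the names match the manifest above) should show two ready ocelotapi pods and the ocelotapi Service:
kubectl get deploy,pods,svc -n dev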
Creating the K8sOcelot project
Create a K8sOcelot API project targeting .NET 6, add the dependencies below, and enable Docker support.
<ItemGroup>
  <PackageReference Include="Microsoft.VisualStudio.Azure.Containers.Tools.Targets" Version="1.15.1" />
  <PackageReference Include="Ocelot" Version="18.0.0" />
  <PackageReference Include="Ocelot.Provider.Kubernetes" Version="18.0.0" />
  <PackageReference Include="Ocelot.Provider.Polly" Version="18.0.0" />
</ItemGroup>
Next, create ocelot.json. When the root path is requested with GET, the request is forwarded to the /weatherforecast endpoint of the ocelotapi service.
Note that the service discovery type chosen here is kube, and I set the namespace to dev; specify whichever namespace your services live in.
{
  "Routes": [
    {
      "UpstreamPathTemplate": "/",
      "UpstreamHttpMethod": [ "Get" ],
      "DownstreamPathTemplate": "/weatherforecast",
      "DownstreamScheme": "http",
      "ServiceName": "ocelotapi"
    }
  ],
  "GlobalConfiguration": {
    "ServiceDiscoveryProvider": {
      "Namespace": "dev",
      "Type": "kube"
    }
  }
}
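When service discovery returns multiple instances, Ocelot picks one per request according to the route's load balancer. If you want to choose the strategy explicitly, a route can also carry a LoadBalancerOptions block; this is a standard Ocelot route option and purely optional here. A sketch of the same route with RoundRobin selected (LeastConnection is another common value):
{
  "UpstreamPathTemplate": "/",
  "UpstreamHttpMethod": [ "Get" ],
  "DownstreamPathTemplate": "/weatherforecast",
  "DownstreamScheme": "http",
  "ServiceName": "ocelotapi",
  "LoadBalancerOptions": { "Type": "RoundRobin" }
}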
According to the official docs, the following code should be enough to run it in Kubernetes; in practice it is not.
using K8sOcelot;
using Ocelot.DependencyInjection;
using Ocelot.Middleware;
using Ocelot.Provider.Kubernetes;

var builder = WebApplication.CreateBuilder(args);

builder.Configuration
    .SetBasePath(builder.Environment.ContentRootPath)
    .AddJsonFile("ocelot.json")
    .AddEnvironmentVariables();

builder.Services
    .AddOcelot()
    .AddKubernetes();

var app = builder.Build();

app.UseOcelot().Wait();
app.Run();
When we send a request this way, an exception is thrown with the following error.
no previous request id, message: Error Code: UnableToFindServiceDiscoveryProviderError Message: Unable to find service discovery provider for type: kube errors found in ResponderMiddleware. Setting error response for request path:/, request method: GET
The cause is a major refactoring the author made after version 14, which is what produces this error. You can downgrade to a version below 14.
Alternatively, the author provides a workaround: add a new class, OcelotBuilderExtensions, and call AddKubernetesFixed in Program instead of AddKubernetes.
public static class OcelotBuilderExtensions
{
    // Resolve the real Kubernetes provider, then wrap it so the provider's
    // type name matches the configured discovery type ("kube" / "pollkube").
    private static readonly ServiceDiscoveryFinderDelegate FixedKubernetesProviderFactoryGet = (provider, config, reroute) =>
    {
        var serviceDiscoveryProvider = KubernetesProviderFactory.Get(provider, config, reroute);
        if (serviceDiscoveryProvider is KubernetesServiceDiscoveryProvider)
        {
            serviceDiscoveryProvider = new Kube(serviceDiscoveryProvider);
        }
        else if (serviceDiscoveryProvider is PollKubernetes)
        {
            serviceDiscoveryProvider = new PollKube(serviceDiscoveryProvider);
        }
        return serviceDiscoveryProvider;
    };

    public static IOcelotBuilder AddKubernetesFixed(this IOcelotBuilder builder, bool usePodServiceAccount = true)
    {
        builder.Services.AddSingleton(FixedKubernetesProviderFactoryGet);
        builder.Services.AddKubeClient(usePodServiceAccount);
        return builder;
    }

    // Wrapper named "Kube" so Ocelot can find a provider for type "kube".
    private class Kube : IServiceDiscoveryProvider
    {
        private readonly IServiceDiscoveryProvider serviceDiscoveryProvider;

        public Kube(IServiceDiscoveryProvider serviceDiscoveryProvider)
        {
            this.serviceDiscoveryProvider = serviceDiscoveryProvider;
        }

        public Task<List<Service>> Get()
        {
            return this.serviceDiscoveryProvider.Get();
        }
    }

    // Wrapper named "PollKube" for the polling variant ("pollkube").
    private class PollKube : IServiceDiscoveryProvider
    {
        private readonly IServiceDiscoveryProvider serviceDiscoveryProvider;

        public PollKube(IServiceDiscoveryProvider serviceDiscoveryProvider)
        {
            this.serviceDiscoveryProvider = serviceDiscoveryProvider;
        }

        public Task<List<Service>> Get()
        {
            return this.serviceDiscoveryProvider.Get();
        }
    }
}
builder.Services
    .AddOcelot()
    .AddKubernetesFixed();
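The gateway also needs a Dockerfile. If you enabled Docker support, Visual Studio already generated one; for reference, a minimal sketch in the same style (the folder layout and the K8sOcelot project/assembly names are assumptions based on this post, so prefer the generated file if yours differs):
# build stage; the build context is the solution root, as in the docker build command further down
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["K8sOcelot/K8sOcelot.csproj", "K8sOcelot/"]
RUN dotnet restore "K8sOcelot/K8sOcelot.csproj"
COPY . .
RUN dotnet publish "K8sOcelot/K8sOcelot.csproj" -c Release -o /app/publish

# runtime stage: listens on port 80 by default, matching containerPort in ocelot.yml
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
EXPOSE 80
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "K8sOcelot.dll"]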
Next, build it into a container image and, after setting up the RBAC role binding, deploy it to the cluster with ocelot.yml.
The manifest and related commands are below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ocelot
  namespace: dev
  labels:
    name: ocelot
spec:
  replicas: 1
  selector:
    matchLabels:
      name: ocelot
  template:
    metadata:
      labels:
        name: ocelot
    spec:
      containers:
      - name: ocelot
        image: aidasi/ocelot:v1
        ports:
        - containerPort: 80
        imagePullPolicy: Always
---
kind: Service
apiVersion: v1
metadata:
  name: ocelot
  namespace: dev
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    name: ocelot
# Build the image
docker build -t aidasi/ocelot:v1 -f ../K8sOcelot/Dockerfile ..
# Push it
docker push aidasi/ocelot:v1
# Bind role permissions (generally not recommended; better to create a dedicated role for it)
kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --user=admin --user=kubelet --group=system:serviceaccounts
If you are stricter about permissions, you can instead create a ServiceAccount and bind it to the cluster-admin cluster role. For example:
kind: ServiceAccount
apiVersion: v1
metadata:
  name: ocelot-pod
  namespace: your-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-operator-role
  namespace: your-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: ocelot-pod
  namespace: your-namespace
# Create the ServiceAccount and binding
kubectl apply -f .\OcelotRoleBind.yaml.yml
Then add the serviceAccountName to the Deployment's pod spec (under spec.template.spec).
spec:
  template:
    metadata:
      labels:
        name: ocelot
    spec:
      serviceAccountName: ocelot-pod
      containers:
# Deploy
kubectl apply -f .\ocelot.yml
Once deployed, you will see the corresponding Service and Pods.
Next, forward the ocelot service to the local machine with kubectl port-forward --address 0.0.0.0 -n dev svc/ocelot 5190:80 and test whether ocelot can forward correctly to the svc/ocelotapi service.
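For example, from another shell on the same machine (5190 matches the port-forward above; the exact response body depends on your API):
curl http://localhost:5190/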
Data comes back, so the forwarding is working.
Configuring HTTPS
Generate the certificate:
mkdir -p ${HOME}/.aspnet/https/
dotnet dev-certs https -ep ${HOME}/.aspnet/https/aspnetapp.pfx -p 123456
chmod 777 /root/.aspnet/https/
# On the worker node (ssh node01), create the same directory
ssh node01
mkdir -p ${HOME}/.aspnet/https/
chmod 777 /root/.aspnet/https/
# Back on the master node, copy the certificate over
scp /root/.aspnet/https/aspnetapp.pfx node01:/root/.aspnet/https/aspnetapp.pfx
Update api.yml and ocelot.yml as follows.
# api.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ocelotapi
  namespace: dev
  labels:
    name: ocelotapi
spec:
  replicas: 2
  selector:
    matchLabels:
      name: ocelotapi
  template:
    metadata:
      labels:
        name: ocelotapi
    spec:
      containers:
      - name: ocelotapi
        image: aidasi/ocelotapi:v1
        ports:
        - containerPort: 80
        imagePullPolicy: Always
        env:
        - name: ASPNETCORE_URLS
          value: "https://+:443;http://+:80"
        - name: ASPNETCORE_Kestrel__Certificates__Default__Path
          value: "/https/aspnetapp.pfx"
        - name: ASPNETCORE_Kestrel__Certificates__Default__Password
          value: "123456"
        volumeMounts:
        - mountPath: /https/
          name: httpsfile
      volumes:
      - name: httpsfile
        hostPath:
          path: /root/.aspnet/https/
          type: Directory
---
kind: Service
apiVersion: v1
metadata:
  name: ocelotapi
  namespace: dev
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    name: ocelotapi
# ocelot.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ocelot
  namespace: dev
  labels:
    name: ocelot
spec:
  replicas: 1
  selector:
    matchLabels:
      name: ocelot
  template:
    metadata:
      labels:
        name: ocelot
    spec:
      containers:
      - name: ocelot
        image: aidasi/ocelot:v1
        ports:
        - containerPort: 80
        imagePullPolicy: Always
        env:
        - name: ASPNETCORE_URLS
          value: "https://+:443;http://+:80"
        - name: ASPNETCORE_Kestrel__Certificates__Default__Path
          value: "/https/aspnetapp.pfx"
        - name: ASPNETCORE_Kestrel__Certificates__Default__Password
          value: "123456"
        volumeMounts:
        - mountPath: /https/
          name: httpsfile
      volumes:
      - name: httpsfile
        hostPath:
          path: /root/.aspnet/https/
          type: Directory
---
kind: Service
apiVersion: v1
metadata:
  name: ocelot
  namespace: dev
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    name: ocelot
Then run the following commands to apply the updates.
kubectl apply -f api.yml
kubectl apply -f ocelot.yml
Testing again, everything works.
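One way to spot-check the HTTPS endpoint directly is to port-forward the gateway pod's port 443 and hit it with curl (a sketch; the local port 5191 is an arbitrary choice, and -k is needed because the dev certificate is self-signed):
kubectl port-forward -n dev deploy/ocelot 5191:443
# in another shell
curl -k https://localhost:5191/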
Note: do not set the ASPNETCORE_ENVIRONMENT environment variable to Development.
Configuring Ingress
Write the Ingress configuration in ocelotIngress.yaml, forwarding requests on the /testpath path to the ocelot service.
The content is as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ocelot-ingress
  namespace: dev
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: ocelot
            port:
              number: 80
Then deploy it and check the result with the following commands.
kubectl apply -f ocelotIngress.yaml
kubectl describe ingress ocelot-ingress -n dev
We find that no Address is populated and it reports errors. That is because an Ingress resource cannot work on its own; it depends on an Ingress Controller, and creating the Ingress Controller also requires configuring a Default Backend. You can create an nginx ingress controller with the following:
kubectl apply -f https://gitee.com/idcf-devops-on-kubernetes/workshop-assets/raw/master/chapter2/assets/ingress-controller-mandatory.yaml
kubectl apply -f https://gitee.com/idcf-devops-on-kubernetes/workshop-assets/raw/master/chapter2/assets/ingress-controller-cloud-generic.yaml
You can check the created resources with the following command.
kubectl get pod,svc,deploy -n ingress-nginx
If you find that the EXTERNAL-IP of service/ingress-nginx stays in Pending, you can configure the host's external address manually via externalIPs.
kubectl edit service/ingress-nginx -n ingress-nginx
...
spec:
  clusterIP: 10.96.81.230
  clusterIPs:
  - 10.96.81.230
  externalIPs:
  - 10.9.2.98
...
Then recreate our ocelot Ingress and it will work:
kubectl delete -f ocelotIngress.yaml
kubectl create -f ocelotIngress.yaml
If you run into the error service "ingress-nginx-controller-admission" not found, it is because you are using a relatively new ingress-nginx and the resources under validatingwebhookconfigurations were not removed when it was deleted.
The fix:
kubectl get validatingwebhookconfigurations
# Delete the related ingress-nginx-admission
kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
Looking at the configuration again, it still reports <error: endpoints "default-http-backend" not found>. The reason is that my cluster sits on an internal network; if you are on Tencent Cloud or Alibaba Cloud you will not hit this problem.
So I have to switch to the NodePort approach instead. Create ingress-controller-cloud-generic-node.yaml with the following content:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
    nodePort: 30080
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
    nodePort: 30443
# Remove the old LoadBalancer-style service
kubectl delete -f https://gitee.com/idcf-devops-on-kubernetes/workshop-assets/raw/master/chapter2/assets/ingress-controller-cloud-generic.yaml
# Apply the NodePort replacement
kubectl apply -f .\ingress-controller-cloud-generic-node.yaml
Checking again, everything is fine now. Accessing it from outside the cluster through the NodePort also works.
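For example (a sketch: <node-ip> stands for the address of any cluster node, 30080 is the nodePort defined above, and /testpath is the Ingress path, which the rewrite-target annotation rewrites to /):
curl http://<node-ip>:30080/testpath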
Feel free to join the QQ groups to discuss tech: group 1: 677373950 (full; you can apply, but requests won't go through), group 2: 656732739.

