Using ArgoCD to find Kubernetes resources using deprecated API versions

While Kubernetes generally strives for backward compatibility between minor releases, every now and then there are larger changes to the Kubernetes API versions (by API versions I am referring to the resources inside the cluster, not external language bindings). For example, Kubernetes 1.22 removed the v1beta1 versions of a large swath of API groups.

For us as cluster administrators, this means we need to update all our components to the newer APIs, since there is usually a replacement API available. The difficult part is usually not the update itself, but figuring out which resources are affected. In a full-fledged Kubernetes cluster with Ingresses, Policies, RBAC, Monitoring, Logging etc. there are many resources even when the cluster is “empty”!

Since we deploy almost all components of our clusters with ArgoCD – which keeps track of the GVK (GroupVersionKind) of all deployed resources in each Application’s status – we can query the ArgoCD applications for resources still using deprecated APIs. The following script does exactly that for each API listed in the DEPRECATED_APIS variable and prints the affected resources. I compiled this list from the v1.22 deprecation guide.

#!/bin/bash

# https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-22
DEPRECATED_APIS="
apiextensions.k8s.io/v1beta1
admissionregistration.k8s.io/v1beta1
apiregistration.k8s.io/v1beta1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1beta1
certificates.k8s.io/v1beta1
coordination.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1beta1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1beta1
"

# get all ArgoCD applications across all namespaces
apps_json="$(oc get applications -A -o json)"

for api in $DEPRECATED_APIS; do
    echo "${api}:"
    group=$(echo "$api" | cut -d/ -f1)
    version=$(echo "$api" | cut -d/ -f2)
    echo "$apps_json" | jq -r --arg group "$group" --arg version "$version" \
    '.items[].status.resources[] | select( (.group == $group) and (.version == $version)) | .name'
    echo "---"
done

For one of our clusters, the output looks like this:

apiextensions.k8s.io/v1beta1:
dnsendpoints.dns-manager.webservices.cern.ch
dnsendpoints.dns-manager.webservices.cern.ch
backups.velero.io
backupstoragelocations.velero.io
deletebackuprequests.velero.io
downloadrequests.velero.io
podvolumebackups.velero.io
podvolumerestores.velero.io
resticrepositories.velero.io
restores.velero.io
schedules.velero.io
serverstatusrequests.velero.io
volumesnapshotlocations.velero.io
---
admissionregistration.k8s.io/v1beta1:
automate-eos-mounts-opa-policy
cephfs-opa-policy
ingress-opa-policy
---
apiregistration.k8s.io/v1beta1:
---
authentication.k8s.io/v1beta1:
---
authorization.k8s.io/v1beta1:
---
certificates.k8s.io/v1beta1:
---
coordination.k8s.io/v1beta1:
---
extensions/v1beta1:
---
networking.k8s.io/v1beta1:
---
rbac.authorization.k8s.io/v1beta1:
automate-eos-mounts-opa-policy-mgmt
cephfs-opa-policy-mgmt
external-dns
ingress-opa-policy-mgmt
landb-operator-landb-operator
---
scheduling.k8s.io/v1beta1:
---
storage.k8s.io/v1beta1:
---

Once we have identified the resources using deprecated APIs, we need to update their corresponding sources (in our case Helm charts) to use the new versions. For example, we can see that both our custom dns-manager operator and the velero Helm chart are still using the outdated apiextensions.k8s.io/v1beta1 API.
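
To map each affected resource back to the Application (and hence the chart) that owns it, the jq filter from the script can be extended to also print the Application name and the kind. This is just a sketch, reusing the apps_json variable from the script above and the same status layout, with apiextensions.k8s.io/v1beta1 as the example:

# prints "<application>: <kind>/<resource name>" for one deprecated API
echo "$apps_json" | jq -r --arg group "apiextensions.k8s.io" --arg version "v1beta1" \
    '.items[]
     | .metadata.name as $app
     | .status.resources[]
     | select((.group == $group) and (.version == $version))
     | "\($app): \(.kind)/\(.name)"'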

So far we have only checked whether the resources currently deployed in our cluster are based on a deprecated API. One additional aspect that needs to be considered is whether any clients are still using the deprecated APIs. For example, a client might be querying the Kubernetes API to list Ingresses in networking.k8s.io/v1beta1. After upgrading to the new Kubernetes control plane, this API call will fail, and it is unclear how individual clients handle that (hint: probably not gracefully).
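
A quick way to see whether a given group/version is still being served at all is to ask the API server’s discovery endpoint directly. A small sketch, using networking.k8s.io/v1beta1 as the example:

# check whether the API server still serves a given group/version
if oc get --raw /apis/networking.k8s.io/v1beta1 >/dev/null 2>&1; then
    echo "networking.k8s.io/v1beta1 is still served"
else
    echo "networking.k8s.io/v1beta1 is no longer served - clients requesting it will fail"
fi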

To identify clients using APIs which will be removed, you need to enable Audit Logging on your API server.

Audit Logging is not only enabled by default in OpenShift (Red Hat’s Kubernetes distribution), but OpenShift also provides a super-helpful API extension to query API-call statistics directly from the CLI! The APIRequestCounts resource aggregates the API calls received by the Kubernetes API server for a particular GVK.

# Note: oc is the OpenShift-equivalent of kubectl
$ oc get APIRequestCounts
NAME                                                          REMOVEDINRELEASE   REQUESTSINCURRENTHOUR   REQUESTSINLAST24H
...
customresourcedefinitions.v1.apiextensions.k8s.io                                1814                    11939
customresourcedefinitions.v1beta1.apiextensions.k8s.io        1.22               905                     4465
daemonsets.v1.apps                                                               507                     3712
...
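
The REMOVEDINRELEASE column already flags the APIs that are scheduled for removal. To list only those, the underlying status fields can be queried directly; a sketch, assuming the printer columns map to the status fields removedInRelease and requestCount (the total over the last 24 hours):

$ oc get apirequestcounts \
    -o jsonpath='{range .items[?(@.status.removedInRelease!="")]}{.status.removedInRelease}{"\t"}{.status.requestCount}{"\t"}{.metadata.name}{"\n"}{end}' \
    | column -t -NREMOVEDINRELEASE,REQUESTS,NAME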

This makes it trivial not only to find out which APIs are in use, but also which clients are making these API requests:

$ oc get apirequestcounts ingresses.v1beta1.networking.k8s.io \
  -o jsonpath='{range .status.currentHour..byUser[*]}{..byVerb[*].verb}{","}{.username}{"\n"}{end}' \
  | sort -k 2 -t, -u | column -t -s, -NVERBS,USERNAME

VERBS  USERNAME
watch  system:serviceaccount:openshift-cern-cert-manager:cert-manager

From this example we can see that we need to upgrade our cert-manager deployment, because cert-manager is still querying for the v1beta1 version of Ingresses.
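
To avoid running the per-API query by hand, the two queries can be combined into a small loop over every API that is flagged for removal. Again only a sketch, built on the same field names as above:

#!/bin/bash

# for every API scheduled for removal, list the clients still calling it
for api in $(oc get apirequestcounts \
        -o jsonpath='{range .items[?(@.status.removedInRelease!="")]}{.metadata.name}{"\n"}{end}'); do
    echo "${api}:"
    oc get apirequestcounts "$api" \
        -o jsonpath='{range .status.currentHour..byUser[*]}{..byVerb[*].verb}{","}{.username}{"\n"}{end}' \
        | sort -k 2 -t, -u | column -t -s, -NVERBS,USERNAME
    echo "---"
done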

Happy upgrading!
