restore-crd: use dynamic api to get k8s resource
Currently we use the `kubectl` command directly to get CRDs, CRD
names, and for other operations. Since we now have code to get k8s
resources using the dynamic API, let's use that instead of the
`kubectl` command.

One thing to note: the CLI is now stricter about the Ceph CRD types
it accepts. It expects the plural resource name, so where `cephcluster`
used to work we now need to be specific and pass `cephclusters`.

Signed-off-by: subhamkrai <srai@redhat.com>
subhamkrai committed Jan 25, 2024
1 parent a519901 commit f4144de
Showing 8 changed files with 121 additions and 130 deletions.
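For context, here is a minimal sketch (not from this commit) of what fetching a CR through the dynamic client looks like, and why the plural name now matters: the dynamic client is addressed by a GroupVersionResource, whose `Resource` field must be the plural resource name registered with the API server. The cluster name and namespace below are illustrative.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig the same way kubectl would.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The GVR takes the *plural* resource name: "cephclusters", not "cephcluster".
	gvr := schema.GroupVersionResource{Group: "ceph.rook.io", Version: "v1", Resource: "cephclusters"}
	cr, err := dyn.Resource(gvr).Namespace("rook-ceph").Get(context.TODO(), "my-cluster", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("found CR:", cr.GetName())
}
```

This is also why the command became stricter: `kubectl` resolves singular and short names through discovery, while a raw GVR lookup does not.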
10 changes: 5 additions & 5 deletions .github/workflows/go-test.yaml
@@ -130,15 +130,15 @@ jobs:
# First let's delete the cephCluster
kubectl -n rook-ceph delete cephcluster my-cluster --timeout 3s --wait=false
- kubectl rook-ceph -n rook-ceph restore-deleted cephcluster
+ kubectl rook-ceph -n rook-ceph restore-deleted cephclusters
tests/github-action-helper.sh wait_for_crd_to_be_ready_default
- name: Restore CRD with CRName
run: |
# First let's delete the cephCluster
kubectl -n rook-ceph delete cephcluster my-cluster --timeout 3s --wait=false
- kubectl rook-ceph -n rook-ceph restore-deleted cephcluster my-cluster
+ kubectl rook-ceph -n rook-ceph restore-deleted cephclusters my-cluster
tests/github-action-helper.sh wait_for_crd_to_be_ready_default
- name: Show Cluster State
@@ -153,7 +153,7 @@
set -ex
kubectl rook-ceph destroy-cluster
sleep 1
- kubectl get deployments -n rook-ceph --no-headers| wc -l | (read n && [ $n -le 1 ] || { echo "the crs could not be deleted"; exit 1;})
+ kubectl get deployments -n rook-ceph --no-headers| wc -l | (read n && [ $n -le 1 ] || { echo "the crs could not be deleted"; exit 1;})
- name: collect common logs
if: always()
@@ -286,15 +286,15 @@ jobs:
# First let's delete the cephCluster
kubectl -n test-cluster delete cephcluster my-cluster --timeout 3s --wait=false
- kubectl rook-ceph --operator-namespace test-operator -n test-cluster restore-deleted cephcluster
+ kubectl rook-ceph --operator-namespace test-operator -n test-cluster restore-deleted cephclusters
tests/github-action-helper.sh wait_for_crd_to_be_ready_custom
- name: Restore CRD with CRName
run: |
# First let's delete the cephCluster
kubectl -n test-cluster delete cephcluster my-cluster --timeout 3s --wait=false
- kubectl rook-ceph --operator-namespace test-operator -n test-cluster restore-deleted cephcluster my-cluster
+ kubectl rook-ceph --operator-namespace test-operator -n test-cluster restore-deleted cephclusters my-cluster
tests/github-action-helper.sh wait_for_crd_to_be_ready_custom
- name: Show Cluster State
3 changes: 1 addition & 2 deletions cmd/commands/root.go
@@ -21,15 +21,14 @@ import (
"regexp"
"strings"

"k8s.io/client-go/dynamic"

"github.com/rook/kubectl-rook-ceph/pkg/exec"
"github.com/rook/kubectl-rook-ceph/pkg/k8sutil"
"github.com/rook/kubectl-rook-ceph/pkg/logging"
rookclient "github.com/rook/rook/pkg/client/clientset/versioned"
"github.com/spf13/cobra"

v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/dynamic"
k8s "k8s.io/client-go/kubernetes"
_ "k8s.io/client-go/plugin/pkg/client/auth"
"k8s.io/client-go/tools/clientcmd"
91 changes: 16 additions & 75 deletions docs/crd.md
@@ -11,102 +11,43 @@ While the underlying Ceph data and daemons continue to be available, the CRs wil

The `restore-deleted` command has one required and one optional parameter:

- - `<CRD>`: The CRD type that is to be restored, such as CephCluster, CephFilesystem, CephBlockPool and so on.
- - `[CRName]`: The name of the specific CR which you want to restore since there can be multiple instances under the same CRD. For example, if there are multiple CephFilesystems stuck in deleting state, a specific filesystem can be restored: `restore-deleted cephfilesystem filesystem-2`.
+ - `<CRD>`: The CRD type that is to be restored, such as CephClusters, CephFilesystems, CephBlockPools and so on.
+ - `[CRName]`: The name of the specific CR which you want to restore since there can be multiple instances under the same CRD. For example, if there are multiple CephFilesystems stuck in deleting state, a specific filesystem can be restored: `restore-deleted cephfilesystems filesystem-2`.

```bash
kubectl rook-ceph restore-deleted <CRD> [CRName]
```

- ## CephCluster Restore Example
+ ## CephClusters Restore Example

```bash
- kubectl rook-ceph restore-deleted cephcluster
+ kubectl rook-ceph restore-deleted cephclusters

- Info: Detecting which resources to restore for crd "cephcluster"
+ Info: Detecting which resources to restore for crd "cephclusters"

Info: Restoring CR my-cluster
Warning: The resource my-cluster was found deleted. Do you want to restore it? yes | no

Info: skipped prompt since ROOK_PLUGIN_SKIP_PROMPTS=true
- Info: Scaling down the operator to 0
- Info: Backing up kubernetes and crd resources
- Info: Backed up crd cephcluster/my-cluster in file cephcluster-my-cluster.yaml
Info: Proceeding with restoring deleting CR
+ Info: Scaling down the operator
Info: Deleting validating webhook rook-ceph-webhook if present
- Info: Fetching the UID for cephcluster/my-cluster
- Info: Successfully fetched uid 8366f79a-ae1f-4679-a62b-8abc6e1528fa from cephcluster/my-cluster
- Info: Removing ownerreferences from resources with matching uid 8366f79a-ae1f-4679-a62b-8abc6e1528fa
+ Info: Removing ownerreferences from resources with matching uid 92c0e549-44fd-43db-80ba-5473db996208
Info: Removing owner references for secret cluster-peer-token-my-cluster
Info: Removed ownerReference for Secret: cluster-peer-token-my-cluster

- Info: Removing owner references for secret rook-ceph-admin-keyring
- Info: Removed ownerReference for Secret: rook-ceph-admin-keyring

- Info: Removing owner references for secret rook-ceph-config
- Info: Removed ownerReference for Secret: rook-ceph-config

- Info: Removing owner references for secret rook-ceph-crash-collector-keyring
- Info: Removed ownerReference for Secret: rook-ceph-crash-collector-keyring

- Info: Removing owner references for secret rook-ceph-mgr-a-keyring
- Info: Removed ownerReference for Secret: rook-ceph-mgr-a-keyring

- Info: Removing owner references for secret rook-ceph-mons-keyring
- Info: Removed ownerReference for Secret: rook-ceph-mons-keyring

- Info: Removing owner references for secret rook-csi-cephfs-node
- Info: Removed ownerReference for Secret: rook-csi-cephfs-node

- Info: Removing owner references for secret rook-csi-cephfs-provisioner
- Info: Removed ownerReference for Secret: rook-csi-cephfs-provisioner

- Info: Removing owner references for secret rook-csi-rbd-node
- Info: Removed ownerReference for Secret: rook-csi-rbd-node

- Info: Removing owner references for secret rook-csi-rbd-provisioner
- Info: Removed ownerReference for Secret: rook-csi-rbd-provisioner

- Info: Removing owner references for configmaps rook-ceph-mon-endpoints
- Info: Removed ownerReference for configmap: rook-ceph-mon-endpoints

- Info: Removing owner references for service rook-ceph-exporter
- Info: Removed ownerReference for service: rook-ceph-exporter

- Info: Removing owner references for service rook-ceph-mgr
- Info: Removed ownerReference for service: rook-ceph-mgr
+ ---
+ ---
+ ---

- Info: Removing owner references for service rook-ceph-mgr-dashboard
- Info: Removed ownerReference for service: rook-ceph-mgr-dashboard

- Info: Removing owner references for service rook-ceph-mon-a
- Info: Removed ownerReference for service: rook-ceph-mon-a

- Info: Removing owner references for service rook-ceph-mon-d
- Info: Removed ownerReference for service: rook-ceph-mon-d

- Info: Removing owner references for service rook-ceph-mon-e
- Info: Removed ownerReference for service: rook-ceph-mon-e

- Info: Removing owner references for deployemt rook-ceph-mgr-a
- Info: Removed ownerReference for deployment: rook-ceph-mgr-a

- Info: Removing owner references for deployemt rook-ceph-mon-a
- Info: Removed ownerReference for deployment: rook-ceph-mon-a

- Info: Removing owner references for deployemt rook-ceph-mon-d
- Info: Removed ownerReference for deployment: rook-ceph-mon-d

- Info: Removing owner references for deployemt rook-ceph-mon-e
- Info: Removed ownerReference for deployment: rook-ceph-mon-e

- Info: Removing owner references for deployemt rook-ceph-osd-0
+ Info: Removing owner references for deployment rook-ceph-osd-0
Info: Removed ownerReference for deployment: rook-ceph-osd-0

- Info: Removing finalizers from cephcluster/my-cluster
- Info: cephcluster.ceph.rook.io/my-cluster patched

- Info: Re-creating the CR cephcluster from file cephcluster-my-cluster.yaml created above
- Info: cephcluster.ceph.rook.io/my-cluster created

- Info: Scaling up the operator to 1
+ Info: Removing finalizers from cephclusters/my-cluster
+ Info: Re-creating the CR cephclusters from dynamic resource
+ Info: Scaling up the operator
Info: CR is successfully restored. Please watch the operator logs and check the crd
```
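The log above walks the restore flow: scale down the operator, strip ownerReferences so the garbage collector does not delete the child resources of the deleted CR, drop the finalizers, re-create the CR, and scale the operator back up. As a hedged illustration (not the plugin's actual code), the owner-reference step for a single Secret could look like this; the namespace and secret name are illustrative:

```go
import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// removeSecretOwnerRefs is a sketch of the "Removing owner references" step
// shown in the log, applied to one Secret via the typed clientset.
func removeSecretOwnerRefs(ctx context.Context, clientset kubernetes.Interface, ns, name string) error {
	secret, err := clientset.CoreV1().Secrets(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Clearing ownerReferences detaches the Secret from the deleted CephCluster's
	// UID, so the garbage collector no longer treats it as an orphan to delete.
	secret.OwnerReferences = nil
	_, err = clientset.CoreV1().Secrets(ns).Update(ctx, secret, metav1.UpdateOptions{})
	return err
}
```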
25 changes: 13 additions & 12 deletions pkg/crds/crds.go
@@ -20,6 +20,8 @@ import (
"context"
"encoding/json"
"fmt"
"time"

"github.com/rook/kubectl-rook-ceph/pkg/k8sutil"
"github.com/rook/kubectl-rook-ceph/pkg/logging"
corev1 "k8s.io/api/core/v1"
@@ -28,7 +30,6 @@ import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/kubernetes"
"time"
)

var cephResources = []string{
@@ -52,8 +53,8 @@ }
}

const (
- cephRookIoGroup          = "ceph.rook.io"
- cephRookResourcesVersion = "v1"
+ CephRookIoGroup          = "ceph.rook.io"
+ CephRookResourcesVersion = "v1"
)

const (
@@ -73,7 +74,7 @@ var (
},
}

- defaultResourceRemoveFinalizers = map[string]interface{}{
+ DefaultResourceRemoveFinalizers = map[string]interface{}{
"metadata": map[string]interface{}{
"finalizers": nil,
},
@@ -93,7 +94,7 @@ func DeleteCustomResources(ctx context.Context, clientsets k8sutil.ClientsetsInt
func deleteCustomResources(ctx context.Context, clientsets k8sutil.ClientsetsInterface, clusterNamespace string) error {
for _, resource := range cephResources {
logging.Info("getting resource kind %s", resource)
- items, err := clientsets.ListResourcesDynamically(ctx, cephRookIoGroup, cephRookResourcesVersion, resource, clusterNamespace)
+ items, err := clientsets.ListResourcesDynamically(ctx, CephRookIoGroup, CephRookResourcesVersion, resource, clusterNamespace)
if err != nil {
if k8sErrors.IsNotFound(err) {
logging.Info("the server could not find the requested resource: %s", resource)
@@ -109,7 +110,7 @@ func deleteCustomResources(ctx context.Context, clientsets k8sutil.ClientsetsInt

for _, item := range items {
logging.Info(fmt.Sprintf("removing resource %s: %s", resource, item.GetName()))
- err = clientsets.DeleteResourcesDynamically(ctx, cephRookIoGroup, cephRookResourcesVersion, resource, clusterNamespace, item.GetName())
+ err = clientsets.DeleteResourcesDynamically(ctx, CephRookIoGroup, CephRookResourcesVersion, resource, clusterNamespace, item.GetName())
if err != nil {
if k8sErrors.IsNotFound(err) {
logging.Info(err.Error())
@@ -118,7 +119,7 @@ func deleteCustomResources(ctx context.Context, clientsets k8sutil.ClientsetsInt
return err
}

- itemResource, err := clientsets.GetResourcesDynamically(ctx, cephRookIoGroup, cephRookResourcesVersion, resource, item.GetName(), clusterNamespace)
+ itemResource, err := clientsets.GetResourcesDynamically(ctx, CephRookIoGroup, CephRookResourcesVersion, resource, item.GetName(), clusterNamespace)
if err != nil {
if !k8sErrors.IsNotFound(err) {
return err
@@ -136,15 +137,15 @@ func deleteCustomResources(ctx context.Context, clientsets k8sutil.ClientsetsInt
return err
}

- err = clientsets.DeleteResourcesDynamically(ctx, cephRookIoGroup, cephRookResourcesVersion, resource, clusterNamespace, item.GetName())
+ err = clientsets.DeleteResourcesDynamically(ctx, CephRookIoGroup, CephRookResourcesVersion, resource, clusterNamespace, item.GetName())
if err != nil {
if !k8sErrors.IsNotFound(err) {
return err
}
}
}

- itemResource, err = clientsets.GetResourcesDynamically(ctx, cephRookIoGroup, cephRookResourcesVersion, resource, item.GetName(), clusterNamespace)
+ itemResource, err = clientsets.GetResourcesDynamically(ctx, CephRookIoGroup, CephRookResourcesVersion, resource, item.GetName(), clusterNamespace)
if err != nil {
if !k8sErrors.IsNotFound(err) {
return err
@@ -160,14 +161,14 @@ func deleteCustomResources(ctx context.Context, clientsets k8sutil.ClientsetsInt
func updatingFinalizers(ctx context.Context, clientsets k8sutil.ClientsetsInterface, itemResource *unstructured.Unstructured, resource, clusterNamespace string) error {
if resource == CephResourceCephClusters {
jsonPatchData, _ := json.Marshal(clusterResourcePatchFinalizer)
- err := clientsets.PatchResourcesDynamically(ctx, cephRookIoGroup, cephRookResourcesVersion, resource, clusterNamespace, itemResource.GetName(), types.MergePatchType, jsonPatchData)
+ err := clientsets.PatchResourcesDynamically(ctx, CephRookIoGroup, CephRookResourcesVersion, resource, clusterNamespace, itemResource.GetName(), types.MergePatchType, jsonPatchData)
if err != nil {
return err
}
}

- jsonPatchData, _ := json.Marshal(defaultResourceRemoveFinalizers)
- err := clientsets.PatchResourcesDynamically(ctx, cephRookIoGroup, cephRookResourcesVersion, resource, clusterNamespace, itemResource.GetName(), types.MergePatchType, jsonPatchData)
+ jsonPatchData, _ := json.Marshal(DefaultResourceRemoveFinalizers)
+ err := clientsets.PatchResourcesDynamically(ctx, CephRookIoGroup, CephRookResourcesVersion, resource, clusterNamespace, itemResource.GetName(), types.MergePatchType, jsonPatchData)
if err != nil {
return err
}
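The newly exported `DefaultResourceRemoveFinalizers` marshals to the merge patch `{"metadata":{"finalizers":null}}`; in a JSON merge patch, a null value deletes the field, which is what unblocks a CR stuck in deleting. A hedged standalone sketch of applying that patch with the dynamic client (the function name and GVR values are illustrative, not from this file):

```go
import (
	"context"
	"encoding/json"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
)

// removeFinalizers patches one CR so its finalizers field is deleted.
func removeFinalizers(ctx context.Context, dyn dynamic.Interface, ns, name string) error {
	// Marshals to {"metadata":{"finalizers":null}}.
	patch, err := json.Marshal(map[string]interface{}{
		"metadata": map[string]interface{}{"finalizers": nil},
	})
	if err != nil {
		return err
	}
	gvr := schema.GroupVersionResource{Group: "ceph.rook.io", Version: "v1", Resource: "cephclusters"}
	_, err = dyn.Resource(gvr).Namespace(ns).Patch(ctx, name, types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}
```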
25 changes: 25 additions & 0 deletions pkg/k8sutil/dynamic.go
@@ -18,6 +18,7 @@ package k8sutil

import (
"context"

metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
@@ -119,3 +120,27 @@ func (c *Clientsets) GetResourcesDynamically(

return item, nil
}

+ func (c *Clientsets) CreateResourcesDynamically(
+ ctx context.Context,
+ group string,
+ version string,
+ resource string,
+ name *unstructured.Unstructured,
+ namespace string,
+ ) (*unstructured.Unstructured, error) {
+ resourceId := schema.GroupVersionResource{
+ Group: group,
+ Version: version,
+ Resource: resource,
+ }
+
+ item, err := c.Dynamic.Resource(resourceId).Namespace(namespace).
+ Create(ctx, name, metav1.CreateOptions{})
+
+ if err != nil {
+ return nil, err
+ }
+
+ return item, nil
+ }
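A hedged usage sketch for the new helper (not part of the commit): note that the `name` parameter receives the whole unstructured object to create, not just its name, and that the resource argument is again the plural form. The cluster name and namespace are illustrative.

```go
import (
	"context"

	"github.com/rook/kubectl-rook-ceph/pkg/k8sutil"
	"github.com/rook/kubectl-rook-ceph/pkg/logging"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// recreateCluster re-creates a restored CephCluster via CreateResourcesDynamically.
func recreateCluster(ctx context.Context, clientsets k8sutil.ClientsetsInterface) error {
	obj := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "ceph.rook.io/v1",
		"kind":       "CephCluster",
		"metadata": map[string]interface{}{
			"name":      "my-cluster",
			"namespace": "rook-ceph",
		},
		// The backed-up spec of the deleted CR would be carried over here.
	}}
	// "cephclusters" is the plural resource name required by the GVR.
	created, err := clientsets.CreateResourcesDynamically(ctx, "ceph.rook.io", "v1", "cephclusters", obj, "rook-ceph")
	if err != nil {
		return err
	}
	logging.Info("recreated %s", created.GetName())
	return nil
}
```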
2 changes: 2 additions & 0 deletions pkg/k8sutil/interface.go
@@ -18,12 +18,14 @@ package k8sutil

import (
"context"

"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/types"
)

//go:generate mockgen -package=k8sutil --build_flags=--mod=mod -destination=mocks.go github.com/rook/kubectl-rook-ceph/pkg/k8sutil ClientsetsInterface
type ClientsetsInterface interface {
+ CreateResourcesDynamically(ctx context.Context, group string, version string, resource string, name *unstructured.Unstructured, namespace string) (*unstructured.Unstructured, error)
ListResourcesDynamically(ctx context.Context, group string, version string, resource string, namespace string) ([]unstructured.Unstructured, error)
GetResourcesDynamically(ctx context.Context, group string, version string, resource string, name string, namespace string) (*unstructured.Unstructured, error)
DeleteResourcesDynamically(ctx context.Context, group string, version string, resource string, namespace string, resourceName string) error
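Because `ClientsetsInterface` gained a method, the gomock mocks in mocks.go (below) were regenerated. A hedged sketch of how a test might stub the new method (the test name and expectations are illustrative, not from this commit):

```go
import (
	"context"
	"testing"

	"github.com/golang/mock/gomock"
	"github.com/rook/kubectl-rook-ceph/pkg/k8sutil"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

func TestRestoreCreatesCR(t *testing.T) {
	ctrl := gomock.NewController(t)
	defer ctrl.Finish()

	clientsets := k8sutil.NewMockClientsetsInterface(ctrl)
	// Expect one create of a cephclusters resource; echo the object back.
	clientsets.EXPECT().
		CreateResourcesDynamically(gomock.Any(), "ceph.rook.io", "v1", "cephclusters", gomock.Any(), "rook-ceph").
		DoAndReturn(func(_ context.Context, _, _, _ string, obj *unstructured.Unstructured, _ string) (*unstructured.Unstructured, error) {
			return obj, nil
		})

	// ...the code under test would call clientsets.CreateResourcesDynamically here.
}
```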
15 changes: 15 additions & 0 deletions pkg/k8sutil/mocks.go

(mocks.go is generated by mockgen; GitHub does not render the diff.)
