Because of issues discovered with AzureFile and RabbitMQ, it is necessary to create a new custom StorageClass when upgrading to Black Duck 2021.2.1.

For customers whose installations are managed by synopsysctl, a fresh install that targets the existing PersistentVolumes is required.

1. If your StorageClass does not set a Retain policy by default, set the reclaim policy to Retain on all required PersistentVolumes (if this has not already been done):

kubectl get pv | awk '/pvc/{print$1}' | xargs -I{} kubectl patch pv {} -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
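To confirm the policy was applied, you can list each PV's reclaim policy (a read-only check using kubectl's custom-columns output):

kubectl get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy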

2. Create the StorageClass required by the new RabbitMQ pod:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: bd-rabbitmq-azurefile
provisioner: kubernetes.io/azure-file
reclaimPolicy: Retain
mountOptions:
  - dir_mode=0700
  - file_mode=0700
  - uid=100
  - gid=101
  - mfsymlinks
  - cache=strict
  - actimeo=30
parameters:
  skuName: Standard_LRS
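Assuming the manifest above is saved as bd-rabbitmq-azurefile.yaml (the filename is arbitrary), apply it and confirm the class exists:

kubectl apply -f bd-rabbitmq-azurefile.yaml
kubectl get sc bd-rabbitmq-azurefile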

3. Scale the RabbitMQ deployment to 0 so that the existing PersistentVolume can be removed:

kubectl scale deploy <release-name>-blackduck-rabbitmq --replicas=0
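You can confirm the scale-down completed before removing the volume (the deployment should report 0/0 ready replicas):

kubectl get deploy <release-name>-blackduck-rabbitmq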

4. Remove the current PVC for RabbitMQ:

kubectl delete pvc <pvc-id>

5. Remove the current PV for RabbitMQ:

kubectl delete pv <pv-id>
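If you are unsure of the IDs, the RabbitMQ PVC and its bound PV can be found by filtering on the claim name (assuming it contains rabbitmq, as in the default naming):

kubectl get pvc | grep rabbitmq
kubectl get pv | grep rabbitmq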

6. Create the securityContext.json and pvcResources.json files, which do the following:

a. Define a podSecurityContext for the RabbitMQ pod (the runAsUser and fsGroup values match the uid=100 and gid=101 mount options set in the StorageClass above)

b. Define PVC resources that re-bind the existing PersistentVolumes and create a new PV with the custom StorageClass

securityContext.json

{
    "rabbitmq": {
        "fsGroup": 101,
        "runAsUser": 100
    }
}

pvcResources.json

[
  {
    "name": "rabbitmq",
    "size": "2Gi",
    "storageClass": "bd-rabbitmq-azurefile"
  },
  {
    "name": "blackduck-cfssl",
    "size": "2Gi",
    "storageClass": "azurefile",
    "volumeName": "pvc-d774108d-f0f9-4848-8149-48954bc5fffc"
  },
  {
    "name": "blackduck-registration",
    "size": "2Gi",
    "storageClass": "azurefile",
    "volumeName": "pvc-c965680b-c8dd-4a3d-b06c-23c73c6f7fac"
  },
  {
    "name": "blackduck-authentication",
    "size": "2Gi",
    "storageClass": "azurefile",
    "volumeName": "pvc-9cda7de0-f49a-45b4-98b4-1d8ec5de95a0"
  },
  {
    "name": "blackduck-webapp",
    "size": "2Gi",
    "storageClass": "azurefile",
    "volumeName": "pvc-98b9e63e-335e-447a-bfb0-60edf013b54f"
  },
  {
    "name": "blackduck-logstash",
    "size": "20Gi",
    "storageClass": "azurefile",
    "volumeName": "pvc-f38c8486-3d8e-4918-a29e-173d1e4eb746"
  },
  {
    "name": "blackduck-uploadcache-data",
    "size": "100Gi",
    "storageClass": "azurefile",
    "volumeName": "pvc-aa539cd4-3045-4817-a8cf-03a1a3a4e1a2"
  }
]

Note that each volumeName in the above code block is the name of the current PV bound on your system; it must match the NAME column of kubectl get pv for the row whose CLAIM column shows the corresponding claim.

The rabbitmq entry does not have a volumeName because the new StorageClass will dynamically provision a new PV under bd-rabbitmq-azurefile.
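To pair each PV name with its claim when filling in the volumeName values, a custom-columns query can help (a read-only lookup):

kubectl get pv -o custom-columns=NAME:.metadata.name,CLAIM:.spec.claimRef.name,STATUS:.status.phase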

7. Execute a delete blackduck command on your current install:

./synopsysctl delete blackduck <release-name> -n <namespace>
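Before proceeding, you can confirm the Black Duck resources have been removed from the namespace (a simple sanity check; kubectl get all lists the common workload resource types):

kubectl get all -n <namespace>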

8. Your PVs will now show a status of Released. Run the following command to return them to an Available state so that they can be rebound:

kubectl get pv | awk '/pvc/{print$1}' | xargs -I{} kubectl patch pv {} -p '{"spec":{"claimRef": null}}'
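You can confirm each volume now reports a STATUS of Available:

kubectl get pv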

9. Run the synopsysctl create command to reinstall Black Duck and bind the changes, making sure to configure the parameters to match your existing configuration:

./synopsysctl create blackduck <release-name> \
...
...
--version 2021.2.1 \
--security-context-file-path securityContext.json \
--pvc-file-path pvcResources.json

10. Verify that a new PV and PVC have been created for rabbitmq with a storageClass of bd-rabbitmq-azurefile:

kubectl get pvc
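To check the PV side as well, you can filter on the new StorageClass name (a simple grep-based check):

kubectl get pv | grep bd-rabbitmq-azurefile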

11. Scale the rabbitmq deployment to 0 and back to 1 as shown in step 3.
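For example, reusing the command from step 3:

kubectl scale deploy <release-name>-blackduck-rabbitmq --replicas=0
kubectl scale deploy <release-name>-blackduck-rabbitmq --replicas=1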
The bomengine should reconnect successfully. This can be verified by running the following command to interrogate rabbitmqctl:

kubectl get po -o=name | awk '/blackduck-rabbitmq/{print$0}' | xargs -I{} kubectl exec {} -- sh -c "export RABBITMQ_CTL_ERL_ARGS=\"-proto_dist inet_tls\" && rabbitmqctl list_queues --vhost blackduck"

You should see output similar to the following:

Listing queues for vhost blackduck ...
name    messages
bd-data-bom-bomengine-hub-k8s-blackduck-bomengine-7c9dcf7f6f-jj8l5-1    0
bd-data-bom-bomengine-hub-k8s-blackduck-bomengine-7c9dcf7f6f-jj8l5-3    0
bd-data-scan-bomengine-hub-k8s-blackduck-bomengine-7c9dcf7f6f-jj8l5-1   0
bd-data-bom-bomengine-hub-k8s-blackduck-bomengine-7c9dcf7f6f-jj8l5-5    0
bd-data-bom-bomengine-hub-k8s-blackduck-bomengine-7c9dcf7f6f-jj8l5-2    0
bd-data-bom-bomengine-hub-k8s-blackduck-bomengine-7c9dcf7f6f-jj8l5-4    0
bd-data-scan-bomengine-hub-k8s-blackduck-bomengine-7c9dcf7f6f-jj8l5-2   0
bd-dead-letter  0