Deploying Black Duck in Azure using Synopsysctl when Upgrading to Black Duck 2021.2.1

Because of issues discovered with AzureFile and RabbitMQ, it is necessary to create a new custom StorageClass when upgrading to Black Duck 2021.2.1.

For customers whose installations are managed by synopsysctl, a fresh install is required, targeting the existing PersistentVolumes.

1. If your StorageClass does not retain volumes by default, set the reclaim policy to Retain on all required PersistentVolumes (skip this step if it has already been done):

kubectl get pv | awk '/pvc/{print$1}' | xargs -I{} kubectl patch pv {} -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

2. Create the StorageClass required by the new RabbitMQ pod

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: bd-rabbitmq-azurefile
provisioner: kubernetes.io/azure-file
reclaimPolicy: Retain
mountOptions:
  - dir_mode=0700
  - file_mode=0700
  - uid=100
  - gid=101
  - mfsymlinks
  - cache=strict
  - actimeo=30
parameters:
  skuName: Standard_LRS
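Assuming the manifest above is saved locally (the filename below is illustrative), it can be applied and verified with:

```shell
# Save the StorageClass manifest as bd-rabbitmq-azurefile.yaml, then:
kubectl apply -f bd-rabbitmq-azurefile.yaml

# Confirm the StorageClass now exists
kubectl get storageclass bd-rabbitmq-azurefile
```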

3. Scale the RabbitMQ deployment to 0 so that the existing PersistentVolume can be removed

kubectl scale deploy <release-name>-blackduck-rabbitmq --replicas=0

4. Remove the current PVC for rabbitmq

5. Remove the current PV for rabbitmq
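The resource names below are assumptions based on the default synopsysctl naming; verify them with kubectl get pvc and kubectl get pv before deleting. Steps 4 and 5 can then be performed with:

```shell
# Step 4: remove the RabbitMQ PVC (name assumed; confirm with: kubectl get pvc -n <namespace>)
kubectl delete pvc <release-name>-blackduck-rabbitmq -n <namespace>

# Step 5: remove the RabbitMQ PV (find its name via the CLAIM column of: kubectl get pv)
kubectl delete pv <rabbitmq-pv-name>
```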

6. Create the securityContext.json and pvcResources.json files, which do the following:

a. Provide a podSecurityContext for the RabbitMQ pod

b. Provide PVC resource definitions that re-bind to the existing PersistentVolumes and create a new PV with the custom StorageClass

securityContext.json
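The original contents of securityContext.json are not reproduced here; the sketch below illustrates only the general shape. The exact schema expected by synopsysctl, and the key names and uid/gid values shown (chosen to match the mountOptions in the StorageClass above), are assumptions to be checked against the synopsysctl documentation.

```json
{
  "blackduck-rabbitmq": {
    "runAsUser": 100,
    "runAsGroup": 101,
    "fsGroup": 101
  }
}
```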

pvcResources.json
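Likewise, the sketch below only illustrates the intent of pvcResources.json: existing claims re-bind to their current PVs via volumeName, while the RabbitMQ entry omits volumeName and names the new StorageClass. All entry names, sizes, and the schema itself are assumptions to be verified against the synopsysctl documentation; the placeholder must be replaced with a real PV name.

```json
[
  {
    "name": "blackduck-postgres",
    "size": "150Gi",
    "volumeName": "<existing-postgres-pv-name>"
  },
  {
    "name": "blackduck-rabbitmq",
    "size": "5Gi",
    "storageClass": "bd-rabbitmq-azurefile"
  }
]
```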

 

Note that each volumeName in the above code block is the name of a PV currently bound on your system; run kubectl get pv and use the CLAIM column to match each claim to its PV name.

The RabbitMQ entry does not have a volumeName because the StorageClass will dynamically provision a new PV under bd-rabbitmq-azurefile.

7. Execute a delete blackduck command on your current install
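The exact invocation depends on your release name and namespace; assuming the standard synopsysctl CLI syntax, it looks like:

```shell
# Deletes the Black Duck instance's Kubernetes resources; PVs set to Retain survive
synopsysctl delete blackduck <release-name> -n <namespace>
```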

8. Your PVs will now show a status of Released. Run the following command to return them to an Available state so that they can be re-bound.
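A common way to move Released PVs back to Available is to clear each PV's claimRef; the sketch below patches every Released PV, so review the output of kubectl get pv first to confirm only the intended volumes are affected.

```shell
# Clear the claimRef on every Released PV so it becomes Available again
kubectl get pv | awk '/Released/{print $1}' | \
  xargs -I{} kubectl patch pv {} --type json -p '[{"op":"remove","path":"/spec/claimRef"}]'
```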

9. Run the synopsysctl install command to re-install Black Duck, making sure the parameters match your existing configuration.
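A sketch of the install, assuming the create blackduck syntax and that the JSON files are passed via the flags shown (flag names are assumptions; verify with synopsysctl create blackduck --help):

```shell
# Pass the two JSON files created in step 6, plus the same flags/values
# used for your existing configuration.
synopsysctl create blackduck <release-name> -n <namespace> \
  --pvc-file-path pvcResources.json \
  --security-context-file-path securityContext.json
```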

10. Verify that a new PV and PVC have been created for rabbitmq with a storageClass of bd-rabbitmq-azurefile
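For example:

```shell
# Both the new PV and the PVC should report STORAGECLASS bd-rabbitmq-azurefile
kubectl get pv,pvc -n <namespace> | grep rabbitmq
```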

11. Scale the rabbitmq deployment to 0 and back to 1, using the command from step 3 (first with --replicas=0, then --replicas=1).
The bomengine should reconnect successfully. This can be verified by running the following command to interrogate rabbitmqctl
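A sketch of the check, assuming the default RabbitMQ deployment name (substitute the real name from kubectl get deploy -n <namespace>):

```shell
# List active connections inside the RabbitMQ pod; bomengine connections should appear
kubectl exec -n <namespace> deploy/<release-name>-blackduck-rabbitmq -- rabbitmqctl list_connections
```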

In the output, you should see active connections from the bomengine.

©2020 Synopsys, Inc. All Rights Reserved