The recommended approach to deploying Black Duck on Azure Kubernetes Service (AKS) is to use an external database (Azure Database for PostgreSQL) and Azure Files for persistent volumes within the Kubernetes cluster, for the following reasons:
PostgreSQL cannot run as a container on Azure Files storage because Azure Files does not support symbolic links.
Azure Disk storage would allow the PostgreSQL container to run, but a disk is tied to a single node in the AKS cluster: if a pod moves to another node it cannot access the data volume and the database fails, and if the node is lost all data is lost.
Azure Files storage (which is available to the whole cluster rather than a single node) is therefore recommended, but it prevents the use of the internal PostgreSQL container database, so you must use an external database.
Synopsys recommends using an external database for production Kubernetes deployments.
The following instructions describe deploying Black Duck using native Synopsysctl commands on AKS using an external database (Azure Database for PostgreSQL).
This document assumes you have completed the following steps as prerequisites:
Created an Azure Resource Group for your Kubernetes Service and Azure Database for PostgreSQL.
Created the Azure Kubernetes Service cluster.
Connected kubectl to your Azure Kubernetes cluster ('az aks install-cli' and 'az aks get-credentials').
Created the Azure Database for PostgreSQL instance. Ensure it is in the same region as your Kubernetes Service and is version 11 (for Black Duck 2020.6.0 or later).
Configured dynamic persistent volumes for Azure Files: https://docs.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv
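As a rough sketch, the prerequisite resources can be created with the Azure CLI along the following lines. Note the resource group name, cluster name, region, node count, and database SKU below are placeholder assumptions, not values mandated by this guide; adjust them for your environment.

```shell
# Sketch only: names, region, node count and SKU are placeholder assumptions.
az group create --name blackduck-rg --location eastus

# Create the AKS cluster and connect kubectl to it.
az aks create --resource-group blackduck-rg --name blackduck-aks --node-count 3
az aks install-cli
az aks get-credentials --resource-group blackduck-rg --name blackduck-aks

# Create the Azure Database for PostgreSQL server
# (version 11, same region as the cluster).
az postgres server create --resource-group blackduck-rg \
  --name synopsys-blackduck-db --location eastus \
  --admin-user blackduck --admin-password '<your-password>' \
  --sku-name GP_Gen5_2 --version 11
```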
Initialize the external database. To do so you need to be able to connect to your Azure database via psql from the console. This requires you to:
Have psql installed.
Not be on a VPN that blocks port 5432.
Have added your client IP address under Connection Security for the Azure Database for PostgreSQL.
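If you are unsure whether port 5432 is reachable from your machine, a quick TCP connection test can tell you before you troubleshoot psql itself. The sketch below uses bash's /dev/tcp redirection; the hostname is the example server used in this guide, so substitute your own.

```shell
# Check whether port 5432 on the database server is reachable from this machine.
# Replace HOST with your own Azure Database for PostgreSQL server name.
HOST=synopsys-blackduck-db.postgres.database.azure.com
if timeout 5 bash -c "</dev/tcp/$HOST/5432" 2>/dev/null; then
  echo "port 5432 reachable"
else
  echo "port 5432 blocked or host unreachable"
fi
```

If this reports the port as blocked, check your VPN and the Connection Security firewall rules before retrying psql.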
Connect to your database via psql (in the Azure portal, you can find the connection string on the Connection strings page of the database):
psql "host=synopsys-blackduck-db.postgres.database.azure.com port=5432 dbname=postgres user=blackduck@synopsys-blackduck-db password={your password here} sslmode=require"
4. When you are connected, run the following SQL statements one at a time to recreate template1 with SQL_ASCII encoding:
ALTER DATABASE template1 is_template false;
DROP DATABASE template1;
CREATE DATABASE template1 WITH template=template0 LC_COLLATE 'C' LC_CTYPE 'C' encoding 'SQL_ASCII';
ALTER DATABASE template1 is_template true;
5. Exit psql (\quit).
6. Create a file called initdb.psql with the following contents, replacing XXXXX with your password choices.
Note this file has an additional line compared to the usual versions ('GRANT blackduck_user to blackduck;') and also uses template1:
CREATE DATABASE bds_hub owner blackduck TEMPLATE template1 ENCODING SQL_ASCII lc_collate='C' lc_ctype='C';
CREATE DATABASE bds_hub_report owner blackduck TEMPLATE template1 ENCODING SQL_ASCII lc_collate='C' lc_ctype='C';
CREATE USER blackduck_user WITH NOCREATEDB NOSUPERUSER NOREPLICATION NOBYPASSRLS;
CREATE USER blackduck_reporter;
ALTER USER blackduck_user WITH password 'XXXXX';
GRANT blackduck_user to blackduck;
\c bds_hub
CREATE EXTENSION pgcrypto;
CREATE SCHEMA st AUTHORIZATION blackduck;
GRANT USAGE ON SCHEMA st TO blackduck_user;
GRANT SELECT, INSERT, UPDATE, TRUNCATE, DELETE, REFERENCES ON ALL TABLES IN SCHEMA st TO blackduck_user;
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA st to blackduck_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA st GRANT SELECT, INSERT, UPDATE, TRUNCATE, DELETE, REFERENCES ON TABLES TO blackduck_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA st GRANT ALL PRIVILEGES ON SEQUENCES TO blackduck_user;
ALTER DATABASE bds_hub SET standard_conforming_strings TO OFF;
\c bds_hub_report
GRANT SELECT ON ALL TABLES IN SCHEMA public TO blackduck_reporter;
ALTER DEFAULT PRIVILEGES FOR ROLE blackduck IN SCHEMA public GRANT SELECT ON TABLES TO blackduck_reporter;
GRANT SELECT, INSERT, UPDATE, TRUNCATE, DELETE, REFERENCES ON ALL TABLES IN SCHEMA public TO blackduck_user;
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public to blackduck_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, INSERT, UPDATE, TRUNCATE, DELETE, REFERENCES ON TABLES TO blackduck_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL PRIVILEGES ON SEQUENCES TO blackduck_user;
ALTER DATABASE bds_hub_report SET standard_conforming_strings TO OFF;
7. Execute the file against the database (note that psql takes the user with an uppercase -U):
psql -h synopsys-blackduck-db.postgres.database.azure.com -p 5432 -d postgres -U blackduck@synopsys-blackduck-db -f initdb.psql
8. Download and install synopsysctl (see Installing synopsysctl). On Windows, download the zip, extract synopsysctl.exe to a folder of your choice, open a command prompt, and cd to that folder.
The following example commands install 1.1.0 on Linux:
wget https://github.com/blackducksoftware/synopsysctl/releases/download/v1.1.0/synopsysctl-linux-amd64-1.1.0.tar.gz
tar -xvzf synopsysctl-linux-amd64-1.1.0.tar.gz
./synopsysctl --version
10. Deploy Black Duck using synopsysctl:
a. Create the namespace
C:\Users\dnichol\azure>kubectl create ns test-blackduck
namespace/test-blackduck created
b. Create the deployment yaml file for Black Duck and replace:
Name and namespace with your choice.
Database Passwords (XXXXX).
Seal-key of your choice - this is a 32 character key that you should keep in a safe place.
Version of Black Duck (2020.8.0 in this example).
Size of Black Duck (small in this example).
Database host and connection users (note the connection users are defined in two ways: the environs must refer to the full Azure database user, which includes the hostname).
Note this command uses Azure Files and assumes you have configured dynamic persistent volumes for the azurefile storage class. We recommend Azure Files because the storage is available to all nodes in the cluster.
./synopsysctl create blackduck native test-blackduck -n test-blackduck \
  --expose-ui LOADBALANCER \
  --external-postgres-host synopsys-blackduck-db.postgres.database.azure.com \
  --external-postgres-port 5432 \
  --external-postgres-admin blackduck \
  --external-postgres-user blackduck_user \
  --external-postgres-admin-password XXXXX \
  --external-postgres-user-password XXXXX \
  --external-postgres-ssl=true \
  --persistent-storage=true \
  --pvc-storage-class azurefile \
  --environs HUB_POSTGRES_CONNECTION_USER:blackduck_user@synopsys-blackduck-db,HUB_POSTGRES_CONNECTION_ADMIN:blackduck@synopsys-blackduck-db \
  --version 2020.8.0 \
  --size small \
  --seal-key <32-character-key> \
  --target KUBERNETES > blackduck.yaml
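You can generate the 32-character seal key however you like; one simple sketch, assuming openssl is available, is to hex-encode 16 random bytes (which yields exactly 32 hexadecimal characters):

```shell
# Generate a random 32-character seal key: 16 random bytes, hex-encoded.
SEAL_KEY=$(openssl rand -hex 16)
echo "$SEAL_KEY"
echo "${#SEAL_KEY}"   # length: 32
```

Record the generated key in a safe place; you will need the same seal key again for operations such as restoring or migrating the instance.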
c. Deploy the Black Duck yaml file
C:\Users\dnichol\azure>kubectl create -f blackduck.yaml -n test-blackduck
serviceaccount/test-blackduck-blackduck-service-account created
secret/test-blackduck-blackduck-upload-cache created
configmap/test-blackduck-blackduck-config created
persistentvolumeclaim/test-blackduck-blackduck-authentication created
persistentvolumeclaim/test-blackduck-blackduck-cfssl created
persistentvolumeclaim/test-blackduck-blackduck-registration created
persistentvolumeclaim/test-blackduck-blackduck-uploadcache-data created
persistentvolumeclaim/test-blackduck-blackduck-webapp created
persistentvolumeclaim/test-blackduck-blackduck-logstash created
service/test-blackduck-blackduck-authentication created
service/test-blackduck-blackduck-cfssl created
service/test-blackduck-blackduck-documentation created
service/test-blackduck-blackduck-registration created
service/test-blackduck-blackduck-scan created
service/test-blackduck-blackduck-uploadcache created
service/test-blackduck-blackduck-webapp created
service/test-blackduck-blackduck-logstash created
service/test-blackduck-blackduck-webserver created
service/test-blackduck-blackduck-webserver-exposed created
deployment.apps/test-blackduck-blackduck-authentication created
deployment.apps/test-blackduck-blackduck-cfssl created
deployment.apps/test-blackduck-blackduck-documentation created
deployment.apps/test-blackduck-blackduck-jobrunner created
deployment.apps/test-blackduck-blackduck-redis created
deployment.apps/test-blackduck-blackduck-registration created
deployment.apps/test-blackduck-blackduck-scan created
deployment.apps/test-blackduck-blackduck-uploadcache created
deployment.apps/test-blackduck-blackduck-webapp-logstash created
deployment.apps/test-blackduck-blackduck-webserver created
configmap/test-blackduck-blackduck-db-config created
secret/test-blackduck-blackduck-db-creds created
configmap/test-blackduck-blackduck-postgres-init-config created
job.batch/test-blackduck-blackduck-postgres-init created
11. Check that your persistent volumes are bound, all pods start, and the external IP is provisioned:
$ kubectl get pvc -n test-blackduck
NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-blackduck-blackduck-authentication     Bound    pvc-2eccbffa-c4aa-461f-8915-3b07d98b5f9c   2Gi        RWO            azurefile      48s
test-blackduck-blackduck-cfssl              Bound    pvc-d12e9c1b-776e-4f06-b80c-f7f9e16fa83a   2Gi        RWO            azurefile      48s
test-blackduck-blackduck-logstash           Bound    pvc-2c1e7c05-cced-4134-94c9-484915913b06   20Gi       RWO            azurefile      48s
test-blackduck-blackduck-registration       Bound    pvc-3d20d39b-4486-4434-9511-f1cf0073ff3a   2Gi        RWO            azurefile      48s
test-blackduck-blackduck-uploadcache-data   Bound    pvc-4f4998f3-789a-40fc-bcbb-b9ab287fe745   100Gi      RWO            azurefile      48s
test-blackduck-blackduck-webapp             Bound    pvc-2bcd04f4-32fd-4c93-8391-f90ac9034687   2Gi        RWO            azurefile      48s

$ kubectl get pods -n test-blackduck
NAME                                                        READY   STATUS      RESTARTS   AGE
test-blackduck-blackduck-authentication-6b6b64cd4d-74px4    1/1     Running     0          3m20s
test-blackduck-blackduck-cfssl-7d677799db-ltrz2             1/1     Running     0          3m20s
test-blackduck-blackduck-documentation-dd6cb45dd-rfm7x      1/1     Running     0          3m20s
test-blackduck-blackduck-jobrunner-7549cbb547-bn98q         1/1     Running     0          3m20s
test-blackduck-blackduck-postgres-init-zk9w2                0/1     Completed   0          3m19s
test-blackduck-blackduck-redis-59dcd7bc94-mcb5l             1/1     Running     0          3m20s
test-blackduck-blackduck-registration-55d78fdf8-v74bb       1/1     Running     0          3m20s
test-blackduck-blackduck-scan-57f854fd9c-45jn7              1/1     Running     0          3m20s
test-blackduck-blackduck-uploadcache-74987bf785-xx6rn       1/1     Running     0          3m20s
test-blackduck-blackduck-webapp-logstash-569cc8cd89-lkth5   2/2     Running     1          3m20s
test-blackduck-blackduck-webserver-66d5769f84-fc256         1/1     Running     0          3m19s

$ kubectl get svc -n test-blackduck
NAME                                         TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)             AGE
test-blackduck-blackduck-authentication      ClusterIP      10.0.48.69     <none>          8443/TCP            3m45s
test-blackduck-blackduck-cfssl               ClusterIP      10.0.234.9     <none>          8888/TCP            3m44s
test-blackduck-blackduck-documentation       ClusterIP      10.0.2.57      <none>          8443/TCP            3m44s
test-blackduck-blackduck-logstash            ClusterIP      10.0.143.235   <none>          5044/TCP            3m44s
test-blackduck-blackduck-registration        ClusterIP      10.0.131.171   <none>          8443/TCP            3m44s
test-blackduck-blackduck-scan                ClusterIP      10.0.152.49    <none>          8443/TCP            3m44s
test-blackduck-blackduck-uploadcache         ClusterIP      10.0.177.100   <none>          9443/TCP,9444/TCP   3m44s
test-blackduck-blackduck-webapp              ClusterIP      10.0.226.81    <none>          8443/TCP            3m44s
test-blackduck-blackduck-webserver           ClusterIP      10.0.79.228    <none>          443/TCP             3m44s
test-blackduck-blackduck-webserver-exposed   LoadBalancer   10.0.182.15    40.76.174.153   443:30947/TCP       3m44s
12. Black Duck should now be available at the external IP listed in the kubectl get svc output.