In the first two parts of this series, we talked about installing BBL, BOSH, and Concourse. Let's dive right into the main course: Pivotal Cloud Foundry (PCF), using the platform automation pipelines, also called PCF Pipelines. (Links to the individual posts are at the bottom.)
I would download pcf-pipelines from the Pivotal Network (Pivnet),
which hosts a tested and curated version.
Once you download the tar.gz file from Pivnet, extract it. Our focus will be the "install-pcf/azure" folder in the extracted directory.
You will need to edit the params.yml file in that folder. I would suggest using my sample here
as a starting point.
There is some Azure setup needed before you can upload and kick off the pipeline, focused on creating the storage account and container where the Terraform state will be stored. Remember, the pipelines use Terraform to create the initial infrastructure (Ops Manager, networks, and so on).
The best place to do this is the Azure Cloud Shell. Log in to the Azure portal and click the Cloud Shell button. Once in the Cloud Shell, you will need to:
- Create a new Resource Group to hold the Terraform state
- Create a new Storage Account in that resource group
- Create a container in that storage account
Have a look at my example on the Gist. Once you are done with this, open the portal, navigate to the storage account you just created, and copy one of its access keys.
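The Cloud Shell steps above can be sketched with the Azure CLI. The resource group, storage account, and container names here are placeholders; substitute your own.

```shell
# Placeholder names -- substitute your own resource group, account, and container.
az group create --name terraform-rg --location eastus

az storage account create \
  --name pcftfstate \
  --resource-group terraform-rg \
  --sku Standard_LRS

# Create the container that will hold the Terraform state.
az storage container create \
  --name terraform-state \
  --account-name pcftfstate

# List the account's access keys (one of these goes into params.yml).
az storage account keys list \
  --resource-group terraform-rg \
  --account-name pcftfstate
```

The last command saves you a trip to the portal: either of the listed keys will work.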
You will update these values (the storage account name, the container name, and the key value) in params.yml. See the last section of my sample in the Gist.
Another prerequisite is a delegated DNS zone pointing at the subdomain where PCF will reside. For example, my domain for PCF on Azure is cf.az.clue2solve.com, with the apps domain at apps.cf.az.clue2solve.com and the system domain at sys.cf.az.clue2solve.com. I created a new DNS zone for az.clue2solve.com and set up the delegation from my primary DNS provider (which happens to be AWS Route 53). See my post on DNS Delegation
for details. This delegation into Azure DNS is needed because the pipelines create subdomains under the base CF domain and attach records to them.
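Creating the zone and finding the name servers to delegate to can also be done from the Azure CLI. The zone and resource group names below are just my example values.

```shell
# Create the DNS zone for the delegated subdomain
# (zone name is my example; substitute your own domain and resource group).
az network dns zone create \
  --resource-group terraform-rg \
  --name az.clue2solve.com

# Show the Azure name servers; add these as NS records for the
# subdomain at your primary DNS provider (e.g. Route 53).
az network dns zone show \
  --resource-group terraform-rg \
  --name az.clue2solve.com \
  --query nameServers
```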
A few key items in the params file to be careful with are:
- The domain names for Ops Manager, the sys domain, and the apps domain
- The load-balancer name
- The Pivnet token
- The Terraform storage details (storage account, container, and access key)
Once the parameters file is ready, we can create the pipeline on Concourse:
```shell
anandrao at Anands-MBP in ~/pivotal/repos/pcf-pipelines-023.6-pivnet/install-pcf/azure
$ fly -t az set-pipeline -p install-pcf-az -c pipeline.yml -l params.yml
```
Next, we will log in to the Concourse UI and un-pause the pipeline.
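If you prefer to stay on the command line, the same un-pause can be done with fly, using the target and pipeline names from the set-pipeline command:

```shell
# Un-pause the newly created pipeline so its jobs can run.
fly -t az unpause-pipeline -p install-pcf-az
```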
You may observe that the terraform-state resources are in an error state. This is because all we did in the prep step was create the container; we still need to initialize the terraform.state file in that container. We don't have to do this manually: it is done by a pipeline job called "bootstrap Terraform".
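The bootstrap job can also be kicked off with fly. The job name below is taken from the pipeline as described in the text and may differ slightly in your version of pcf-pipelines, so check the jobs list first.

```shell
# List the job names in the pipeline to confirm the bootstrap job's exact name.
fly -t az jobs -p install-pcf-az

# Trigger the job that initializes terraform.state in the container
# (job name is an assumption; use the name shown by the command above).
fly -t az trigger-job -j install-pcf-az/bootstrap-terraform --watch
```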
Once this completes, you should see the terraform-state steps in black instead of orange, which tells us we are ready for the next stage.
Now it's time to let it all fly through the pipeline. Click into the "create-infrastructure" step and then the plus button there to initiate it. The dotted lines between jobs indicate that they are not auto-triggered, so this step has to be started manually. All subsequent jobs are triggered automatically once their prior dependencies are met.
After seeing green on every stage, you should be able to log in to Ops Manager using the Ops Manager URL as below.
Congratulations, you now have a Cloud Foundry installation ready to use. In another blog entry, I will walk you through deploying some basic sample apps to this foundation to wrap up the install process.
Don't forget to read my post on cleaning up PCF if you like to play with these installations for short durations. The link is below.
Quick links to the "PCF on Azure" blog series: