DevSecOps and Operationalizing Kubernetes

Containers, and specifically Kubernetes, are the buzz of the cloud technology world. That buzz has led to a tremendous rush to adopt Kubernetes, and rightfully so. Far too often, though, you end up in a state where someone can ask you, "You just installed/provisioned Kubernetes. Now what?" The challenge of operationalizing Kubernetes is not small. A lot goes into making Kubernetes useful for developers and operators alike.

Read More

Launching toolshell.dev

I am a person who likes having my tools close at hand. Especially in a troubleshooting session, having the right tool for the job makes problem solving much easier. I compare this to driving a car without noticing that your hands are turning the steering wheel, which happens when you are so used to driving that the wheel becomes an extension of your body.

Read More

Install Container Services Manager (KSM) and configure the Tanzu Application Service for K8s marketplace

A Test Drive of KSM, the Tanzu Container Services Manager

Any application journey is intertwined with the data and services it uses. When it comes to running apps seamlessly on k8s, Tanzu Application Service (TAS) leads the way, and KSM comes in as a companion to TAS, bringing in the services needed by the apps running on TAS. Follow the install instructions in the documentation here. You will need to be on the beta program for this product to get access to the docs and the bits.

Read More

The New Home

I have been dabbling with moving my web presence to a more long-term home: a Jekyll-based site on GitHub Pages. Thank you, GitHub, for this amazing service that makes it easy to blog and maintain web content.

Read More

Cleanup Azure PCF install using BBL

The power of using automation, and of treating the platform as a product, is that you can wipe out everything and be sure you can get back to the same state you were in. To test out that theory, I decided to use the bbl tool to blow the platform away, then rerun the pipeline and get back to this state. Thanks to my good friend Zack Bergquist for pointing me in this direction. bbl has a very powerful (I mean, be careful when you run it) feature that connects to your IaaS using the credentials you provide.
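The teardown itself is a single, very destructive command. A minimal sketch, run from the directory that holds your bbl-state.json:

$ bbl destroy    # asks for confirmation, then deletes the director, jumpbox, and the IaaS resources bbl created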

Read More

Installing PCF 2.X on Azure using Concourse - Part 3, Installing Pivotal Cloud Foundry using a Pipeline

In the first two parts of this series, we talked about installing bbl, bosh, and Concourse. Let's dive right into the main course: Pivotal Cloud Foundry (PCF), installed using the platform automation pipelines, also called PCF Pipelines. (Links to the individual posts are at the bottom.)

I would download pcf-pipelines from the Pivotal Network, as that is a tested and curated version.


Once you download the tar.gz file from Pivnet, extract the compressed file. Our focus will be the folder called "install-pcf/azure" on the main path.

You will need to edit the params.yml file on that path. I would suggest using my sample here as a starting point.

You will need to do some Azure setup before you can upload and kick off the pipeline. This is focused on creating the storage account and container where the Terraform state will be stored. Remember, the pipelines use Terraform to create the initial infrastructure (Ops Manager, etc.).

The best place to do this is the Azure Cloud Shell. Log in to the Azure portal and click the Cloud Shell button. Once in the Cloud Shell, you will need to:
  • Create a new resource group to hold the Terraform state
  • Create a new storage account in that resource group
  • Create a container inside that storage account
Have a look at my example in the Gist, or at the CLI sketch below. Once you are done with this, open the portal, make your way to the storage account you just created, and copy one of its access keys.
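If you prefer the CLI to the portal, here is a minimal sketch of those steps (the resource group, storage account, and container names are placeholders; pick your own):

$ az group create --name terraform-state-rg --location westus
$ az storage account create --name pcftfstate --resource-group terraform-state-rg --sku Standard_LRS
$ az storage container create --name terraform-state --account-name pcftfstate
$ az storage account keys list --resource-group terraform-state-rg --account-name pcftfstate    # copy one of these keys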


You will update these values (the storage account, the container name, and the key value) in params.yml. See the last section of my sample in the Gist.


Another prerequisite is the setup of a delegated DNS zone pointing to the subdomain where PCF will reside. For example, my domain for PCF on Azure is cf.az.clue2solve.com, with the apps domain being apps.cf.az.clue2solve.com and the sys domain being sys.cf.az.clue2solve.com. I created the new DNS zone for az.clue2solve.com and created the delegation from my primary DNS provider (which happens to be AWS Route 53). See my post on DNS delegation for details. This delegation into Azure DNS is needed because the pipelines create subdomains under the base cf domain and attach them.
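Before kicking off the pipeline, you can sanity-check the delegation from your workstation (assuming dig is installed) by confirming that the Azure name servers come back for the subdomain:

$ dig +short NS az.clue2solve.com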

A few key items in the params file to be careful about when filling it in:
  • The domain names for the Ops Manager, sys, and apps domains
  • The load-balancer name
  • The Pivnet token
  • The Terraform storage info

Once the parameters file is ready, we are going to create the pipeline on Concourse.

anandrao at Anands-MBP in ~/pivotal/repos/pcf-pipelines-023.6-pivnet/install-pcf/azure
$ fly -t az set-pipeline -p install-pcf-az -c pipeline.yml -l params.yml


Next, we will log in to the Concourse UI and unpause the pipeline.
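You can also unpause it from the command line with fly, reusing the target from the set-pipeline step above:

$ fly -t az unpause-pipeline -p install-pcf-az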

You may observe that the terraform state steps are in an error state. This is because all we did in the prep step was create the container. We now need to initialize the terraform.state file in that container. We don't have to do it manually; this is done via a job on the pipeline called "bootstrap Terraform".
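You can trigger that job from the UI, or with fly; a sketch, assuming the job shows up as "bootstrap-terraform-state" (use whatever name your pipeline displays):

$ fly -t az trigger-job -j install-pcf-az/bootstrap-terraform-state --watch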

Once this completes, you should see the terraform state steps in black instead of orange. This tells us we are ready for the next stage.

Now it's time to let it all fly through the pipeline. Click into the "create-infrastructure" step and then the plus button there to initiate it. The dotted lines between jobs indicate that they are not auto-triggered, so this step has to be triggered manually. All subsequent jobs trigger automatically when their upstream dependencies are met.

After seeing green on every stage, you should now be able to log in to the Ops Manager using the Ops Manager URL as below.


Congratulations, you now have a Cloud Foundry installation ready to use. I will walk you through deploying some basic sample apps to this foundation, wrapping up the install process, in another blog entry.

Don't forget to read my post on cleaning up PCF if you are into playing with these installations for short durations. The link is below.

Quick links to the "PCF on Azure" blog series:

Read More

Installing PCF 2.X on Azure using Concourse - Part 2, Concourse using bosh

This blog entry is Part 2 of a series. Part 1 covered the bbl and bosh installation.

When you go to the concourse-ci.org website and start looking at the multiple ways to install Concourse, you start wondering what the best practice is. After speaking to many of the amazing folks at Pivotal, I decided to follow the path of installing bosh and having bosh deploy and manage Concourse. When it comes to installing bosh, once again there are multiple ways to do it. The bosh-bootloader (bbl) project comes in very handy to get your bosh instance up and running in the quickest time.

With bbl and bosh successfully installed in Part 1 of this series, let's dive into the installation of Concourse using the bosh instance we have running.

The first step in this process is downloading (cloning from GitHub) a copy of concourse-bosh-deployment.

anandrao at Anands-MBP in ~/pivotal/repos/bbl-az/bbl
$ cd ..

anandrao at Anands-MBP in ~/pivotal/repos/bbl-az
$ mkdir concourse ; cd concourse

anandrao at Anands-MBP in ~/pivotal/repos/bbl-az/concourse
$ git clone https://github.com/concourse/concourse-bosh-deployment.git
Cloning into 'concourse-bosh-deployment'...
remote: Counting objects: 705, done.
remote: Compressing objects: 100% (79/79), done.
remote: Total 705 (delta 58), reused 58 (delta 26), pack-reused 600
Receiving objects: 100% (705/705), 121.01 KiB | 1.66 MiB/s, done.
Resolving deltas: 100% (384/384), done.
Next, we will need to find out the version of the OS/stemcell that is referenced in the concourse.yml file. On the latest version as of today, here is what I found:

stemcells:
- alias: xenial
  os: ubuntu-xenial
  version: latest

OK, ubuntu-xenial it is. Remember, it used to be trusty before. Armed with this info, let's hunt down the stemcell. The best place to find it is https://bosh.io/stemcells.
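As an aside, the bosh CLI can pull that block out of the manifest for you; a sketch, run from the cluster directory of the cloned repo:

$ bosh int concourse.yml --path /stemcells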


Using the command on that page, let's upload the stemcell to bosh.

anandrao at Anands-MBP in ~/pivotal/repos/bbl-az/concourse/concourse-bosh-deployment/cluster (master)
$ bosh upload-stemcell --sha1 1d660dcbb51a1c80914fbb8c11173e640014f6c5 \
https://bosh.io/d/stemcells/bosh-azure-hyperv-ubuntu-xenial-go_agent\?v\=97.3
Using environment 'https://10.0.0.6:25555' as client 'admin'

Task 9

Task 9 | 07:09:48 | Update stemcell: Downloading remote stemcell (00:00:15)
Task 9 | 07:10:03 | Update stemcell: Verifying remote stemcell (00:00:02)
Task 9 | 07:10:05 | Update stemcell: Extracting stemcell archive (00:00:04)
Task 9 | 07:10:10 | Update stemcell: Verifying stemcell manifest (00:00:00)
Task 9 | 07:11:20 | Update stemcell: Checking if this stemcell already exists (00:00:00)
Task 9 | 07:11:20 | Update stemcell: Uploading stemcell bosh-azure-hyperv-ubuntu-xenial-go_agent/97.3 to the cloud (00:02:54)
Task 9 | 07:14:14 | Update stemcell: Save stemcell bosh-azure-hyperv-ubuntu-xenial-go_agent/97.3 (bosh-stemcell-05475b0d-eca0-4666-839b-07b2db8a80c4) (00:00:00)

Task 9 Started Tue Jul 31 07:09:48 UTC 2018
Task 9 Finished Tue Jul 31 07:14:14 UTC 2018
Task 9 Duration 00:04:26
Task 9 done

Succeeded

OK, the stemcell is in; let's run the bosh magic to get Concourse running. One bbl step we need to do here is add a load balancer for Concourse. Remember, we did not add this in our initial infra creation. It's easy to update our bbl plan and re-apply.

Switch back to the bbl folder, run "bbl plan --lb-type concourse" as below, and then run "bbl up".


anandrao at Anands-MBP in ~/pivotal/repos/bbl-az/bbl
$ pwd
/Users/anandrao/pivotal/repos/bbl-az/bbl

anandrao at Anands-MBP in ~/pivotal/repos/bbl-az/bbl
$ ls
bbl-all-exports.PS1 bosh-deployment create-director.sh delete-director.sh jumpbox-deployment vars
bbl-state.json cloud-config create-jumpbox.sh delete-jumpbox.sh terraform

anandrao at Anands-MBP in ~/pivotal/repos/bbl-az/bbl
$ bbl plan --lb-type concourse
step: generating terraform template
step: generating terraform variables
step: terraform init

anandrao at Anands-MBP in ~/pivotal/repos/bbl-az/bbl
$ bbl up
step: terraform init
step: terraform apply
step: creating jumpbox
Deployment manifest: '/Users/anandrao/pivotal/repos/bbl-az/bbl/jumpbox-deployment/jumpbox.yml'
Deployment state: '/Users/anandrao/pivotal/repos/bbl-az/bbl/vars/jumpbox-state.json'

Started validating
Downloading release 'os-conf'... Skipped [Found in local cache] (00:00:00)
Validating release 'os-conf'... Finished (00:00:00)
Downloading release 'bosh-azure-cpi'... Skipped [Found in local cache] (00:00:00)
Validating release 'bosh-azure-cpi'... Finished (00:00:00)
Validating cpi release... Finished (00:00:00)
Validating deployment manifest... Finished (00:00:00)
Downloading stemcell... Skipped [Found in local cache] (00:00:00)
Validating stemcell... Finished (00:00:01)
Finished validating (00:00:01)
No deployment, stemcell or release changes. Skipping deploy.

Succeeded
step: created jumpbox
step: creating bosh director
Deployment manifest: '/Users/anandrao/pivotal/repos/bbl-az/bbl/bosh-deployment/bosh.yml'
Deployment state: '/Users/anandrao/pivotal/repos/bbl-az/bbl/vars/bosh-state.json'

Started validating
Downloading release 'bosh'... Skipped [Found in local cache] (00:00:00)
Validating release 'bosh'... Finished (00:00:00)
Downloading release 'bpm'... Skipped [Found in local cache] (00:00:00)
Validating release 'bpm'... Finished (00:00:00)
Downloading release 'bosh-azure-cpi'... Skipped [Found in local cache] (00:00:00)
Validating release 'bosh-azure-cpi'... Finished (00:00:00)
Downloading release 'os-conf'... Skipped [Found in local cache] (00:00:00)
Validating release 'os-conf'... Finished (00:00:00)
Downloading release 'uaa'... Skipped [Found in local cache] (00:00:00)
Validating release 'uaa'... Finished (00:00:00)
Downloading release 'credhub'... Skipped [Found in local cache] (00:00:00)
Validating release 'credhub'... Finished (00:00:00)
Validating cpi release... Finished (00:00:00)
Validating deployment manifest... Finished (00:00:00)
Downloading stemcell... Skipped [Found in local cache] (00:00:00)
Validating stemcell... Finished (00:00:01)
Finished validating (00:00:04)
No deployment, stemcell or release changes. Skipping deploy.

Succeeded
step: created bosh director
step: generating cloud config
step: applying cloud config

The load balancer is ready to go. You can check this by running "bbl lbs".
Next, we will switch back to the concourse folder and execute the command to deploy Concourse. Make sure to be in the right folder when you run this command, as the command contains relative file references.


anandrao at Anands-MBP in ~/pivotal/repos/bbl-az/concourse
$ cd concourse-bosh-deployment/cluster

anandrao at Anands-MBP in ~/pivotal/repos/bbl-az/concourse/concourse-bosh-deployment/cluster (master)
$ pwd
/Users/anandrao/pivotal/repos/bbl-az/concourse/concourse-bosh-deployment/cluster

Let's look at the bosh command that is used to deploy Concourse as a bosh deployment.

bosh deploy -d concourse concourse.yml \
  -l ../versions.yml \
  --vars-store cluster-creds.yml \
  -o operations/basic-auth.yml \
  -o operations/privileged-http.yml \
  -o operations/privileged-https.yml \
  -o operations/tls.yml \
  -o operations/tls-vars.yml \
  -o operations/web-network-extension.yml \
  --var network_name=default \
  --var external_url='https://wings.az.clue2solve.com' \
  --var external_host='wings.az.clue2solve.com' \
  --var web_vm_type=default \
  --var db_vm_type=default \
  --var db_persistent_disk_type=10GB \
  --var worker_vm_type=default \
  --var deployment_name=concourse \
  --var web_network_name=private \
  --var local_user.username=admin \
  --var local_user.password='$2a$10$CjHFkbuSssxtqtBfqLZ.MuwA0BP589fUd4rXNIM4HwIQ4Dw00NoRq' \
  --var web_network_vm_extension=lb


A few things we need to be careful about here:

  • The external_url should be a valid domain, and make sure to include the https:// in this value
  • The external_host should be a common name (a valid DNS name) without the https://
  • The local_user.password should be a bcrypt hash of the password, and it must be hashed with at least 10 rounds. I used the website https://www.browserling.com/tools/bcrypt and used the generated hash (make sure you select 10 rounds there) as the password; a CLI alternative is sketched below.
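If you would rather not paste a password into a website, here is a sketch using Apache's htpasswd utility, assuming it is installed (-B selects bcrypt, -C 10 sets the rounds, and tr strips the leading colon and trailing newline from the empty-username output):

$ htpasswd -bnBC 10 "" 'your-password-here' | tr -d ':\n'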

The Gist here shows the command execution and the output.

Hurray, we successfully deployed Concourse. Let's get the DNS entry for wings (Concourse) updated with the load balancer IP. You can look it up with:

anandrao at Anands-MBP in ~/pivotal/repos/bbl-az/bbl
$ bbl lbs
Concourse LB: bbl-env-ontario-2018-07-31t05-15z-concourse-lb (20.189.131.186)

Update the A record in your DNS zone with that IP (create one if it's not already there).

And enjoy Concourse!


You can create a test pipeline and execute it. I suggest starting with https://concoursetutorial.com; it is an awesome tutorial for getting into Concourse.
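If you just want a quick smoke test first, here is a minimal sketch of a hello-world pipeline. Save it as hello.yml; the busybox image and the pipeline and job names are arbitrary choices:

jobs:
- name: hello-world
  plan:
  - task: say-hello
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: busybox}
      run:
        path: echo
        args: ["Hello from Concourse!"]

Set, unpause, and trigger it with fly, reusing the target we logged in with:

$ fly -t az set-pipeline -p hello-world -c hello.yml
$ fly -t az unpause-pipeline -p hello-world
$ fly -t az trigger-job -j hello-world/hello-world --watch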

A sample screen with successful Concourse pipelines is below.

In the next part (Part 3) of this series, I will cover the PCF installation using the Concourse instance that we just installed.


Quick links to the "PCF on Azure" blog series:

Read More

Domain Name System (DNS) Delegation with Azure DNS Zones


In a lot of our cloud-related journeys, managing DNS and subdomains for specific needs is a common challenge. Add to that the need to cross-host DNS for subdomains across multiple IaaS providers, and it gets a little challenging for starters.

In this post, I will cover my example of hosting my primary DNS on AWS, creating a DNS zone for a subdomain in Azure, and pointing a single A record on the subdomain to my Concourse instance running on Azure (actually, to the load balancer in front of Concourse). Typically, your primary name servers will reside with the domain registrar you used for your registration; a few examples are godaddy.com, domains.google.com, AWS Route 53, and more. It is quite possible to create zone-based delegation on the service where you registered. But when you use the automation created as part of PCF-Pipelines, some zones (sub-zones) are created by the pipelines on the DNS service of the IaaS you are installing on. This has been the primary driver for me to write this post explaining cross-service DNS delegation.

  1. Let's create a new DNS zone in Azure for the subdomain az.clue2solve.com.
  2. Here, you can see that I created not only a zone, but also an A record pointing to my load balancer for Concourse. We should also copy the Value field of the NS record.
  3. Now I log in to my AWS Route 53 console, where I registered my primary DNS. You could do this with any of many domain name registrars; all of them will let you do what I explain in this step.
  4. Log in to the AWS console and make your way to Route 53.
      Click on the Zones link to get to the zone listing.
      In the listing, click on the record for the primary domain.
      Now it's time to link the zone created in Azure to the primary domain. We do this by adding an NS record on the primary zone and pointing the name servers to the list we copied from the Azure DNS zone.
      Here we go; we have now successfully delegated our DNS for a subdomain.

      As a test of this, I have created an entry in my subdomain in Azure DNS called an A record. This is a pointer to a specific IP address for a given domain name. In this case, I want all requests for wings.az.clue2solve.com to go to the IP address of the load balancer in front of the Concourse cluster we created per this blog post.
      You can extract the IP address of the load balancer with the sample command below in the bbl folder and add the A record to the zone (a CLI sketch follows the command).

      anandrao at Anands-MBP in ~/pivotal/repos/bbl-az/bbl
      $ bbl lbs
      Concourse LB: bbl-env-ontario-2018-07-31t05-15z-concourse-lb (20.189.131.186)
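      A sketch of adding that A record with the Azure CLI (the resource group name here is a placeholder for whichever group holds your DNS zone):

      $ az network dns record-set a add-record \
          --resource-group my-dns-rg \
          --zone-name az.clue2solve.com \
          --record-set-name wings \
          --ipv4-address 20.189.131.186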


      Open a browser and confirm that the link for wings (Concourse on Azure) works.

      Quick links to the "PCF on Azure" blog series:

    Read More

    Installing PCF 2.X on Azure using Concourse - Part 1, Bosh Boot Loader and bosh

    PCF on Azure with Platform Automation using PCF-Pipelines - PART 1

    That's a loaded set of words. Many of our customers at Pivotal have started looking at data center augmentation, especially for burst needs, using a public IaaS provider like AWS, GCP, or Azure. In this edition of my blog, I want to go over the process of installing a full Pivotal Cloud Foundry foundation. Even though we at Pivotal provide detailed instructions for a manual install of the foundation, automation has always been a big need.

    Concourse and Platform Automation
    Concourse is the CI tool developed at Pivotal for all our automation needs during Cloud Foundry development. With the mindset of treating the "platform as a product", and also eating our own dog food (wait, should I say "drinking our own champagne"?), we have a new product called PCF-Pipelines. It is an opinionated set of pipelines that help automate the whole process of installing, upgrading, and maintaining all the tiles on the platform.

    About the Trail 
    Having gone through the install journey as I prepare to work with multiple customers, I thought I would throw out the breadcrumbs to help document the trail! In Part 1 of this multipart blog, I will focus on installing bosh via bosh-bootloader. In subsequent posts I will come back and talk about installing and configuring Concourse, with a couple of simple examples to see it working, and then the PCF installation using the pipelines.

    Let's install a copy of bosh that will control Concourse, plus a jumpbox to access the installed components.

    The bbl docs explain how to do this for AWS, Azure, and GCP. I have picked out the info needed for Azure below.
    1. Install the bosh CLI and bbl (the boot loader):

      $ brew tap cloudfoundry/tap
      $ brew install bosh-cli
      $ brew install bbl

    2. Set up the environment variables via a .PS1 file as below, to allow easy reuse (change the values that say "CHANGEME" to real values based on your IaaS info).

      anandrao at Anands-MBP in ~/pivotal/repos/bbl-az
      $ cat bbl-az-variables.PS1
      export BBL_IAAS=azure
      export BBL_AZURE_CLIENT_ID="CHANGEME"
      export BBL_AZURE_CLIENT_SECRET="CHANGEME"
      export BBL_AZURE_REGION=westus
      export BBL_AZURE_SUBSCRIPTION_ID="CHANGEME"
      export BBL_AZURE_TENANT_ID="CHANGEME"

    3. Execute the commands below to set up your bosh director along with a jumpbox and, yes, a load balancer as well; bbl does it all in one command. (I would create a folder and run bbl inside it, so that all the files bbl creates are clearly in one distinct place.)


      anandrao at Anands-MBP in ~/pivotal/repos/bbl-az
      $ mkdir bbl ; cd bbl

      anandrao at Anands-MBP in ~/pivotal/repos/bbl-az/bbl
      $ bbl plan
      step: generating terraform template
      step: generating terraform variables
      step: terraform init



      anandrao at Anands-MBP in ~/pivotal/repos/bbl-az/bbl
      $ bbl up
      step: terraform init
      step: terraform apply
      step: creating jumpbox
      Deployment manifest: '/Users/anandrao/pivotal/repos/bbl-az/bbl/jumpbox-deployment/jumpbox.yml'
      Deployment state: '/Users/anandrao/pivotal/repos/bbl-az/bbl/vars/jumpbox-state.json'

      Started validating
      Downloading release 'os-conf'... Skipped [Found in local cache] (00:00:00)
      Validating release 'os-conf'... Finished (00:00:00)
      Downloading release 'bosh-azure-cpi'... Skipped [Found in local cache] (00:00:00)
      Validating release 'bosh-azure-cpi'... Finished (00:00:00)
      Validating cpi release... Finished (00:00:00)
      Validating deployment manifest... Finished (00:00:00)
      Downloading stemcell... Skipped [Found in local cache] (00:00:00)
      Validating stemcell... Finished (00:00:01)
      Finished validating (00:00:01)

      Started installing CPI
      Compiling package 'ruby-2.4-r3/8471dec5da9ecc321686b8990a5ad2cc84529254'... Finished (00:02:33)
      Compiling package 'bosh_azure_cpi/ceb90b730e4e350787d1be2b81bb97b433549f3f'... Finished (00:01:01)
      Installing packages... Finished (00:00:00)
      Rendering job templates... Finished (00:00:00)
      Installing job 'azure_cpi'... Finished (00:00:00)
      Finished installing CPI (00:03:36)

      Starting registry... Finished (00:00:00)
      Uploading stemcell 'bosh-azure-hyperv-ubuntu-trusty-go_agent/3468.17'... Finished (00:01:42)

      Started deploying
      Creating VM for instance 'jumpbox/0' from stemcell 'bosh-stemcell-b6c82a4f-e14f-4ef9-91f6-f21339fc0b15'... Finished (00:01:36)
      Waiting for the agent on VM 'agent_id:22df9d2e-e05d-4ae6-5f5f-622d66debf6d;resource_group_name:bbl-env-ontario-2018-07-31t05-15z-bosh;storage_account_name:bblenvontario2018073jfde' to be ready... Finished (00:00:25)
      Rendering job templates... Finished (00:00:00)
      Updating instance 'jumpbox/0'... Finished (00:00:12)
      Waiting for instance 'jumpbox/0' to be running... Finished (00:00:00)
      Running the post-start scripts 'jumpbox/0'... Finished (00:00:00)
      Finished deploying (00:02:21)

      Stopping registry... Finished (00:00:00)
      Cleaning up rendered CPI jobs... Finished (00:00:00)

      Succeeded
      step: created jumpbox
      step: creating bosh director
      Deployment manifest: '/Users/anandrao/pivotal/repos/bbl-az/bbl/bosh-deployment/bosh.yml'
      Deployment state: '/Users/anandrao/pivotal/repos/bbl-az/bbl/vars/bosh-state.json'

      Started validating
      Downloading release 'bosh'... Finished (00:00:08)
      Validating release 'bosh'... Finished (00:00:00)
      Downloading release 'bpm'... Finished (00:00:11)
      Validating release 'bpm'... Finished (00:00:00)
      Downloading release 'bosh-azure-cpi'... Finished (00:00:05)
      Validating release 'bosh-azure-cpi'... Finished (00:00:00)
      Downloading release 'os-conf'... Skipped [Found in local cache] (00:00:00)
      Validating release 'os-conf'... Finished (00:00:00)
      Downloading release 'uaa'... Finished (00:01:16)
      Validating release 'uaa'... Finished (00:00:00)
      Downloading release 'credhub'... Finished (00:00:15)
      Validating release 'credhub'... Finished (00:00:00)
      Validating cpi release... Finished (00:00:00)
      Validating deployment manifest... Finished (00:00:00)
      Downloading stemcell... Finished (00:00:47)
      Validating stemcell... Finished (00:00:01)
      Finished validating (00:02:52)

      Started installing CPI
      Compiling package 'ruby-2.4-r3/8471dec5da9ecc321686b8990a5ad2cc84529254'... Finished (00:02:30)
      Compiling package 'bosh_azure_cpi/e83f4474b88d5f34304ce99a0e1cead2e2ae3627'... Finished (00:01:03)
      Installing packages... Finished (00:00:00)
      Rendering job templates... Finished (00:00:00)
      Installing job 'azure_cpi'... Finished (00:00:00)
      Finished installing CPI (00:03:35)

      Starting registry... Finished (00:00:00)
      Uploading stemcell 'bosh-azure-hyperv-ubuntu-trusty-go_agent/3586.24'... Finished (00:01:36)

      Started deploying
      Creating VM for instance 'bosh/0' from stemcell 'bosh-stemcell-c37f79e2-8cbc-459f-b2c7-8f099707ef8d'... Finished (00:02:38)
      Waiting for the agent on VM 'agent_id:7d2deaab-d7db-4adf-711d-f05f3adbfff2;resource_group_name:bbl-env-ontario-2018-07-31t05-15z-bosh;storage_account_name:bblenvontario2018073jfde' to be ready... Finished (00:00:24)
      Creating disk... Finished (00:00:36)
      Attaching disk 'caching:None;disk_name:bosh-data-9d98f6ea-399c-41b1-888c-86439cf86100;storage_account_name:bblenvontario2018073jfde' to VM 'agent_id:7d2deaab-d7db-4adf-711d-f05f3adbfff2;resource_group_name:bbl-env-ontario-2018-07-31t05-15z-bosh;storage_account_name:bblenvontario2018073jfde'... Finished (00:01:06)
      Rendering job templates... Finished (00:00:11)
      Compiling package 'ruby-2.4-r4/0cdc60ed7fdb326e605479e9275346200af30a25'... Skipped [Package already compiled] (00:00:00)
      Compiling package 'mysql/898f50dde093c366a644964ccb308a5281c226de'... Skipped [Package already compiled] (00:00:00)
      Compiling package 'libpq/e2414662250d0498c194c688679661e09ffaa66e'... Skipped [Package already compiled] (00:00:00)
      Compiling package 'bpm-runc/c0b41921c5063378870a7c8867c6dc1aa84e7d85'... Skipped [Package already compiled] (00:00:00)
      Compiling package 'golang/e3ca1c9440c29ad576d633e9ef6a2f7805a5e8b7'... Skipped [Package already compiled] (00:00:07)
      Compiling package 'ruby-2.4-r3/8471dec5da9ecc321686b8990a5ad2cc84529254'... Finished (00:02:12)
      Compiling package 'openjdk_1.8.0/c8846344bf802835ce8b1229de8fa2028d06f603'... Skipped [Package already compiled] (00:00:02)
      Compiling package 'golang-1.9-linux/8d6c67abda8684ce454f0bc74050a213456573ff'... Skipped [Package already compiled] (00:00:06)
      Compiling package 'gonats/73ec55f11c24dd7c02288cdffa24446023678cc2'... Skipped [Package already compiled] (00:00:00)
      Compiling package 's3cli/3097f27cb9356172c9ae52de945821c4e338c87a'... Skipped [Package already compiled] (00:00:00)
      Compiling package 'uaa/cdb6217bd1b700002b9746c0b069d79480edb192'... Skipped [Package already compiled] (00:00:09)
      Compiling package 'verify_multidigest/8fc5d654cebad7725c34bb08b3f60b912db7094a'... Skipped [Package already compiled] (00:00:00)
      Compiling package 'registry/a6daac4743749c70c2ae15e58170adb6b41a3a76'... Skipped [Package already compiled] (00:00:01)
      Compiling package 'health_monitor/251915bca2d42f06f4bbb1f5395afd1ae73cf681'... Skipped [Package already compiled] (00:00:00)
      Compiling package 'director/db07ae48ea2963a9cdec4938a9522f85f718e672'... Skipped [Package already compiled] (00:00:01)
      Compiling package 'postgres-9.4/52b3a31d7b0282d342aa7a0d62d8b419358c6b6b'... Skipped [Package already compiled] (00:00:00)
      Compiling package 'bosh-gcscli/fce60f2d82653ea7e08c768f077c9c4a738d0c39'... Skipped [Package already compiled] (00:00:00)
      Compiling package 'bpm/3fe49cfa0140be3ebd8da4bdcadfa6b84d847e87'... Skipped [Package already compiled] (00:00:00)
      Compiling package 'lunaclient/b922e045db5246ec742f0c4d1496844942d6167a'... Skipped [Package already compiled] (00:00:00)
      Compiling package 'bosh_azure_cpi/e83f4474b88d5f34304ce99a0e1cead2e2ae3627'... Finished (00:00:46)
      Compiling package 'uaa_utils/90097ea98715a560867052a2ff0916ec3460aabb'... Skipped [Package already compiled] (00:00:00)
      Compiling package 'nginx/d9f726bf0c5a38bad988e40cefb084c821e333cf'... Skipped [Package already compiled] (00:00:00)
      Compiling package 'credhub/e3d60a289d5fd414e29ee06e7e5f1a6b3802c792'... Skipped [Package already compiled] (00:00:04)
      Compiling package 'configurator/0d632a3a9b06f3777bea07d61807ca06ece24dee'... Skipped [Package already compiled] (00:00:00)
      Compiling package 'davcli/f8a86e0b88dd22cb03dec04e42bdca86b07f79c3'... Skipped [Package already compiled] (00:00:00)
      Updating instance 'bosh/0'... Finished (00:01:48)
      Waiting for instance 'bosh/0' to be running... Finished (00:02:09)
      Running the post-start scripts 'bosh/0'... Finished (00:00:20)
      Finished deploying (00:13:02)

      Stopping registry... Finished (00:00:00)
      Cleaning up rendered CPI jobs... Finished (00:00:00)

      Succeeded
      step: created bosh director
      step: generating cloud config
      step: applying cloud config


      Very cool, you just installed bosh and a jumpbox, built with all the security and controls.
    4. Let's inspect what we just installed and verify it. bbl has a few options that help with verification:

      Environmental Detail Commands: Useful for automation and gaining access
        jumpbox-address      Prints BOSH jumpbox address
        director-address     Prints BOSH director address
        director-username    Prints BOSH director username
        director-password    Prints BOSH director password
        director-ca-cert     Prints BOSH director CA certificate
        env-id               Prints environment ID
        ssh-key              Prints jumpbox SSH private key
        director-ssh-key     Prints director SSH private key
        lbs                  Prints load balancer(s) and DNS records
        outputs              Prints the outputs from terraform
        ssh                  Opens an SSH connection to the director or jumpbox

      anandrao at Anands-MBP in ~/pivotal/repos/bbl-az/bbl
      $ bbl director-address
      https://10.0.0.6:25555

      anandrao at Anands-MBP in ~/pivotal/repos/bbl-az/bbl
      $ bbl director-username
      admin


      Azure Portal View


    5. Let's run some bosh commands and check the environment. The first thing I do after the install is extract all the export commands provided by bbl's "print-env" option. This lets you use bosh and the other installed components (bosh, CredHub, and the jumpbox). I would inspect the output of "bbl print-env".

      anandrao at Anands-MBP in ~/pivotal/repos/bbl-az/bbl
      $ bbl print-env > bbl-all-exports.PS1

      anandrao at Anands-MBP in ~/pivotal/repos/bbl-az/bbl
      $ source bbl-all-exports.PS1

      anandrao at Anands-MBP in ~/pivotal/repos/bbl-az/bbl
      $ bosh vms
      Using environment 'https://10.0.0.6:25555' as client 'admin'

      Succeeded

      anandrao at Anands-MBP in ~/pivotal/repos/bbl-az/bbl
      $ bosh environment
      Using environment 'https://10.0.0.6:25555' as client 'admin'

      Name bosh-bbl-env-ontario-2018-07-31t05-15z
      UUID a03916c8-01f1-4cc2-abe5-93a546e86b79
      Version 266.4.0 (00000000)
      CPI azure_cpi
      Features compiled_package_cache: disabled
      config_server: enabled
      dns: disabled
      snapshots: disabled
      User admin

      Succeeded

    6. I would inspect some of the folders created under the bbl folder. You will see the bosh manifests created for the VMs that were created as part of the bbl-based install.

    Now that we have bosh up and running, we can try out all of our typical bosh commands. Please refer to the bosh CLI docs for this.
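    For instance, a few read-only commands to poke around with, using the environment variables exported above:

      $ bosh deployments    # empty for now; Concourse will appear here in Part 2
      $ bosh stemcells      # stemcells uploaded to the director
      $ bosh releases       # releases uploaded to the director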


    Read More

    The "all in" Kitchen Sink ...or Not !

    The Art of Org and Space Management in PCF.



    We have all seen this in our kitchen 


    A quick introduction: Pivotal Cloud Foundry provides a collaboration model with isolated workspaces for application deployment and management. The virtual allocation of workspaces follows a hierarchy of organizations (orgs) with spaces inside them.

    The org (organization) is governed by role-based access control (RBAC), which operators use to manage quotas for each org and to control which actions users within the org can perform.

    A space is one of many isolated workspaces created within an org by the org's users (collaborators) to deploy and manage applications. By default, the applications deployed in a space are isolated, and so are the services created within it. This means that, by default, services created inside a space can only be bound to apps inside the same space. This is true even for instances of Spring Cloud Services, unless you have adopted Spring Cloud Services 1.2. SCS 1.2 comes with multi-site service discovery, which can be applied across different orgs on the same PCF foundation as well as across foundations.
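    To make the default concrete, here is a sketch with the cf CLI (the org, space, service offering, plan, and app names are all illustrative):

      $ cf target -o my-org -s team-space
      $ cf create-service p-mysql 100mb orders-db   # the service instance lives in team-space
      $ cf bind-service orders-app orders-db        # works: app and service share a space
      # an app in a different space cannot bind orders-db by default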

    Now, let's look at this from a different perspective. Folks have followed the rule where a particular team uses a single space for all its applications. This is where the kitchen sink analogy comes into the picture. Having potentially hundreds of apps in a single space becomes unmanageable. Even though this has no direct impact on app placement across AZs or the use of affinity rules, the clutter of too many criss-crossing apps that call one another, all deployed within a single space, is not a pretty picture.

    This is where I want to introduce the concept of Apps in a Box. Think about a set of microservices (apps) that form a logical application by interacting with one another and with the services they potentially share. This, of course, requires them to be able to discover each other's instances.

    Based on my experience, and on talking to multiple folks who have been there, I think the best logical organization of orgs and spaces is as follows:

    • Real organizations of teams that work on a platform (a group of common applications) should be grouped into orgs.
      • The collaborators in those orgs should have access to see (read) all the spaces, to help with cross-team collaboration.
    • Spaces make perfect sense for a single App in a Box (should we say App in a Space?).
      • On the development foundations (I have seen customers use orgs for environment separation, like Dev vs. QA vs. Stage), you would give Space Developer and Space Manager access to the developers who are actively working on the apps that form a logical group.
      • In the spirit of being totally cloud native, we should always insist on using a discovery process to find the microservices needed as upstream services.
      • Non-development environments (see how I don't talk about prod and non-prod here) should always get their app deployments via pipelines, triggered by code commits or other triggers, and should always use a Cloud Foundry user ID created specifically for use by the pipelines.
      • Developers should not be given access to the orgs and spaces running non-development workloads.
      • Tests should be automated and executed via pipelines.
      • Multiple data flavors could be ingested for running different sets of focused tests.
      • With autoscaling in place, we should execute our automated performance tests as part of the same pipeline as well.
    With the above way of organizing, you get a clear logical grouping. It also means that the same collaborators could be part of the development lifecycle of multiple Apps in a Box, and thus have a role in multiple spaces. But from a security and control perspective, apps are built and deployed along application lines rather than along the lines of the team that builds them.
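    A sketch of wiring up such a structure with the cf CLI (the org, space, and user names are illustrative):

      $ cf create-org checkout-platform
      $ cf create-space checkout-api -o checkout-platform
      $ cf set-space-role jane@example.com checkout-platform checkout-api SpaceDeveloper
      $ cf set-space-role sam@example.com checkout-platform checkout-api SpaceAuditor   # read-only visibility for a cross-team collaborator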

    Another major advantage of the pipelines approach is that automating all kinds of tests, whether functional and unit tests or the performance tests that trigger huge but short-term application scaling, lets you treat all the infra used for the pipeline execution as ephemeral. Once the tests are completed, you can and should run a
     cf delete
    on all the apps that form that box (a sketch follows below). This approach has helped reduce substantial infra needs at customers, helped promote hands-off pipelines, and eliminated the "works on my computer" syndrome. You leave behind a clean space and a well-managed org on Cloud Foundry. The operators will be pleased that their cleanup scripts will not find forgotten, left-behind apps and services.
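    A sketch of such an end-of-pipeline cleanup step (the app and service names are illustrative; the pipeline's CF user is assumed to already be targeted at the right org and space):

      $ cf delete -f orders-app
      $ cf delete -f orders-worker
      $ cf delete-service -f orders-db   # -f skips the interactive confirmation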

    This should leave your kitchen sink looking like this :) 

    Who would not want this ??

    Read More

    The Trail .....

    Reviving my blogging habits. This time around, I find compelling motivation to journal my path through the amazing technology journey I have embarked on: one that helps businesses change the way they build software and adopt the ever-changing landscape of the cloud.

    While working as an Enterprise Architect at Cengage Learning, I got an amazing opportunity to get involved in what I call a movement: Pivotal. The wide spectrum of Pivotal's offerings, all the way from the Spring Framework to Pivotal Cloud Foundry, where Spring apps run best, along with Pivotal's products and amazing people, set me on a journey to help companies change the way they build software. After setting the Cengage ship sailing on its cloud-native journey to adopt PCF, I took a position with Pivotal's Platform Architecture team on the west coast, based out of the SF Bay Area.

    The technology challenges and solutions being worked on at multiple high-quality, amazing companies give me a totally new perspective on how I want to help companies and businesses change the way they build, deploy, and manage software. #disrupt #innovate #learn #iterate

    I am planning to use this Clue Trail as a way to chronicle my trail through tech exploration as I find clues to solve simple and complex technology problems.

    Read More

    subscribe via RSS