Deploying Identity and Access Management (IAM) Infrastructure in the Cloud - PART 4: DEPLOYMENT


Standing Up Your Cloud Infrastructure

In the previous installment of this four-part series on “Building IAM Infrastructure in the Public Cloud,” we discussed the development stage, where we created the objects and automation used to implement the cloud infrastructure design.

In Part 4, we apply all of the hard work we’ve done on research, design, and development to deploy the cloud infrastructure on AWS. Before moving forward, you might want to refer back to the previous installments to review the process.

Part 1 - PLANNING

Part 2 - DESIGN

Part 3 - TOOLS, RESOURCES and AUTOMATION

Prerequisites

It is assumed at this point that:

  • You have an active AWS account

  • You have an IAM account in AWS with administrative privileges

  • You have confirmed that your AWS resource limits in the appropriate region(s) are sufficient to deploy all resources in your design

  • Your deployment artifacts, such as CloudFormation templates, shell scripts, and customized config files, have been staged in an S3 bucket in your account, a code repository, etc.

  • Any resources that your CloudFormation template depends on are in place before execution. For example, EC2 instance profiles, key pairs, AMIs, etc. have been created or are otherwise available.

  • You have a list of parameter values ready, e.g., VPC name, VPC CIDR block, and the CIDR block(s) of external network(s) for your routing tables (essentially anything that your CloudFormation template expects as input).

  • You have been incrementally testing your CloudFormation templates and other deployment artifacts as you’ve been creating them, are satisfied they are error-free, and are confident they will achieve the desired results (a few sample pre-flight checks follow this list)
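
As a final pre-flight check, some of this can be confirmed directly from the AWS CLI. The commands below are a minimal sketch; the bucket, template, and key pair names are hypothetical and should be replaced with your own.

    # Confirm the deployment artifacts are staged where the template expects them
    aws s3 ls s3://my-iam-artifacts/ --recursive

    # Validate the CloudFormation template syntax before attempting a deployment
    aws cloudformation validate-template \
      --template-url https://my-iam-artifacts.s3.amazonaws.com/vpc-template.yaml

    # Confirm the EC2 key pair referenced by the template already exists in the region
    aws ec2 describe-key-pairs --key-names iam-dev-keypair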

Goals

Goals at this stage:

  1. Deploy the cloud infrastructure

  2. Validate the cloud network infrastructure

  3. Deploy EKS

  4. Integrate the cloud infrastructure with the on-prem network environment

  5. Configure DNS

  6. Prepare for the ForgeRock implementation to commence


1 - Deploy the cloud infrastructure

A new CloudFormation stack can be created from the web interface. From this interface, you can specify the location of your template and enter parameter values. The parameter value fields, descriptions, and constraints will vary between implementations and are defined in your template.
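
If you prefer to script this step rather than use the console, a roughly equivalent AWS CLI call might look like the sketch below; the stack name, template location, and parameter keys are hypothetical and must match what your own template defines.

    # --capabilities is only needed if the template creates IAM resources
    aws cloudformation create-stack \
      --stack-name iam-dev-vpc \
      --template-url https://my-iam-artifacts.s3.amazonaws.com/vpc-template.yaml \
      --parameters ParameterKey=VpcName,ParameterValue=iam-dev \
                   ParameterKey=VpcCidr,ParameterValue=10.20.0.0/16 \
      --capabilities CAPABILITY_NAMED_IAM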

As the stack is building, you can monitor the resources being created in real time. The events list can be quite lengthy, especially if you have created a template that supports multiple environments. That said, the template deployment can complete in a matter of minutes, and it is successful if the final entry with the stack name listed in the Logical ID field returns a status of ‘CREATE_COMPLETE’.
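
The same monitoring can be done from the CLI, which is convenient if the deployment is wrapped in a script (stack name hypothetical):

    # Block until the stack reaches CREATE_COMPLETE (the command fails if it does not)
    aws cloudformation wait stack-create-complete --stack-name iam-dev-vpc

    # Review the event stream to see each resource as it was created
    aws cloudformation describe-stack-events --stack-name iam-dev-vpc \
      --query 'StackEvents[].[Timestamp,LogicalResourceId,ResourceStatus]' --output table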

Any other status indicates failure. Depending on the options selected when you deployed the stack, CloudFormation may automatically roll back the resources that were created or leave them in place to help you debug your template.



After all the work it took to get to this point, it might come as a surprise how quickly and easily this part of your deployment can be completed. By the same token, it can be taken down just as quickly and easily, potentially by accident, if proper access controls and procedures are not in place.

At the very least, we recommend that “Termination Protection” on the deployed stack be set to “enabled”. While it will not prevent someone with full access to the CloudFormation service from intentionally deleting the stack, it does add the extra step of having to disable termination protection during the deletion process, and in some cases that could be enough to avert disaster.
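
If the stack was created without this setting, it can be enabled after the fact from the console or with a single CLI call (stack name hypothetical):

    aws cloudformation update-termination-protection \
      --stack-name iam-dev-vpc \
      --enable-termination-protection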


2 - Validate the cloud network infrastructure

The list of items to check may change depending on your design, but in most cases, you should go through your deployment and validate the following (a few sample CLI spot checks follow the list):

  • Naming conventions and other tags / values on all resources

  • Internet gateway

  • NAT gateways

  • VPC CIDR, subnets, availability zones, routing tables, security groups and ACLs

  • Jump box instances (and your ability to SSH to them, starting with a public facing jump box if you’ve created one)

  • Tests of outbound internet connectivity from both public and private subnets

  • EC2 instances such as EKS console instances (discussed shortly), and instances that will be used to deploy DS components that are not containerized

  • If applicable, the AWS connection objects to the on-prem network, like VPN gateways, customer gateways and VPN connections
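
Many of these checks can be scripted. The sketch below assumes a hypothetical VPC whose resources are tagged with the prefix iam-dev and simply illustrates the kind of spot checks we mean:

    # Confirm the VPC exists with the expected CIDR and Name tag
    aws ec2 describe-vpcs --filters Name=tag:Name,Values=iam-dev \
      --query 'Vpcs[].[VpcId,CidrBlock]' --output table

    # Confirm the subnets and the availability zones they landed in
    aws ec2 describe-subnets --filters "Name=tag:Name,Values=iam-dev-*" \
      --query 'Subnets[].[SubnetId,CidrBlock,AvailabilityZone]' --output table

    # Confirm the NAT gateways are available
    aws ec2 describe-nat-gateways \
      --query 'NatGateways[].[NatGatewayId,State,SubnetId]' --output table

    # From a jump box in a private subnet, confirm outbound internet access via the NAT gateway
    curl -sI https://aws.amazon.com | head -n 1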

3 - Deploy EKS

One approach we’ve used to deploy EKS clusters is to create an instance during the CloudFormation deployment of the VPC that has tools like kubectl installed, and to seed it with scripts and configuration files that are VPC-specific. We call this an “EKS console”. During the VPC deployment, parameters are passed to a config file on the instance via the user data script on the EC2 object, which specifies deployment settings for the associated EKS cluster.
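
To illustrate that hand-off, the user data fragment that seeds the config file might look something like the sketch below; the file path, variable names, and IDs are all hypothetical, not a prescribed layout.

    #!/bin/bash
    # Rendered into the instance's user data by CloudFormation;
    # the IDs come from template parameters and outputs
    mkdir -p /opt/eks-console
    cat > /opt/eks-console/cluster.env <<'EOF'
    CLUSTER_NAME=iam-dev-eks
    PRIVATE_SUBNET_IDS=subnet-0aaa1111,subnet-0bbb2222
    CONTROL_PLANE_SG=sg-0ccc3333
    EKS_SERVICE_ROLE_ARN=arn:aws:iam::123456789012:role/iam-dev-eks-service-role
    EOF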

After the VPC is deployed, the EKS console instance serves the following purposes:

  1. You can launch a shell script from it that executes AWS CLI commands, deploying the cluster to the correct subnets, etc.

  2. You can manage the cluster from this instance as well as launch application deployments, services, ingress controllers, secrets, etc.

If you’ve taken a similar approach, you can now launch the EKS deployment script. When it completes, you can test out connecting to and managing the cluster.
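
For reference, the core of such a deployment script usually boils down to a few AWS CLI calls along these lines; everything below is a hedged sketch that reuses the hypothetical file and variable names from the previous step.

    # Load the VPC-specific settings written by the CloudFormation user data
    source /opt/eks-console/cluster.env

    # Create the EKS control plane in the private subnets
    aws eks create-cluster \
      --name "$CLUSTER_NAME" \
      --role-arn "$EKS_SERVICE_ROLE_ARN" \
      --resources-vpc-config subnetIds=$PRIVATE_SUBNET_IDS,securityGroupIds=$CONTROL_PLANE_SG

    # Wait for the control plane to become ACTIVE, then point kubectl at it
    aws eks wait cluster-active --name "$CLUSTER_NAME"
    aws eks update-kubeconfig --name "$CLUSTER_NAME"
    kubectl get svc

    # Worker node groups (and their IAM roles) would be created in a similar fashion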


4 - Integrate the cloud infrastructure with the on-prem network environment

If your use case includes connectivity to an on-prem network, you have likely already created, or will shortly create, the AWS objects necessary to establish the connection.

If, for example, you are creating a site-to-site VPN connection to a supported internet-facing endpoint on the on-prem network, you can now generate a downloadable config file from within the AWS VPC console. This file contains critical device-specific parameters, in addition to encryption settings, endpoint IPs on the AWS side, and pre-shared keys. It should be shared with your networking team in a secure fashion.

While much of the configuration in the file is pre-populated, it is incumbent on the networking team to make any necessary modifications before applying it to the on-prem VPN gateway. This ensures that the configuration meets your use case and does not conflict with other settings already present on the device.

Once your networking team has completed the on-prem configuration, and you have once again confirmed that the routing tables in the VPC are configured correctly, the tunnel can be activated by initiating traffic from the on-prem side. After an initial timeout, the tunnel should come up and traffic should be able to flow. If the VPN and routing appear to be configured correctly on both sides and traffic still is not flowing, check firewall settings.
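
On the AWS side, tunnel state can be checked from the console or with the CLI; for example (the VPN connection ID is hypothetical):

    # Show the status of each tunnel on the Site-to-Site VPN connection
    aws ec2 describe-vpn-connections \
      --vpn-connection-ids vpn-0123456789abcdef0 \
      --query 'VpnConnections[].VgwTelemetry[].[OutsideIpAddress,Status,StatusMessage]' \
      --output table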

5 - Configure DNS

One frequent use case we encounter is the need for DNS resolvers in the VPC to be able to resolve domain names that exist in zones on private DNS servers in the customer’s on-prem network. This sometimes includes the ability for DNS resolvers on the on-prem network to resolve names in Route53 private hosted zones associated with the VPC. 

Once you have successfully established connectivity between the VPC and the on-prem network, you can support this functionality via Route53 Resolver “inbound endpoints” and “outbound endpoints”. Outbound endpoints, together with forwarding rules, send queries for specified on-prem domain names to the DNS server(s) that can fulfill them. Inbound endpoints provide IP addresses in the VPC that can be configured as forwarding targets on the on-prem DNS servers.
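
As an illustration, creating an outbound endpoint, a forwarding rule, and the rule's VPC association from the CLI looks roughly like the following; the subnets, security group, domain, target IPs, and returned IDs are all hypothetical.

    # Outbound endpoint: the path Route53 Resolver uses to reach the on-prem DNS servers
    aws route53resolver create-resolver-endpoint \
      --name iam-dev-outbound \
      --creator-request-id iam-dev-outbound-001 \
      --direction OUTBOUND \
      --security-group-ids sg-0ddd4444 \
      --ip-addresses SubnetId=subnet-0aaa1111 SubnetId=subnet-0bbb2222

    # Forwarding rule: send queries for corp.example.com to the on-prem DNS servers
    # (the endpoint ID is returned by the previous command)
    aws route53resolver create-resolver-rule \
      --name corp-example-com \
      --creator-request-id corp-example-com-001 \
      --rule-type FORWARD \
      --domain-name corp.example.com \
      --resolver-endpoint-id rslvr-out-0123456789abcdef0 \
      --target-ips Ip=10.10.0.53,Port=53 Ip=10.10.0.54,Port=53

    # Associate the rule with the VPC so its resolvers start using it
    aws route53resolver associate-resolver-rule \
      --resolver-rule-id rslvr-rr-0123456789abcdef0 --vpc-id vpc-0eee5555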

If resolution is not working as expected after configuring these endpoints and forwarders, check your firewall settings, security groups, ACLs, etc.


6 - Prepare for the ForgeRock implementation to commence

Congratulations! You’ve reached the very significant milestone of deploying your cloud infrastructure, validating it, integrating it with your on-prem network, and perhaps adding additional functionality like Route53 resolver endpoints. You have also deployed at least one EKS cluster, presumably in the Dev environment (if you designed your VPC to support multiple environments), and you are prepared to deploy clusters for the other lower environments when ready. 

It’s now time to bring in your developers / Identity Management team so they can proceed with getting ForgeRock staged, tested, and implemented.

  • Demo your cloud infrastructure so everyone has an operational understanding of what it looks like and where applications and related components need to be deployed

  • Utilize AWS IAM as needed to create accounts that have only the privileges necessary for team members to do their jobs. Some team members may not need console or command-line API access at all, and will only need SSH access to EC2 resources (a minimal sketch follows this list).

  • Be prepared to provide cloud infrastructure support to your team members

  • Start planning out your production cloud infrastructure, as you should now have the insight and skills needed to do it successfully
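
As a minimal sketch of the IAM point above (the group name, user name, and managed policy are purely illustrative), a read-only group for team members who only need visibility into the EC2 resources might be set up like this:

    # Create a group with read-only visibility into EC2 (jump boxes, EKS console instances, etc.)
    aws iam create-group --group-name iam-dev-readonly
    aws iam attach-group-policy \
      --group-name iam-dev-readonly \
      --policy-arn arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess

    # Add a team member; nothing here grants mutating API actions or broad admin rights
    aws iam create-user --user-name jdoe
    aws iam add-user-to-group --group-name iam-dev-readonly --user-name jdoe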



Next Steps

In this fourth installment of “Building IAM Infrastructure in the Public Cloud,” we’ve discussed performing the actual deployment of your cloud infrastructure, validating it, and connecting it to your on-prem network.

In future blogs, we will explore the process of planning and executing the deployment of ForgeRock itself into your cloud infrastructure. We hope you’ve enjoyed this series so far, and that you’ve gained some valuable insights on your way to the cloud.


CONTACT US for an introductory meeting with one of our networking experts, where we can apply this information to your unique system.
