Access Granted

HUB CITY MEDIA EMPLOYEE BLOG

Technology Blog | Robert Miranda

Solving the Challenges of an Identity Governance and Administration (IGA) Deployment


Identity Governance and Administration (IGA) has been part of the workforce reality for decades, but many organizations still struggle to escape the tedious, error-prone manual processes used to demonstrate basic compliance. Hub City Media’s (HCM) experience helping clients achieve their IGA goals shaped our efforts to “build a better mousetrap”. 


Our goals were simple: make something easy to implement, smooth to operate and a pleasure to use.  


We collaborate with clients every day and find that governance programs usually fall into one of these broad categories: 

  1. Ad-hoc Manual - Having little to no governance in place, this client is unable to meet basic compliance requirements.

  2. Multi-Spreadsheet - The governance cycle consists of a series of fractured, mostly manual processes: collecting data from disparate system extracts, massaging those extracts into spreadsheets, distributing spreadsheets via email, following up with certifiers throughout the campaign, and ultimately ending with messy, manual remediation.

  3. Quasi-automation - Either a set of scripts handles data input and/or remediation, or spreadsheets are replaced with an online governance tool that is fed access data and automates notifications for manual remediation.

  4. Shelf-ware or unused IGA capabilities - We have seen clients with existing in-house IGA capabilities that, due to complexity or cost, are underutilized or never deployed. Frustration with the implementation effort and complexity of some IGA platforms may force clients to reevaluate their software vendor in order to realize their expected return on investment.


When solving these issues, clients often face one or more common challenges: 

  • Designing IGA platforms without expertise

  • Implementing legacy processes

  • Insufficient planning and data modeling for modern IGA needs

  • Requirements based on current tools and products

  • Requiring heavy customization of new IGA tools and products 


In designing and developing IGA tools and solutions, HCM focuses on simple, menu-driven usability over custom code. It’s a balancing act, but one we’ve found great success with when providing business value to our clients.

With these challenges in mind, here are some guiding principles to help increase the success of IGA implementations. The below outlines an end-to-end approach we used to help a client successfully navigate a set of typical IGA use cases.

Guidelines for IGA deployments:


1. Experienced IGA Project Team - Many organizations try to staff IGA projects with existing IT staff who often have little to no IGA experience. Subject matter and product knowledge are critical to planning, designing and building an IGA platform that meets the necessary security, compliance and usability requirements, thus providing value to the organization.

IGA resources with specific knowledge can be difficult to acquire in the job market, so clients often engage System Integrators (like Hub City Media) for this expertise.

2. Build a strong foundation - As tempting as it is to “skip to the end” when reading a good story, it’s important to follow the natural progression that makes the ending meaningful and satisfying. The same holds true for IGA projects. Of course it’s important to show value, but the challenge is to take the necessary time to build a winning strategy and then execute it. Short-changing this phase of the project often leaves the implementation team building on the fly, without any real guidance or understanding of the business’s end goals or the impacts to end users. Maintain discipline, educate stakeholders on IGA processes and terminology, align team members’ expectations, and invest in thorough requirements gathering and design with full participation from key stakeholders in IT and business roles. Agreeing on the “blueprint” allows everyone to envision the end state.

3. Adapt technology and business processes - Change is uncomfortable. Oftentimes initial conversations start with clients asking to “automate our existing processes” or migrate from one product to another “without changing the experience.” While challenging, we advise taking a more strategic view: look for opportunities for process improvement, and re-align business and technology to take advantage of IGA standards, automation and native capabilities. Try not to force an uncomfortable union between less compatible parts.

4. Define small wins - Aside from the prerequisite infrastructure for deploying your IGA products, the next goal of your strategic planning efforts should be defining a reasonable scope and schedule for addressing critical IGA needs and delivering value to the business. Maintaining manageable objectives helps keep expectations in line with delivery timetables, building confidence in the IGA platform and further driving adoption. Too often, especially in platform migrations, we see overambitious goals and lengthy project schedules derailed by scope creep, misaligned expectations and “all or nothing” success criteria.

5. Customize only when necessary and within supported frameworks - IGA standards have matured considerably over the last decade. Improved protocols and security, based on industry experience and analysis, have closed the gap between many IGA capabilities. While product vendors still offer some unique capabilities or approaches, especially in User Experience (UX), the industry focus has shifted toward standards over custom solutions. Following our advice in #3, adaptation should be the first priority, but there are certainly situations that call for some enhancement or personalization of IGA product capabilities. We recommend a thorough examination of the underlying requirements and goals before choosing this approach, and in cases where it is deemed necessary, make sure to leverage the product vendor’s supported methods for achieving this advanced level of complexity. Always view the requirement in terms of perceived business value, time and cost to implement, maintainability, upgradeability and supportability. A decision made with these perspectives in mind is less likely to favor customization unnecessarily. In almost every case, the outcome will be far more successful with a standards-based approach that leverages the selected tools’ capabilities as strengths rather than weaknesses.

6. Set goals for the future - Building a strong foundation and achieving small wins lays the essential groundwork for some really exciting developments in modern governance practices, namely adopting Artificial Intelligence and Machine Learning to drive value. These technologies are a current-day reality and can provide several benefits in the IGA space, including:

  • Data-support of certification and access request decisions 

  • Complete automation of low-risk decisions through business policies and rulesets

  • Enhanced security by identifying patterns and redundancies in Identity dataset(s)


We hope this exploration of our experience assisting and advising IGA projects provided valuable insights and tips to get you started, no matter where you are on the IGA journey. 

CONTACT US for an introductory meeting with one of our IGA experts, where we can apply this information to your unique organization.

Technology Blog | Robert Miranda

Deploying Identity and Access Management (IAM) Infrastructure in the Cloud - PART 4: DEPLOYMENT

Explore critical concepts (planning, design, development and implementation) in detail to help achieve a successful deployment of ForgeRock in the cloud. In Part 4 of the series, we discuss applying all of the hard work we’ve done on research, design and development to deploy cloud infrastructure on AWS.

Blog Series: Deploying Identity and Access Management (IAM) Infrastructure in the Cloud

Part 4: DEPLOYMENT

Standing Up Your Cloud Infrastructure

In the previous installment of this four-part series on “Building IAM Infrastructure in the Public Cloud,” we discussed the development stage, where we created the objects and automation used to implement the cloud infrastructure design.

In part 4, we discuss applying all of the hard work we’ve done on research, design and development to deploy cloud infrastructure on AWS. Before moving forward, you might want to refer back to the previous installments to review the process. 

Part 1 - PLANNING

Part 2 - DESIGN

Part 3 - TOOLS, RESOURCES and AUTOMATION

Prerequisites

It is assumed at this point that:

  • You have an active AWS account

  • You have an IAM account in AWS with administrative privileges

  • You have confirmed that your AWS resource limits in the appropriate region(s) are sufficient to deploy all resources in your design

  • Your deployment artifacts, such as CloudFormation templates, shell scripts and customized config files, have been staged in an S3 bucket in your account, a code repository, etc.

  • Any resources that your CloudFormation template depends on are in place before execution; for example, EC2 instance profiles, key pairs, AMIs, etc. have been created or are otherwise available.

  • You have a list of parameter values ready, e.g. VPC name, VPC CIDR block, and the CIDR block(s) of external network(s) for your routing tables (essentially anything that your CloudFormation template expects as input).

  • You have been incrementally testing CloudFormation templates and other deployment artifacts as you’ve created them, are satisfied they are error-free, and are confident they will achieve the desired results

Goals

Goals at this stage:

  1. Deploy the cloud infrastructure

  2. Validate the cloud network infrastructure

  3. Deploy EKS

  4. Integrate the cloud infrastructure with the on-prem network environment

  5. Configure DNS

  6. Prepare for the ForgeRock implementation to commence


1 - Deploy the cloud infrastructure

A new CloudFormation stack can be created from the web interface. From this interface, you can specify the location of your template and enter parameter values. The parameter value fields, descriptions and constraints will vary between implementations, and are defined in your template.
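The same stack creation can also be scripted with the AWS CLI (`aws cloudformation create-stack`). Below is a minimal sketch of assembling that invocation from the same parameter values you would enter in the console; the stack name, template URL and parameter keys are hypothetical placeholders, not values from any real deployment:

```python
import json

def build_create_stack_cmd(stack_name, template_url, parameters):
    """Render an 'aws cloudformation create-stack' invocation.

    'parameters' is a dict of ParameterKey -> ParameterValue pairs,
    mirroring what you would enter in the web interface.
    """
    params = [{"ParameterKey": k, "ParameterValue": v} for k, v in parameters.items()]
    return [
        "aws", "cloudformation", "create-stack",
        "--stack-name", stack_name,
        "--template-url", template_url,
        "--parameters", json.dumps(params),
    ]

# Hypothetical example values
cmd = build_create_stack_cmd(
    "iam-nonprod-vpc",
    "https://s3.amazonaws.com/my-bucket/vpc-template.json",
    {"VpcName": "iam-nonprod", "VpcCidr": "10.10.0.0/16"},
)
```

Scripting the call this way keeps the parameter set in version control alongside the template, which helps when the same template is deployed repeatedly across environments.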

As the stack is building, you can monitor the resources being created in real time. The events list can be quite lengthy, especially if you have created a template that supports multiple environments. That said, the template deployment can complete in a matter of minutes, and is successful if the final entry with the stack name listed in the Logical ID field returns a status of ‘CREATE_COMPLETE’.

Any other status indicates failure. Depending on the options selected when you deployed the stack, CloudFormation may automatically roll back the resources that were created, or leave them in place to help you debug your template.
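If you are scripting the deployment, the same success check can be automated by inspecting the output of `aws cloudformation describe-stacks`. A small sketch, assuming the JSON document shape that command prints (the stack name shown is hypothetical):

```python
import json

def stack_succeeded(describe_stacks_json):
    """Return True if the stack reports CREATE_COMPLETE.

    Expects the JSON printed by
    'aws cloudformation describe-stacks --stack-name <name>'.
    """
    doc = json.loads(describe_stacks_json)
    status = doc["Stacks"][0]["StackStatus"]
    return status == "CREATE_COMPLETE"

# Abbreviated sample of the CLI's output shape:
sample = '{"Stacks": [{"StackName": "iam-nonprod-vpc", "StackStatus": "CREATE_COMPLETE"}]}'
print(stack_succeeded(sample))  # True
```

A deployment pipeline can poll this check on an interval and fail fast on any rollback status rather than waiting for a human to watch the events list.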



After all the work it took to get to this point, it might come as a surprise how quickly and easily this part of your deployment can be completed. By the same token, it also needs to be pointed out how quickly and easily it can be taken down, potentially by accident, without proper access controls and procedures in place.

At the very least, we recommend that “Termination Protection” on the deployed stack be set to “enabled”. While it will not prevent someone with full access to the CloudFormation service from intentionally deleting the stack, it creates the extra step of having to disable termination protection during the deletion process, and that could in some cases be enough to avert disaster.


2 - Validate the cloud network infrastructure

The list of items to check may change depending on your design, but in most cases, you should go through your deployment and validate:

  • Naming conventions and other tags / values on all resources

  • Internet gateway

  • NAT gateways

  • VPC CIDR, subnets, availability zones, routing tables, security groups and ACLs

  • Jump box instances (and your ability to SSH to them, starting with a public facing jump box if you’ve created one)

  • Tests of outbound internet connectivity from both public and private subnets

  • EC2 instances such as EKS console instances (discussed shortly), and instances that will be used to deploy DS components that are not containerized

  • If applicable, the AWS connection objects to the on-prem network, like VPN gateways, customer gateways and VPN connections

3 - Deploy EKS

One approach we’ve used to deploy EKS clusters is to create an instance during the CloudFormation deployment of the VPC that has tools like kubectl installed, and seed it with scripts and configuration files that are VPC-specific. We call this an “EKS console”. During the VPC deployment, parameters are passed to a config file on the instance via the userdata script on the EC2 object, which specifies deployment settings for the associated EKS cluster.

After the VPC is deployed, the EKS console instance serves the following purposes:

  1. You can simply launch a shell script from it that will execute AWS CLI commands, properly deploying the cluster to correct subnets, etc.

  2. You can manage the cluster from this instance as well as launch application deployments, services, ingress controllers, secrets, etc.

If you’ve taken a similar approach, you can now launch the EKS deployment script. When it completes, you can test out connecting to and managing the cluster.
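The shell script on such a console instance ultimately boils down to an `aws eks create-cluster` call built from the seeded configuration. A sketch of rendering that call; the config keys and values here are hypothetical stand-ins for whatever your VPC deployment seeds onto the instance:

```python
def build_eks_create_cmd(cfg):
    """Render the 'aws eks create-cluster' call an EKS console script might run."""
    return [
        "aws", "eks", "create-cluster",
        "--name", cfg["cluster_name"],
        "--role-arn", cfg["cluster_role_arn"],
        "--resources-vpc-config",
        "subnetIds={},securityGroupIds={}".format(
            ",".join(cfg["subnet_ids"]), cfg["control_plane_sg"]
        ),
    ]

# Hypothetical values seeded by the VPC deployment
cmd = build_eks_create_cmd({
    "cluster_name": "iam-dev",
    "cluster_role_arn": "arn:aws:iam::123456789012:role/eks-service-role",
    "subnet_ids": ["subnet-aaa", "subnet-bbb", "subnet-ccc"],
    "control_plane_sg": "sg-0123",
})
```

Passing the subnets explicitly is what guarantees the cluster lands in the correct subnets of the VPC you just deployed, rather than whatever defaults the account happens to have.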


4 - Integrate the cloud infrastructure with the on-prem network environment

If your use case includes connectivity to an on-prem network, you have likely already created or will shortly create the AWS objects necessary to establish connection.

If, for example, you are creating a point-to-point VPN connection to a supported internet-facing endpoint on the on-prem network, you can now generate a downloadable config file from within the AWS VPC web interface. This file contains critical device specific parameters, in addition to encryption settings, endpoint IPs on the AWS side, and pre-shared keys. This file should be shared with your networking team in a secure fashion. 

While the file comes largely pre-configured, it is incumbent on the networking team to make any necessary modifications before applying it to the on-prem VPN gateway. This ensures the configuration meets your use case and does not adversely conflict with other settings already present on the device.

Once your networking team has completed the on-prem configuration, and you have once again confirmed that routing tables in the VPC are configured correctly, the tunnel can be activated by initiating traffic from the on-prem side. After an initial time-out, the tunnel should come up and traffic should be able to flow. If the VPN and routing appear to be configured correctly on both sides and traffic is still not flowing, firewall settings should be checked.

5 - Configure DNS

One frequent use case we encounter is the need for DNS resolvers in the VPC to be able to resolve domain names that exist in zones on private DNS servers in the customer’s on-prem network. This sometimes includes the ability for DNS resolvers on the on-prem network to resolve names in Route53 private hosted zones associated with the VPC. 

Once you have successfully established connectivity between the VPC and the on-prem network, you can now support this functionality via Route53 “inbound endpoints” and “outbound endpoints”. Outbound endpoints are essentially forwarders in Route53 that specify the on-prem domain name and DNS server(s) that can fulfill the request. Inbound endpoints provide addresses that can be configured in forwarders on the on-prem DNS servers. 

If, after configuring these forwarders and endpoints, resolution is not working as expected, you should check your firewall settings, ACLs, etc.


6 - Prepare for the ForgeRock implementation to commence

Congratulations! You’ve reached the very significant milestone of deploying your cloud infrastructure, validating it, integrating it with your on-prem network, and perhaps adding additional functionality like Route53 resolver endpoints. You have also deployed at least one EKS cluster, presumably in the Dev environment (if you designed your VPC to support multiple environments), and you are prepared to deploy clusters for the other lower environments when ready. 

It’s now time to bring in your developers and Identity Management team so they can proceed with getting ForgeRock staged, tested and implemented.

  • Demo your cloud infrastructure so everyone has an operational understanding of what it looks like and where applications and related components need to be deployed

  • Utilize AWS IAM as needed to create accounts that have only the privileges necessary for team members to do their jobs. Some team members may not need console or command-line API access at all, and will only need SSH access to EC2 resources.

  • Be prepared to provide cloud infrastructure support to your team members

  • Start planning out your production cloud infrastructure, as you now should have the insight and skills needed to do it successfully



Next Steps

In this fourth installment of “Building IAM Infrastructure in Public Cloud,” we’ve discussed performing the actual deployment of your cloud infrastructure, validating it, and getting it connected to your on-prem network.

In future blogs, we will explore the process of planning and executing the deployment of ForgeRock itself into your cloud infrastructure. We hope you’ve enjoyed this series so far, and that you’ve gained some valuable insights on your way to the cloud.


CONTACT US for an introductory meeting with one of our networking experts, where we can apply this information to your unique system.

Technology Blog | Robert Miranda

Deploying Identity and Access Management (IAM) Infrastructure in the Cloud - PART 3: TOOLS, RESOURCES and AUTOMATION

Explore critical concepts (planning, design, development and implementation) in detail to help achieve a successful deployment of ForgeRock in the cloud. In Part 3 of the series, we take a closer look at tools, resources and automation to help you setup for implementation…

Blog Series: Deploying Identity and Access Management (IAM) Infrastructure in the Cloud

Part 3: TOOLS, RESOURCES and AUTOMATION

In the previous installment of this four part series on “Building IAM Infrastructure in the Public Cloud,” we discussed core cloud infrastructure components and design concepts. 

In this third installment, you move into the development stage, where you take your cloud infrastructure design and set up the objects and automation that will be used to implement it.

Deployment Automation Goals and Methodology

At this stage, you should have a functional design of the cloud infrastructure that, at minimum, includes:

  • AWS account in use 

  • Region you are deploying into

  • VPC architectural layouts for all environments, including CIDR blocks, subnets, and availability zones

  • Connectivity requirements to the internet, peer VPCs, and on-prem data networks

  • Inventory of AWS compute resources used

  • List of other AWS services, how they are being used, and what objects need to be created

  • List and configuration of security groups and access control lists required

  • Clone of the Forgeops 6.5.2 repo from GitHub


Deployment Objectives

The high-level objectives for proceeding with your deployment are as follows:

  • Create an S3 bucket to store CloudFormation templates

  • Build and execute CloudFormation templates to deploy the VPC hosting the lower environments and the production environment, their respective EKS console hosts, and additional EC2 resources as needed per your specific solution

  • Deploy EKS clusters from the EKS console hosts via the AWS CLI

  • Deploy EKS cluster nodes from the EKS console hosts via the AWS CLI

A Note on the Forgeops Repo

The Forgeops repo contains scripts and configuration files that serve as a starting point for modeling your deployment automation. It does not provide a turnkey solution for your specific environment or design and must be customized for your particular use case. Since this repo supports multiple cloud platforms, it is also important to distinguish the file naming conventions used for the cloud platform you are working with. In our scenario, “eks” either precedes or is a part of each filename that relates to deploying AWS VPCs and EKS clusters. Take some time to familiarize yourself with the resources in this repo.

Building the VPC Template

VPC templates are created with AWS CloudFormation, using your VPC architecture design and an inventory of VPC components as a guide. Components will typically include:

  • Internet gateway

  • Public and private facing subnets in each availability zone

  • NAT gateways

  • Routing tables and associated routes

  • Access control lists

  • Security groups, including the cluster control plane security groups

  • Private and/or public jump box EC2 instances

  • EKS console instances

  • Additional ForgeRock related EC2 instances

  • S3 endpoints


Template Structure

While an extensive discussion of CloudFormation itself is beyond the scope of this text, a high-level overview of some of the template sections will help get you oriented. The examples here assume a JSON-formatted CloudFormation template; YAML format is also supported.

The AWSTemplateFormatVersion section specifies the template version that the template conforms to.

The Description section provides a space to add a text string that describes the template.

The Metadata section contains objects that provide information about the template. For example, this section can contain configuration information for how to present an interactive interface where users can review and select parameter values defined in the template.

The Parameters section is the place to define values to pass to your template at runtime. For example, this can include values such as the name of your VPC, the CIDR block to use for it, key pairs to use for EC2, and virtually any other configurable value of resources that can be launched using CloudFormation.

The Mappings section is where you can create a mapping of keys and associated values that you can use to specify conditional parameter values, similar to a lookup table.

The Conditions section is where conditions can be defined that control whether certain resources are created or whether certain resource properties are assigned a value during stack creation or update. 

The Resources section specifies the stack resources and their properties, like VPCs, subnets, routing tables, EC2 instances, etc.

The Outputs section describes the values that are returned whenever you view your stack's properties. For example, you can declare an output for an S3 bucket name which can then be retrieved from the AWS CLI.
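The sections described above fit together as shown in this minimal skeleton, built here as a Python dict and dumped to JSON. The resource names and values are illustrative only, not a working VPC design:

```python
import json

# Skeleton of a CloudFormation template showing the major sections.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal VPC template skeleton",
    "Parameters": {
        # A runtime value the operator supplies (or accepts the default)
        "VpcCidr": {"Type": "String", "Default": "10.10.0.0/16"},
    },
    "Resources": {
        # The actual objects the stack creates
        "Vpc": {
            "Type": "AWS::EC2::VPC",
            "Properties": {"CidrBlock": {"Ref": "VpcCidr"}},
        },
    },
    "Outputs": {
        # Values returned when viewing the stack's properties
        "VpcId": {"Value": {"Ref": "Vpc"}},
    },
}
print(json.dumps(template, indent=2))
```

A real template for the designs in this series would add subnets, gateways, route tables and security groups under Resources, but the section layout stays the same.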

CloudFormation Designer

AWS CloudFormation provides the option of using a graphical design tool called CloudFormation Designer that dynamically generates the JSON or YAML code associated with AWS resources. The tool offers a drag-and-drop interface along with an integrated JSON and YAML editor, and is an excellent way to get your templates started while creating graphical representations of your resources. Once you’ve added and arranged resources in the template, you can download it for more granular editing of resource attributes, as well as add additional scripting to the other sections of the template.

High-level view of Non-Production VPC. Supports individual environments for Development, Testing and Performance Testing, as well as shared infrastructure resources

High-level view of Production VPC with a single Prod environment and infrastructure resources

Closer view of subnet objects. These are private subnets, and are arranged by availability zone. The two subnets on the left are in AZa, the two in the center are in AZb, and the two on the right are in AZc.

Closer view of one component in the template, in this case a security group that will be used during the deployment and operation of the EKS cluster for our Test environment. Note the JSON code that defines this object in the bottom pane, and the visual connections that illustrate relationships to other objects.

The same object with code in YAML format

EKS Console Instances

The EKS console EC2 instances will be used to both deploy and manage the EKS clusters and cluster nodes after the VPC deployment itself has completed. Each environment and its respective cluster will have its own dedicated console instance.

Each console instance will host shell scripts that invoke the AWS CLI to create an EKS cluster and launch a CloudFormation script to deploy the worker nodes. The Forgeops repo contains sample scripts and configuration files that you can use as a starting point to build your own deployment automation. The eks-env.cfg file in particular provides a list of parameters for which values need to be provided; these include, for example, the cluster name, which subnets the worker nodes should be deployed on, the number of worker nodes to create, the instance types and storage sizes of the worker nodes, the ID of the EKS-optimized Amazon Machine Image to use, etc. These values can be added manually after the VPC and the EKS console instance are created, or the VPC CloudFormation template can be leveraged to populate them automatically during the VPC deployment.
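To illustrate how a deployment script consumes such a config, here is a sketch that parses simple KEY=VALUE settings into a dict. The parameter names and values below are hypothetical; check eks-env.cfg in the Forgeops repo for the actual parameter list and format:

```python
def parse_env_cfg(text):
    """Parse KEY=VALUE lines (comments and blanks ignored) into a dict.

    Illustrates how a cluster-deployment script might consume the kind of
    key/value settings seeded onto an EKS console instance.
    """
    cfg = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comment lines
        key, _, value = line.partition("=")
        cfg[key.strip()] = value.strip().strip('"')
    return cfg

# Hypothetical settings, in the spirit of eks-env.cfg
sample = """
# cluster deployment settings
EKS_CLUSTER_NAME="iam-dev"
EKS_WORKER_NODE_INSTANCE_TYPE="m5.large"
EKS_WORKER_NODES=3
"""
cfg = parse_env_cfg(sample)
```

Whether these values are typed in by hand or injected by the VPC template's userdata, the deployment script reads them the same way, which is what makes the automated path a drop-in replacement for the manual one.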

Prerequisites

Your EKS console instances will require software such as the AWS CLI and kubectl to be installed before launching the installation scripts.

Forgeops scripts can be used to install these prerequisites as well as additional utilities like helm, or you can prepare your own scripts.

To avoid the need for adding AWS IAM account access keys directly to the instance, it is recommended to utilize an IAM instance profile to provide the necessary rights to deploy and manage EKS and EC2 resources from the CLI.

Additionally, a key pair will need to be created for use with the EKS worker nodes.

Next Steps

In this third installment of “Building IAM Infrastructure in Public Cloud,” we’ve discussed building the automation necessary to deploy your AWS VPCs and EKS Clusters. In Part 4, we will move forward with deploying your VPCs, connecting them to other networks such as your on-prem network and / or peer VPCs, and finally deploy your EKS clusters.


CONTACT US for an introductory meeting with one of our networking experts, where we can apply this information to your unique system.

Technology Blog | Robert Miranda

Deploying Identity and Access Management (IAM) Infrastructure in the Cloud - PART 2: DESIGN

Explore critical concepts (planning, design, development and implementation) in detail to help achieve a successful deployment of ForgeRock in the cloud. In Part 2 of the series, we take a closer look at core cloud infrastructure components and design concepts…

Blog Series: Deploying Identity and Access Management (IAM) Infrastructure in the Cloud

Part 2: DESIGN - Architecting the Cloud Infrastructure

In Part 1 of this series, we discussed:

  • Overall design

  • Cloud components and services

  • “Security of the Cloud” vs. “Security in the Cloud” 

  • Which teams from within your organization to engage with on a cloud initiative

In Part 2, we will take a closer look at core cloud infrastructure components and design concepts.

For a visual deep dive into Running IAM using Docker and Kubernetes, check out our webinar with ForgeRock.

 

The VPC Framework

 VPCs

 The “VPC” or “Virtual Private Cloud” represents the foundation of the cloud infrastructure where most of our cloud resources will be deployed. It is a private, logically isolated section of the cloud which you control, and can design to meet your organization’s functional, networking and security requirements. 

At a minimum, one VPC in one of AWS’s operating regions will be required.

 Regions and Availability Zones

 AWS VPCs are implemented at the regional level. Regions are separate geographic areas, and are further divided into “availability zones” which are multiple isolated locations linked by low-latency networking. VPCs should generally be designed to operate across at least two to three availability zones for high availability and fault tolerance within a region.

 Interconnectivity

 When you are in the early stages of your design, it will be helpful to ascertain what kind of interconnectivity you will need. A VPC will more than likely need to be connected to some combination of other networks, such as the internet, peer VPCs and on-prem data centers. This needs to be considered in your design.

 Public-facing resources that need to be accessible from the internet will require publicly routable IP addresses, and must reside on subnets within your VPC that are configured with a direct route to the internet. For resources that do not directly face the internet but need outbound internet access, one or more internet facing NAT Gateways can be added to the VPC.

Establishing connectivity to other VPCs, either in the same or a different AWS account, can be achieved through a) “VPC Peering”, which establishes point-to-point connections, or b) implementing a “Transit Gateway”, which uses a hub-and-spoke model.

 

Reference Architecture for Identity and Access Management (IAM) cloud deployment

Connecting VPCs and on-prem networks also presents some options to consider. Point-to-point IPSec VPN tunnels can be created between VPCs and on-prem terminating equipment capable of supporting them. A Transit Gateway can also be used as an intermediary, where the VPN Tunnels can be established between the on-prem network and the Transit Gateway, and the VPCs connected via “Transit Gateway Attachments.” This will reduce the number of VPN tunnels you need to support when integrating multiple VPCs. AWS “Direct Connect” is also available as an option where VPN connections are insufficient for your needs.

VPC IP Address Space and Subnet Layout

IP address space of VPCs connected to existing networks needs to be planned carefully. It must be sized to support your cloud infrastructure without conflicting with other networks or consuming significantly more addresses than you need. Each VPC will require a CIDR block that will cover the full range of IP subnets used in your implementation. Determining the proper size of CIDR blocks will be possible once the functional design of the cloud infrastructure is further along. 
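Python's standard ipaddress module is handy for sanity-checking this kind of plan. The sketch below carves an example /16 VPC CIDR into /20 subnets and lays out one public and one private subnet per availability zone; the CIDR, prefix lengths and AZ names are illustrative, not a sizing recommendation:

```python
import ipaddress

# Hypothetical VPC CIDR; must not conflict with connected networks.
vpc = ipaddress.ip_network("10.10.0.0/16")
subnets = list(vpc.subnets(new_prefix=20))  # 16 x /20, 4096 addresses each

azs = ["us-east-1a", "us-east-1b", "us-east-1c"]
plan = {}
for i, az in enumerate(azs):
    # One public and one private subnet per AZ, drawn from distinct /20s
    plan[az] = {"public": subnets[i], "private": subnets[i + len(azs)]}

for az, pair in plan.items():
    print(az, "public:", pair["public"], "private:", pair["private"])
```

Because every subnet is drawn from the VPC's own CIDR and the module raises an error on malformed ranges, this kind of script catches overlapping or out-of-range subnets before they ever reach a CloudFormation template.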

VPC Security and Logging

 In keeping with the principle of “Security in the Cloud,” AWS provides a comprehensive set of tools to help secure your cloud environment. At a minimum, “Network ACLs” and “Security Groups” need to be configured to establish and maintain proper firewall rules both at the perimeter and inside of your VPC. 

 Other AWS native services can help detect malicious activity, mitigate attacks, monitor security alerts and manage configuration so it remains compliant with security requirements. It is also possible to capture and log traffic at the packet level on subnets and network interfaces, and log cloud API activity. You should take the opportunity to familiarize yourself with and utilize these services.

Some organizations may also have requirements to integrate 3rd party security tools, services, appliances, etc. with the cloud infrastructure.

 

EKS  

 ForgeRock solutions can be implemented as containerized applications by leveraging “Kubernetes” - an open-source system for automating the deployment, scaling and management of containerized applications.

“EKS” is Amazon’s “Elastic Kubernetes Service.” It is a managed service that facilitates deploying and operating Kubernetes as part of your AWS cloud infrastructure. From a high-level infrastructure perspective, a Kubernetes cluster consists of “masters” and “nodes.” Masters provide the control plane to coordinate all activities in your cluster, such as scheduling application deployments, maintaining their state, scaling them and rolling out updates. Nodes provide the compute environment to run your containerized applications. In a cluster that is deployed using EKS, the worker nodes are visible as EC2 instances in your VPC. The master nodes are not directly visible to you and are managed by AWS.

 An EKS cluster needs to be designed to have the resources needed to effectively run your workloads. Selection of the correct EC2 instance type for your worker nodes will depend on your particular environment. ForgeRock provides general guidance in the ForgeOps documentation to help in this process based on factors like the number of user accounts that will be supported; however, the final selection should be determined based on the results of performance testing before deploying into production.

While Kubernetes will automatically maintain the proper state of containerized applications and related services, the cluster itself needs to be designed in a manner that will be self-healing and recover automatically from infrastructure related failures, such as a failed worker node or even a failed AWS availability zone. Implementing Kubernetes in AWS is accomplished by deploying nodes in a VPC across multiple availability zones, and leveraging AWS Auto Scaling Groups to maintain or scale the appropriate number of running nodes.

It should be noted that EKS worker nodes can consume a large number of IP addresses. These addresses are reserved for use with application deployments and assigned to Kubernetes pods. The number of IP addresses consumed is based on the EC2 instance type used, particularly with respect to the number of network interfaces the instance supports, and how many IPs can be associated with these interfaces. Subnets hosting worker nodes must be properly sized to accommodate the minimum number of worker nodes you anticipate running in each availability zone. These subnets must also have the capacity to support additional nodes driven by events such as scaling out (due to high utilization), or the failure of another availability zone used by the VPC.
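To make this concrete, AWS publishes a formula for the maximum number of pods a worker node can host under the default VPC CNI: ENIs x (IPv4 addresses per ENI - 1) + 2. A small sketch follows; the m5.large figures reflect AWS’s published ENI limits at the time of writing, but you should verify them for your chosen instance types:

```python
def max_pods(enis: int, ips_per_eni: int) -> int:
    """AWS's published max-pods formula for the default VPC CNI.

    Each ENI's first IP address is reserved for the node itself,
    and two pods run with host networking (hence the "+ 2").
    """
    return enis * (ips_per_eni - 1) + 2

# m5.large supports 3 ENIs with 10 IPv4 addresses each (per AWS docs).
print(max_pods(3, 10))   # 29 pods
```

This is why subnet sizing and instance-type selection are coupled: a node that can run 29 pods can also claim up to 30 addresses from its subnet.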


Other EC2 Resources

Your VPC may need to host additional EC2 resources, and this will be a factor in your capacity planning.

 ForgeRock Instances

Some ForgeRock solutions will require additional EC2 instances for ForgeRock components beyond what is allocated to the Kubernetes cluster. For example, these could include DS, CS, CTS, User, Replication and Backup instances depending on your specific design. 

 Utility Instances

Other instances that will likely be part of your deployment include jump boxes and management instances. Jump boxes are used by systems administrators to gain shell access to instances inside the VPC. They are particularly useful when working in the VPC before connectivity to the on-prem environment has been established. Management instances, or what we tend to refer to as “EKS Consoles,” are used for configuring and launching scripted deployments of EKS, as well as providing a command line interface for managing the cluster and application deployments. Additional tools such as Helm or Skaffold are commonly installed here.

 Load Balancers

Applications deployed on EKS are commonly exposed outside the cluster via Kubernetes services, which launch AWS Load Balancers (typically Application Load Balancers). ALBs can be configured as public (accessible from the internet) or private, and both types may be utilized depending on your design. Application Load Balancers should be deployed across your availability zones, and each subnet an ALB is assigned to must currently have a minimum of eight free IP addresses for the ALB to launch successfully. An ALB uses dynamic IP addressing and can add or remove the network interfaces associated with it based on workload; therefore it should always be accessed via DNS.


 Global Accelerator

AWS recently introduced a service called “Global Accelerator.” This service provides two publicly routable static IP addresses that are exposed to the internet at AWS edge locations, and can be used to front-end your ALBs or EC2 instances. This service can enhance performance for external users by leveraging the speed and dependability of the AWS backbone. 

It can also be used to redistribute or redirect traffic to different back-end resources without requiring any changes to the way users connect to the application, e.g. in blue / green deployments or in disaster recovery scenarios. When external clients require static IP addresses for whitelisting purposes, this service addresses the ALB’s lack of support for static addresses. Furthermore, Global Accelerator has built-in mitigation for DDoS attacks. It should be associated with subnets in each availability zone in your VPC, and will consume at least one IP address from each of these subnets.

 

DNS

Route 53 is AWS’s DNS service. It can host zones that are publicly accessible from the internet as well as private hosted zones that are accessible within your cloud environment. In some situations it may be desirable for systems in on-prem and / or peer VPCs to be able to perform name resolution in zones hosted by on-prem DNS servers. Conversely, there are situations where on-prem systems need to be able to resolve names in your private hosted zones in Route 53. This can be achieved by establishing inbound and/or outbound Route 53 resolver endpoints in your VPC, as well as configuring the appropriate zone forwarders. Resolver endpoints should be associated with subnets in each availability zone in your VPC, and will consume at least one IP address from each of these subnets. 


Environments

 A significant element of planning your cloud infrastructure is determining the types of environments you will need and how they will be structured.

 For non-production, you may want to create separate lower environments for development, code testing  and load / performance testing. Depending on your preferences or specific needs, these environments can share one VPC or have their own dedicated VPCs. In a shared VPC, these environments can be hosted on the same EKS cluster using different namespaces, or have their own dedicated EKS clusters.

 For your production environment, you may want to implement a blue / green deployment model. This also presents choices, such as whether to run entirely separate cloud infrastructure for each, or whether to have shared infrastructure where the EKS cluster is divided into blue and green namespaces.

 Again, these choices depend on your preferences and specific requirements, but your operating budget for running a cloud infrastructure will also influence how much cloud infrastructure you will want to build and operate. The more infrastructure that is shared, the less your ongoing operating costs will be.

 

Putting It All Together

 Preparing Cost Estimates For Your Cloud Infrastructure

By now you should be developing a clearer picture of what your cloud infrastructure will look like and be better situated to put together a high-level design. The next step is to take an inventory of the components in your high-level design and enter this information into the various budgeting tools AWS makes available to you, like the Simple Monthly Calculator and TCO calculator. This step will provide you with estimates of your cloud infrastructure operating expenses and help you determine if the cost of what you have designed is consistent with your budget.

If costs come in higher than anticipated, there are approaches you can take to reduce your expenses. As previously discussed, sharing some infrastructure can help; for example, consolidating your lower environments into a single VPC if that is feasible. Since EC2 instances generally represent the largest percentage of your operating costs, exploring the use of “reserved instances” would also present cost savings opportunities once you are closer to finalizing design and are prepared to commit to using specific instance types over an agreed time period.

VPC IP Address Space and Subnet Layout – Part 2

We’ve already talked briefly about the criticality of planning your IP address space carefully. Now we will put this into practice.

Once you have prepared a high-level design that meets your functional and budgetary requirements, the next step is to identify the networking requirements of each component; e.g. the number of IPs EKS worker nodes will consume, or how many IP addresses must always be available on a subnet for an Application Load Balancer to deploy successfully.

Another factor to consider is the number of IP addresses reserved on each subnet by the cloud provider. On AWS, five addresses from each subnet are reserved and unavailable for your use. These factors, along with the number of availability zones you will be deploying into will drive the number and size of the subnets within each VPC.
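A quick sketch of the arithmetic, again using Python’s `ipaddress` module (the subnet ranges are hypothetical):

```python
import ipaddress

# AWS reserves 5 addresses per subnet: the network address, the VPC
# router, the DNS resolver, one "future use" address, and broadcast.
AWS_RESERVED = 5

def usable_addresses(cidr: str) -> int:
    """Addresses in a subnet actually available to your resources."""
    return ipaddress.ip_network(cidr).num_addresses - AWS_RESERVED

print(usable_addresses("10.20.1.0/24"))  # 251
print(usable_addresses("10.20.2.0/28"))  # 11
```

Note how quickly small subnets become constrained: a /28 yields only 11 usable addresses, which an EKS worker node or an ALB can exhaust on its own.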

 You should now have the basis for determining the CIDR block sizes needed to build your cloud infrastructure as designed, and can work with your organization’s networking team to have them allocated and assigned to you.

 Anecdotally, we have encountered situations where clients could not obtain the desired size CIDR blocks from their organization and had to scale back their cloud infrastructure design to meet the constraint of using smaller CIDR blocks. Engaging your networking team early in the design process will help identify if this is a potential risk and help you to more efficiently work through any IP address space constraints.

 

Next Steps

 In this installment, we’ve provided you with detailed insight into what goes into designing public cloud infrastructure to host your ForgeRock implementation. In the next part of this series, we will move into the development phase, including tools, resources and automation that can be leveraged to successfully deploy your cloud infrastructure.


Next Up in the Series: DEVELOPMENT - Tools, Resources and Automation

Technology Blog - Robert Miranda

Deploying Identity and Access Management (IAM) Infrastructure in the Cloud - PART 1: PLANNING

Explore critical concepts (planning, design, development and implementation) in detail to help achieve a successful deployment of ForgeRock in the cloud. Part 1 of the series delves into planning considerations organizations must make when deploying Identity and Access Management in the cloud…

Blog Series: Deploying Identity and Access Management (IAM) Infrastructure in the Cloud

In a 2018 blog post, we explored high-level concepts of implementing ForgeRock Identity and Access Management (IAM) in public cloud using Kubernetes and an infrastructure as code approach for the deployment. Since then, interest from our clients has increased substantially with respect to cloud-based deployments. They seek to leverage the benefits of public cloud as part of their initiative to modernize their Identity and Access Management (IAM) systems. 

Essential details to consider when using cloud-based technology infrastructure: 

Reference Architecture for Identity and Access Management (IAM) cloud deployment

In this blog series, we focus on exploring each of these concepts in greater detail to help you achieve a successful deployment of ForgeRock Identity and Access Management (IAM) in the cloud. While we will be referencing AWS as our cloud provider, the overall concepts are similar across other cloud providers regardless of the differences in specific tools, services or methods associated with them.

Part 1: PLANNING


Organizational Considerations

Given the power and capabilities of cloud computing, it is theoretically possible to design and build much of the entire platform with minimal involvement from other groups within your organization; however, in many cases, and particularly with larger companies, this can lead to significant conflicts and delays when it is time to integrate your cloud infrastructure with the rest of the environment and put it into production. 

The particulars vary between organizations, but here are suggestions of who to consult during the planning phase: 

  • Network Engineering team

  • Server Engineering team

  • Security Engineering team

  • Governance / Risk Management / Compliance team(s) 

These discussions will help you identify resources and requirements that will be material to your design. 

For example, the Networking team will likely assign the IP address space used by your virtual private clouds and work with you to connect the VPCs with the rest of the networking environment.

The Server Engineering team may have various standards, like preferred operating systems and naming conventions, that need to be applied to your compute instances. 

The Security Engineering and Risk Management teams will likely have various requirements that your design needs to comply with as well. 

One or more of these teams can also help you to identify common infrastructure services, such as internal DNS and monitoring systems that may be required to be integrated with your cloud infrastructure. 

Finally, your organization might already utilize public cloud and have an existing relationship with one or more providers. This can potentially influence which cloud provider you choose to move forward with, and the internal team(s) involved should be able to assist you with properly establishing a provider account or environment for your initiative.


Infrastructure Design Goals

Although there is no physical data center to build, planning cloud-based technology infrastructure shares many of the same requirements. For example:

  • Defining each environment needed for both production and lower environments

  • Identifying the major infrastructure components and services required and properly scaling them to efficiently service workloads of each respective environment

  • Designing a properly sized, highly available, fault tolerant network infrastructure to host these components and services

  • Providing internet access

  • Integrating with other corporate networks and services

  • Implementing proper security practices

  • Identifying and satisfying corporate requirements and standards as they relate to implementing technology infrastructure

  • Leveraging appropriate deployment tools

  • Controlling costs

Other important characteristics that should be considered early in the process include deciding if you will be deploying into a single region or multiple regions, and whether or not you plan to utilize a blue / green software deployment model for your production environment.  Both will have implications on your capacity and integration planning.

Security

Shared Responsibility Model

There are two primary aspects of cloud security:

  • “Security of the Cloud”

  • “Security in the Cloud”

Cloud providers like AWS are responsible for the former, and you as the customer are responsible for the latter. Simply stated, the cloud provider is responsible for the security and integrity of the underlying cloud services and infrastructure, and the customer is responsible for properly securing the objects and services deployed in the cloud. This is accomplished using tools and controls provided by the cloud service provider, applying operating system and application level patches, and even leveraging third party tools to further harden security. Furthermore, as a best practice, your architecture should limit direct internet exposure to the systems and services that need it to function. 

See AWS Shared Responsibility Model for more information.

Cloud Components and Services

Each function in the cloud architecture is dependent on a number of components and services, many of which are offered natively by the cloud provider. Keep in mind that specific implementations will vary and you can use alternatives for some functions if they are a better fit for your organization. For example, for code collaboration and version control repositories, you can use AWS’s CodeCommit, or a third party solution like GitLab if you prefer. For deploying infrastructure using an “infrastructure as code” approach, you can use AWS’s CloudFormation, or alternatively HashiCorp’s Terraform. On the other hand, there are cloud provider components and services that must be utilized, like AWS’s VPC and EC2 services, which provide networking and compute resources respectively. The following is a summary of some of the components and services we will be using. If you are relatively new to AWS, it would be helpful to familiarize yourself with them:

  • Amazon VPC - Provides the cloud-based networking environment and connectivity to the internet and on-prem networks

  • Amazon EC2 - Provides compute resources like virtual servers, load balancers and autoscaling groups that run in the virtual private cloud

  • Amazon Elastic Kubernetes Service (EKS) - Managed Kubernetes service that creates the control plane and worker nodes

  • AWS Global Accelerator - Managed service for exposing applications to the internet

  • Amazon Route 53 - Cloud-based DNS service

  • Amazon Simple Storage Service (S3) - Object storage service

  • Amazon Elastic Container Registry (ECR) - Managed Docker container registry

  • AWS CodeCommit - Managed source control service that hosts Git-based repositories

  • AWS Certificate Manager - Service for managing public and private SSL/TLS certificates

  • AWS CloudFormation - Scripting environment for provisioning AWS cloud resources

  • AWS Identity and Access Management (IAM) - Provides fine-grained access control to AWS resources

  • Amazon CloudWatch - Collects monitoring and operational data in the form of logs, metrics and events

AWS Support and Resource Limits

Support 

As you work through the design and testing process, you will invariably encounter issues that can be resolved more quickly if you have a paid AWS support plan in place that your team can utilize.

You can review the available support plans for more information to determine which plan is right for you. 

Resource Limits

While it is still early in the process to determine the number and types of cloud resources you will need, one aspect you will need to plan ahead for is managing resource limits. The initial limits that are granted to new AWS accounts for certain resource types can have thresholds that are far below what you will need to deploy in an enterprise environment.

Furthermore, the newer the account, the longer it can take for AWS to approve requests for limit increases, and this can adversely impact your development and deployment timelines. Establishing a relationship with an AWS account manager and familiarizing them with your initiative can help expedite the process of getting limit increases approved in a more timely fashion until the account has some time to build a satisfactory payment and utilization history.

 

Next Steps

We’ve covered several aspects of the planning process for building your IAM infrastructure in Public Cloud. In the second installment of this four part series, we will explore design concepts, including the architecture for the VPCs, EKS clusters and integration with on-prem networks.


For more content on deploying Identity and Access Management (IAM) in the cloud, check out our webinar series with ForgeRock: Containerized IAM on Amazon Web Services (AWS)

Part 1 - Overview

Part 2 - Deep Dive

Part 3 - Run and Operate

Technology Blog, Featured - Jacque Tesoriero

Delegated Administration: A Vital Part of the Modern IAM Platform

One of the key capabilities of modern Identity Management platforms is the ability to configure Delegated Administration for daily operations. Let’s take a look at why Delegated Administration plays an important role in today’s world...

One of the key capabilities of modern Identity Management platforms is the ability to configure Delegated Administration for daily operations. Let’s take a look at why Delegated Administration plays an important role in today’s world.

First, we need to define what we mean by the term ‘Delegated Administration’. A ‘delegate’ is ‘a person sent with power to act for another’. ‘Administration’ means ‘the performance of supervisory duties’. When we put this in an Identity-specific context, we are looking at it in terms of:

  • decentralization of role-based access control systems

  • ability to assign limited authority (i.e. administrative privileges) to a user, or subset of users, permitting performance of actions on a specific object or group of objects (i.e. scope)
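The two elements above can be sketched in a few lines of Python. All names here (Grant, can_perform, the sample users) are illustrative only and do not correspond to any particular product’s API:

```python
from dataclasses import dataclass

# A grant binds an administrator to a limited set of actions (authority)
# over a specific set of objects (scope).
@dataclass(frozen=True)
class Grant:
    admin: str            # user receiving limited authority
    actions: frozenset    # permitted operations
    scope: frozenset      # objects the grant applies to

def can_perform(grants, admin, action, target):
    """True if any grant gives this admin this action on this target."""
    return any(
        g.admin == admin and action in g.actions and target in g.scope
        for g in grants
    )

# A help desk representative may reset passwords, but only for the
# users within their scope, and nothing else.
grants = [
    Grant("helpdesk.rep",
          frozenset({"reset_password"}),
          frozenset({"alice", "bob"})),
]

print(can_perform(grants, "helpdesk.rep", "reset_password", "alice"))  # True
print(can_perform(grants, "helpdesk.rep", "delete_account", "alice"))  # False
```

The point of the sketch is that delegation is always bounded in two dimensions at once: what the delegate may do, and to whom they may do it.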

Now that we’ve set a baseline for understanding what Delegated Administration is, we can use common scenarios or, as we refer to them from a system integration perspective, Client Use Cases to illustrate the business problem, how Delegated Administration addresses it and the corresponding business value.

Scenario 1: Provisioning, Profile Management and Access Requests

Context: 

The advent of information technology ushered in a number of important new job functions, one of the most common and important being the role of ‘Administrator’. Administrators are technologists who understand how to operate these complex systems and applications. They perform several key tasks on a daily basis, including Provisioning (the creation of new user accounts), Profile Management (which includes password resets) and Access Requests (the granting of new rights and / or privileges to existing user accounts).

Business challenge:

Businesses have seen exponential growth in both the number of IT systems and users. The essential nature of these systems and applications to daily operations and revenue, coupled with the sheer scale of administering them in a small, centralized fashion, creates bottlenecks that significantly impact company profitability. Users need access to these systems to perform their daily job functions, and forgotten passwords and account lockouts can severely inhibit productivity.

Solution: 

So, we now have three main problems - Provisioning, Profile Management and Access Requests. How do we effectively manage these tasks in an ever expanding and complex IT ecosystem without overburdening or continuously increasing support staff? The answer is through Delegated Administration. 

Provisioning happens through two basic means today: automated (i.e. ‘birthright’) provisioning, typically governed by feeds from Human Resources coupled with logic based on job functions, and request-based provisioning, generally initiated through human interaction. The latter can scale with varying degrees of efficiency, limited by which types of users can be assigned administrative roles.

In our initial example, we discussed the limiting factor of a small, centralized group of administrators handling a large volume of tasks. Delegated Administration can increase optimization by allowing another level of administration, e.g. Department Managers, to perform limited administrative tasks for their direct reports OR allowing a Help Desk Representative to reset passwords.

Taking it to the largest scale, enabling end-users to self administer, can completely offload the burden of certain, specialized tasks from central administrators and empower end-users to manage their own profile data (e.g. name, address, contact information), as well as reset a forgotten password.

Scenario 2: Consumer-scale Identity and Internet of Things

Context: 

The next wave in information technology is upon us - a combination of smartphones, connected devices and mobile applications - leading to an explosion of identities. With that exponential growth, of course, comes more administration.

Business challenge: 

In the face of increasing user demand for ubiquitous access (24x7) anywhere from any device, organizations struggle to provide end-users with the ability to manage multiple identities and the devices with which they can be accessed. In addition, modern smartphones and mobile applications often rely on back-end systems, secured with account credentials, to provide services to the end-user. As a byproduct of this array of devices, applications and identities, end-users face an ever increasing number of personal accounts to manage. Providing a top-notch user experience is paramount to maintaining and growing a loyal user base.

Solution: 

As highlighted, Delegated Administration is leveraged to address modern Identity and Access Management challenges many organizations face. Some examples of common uses of Delegated Administration that we encounter daily are:

  1. Consumer registration of a new IoT device to access required services

  2. End-user self-service password reset on almost any website

  3. Adding a child to a “family share” plan for mobile application stores

  4. Adding a spouse as an “authorized” agent on a credit or bank account

  5. Allowing a manager to request access on behalf of direct reports

  6. Allowing a Customer Service Representative to see and modify specific information for clients they support

From an individual perspective, these are empowering capabilities that we have come to expect as part of our overall user experience. 

From a business perspective, it drives customer satisfaction and retention while reducing operational costs and resources. Truly a “win-win” for all parties!

While each business or entity may have variations in business process and governance around how this is implemented, the fact remains that by leveraging Delegated Administration across the current user population, we gain economies of scale far greater than a centralized model can offer, while retaining system integrity by limiting delegated functions to an approved set of well-known Client Use Cases.

 

Related links: Delegated Administration for ForgeRock (Product), Delegated Administration and ForgeRock Identity Management (Blog by Anders Askåsen - Sr. Technical Product Manager at ForgeRock)


SENIOR SALES CONSULTANT

Technology Blog - Jacque Tesoriero

Bringing Identity and Access Management Home

Two hundred billion divided by the world’s population shows us that each person on the Earth is going to use roughly 26 smart devices by 2020...

As IoT Grows In The Business World, Where Does That Leave Home Users? 

Popularity and awareness of the Internet of Things (IoT) is rising exponentially. Industry icon Intel projects two hundred billion IoT devices will be in use by 2020(1). By contrast, Gartner predicts only twenty billion(2). Though a wide range of predictions can be found online, we feel confident claiming that smart devices are going to continue to be a major part of our lives, with their role expanding at an extremely fast rate. Assuming Intel’s prediction is correct, two hundred billion divided by the world’s population shows us that each person on Earth is going to use roughly 26 smart devices in daily life by 2020(3). This leads to something that all users need to consider - enterprise level Identity and Access Management (IAM) architecture.

What’s the security issue with IoT?

Industry experts warn that IoT security breaches provide attackers opportunities to control devices remotely and use them as an entry point to networks. Some devices do not use encryption and have weak default passwords which allow attackers to perform malicious firmware updates and control the device remotely(4). Remotely controlled IoT devices can provide all necessary information to man-in-the-middle(5) attackers who will be able to disable or abuse the security systems put in place which protect our home or personal information. IoT expert, Bill Montgomery, provides ten different ‘real life’ experiences across industries that have been hacked in the recent past. Each attack has one important common element - hackers used IoT devices as an entry point to networks in hospitals, governments, schools, utility companies and personal homes(6).

How do you prevent malicious access?

Traditionally, security infrastructure is built at the access point where humans interact with devices. IoT is a new mode of communication, in which machines communicate with other machines, applications or services. Extending traditional security infrastructure to cover this presents a new set of challenges. As devices have not been part of traditional IAM systems, IoT requires a defined IAM architecture. IAM leaders, such as Oracle, ForgeRock or Salesforce, currently offer solutions to their enterprise level clients where they can manage all connected devices as new identities and apply policy to the users accessing that data. Now, a similar approach needs to be implemented for home users.

How can consumers protect their network? 

Previously, home users only had to worry about securing their individual networks. Now, they need to secure all of their individual IoT devices. The FBI recommends consumers protect their network and identity by changing default passwords, isolating IoT devices on their own protected network and disabling Universal Plug and Play (UPnP) on routers(7); however, that is still not enough. Unfortunately, IoT devices still do not offer additional security, which makes for a weak system. Implementing enterprise level IAM solutions is obviously going to be very complicated and expensive for home users. Industry experts will need to find ways to make them affordable and easy to use, giving home users the additional layers of security essential for home networks of the future.

 
 
(1) Dukes, Elizabeth. "200 Billion Smart Devices in the Workplace: Are You Ready?" 200 Billion Smart Devices in the Workplace: Are You Ready? N.p., 27 June 2016. Web. 03 Nov. 2016.

(2) Meulen, Rob Van Der. "Gartner Says 8.4 Billion Connected "Things" Will Be in Use in 2017, Up 31 Percent From 2016" Gartner Says 8.4 Billion Connected "Things" Will Be in Use in 2017, Up 31 Percent From 2016  N.p., 07 Feb 2017. Web. 07 Feb 2017.

(3) Intel. "A Guide to the Internet of Things Infographic." Intel. Web. 03 Nov. 2016.

(4) Osborne, Charlie. "Vulnerable Smart Home IoT Sockets Let Hackers Access Your Email Account." ZDNet. Zero Day, 18 Aug. 2016. Web. 3 Nov. 2016.

(5) Rouse, Margaret. "Man-In-The-Middle Attack." TechTarget. N.p., Dec. 2015. Web. July 2017. .

(6) Montgomery, Bill. "The 10 Most Terrifying IoT Security Breaches You Aren't Aware of (so Far)." Linked In. N.p., 13 Sept. 2015. Web. 3 Nov. 2016.

(7) United States of America. Federal Bureau of Investigation. IC3. Internet Crime Complaint Center (IC3) | Internet of Things Poses Opportunities for Cyber Crime. N.p., 10 Sept. 2015. Web. 04 Nov. 2016.
 

 

Technology Blog - Jacque Tesoriero

The Pitfalls of Poor Deployment Planning

A nightmare scenario for any IT department is the critical failure of a production system, particularly during rollout of a new feature...

When A Typo Can Bring Down Your Entire System, You've Got To Know Which Preventative Measures To Take

A nightmare scenario for any IT department is the critical failure of a production system, particularly during rollout of a new feature. An error during an upgrade can take an entire division of a company offline for employees and clients, sometimes resulting in serious profit loss. Even minor mistakes can cause unnecessary headaches for end-users and IT staff. Preparing and implementing a comprehensive deployment strategy should be a mission-critical initiative for any corporation that performs its own internal development and maintenance.
 
Even experienced companies aren’t immune to deployment problems. Consider the Amazon Web Services (AWS) outage that occurred earlier this year. The AWS team published an explanation shortly after resolving the error, stating that the root cause was a simple typo in the execution of a deployment script. This typo rendered the AWS S3 service (a cloud storage solution used by many websites for content hosting) in the US-EAST-1 region inaccessible, affecting dozens of websites and web-based applications across the Internet. Multimedia publication The Verge reported that apps such as Trello, Quora, IFTTT and GroupMe all experienced some level of outage, ranging from a mere loss of displayed images to complete site downtime. Ironically, isitdownrightnow.com, the website that checks whether a site is down, had hosting and response issues of its own. This isn’t to say the event is an example of poor deployment practice; far from it. It goes to show that even for an organization as practiced and efficient as the AWS team, something simple and easy to miss can occur when implementing a production-level change. 
 
In general, the following high-level approach should be taken when standardizing production deployment practices: 


1. Consider the impact to integrated applications. Analyze connections between internal applications and determine acceptable downtime periods. For example, perhaps there is a continuous syncing process running between a directory and a database. Failing to account for downtime on both endpoints can result in errors or stale data. A complicated or sensitive system that requires continuous uptime may need a multi-step process involving temporary switchovers to secondary or disaster recovery environments.


2. Plot rollback steps. This step is essential no matter the deployment type: always take backups of data, applications or file systems that could be affected prior to starting a production change! Most importantly, the actual process of a system restore must be planned and tested. The last thing any company wants is to run into an error while pushing a change to a production system, attempt a rollback, then learn at that moment that the restoration process doesn’t actually work in this situation.


3. Test and automate as much of the deployment process as possible. Every manual action taken during deployment increases the risk of improper code promotion, loss of user information, or worse, unintended data changes on a large scale -- remember, a single typo caused region-wide impact for AWS S3! Any automated script must also be painstakingly scoped to follow the security principle of “least privilege”. Simply put, don’t let the process have more power or access than it needs for the situation. That single typo in the AWS example wouldn’t have been as catastrophic if the script being utilized didn’t have the ability to shut down more than the desired set of servers. Lastly, all deployment scripts must be tested in lower-level environments prior to being used in production. The best way to mitigate risk of data corruption and potential loss of revenue when making updates to production systems is to test the process in a “production-like” environment beforehand*. At the very least, deployment scripts should be run in smaller-scale environments more than once, under different circumstances, to catch any potential bugs and crashes.
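To make the “least privilege” point concrete, here is a minimal Python sketch of how a removal script can be scoped so a typo fails fast instead of cascading. The server names, scope set and batch limit are entirely hypothetical, and this is not AWS’s actual tooling:

```python
# Illustrative sketch: constrain a capacity-removal script so a mistyped
# argument cannot act outside its intended scope or batch size.

ALLOWED_SCOPE = {"billing-01", "billing-02", "billing-03"}  # hypothetical fleet
MAX_BATCH = 2  # refuse to remove more servers than one change should touch

def plan_removal(requested):
    targets = set(requested)
    out_of_scope = targets - ALLOWED_SCOPE
    if out_of_scope:
        # a typo naming the wrong subsystem is rejected before anything runs
        raise ValueError(f"refusing out-of-scope targets: {sorted(out_of_scope)}")
    if len(targets) > MAX_BATCH:
        # a typo naming too many servers is also rejected up front
        raise ValueError(f"batch of {len(targets)} exceeds limit of {MAX_BATCH}")
    return sorted(targets)

print(plan_removal(["billing-01"]))  # ['billing-01']
try:
    plan_removal(["billing-01", "index-17"])  # wrong subsystem -> rejected
except ValueError as e:
    print(e)
```

The validation itself is trivial; the point is that the script’s authority is capped before any destructive action is attempted.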


Of course, not all risk is avoidable. Software and development practices change constantly and nobody’s perfect. Companies that lack the manpower or know-how to accomplish the work described above should outsource to experienced professionals. The teams at Hub City Media, for example, have handled large and small scale production deployments, ranging from major upgrades of identity governance infrastructure, to network-wide implementations of single sign-on and federation, to migration and consolidation of dozens of directory systems. Any company implementing its own DevOps processes should take note of its existing infrastructure needs and the differences between internal applications, prepare contingency plans for system restoration and rollbacks and, most importantly, test deployment processes to catch bugs before they make it to production. 

 

*In all fairness, it’s probably fairly difficult for AWS to simulate their S3 production servers, but most companies also aren’t providing cloud storage services for entire regions.

 



"Summary of the Amazon S3 Service Disruption in the Northern Virginia (US-EAST-1) Region." Amazon Web Services, Inc. Amazon, 02 Mar. 2017. Web. 01 Apr. 2017.
 
Kastrenakes, Jacob. "Amazon's Web Servers Are down and It's Causing Trouble across the Internet." The Verge. The Verge, 28 Feb. 2017. Web. 01 Apr. 2017.
 

Technology Blog Jacque Tesoriero

Password Resets in the Enterprise

Having a secure password reset process is crucial for mitigating IT security risks in an enterprise environment...

Password Reset May Seem Like A Simple Task, But Performing It Successfully Is Integral To Account Security. 

We've all had to reset a password at one point or another. The process can be somewhat tedious, especially when you want to get into an account quickly; however, it's important to look past this minor annoyance and understand why it is in fact a very important piece of the account security puzzle.

When Is Password Reset Required?

Typically, a password reset is done when a user is not able to log into one or more applications - either they forgot their password or it has expired. To construct a secure password reset policy, businesses need to decide:

  • Who should have password reset privileges?

  • Should password reset privileges apply to all users, a specific group of users or an individual user account?

Why Is Having A Secure Password Reset Process Important?

Having a secure password reset process is crucial for mitigating IT security risks in an enterprise environment. Password resets in an enterprise environment are unique, because a user often has accounts on several applications. This requires users to know and maintain credentials for each individual application, and if the standard protocol for password reset is not secure, the risk of passwords being lost, stolen or compromised increases. 

What Does A Secure Password Reset Process Look Like? 

For enterprise environments, password reset typically follows the progression in the diagram below: 

[Diagram: typical enterprise password reset process]

What Are Causes For Security Concerns? 

The aspect of any password reset process that raises the most IT security concerns is how to definitively confirm that the source of the request is the owner of the account. Is the appropriate user asking to create a new password, or is someone attempting to hack into the account? Another concern is how to send the temporary password to the owner of the account. If either concern is not addressed, any data accessible in an account that had a password reset is not secure.

How Are These Concerns Mitigated?

The specifications for how to validate a password reset request should include gathering information known only to the owner of the account, such as answers to security questions. To further verify the request, contact information already on the account should be used to confirm that the requester is the owner of the account. 
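As an illustration of that verification step, here is a hedged Python sketch (the answer, salt handling and iteration count are all hypothetical) of checking a security-question answer against a stored salted hash, using a constant-time comparison so response timing reveals nothing about the stored value:

```python
import hashlib, hmac, os

# Store only a salted hash of the security answer, never the answer itself.
def hash_answer(answer, salt):
    normalized = answer.strip().lower()  # tolerate case/whitespace differences
    return hashlib.pbkdf2_hmac("sha256", normalized.encode(), salt, 100_000)

salt = os.urandom(16)
stored = hash_answer("Miller Park", salt)  # set at enrollment time

def verify_answer(candidate):
    # compare_digest takes the same time whether or not the values match
    return hmac.compare_digest(stored, hash_answer(candidate, salt))

assert verify_answer("miller park")       # normalization makes this match
assert not verify_answer("central park")  # wrong answer is rejected
```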

To reduce security risks even more, it is best to have the time between a password reset and the next user login to be as brief as possible. This can be done by emailing the user a temporary password reset link that expires after a certain period of time. 

Note: In enterprise environments this option may not always be available if users do not have a secondary email - the user’s primary email account could be an application that they are not able to access.

If the user contacted a help desk or if they are not able to access their email account, another possible option is resetting the password while on a call with the user, then having the user set their own password. Both options have the advantage of encouraging the user to set their new password as soon as possible.
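The expiring-link approach described above can be sketched as a signed, time-limited token. This is an illustrative example only, with a hypothetical secret and lifetime; a production system should use a vetted token library rather than hand-rolled code:

```python
import base64, hashlib, hmac, time

SECRET = b"illustrative-server-side-secret"  # hypothetical; never hard-code in practice
TOKEN_TTL = 15 * 60                          # link expires after 15 minutes

def make_reset_token(username, now=None):
    issued = int(now if now is not None else time.time())
    payload = f"{username}:{issued}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    # base64url output never contains '.', so '.' safely separates the parts
    return (base64.urlsafe_b64encode(payload).decode() + "."
            + base64.urlsafe_b64encode(sig).decode())

def check_reset_token(token, now=None):
    p64, s64 = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(p64)
    sig = base64.urlsafe_b64decode(s64)
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return None                                # tampered token
    username, issued = payload.decode().rsplit(":", 1)
    current = now if now is not None else time.time()
    if current - int(issued) > TOKEN_TTL:
        return None                                # expired link
    return username

token = make_reset_token("jsmith", now=1_000_000)
assert check_reset_token(token, now=1_000_000 + 60) == "jsmith"  # still valid
assert check_reset_token(token, now=1_000_000 + 3600) is None    # expired
```

Because the expiry is checked server-side from the signed timestamp, a leaked link becomes useless once the window closes.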

An IAM product such as Oracle Identity Manager or ForgeRock OpenIDM can be used to configure self-service password reset, letting users recover accounts without contacting the help desk. Self-service password reset can reduce the volume of help desk calls, but may not always be the best option, depending on how your organization confirms the owner of an account.

For more information on this post or any of our services or offers, contact us today.


PROFESSIONAL SERVICES - SYSTEMS ENGINEER

Technology Blog Jacque Tesoriero

SAML Federation Single Sign-On

Federation Single Sign-on (SSO) is a very popular means of providing SSO among internet applications...

Why Is It Good For Business?

Federation Single Sign-On (SSO) is a very popular means of providing SSO among internet applications, and only a few specifications provide SSO across the internet. What exactly is Security Assertion Markup Language (SAML) Federation? Why is it good for business?

SAML Federation works on the basis of establishing trust between entities to form a federation. A federation is a group of organizations which share information, but are internally independent. Essentially, once two entities decide to form a federation, they exchange information to identify each other. With SAML, each entity exchanges a metadata file representing basic information about the entity. An entity is either an Identity Provider (IdP) or a Service Provider (SP). The IdP provides information about the user, and the SP provides a service to the user.

This is great for business, as SAML provides flexibility over who will be able to access your service or user information. It requires both parties to be aware of each other through use of the metadata file, with each party understanding who is providing the service and who will be providing user identity information. The IdP needs to know what additional user information must be passed to the SP.

To create the federation, metadata files must be exchanged. Then, either side can initiate the SSO event. Depending on whether the user is already authenticated with the IdP, they will either be sent to the SP’s application or be prompted for authentication before access to the application is granted.

As soon as the IdP determines the user is authenticated, it sends the necessary user information in a SAML Assertion. The SAML Assertion alerts the SP that a user has been authenticated, initiating a search for a matching user so access to the application can be granted. If a matching user is found, they receive access. If not, the SP has the options of either creating the non-existing user or rejecting the authentication.
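To make the assertion concrete, the sketch below parses a highly simplified, hand-written assertion to pull out the subject and its validity window. Real assertions are produced by the IdP and carry much more (and a real SP must also validate the signature and audience before trusting anything in it):

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

# Minimal, hand-written assertion for illustration only.
ASSERTION = """<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Subject>
    <saml:NameID>jsmith@company-a.example</saml:NameID>
    <saml:SubjectConfirmation>
      <saml:SubjectConfirmationData NotOnOrAfter="2030-01-01T00:00:00Z"/>
    </saml:SubjectConfirmation>
  </saml:Subject>
</saml:Assertion>"""

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

root = ET.fromstring(ASSERTION)
# The NameID is what the SP matches against its own user store.
name_id = root.find(".//saml:NameID", NS).text
# NotOnOrAfter bounds how long the assertion may be accepted.
not_on_or_after = root.find(".//saml:SubjectConfirmationData", NS).get("NotOnOrAfter")
expiry = datetime.strptime(not_on_or_after, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

print(name_id)                              # jsmith@company-a.example
print(datetime.now(timezone.utc) < expiry)  # accept only inside the window
```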

The SAML assertion is sent as POST data through the end user's browser, so there is no direct connection from the IdP to the SP. There are options to encrypt the data within the assertion to prevent any browser-side snooping of information. During the metadata exchange, there is the option to provide encryption certificates, including the certificate’s public key used to encrypt data. The receiving server can only decrypt the data with the certificate’s private key. This adds an extra layer of protection on top of the TLS protocol that protects the traffic. 

The Beauty of SAML Federation

Once you are part of a federation, you can take advantage of services that your partners are federated with. Essentially you can “daisy chain” providers within the federation.

[Diagram: brokered trust among Company A, Company B and Company C]

In the above diagram, an employee of Company A (A) will authenticate through A’s website and access a service provided to Company B (B) from Company C (C). The employee will access C using his A credentials. C does not know what A is and vice versa. There is no agreement between the two. B collects the user information from A and then provides it to C. The access is dependent on B’s relationship with A and C. As far as A is concerned, B is providing the service that C has.

This is the brokered trust model, much like how a mortgage broker is the middleman between you and the bank. You trust when you go to a mortgage broker that they have a good relationship with the bank and your goal is to leverage that relationship to get a better deal. Company A is trusting Company B’s relationship with Company C.

SAML Federation is an amazing technology that makes user management across the internet easy. SAML Federation goes beyond just internet-based SSO, and allows systems across many different services to maintain the user data through SAML Assertions. Federation allows anyone to supplement their service with other service providers, meaning that you can provide a complete solution without owning and operating everything and provide quick and easy access to your clients.

Here at Hub City Media, I have had the opportunity to see many clients with varying implementations of federation, and I find that Oracle Access Manager's flexibility is quite amazing in this area. I expect SAML Federation to be with us for a long time.


PROFESSIONAL SERVICES - ARCHITECT 

 

OASIS SAML Technical Overview - https://wiki.oasis-open.org/security/Saml2TechOverview

Damien Carru's Blog: It's a Federated World - https://blogs.oracle.com/dcarru/entry/federation_proxy_in_oif_idp

Technology Blog Jacque Tesoriero

IAM Systems And Successful Business

Building a successful Identity and Access Management program isn’t just about having a feature-rich IAM product...

How Do You Leverage Your IAM System To Improve Your Organization's Security? 

Building a successful Identity and Access Management program isn’t just about having a feature-rich IAM product. A feature-rich product will aid in automating the provisioning and deprovisioning of applications, but it may not necessarily improve the security posture of an organization.

To improve security and raise awareness, it is crucial to form an IAM governance team responsible for enforcing policies and procedures. Awareness can be raised inside-out through security, business and compliance managers. Support of these personnel is crucial, as they have the necessary avenues already in place to influence users in the organization.  

An IAM program relies on the following factors to ensure durability to ever-changing business needs:

  • Product Selection

  • Governance Team

  • End-User Support

Product Selection

Assessing a product is critical to ensure longevity of the IAM program, as the organization matures. The following factors should be assessed when selecting an IAM product:

  • Available Connectors

  • Scalability

  • Deployment tools

  • Feature set

These factors will dictate the type of team required to maintain and administer the IAM solution. Ease of deployment and integration significantly increases the productivity of IAM engineers, providing a flexible system that can support constantly changing business needs with minimal friction and enabling engineers to integrate applications quickly.

The feature set should also be assessed while keeping current and future business needs in mind. Almost all available vendors provide a feature-rich IAM product, which makes the product selection process difficult. To further narrow the selection, companion products such as authentication directory, role management solutions and governance products provided by the same vendor should be assessed. These could provide tighter integration across IAM components and ensure efficient interoperability.

Governance Team

A Governance team plays an integral part in maintaining the IAM program. The Program Manager oversees the activities of the program, defining policies and forecasting IAM needs to increase maturity while providing support to the business. The following processes, among others, should be considered for the IAM program:

  • Application discovery

  • Rectifying business pain points while interacting with IT systems

  • Enforcing security and compliance policies

  • Automating processes to reduce operating costs

These processes should be enforced by leveraging the features available in your IAM product. Implementing Governance features such as recertification is crucial to the organization to stay in compliance with regulatory mandates. In order to implement the governance features, an application discovery initiative is required to identify the critical applications within the organization. Leveraging these discovery findings, additional projects should be planned to automate account provisioning. This will significantly improve current operating procedures, while ensuring provisioning activities are audited and reported appropriately. The end goal for a Governance team is to have all business and mission critical applications fully automated and remediated by the IAM system.  

To implement these processes, a team that is versatile and aware of the current organization processes is crucial to the success of the IAM program. These resources should identify gaps in existing processes and provide optimized solutions that can scale to the diverse landscape. Not having the proper resources involved will result in wasted time and cause initiatives/projects to either fail or take longer than expected.

Once in operational steady state, the IAM program should invest in more advanced tools, such as Security Information and Event Management (SIEM) systems, to provide context around security events and correlate incidents with other systems. This allows processes to be continuously monitored, assessed and improved, thereby expanding the footprint of the IAM system within the organization.

End-User Support

No matter how well processes are optimized and automated to fulfill IAM needs, end-user participation is essential. End-users play an integral role in the success of the IAM program. Persistent channels of communication with end-users are required to train and educate them on IAM processes as the program matures. This practice will improve end-user productivity.

End-user training will raise awareness of the IAM program while ensuring a constant feedback loop that aids in assessing the current state and optimizing processes to achieve a higher degree of end-user loyalty. End-users should be treated as partners rather than mere users of the IAM system. 

A mature IAM program will result in tools and processes available to application owners and other business teams to collaboratively improve the organization’s security posture. Improving the organization’s security will not only reduce operating costs but also aid in building an IAM foundation that can sustain the growth of the organization.

For more information on Identity and Access Management, contact us today. 


MANAGER - ARCHITECTURE

Technology Blog Jacque Tesoriero

Automation: Enhance Platform Deployments

The successful deployment of a critical application is crucial, and failure can have far-reaching consequences...

Human Error Can Lead To Larger Issues Down The Line. So How Do You Prevent It? 

The process of application deployment can be a stressful one for a company’s computing systems, management and IT department. The successful deployment of a critical application is crucial, and failure can have far-reaching consequences. For example, think back to the enormous technical snafu of the healthcare.gov launch - by some estimates, the government healthcare enrollment website was only able to enroll 1% of interested individuals in its first week (1). The importance of a successful deployment is amplified when considering cybersecurity infrastructure, such as identity and access management systems. Automated deployment can improve the speed, reliability and security of a “typical” manual deployment, and can significantly reduce the stress and foundational investment associated with this process.

Deployment is defined as ‘all of the activities that make a software system available for use’ (2). There can be great variability in how deployment is carried out, as both applications and customers have different characteristics and requirements; however, the general pattern consists of: installation, configuration and activation of new software, adaptation of existing systems to new software and any necessary updates. In production environments, the roles involved in this process generally include systems engineers, database administrators, network teams, IT stakeholders and project managers. Automation can reduce much of the complexity involved with deployment, and can realize improvements in speed, reliability and security.

Speed

Oftentimes, it takes a significant amount of time to deploy an application. Coordination of the roles involved may take longer than anticipated due to timezone differences, pre-existing obligations, lack of dedicated resources and other ‘human’ factors. Each role may possess a different part of the information required for successful deployment, such as a password or some configuration information, and preparation of the computing environment can drag on, wasting company time and money. Automation helps to alleviate these issues and can drastically cut down installation and configuration time. For example, many applications have configuration files that can be fed into the installer, and run immediately. Automation tools such as Ansible can feed these files into the installer, with all the information provided beforehand by those that possess it. Additionally, system configuration and installation management can be negotiated beforehand, provided to the automation tool, and with one click the entire deployment process can be kicked off and completed, without any need for manual intervention and all of the slowdowns associated with it. 

Reliability

Let’s face it - we all make mistakes. Whether that means forgetting your wallet at home or accidentally ‘fat-fingering’ a configuration option, mistakes make life more difficult. In a business’ IT systems, mistakes can mean lost time, profits and opportunities. Automated deployment significantly reduces the chance of making a mistake by minimizing human error. Most automation tools have the user define their tasks as a series of steps, specified in a file. For example, Ansible has users define steps in a ‘playbook’ - an easily readable list of steps written in a programmatic format (3). As long as this file does not change, the steps involved and the changes made to the system will be identical each time the automation tool is run. This makes troubleshooting, auditing and tracking changes significantly easier.
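The “deployment as data” idea behind playbooks can be illustrated in a few lines of Python. The steps and actions below are invented for illustration (Ansible’s actual playbooks are YAML files, not Python), but the property is the same: an unchanged step file produces identical changes on every run:

```python
# The deployment is data, not ad-hoc typing: every run performs the same
# steps in the same order. Step names, actions and targets are illustrative.
STEPS = [
    {"name": "stop application",  "action": "service_stop",  "target": "app01"},
    {"name": "copy new build",    "action": "copy",          "target": "app01"},
    {"name": "start application", "action": "service_start", "target": "app01"},
]

def run(steps, log):
    for step in steps:
        # a real tool would execute each action; here we just record it
        log.append(f"{step['action']} on {step['target']} ({step['name']})")

log_a, log_b = [], []
run(STEPS, log_a)
run(STEPS, log_b)
assert log_a == log_b  # unchanged step file -> identical run, every time
```

That determinism is exactly what makes troubleshooting, auditing and change tracking easier.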

Security

Generally, the fewer hands involved in deploying an application, the smaller the chance of a security breach within it. Passwords, protected system information and security keys are all exchanged between roles when installing and deploying software systems. This cross-talk introduces significant security holes, as confidential information often sits on email and chat servers, and maybe even on a piece of paper (hopefully not!). With automated deployment, one person can attain all of the necessary information and provide it to the automation agent, which usually has tools for encryption. Thus, automated deployment increases the security of a regular deployment by simplifying it.

At Hub City Media, we have used the automation tool Ansible to expedite the installation of identity and access management solutions. Our AutoInstaller products run on top of Ansible and cut product installation time by up to 80%. Our clients get a robust, secure and easily replicable way to integrate software systems into their existing architecture, and a much less stressful deployment process. We also use Ansible to automate internal tasks, such as setting up machine instances and installing bootcamps. Automated deployment has added tremendous value to our internal and external processes, and we hope you too can use it to realize your personal and business goals.



(1) http://www.bloomberg.com/news/articles/2013-10-16/why-the-obamacare-website-was-destined-to-bomb
(2) https://en.wikipedia.org/wiki/Software_deployment
(3) http://docs.ansible.com/ansible/playbooks.html

Technology Blog Jacque Tesoriero

Reducing IT Security Risks with Identity Management

Leveraging an identity management solution can mitigate IT security risks by eliminating orphan accounts, fixing poor password standards and...

Why An Identity Management System Is Essential To Your Organization

Identity-related security breaches are major concerns for organizations. Due to rapid technological growth, identity is no longer "just" a user account. ‘Identity’ can consist of many devices, roles and entitlements. As these additional entities accumulate around an identity, enterprises become vulnerable when such complex identities are not properly administered. Leveraging an identity management solution can mitigate IT security risks by eliminating orphan accounts, fixing poor password standards and providing auditing services.

Orphaned accounts, accounts that still have access to systems without a valid owner, can introduce potential security holes to an enterprise. Without prompt and thorough de-provisioning of terminated employees, stagnant accounts can grant unauthorized access to sensitive systems and provide information to unauthorized users.

An Identity Management (IDM) system can: 

  • Discover, continuously monitor and cleanse orphaned accounts from an organization

  • Reconcile accounts from various sources, such as databases, applications and directories, to find lingering orphaned accounts

  • Automate the deprovisioning process of orphaned accounts with well-defined workflows and policies, allowing for more consistent, coordinated and immediate removal, compared to a manual process, which is prone to mistakes
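The reconciliation idea above can be sketched in a few lines of Python. The account data and field names below are invented for illustration: compare application accounts against the authoritative HR feed, and flag anything without a living owner for review:

```python
# Hypothetical data: current employees per the authoritative HR feed,
# and accounts discovered in one application.
hr_active = {"emp001", "emp002", "emp003"}
app_accounts = {
    "jsmith":   "emp001",
    "mjones":   "emp002",
    "svc_rpt":  None,      # no human owner on record -> flag for review
    "bolduser": "emp999",  # owner no longer in the HR feed -> candidate orphan
}

# Any account whose owner is missing or no longer active is a candidate
# orphan to be reviewed and, if confirmed, deprovisioned.
orphans = sorted(acct for acct, owner in app_accounts.items()
                 if owner is None or owner not in hr_active)

print(orphans)  # ['bolduser', 'svc_rpt']
```

A real IDM system does this continuously across many sources, but the core check is the same set comparison.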

Poor password standards can put an organization at risk, as users with weak passwords are more susceptible to identity theft. In the worst case, an entire organization can be compromised if the passwords of privileged accounts are exposed to intruders. As new applications are introduced to an organization, users accumulate numerous credentials, and differing password complexity rules between applications can result in users creating passwords that are weak but easy to remember.

An IDM system can remedy these potential security risks. Password inconsistencies can be reduced by utilizing centralized password policies within an IDM system. In ForgeRock OpenIDM, for example, password policies can be scoped over groups of users. This allows for a tighter level of control on end-user authentication security, especially for high-risk groups who might need more frequent resets or more complex standards for password content and length.
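As an illustration of group-scoped policies, here is a minimal Python sketch. The group names and rules are invented (in OpenIDM this is configuration, not hand-written code), but it shows how one central policy table can apply stricter rules to higher-risk groups:

```python
import re

# Hypothetical centralized policy table, scoped by group.
POLICIES = {
    "default":    {"min_length": 8,  "require_symbol": False},
    "privileged": {"min_length": 14, "require_symbol": True},
}

def check_password(password, group):
    policy = POLICIES.get(group, POLICIES["default"])
    if len(password) < policy["min_length"]:
        return False  # too short for this group's policy
    if policy["require_symbol"] and not re.search(r"[^A-Za-z0-9]", password):
        return False  # high-risk groups must include a symbol
    return True

assert check_password("blueSky99", "default")            # meets baseline policy
assert not check_password("blueSky99", "privileged")     # fails stricter rules
assert check_password("correct-horse-battery#1", "privileged")
```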

Auditing user and group activity is essential for any organization, especially for meeting regulatory requirements. IDM systems can:

  • Centralize historical records, which can be crucial to debugging problems

  • Provide answers to questions such as “when was this account provisioned?” and “who approved the request?” in activity logs and database tables

  • Identify unusual or suspicious activity in real time

Oracle Identity Manager, for example, has functionality to define audit policies that detect Segregation of Duties violations, inform administrators and construct robust approval workflows to handle them.

IDM systems offer a multitude of benefits to an organization, not least of which is reducing critical security risks. Vendors such as Oracle and ForgeRock offer feature-rich and extensible IDM solutions that can complement existing environments with powerful governance tools. Consumers should decide which solution best meets their unique needs, bearing in mind that an IDM system is essential to the security and efficiency of an organization. 


ForgeRock. “White Paper: OpenIDM.” July 2015, https://www.forgerock.com/app/uploads/2015/07/FR_WhitePaper-OpenIDM-Overview-Short-Letter.pdf

Guido, Rob. University Business. “Before the Breach: Leveraging Identity Management Technology to Proactively Address Security Issues.” February 2009, https://www.universitybusiness.com/article/breach-leveraging-identity-management-technology-proactively-address-security-issues

Lee, Spencer. Sans Institute. “An Introduction to Identity Management.” March 11, 2003,
https://www.sans.org/reading-room/whitepapers/authentication/introduction-identity-management-852

Lieberman, Philip. Identity Week. “Identity Management And Orphaned User Accounts.” January 30, 2013,
https://www.identityweek.com/identity-management-and-orphaned-user-accounts/

Oracle. “Oracle Identity Manager - Business Overview.” March 2013,
http://www.oracle.com/technetwork/middleware/id-mgmt/overview/oim-11gr2-business-wp-1928893.pdf

Prince, Brian. eWeek. “Old User Accounts Pose Current Security Risks for Enterprises.” May 5, 2008, http://www.eweek.com/c/a/Security/Old-User-Accounts-Pose-Current-Security-Risks-for-Enterprises

PWC, Inc. “How to use identity management to reduce the cost and complexity of Sarbanes-Oxley compliance*”. April 14, 2005
http://www.pwc.com/us/en/increasing-it-effectiveness/assets/howidmsupportscompliance.pdf

Technology Blog Jacque Tesoriero

Minimizing Access Request Complexities - Maximizing User Experience

What exactly should an end-user see when requesting access? This is a common hurdle for teams when implementing an Identity solution…

Why Is A Simplified End-User Experience Beneficial To All?

You are an Administrative Assistant on the first day of your new job. Your manager sends you the link to your new company’s access request site and asks you to request everything you will need to perform your duties. You log into the Identity Management (IDM) system to make a request and are immediately alarmed at the number of options and fields available.  Selecting any of these options brings you to a new page with the same number of options!  You start to contemplate asking a colleague what they requested, or even begin to submit a few generic requests -- if only you could find where to submit them!

What Exactly Should An End-User See When Requesting Access?

Unfortunately, this scenario happens to end-users more often than we’d like to admit.  When considering a new Identity Management solution, or even reevaluating an existing solution for improvement, it’s important to keep this type of scenario in mind and set goals to reduce the complexities of access requests in the eyes of the end-user.

What exactly should an end-user see when requesting access? This is a common hurdle for teams when implementing an Identity solution. One overarching guideline for approaching this issue is to keep the interface simple. While this is not a new concept, it is often forgotten when attempting to provide a feature-rich solution. Remember, in general, less is more to an end-user.  

Most end-users visit the Identity Management solution infrequently, so knowledge gained in one session is rarely retained for the next. Even if there is no direct impact to security, an implementer should consider restricting view-permissions on screens, resources or attributes to only the necessary groups. In addition, the end-user should be offered guided choices rather than free-form requests, making requests meaningful, the fulfillment of manual processes more efficient and the setup of automated processes possible. This may require translating attribute values to help end-users understand the requests they are creating.
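
As a rough illustration of restricting what end-users see, a request catalog can be filtered by the requester's group memberships before it is ever rendered. This is only a sketch; the catalog entries, group names and `visible_to` field are invented for illustration, not taken from any specific IDM product:

```python
# Illustrative catalog: each requestable item declares which groups
# should be able to see it in the request interface.
CATALOG = [
    {"name": "Finance Reporting", "visible_to": {"finance"}},
    {"name": "Source Control", "visible_to": {"engineering"}},
    {"name": "Email", "visible_to": {"finance", "engineering", "hr"}},
]

def visible_items(user_groups):
    """Return only the catalog entries whose visibility groups
    intersect the requesting user's groups."""
    groups = set(user_groups)
    return [item["name"] for item in CATALOG
            if item["visible_to"] & groups]

print(visible_items(["finance"]))  # ['Finance Reporting', 'Email']
```

A member of `finance` never sees the engineering-only entries, which shrinks the option set described in the scenario above without touching the underlying security model.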

What impact do target resources have on this process? When developing the interface for end-users, implementers must consider that the Identity Management solution is dependent upon target resources for defining necessary form field values. Often these inputs are similar to what is supplied from the trusted source and can be transferred to the target resource behind the scenes within the Identity Management solution. However, some of these inputs are specific to the resource and must be specified on account creation or update.

It may be possible to shift this responsibility away from the end-user by manipulating the target resource to default some of these values in certain situations. Target resource administrators may even be able to take this a step further by consolidating points of access control. For example, several application owners may choose to utilize a common repository to manage permissions allowing the Identity Management solution to interact with a single target system for all participating applications. Either approach may translate to the end-user as less to manage and remember. 

What if end-users are still confused by what they should request? It is not uncommon for end-users to know what job they must fulfill and still not know what access is needed for that job. This is especially true for ‘Day One’ employees. At this point, Role Based Access Control (RBAC) may be considered to further simplify the request system. Following this approach, roles would be defined to identify specific duties of an employee within an organization.  

Once defined, these roles can be mapped to all target resource permissions required to perform those duties. A user no longer has to request an individual piece of access from each target resource, only the role they need to fulfill. This makes the requests more intuitive by further automating the process and placing more of the technical attributes beyond the scope of end-user visibility. These benefits come at some cost, however. Significant effort may be required by a Business Analyst to initially define roles, approval workflows may become more complex and certifications may be necessary to maintain the roles (although that comes with additional benefits as well!). 
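Conceptually, a role definition is just a mapping from a business-level role to the per-target entitlements it implies. A minimal sketch of that expansion (all role, system and entitlement names here are invented for illustration; a real IDM product stores these definitions in its own catalog):

```python
# Hypothetical role definitions: business role -> list of
# (target system, entitlement) pairs that the role implies.
ROLE_DEFINITIONS = {
    "admin_assistant": [
        ("active_directory", "GRP-Office-Staff"),
        ("email", "DL-Front-Office"),
        ("expense_app", "submitter"),
    ],
    "it_helpdesk": [
        ("active_directory", "GRP-IT-Support"),
        ("ticketing", "agent"),
    ],
}

def expand_roles(requested_roles):
    """Translate a user's role request into the per-target
    entitlements that actually need to be provisioned."""
    grants = []
    for role in requested_roles:
        grants.extend(ROLE_DEFINITIONS.get(role, []))
    return grants

print(expand_roles(["admin_assistant"]))
```

The end-user requests one role name; the three technical entitlements behind it never appear in the request interface, which is exactly the visibility reduction described above.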

Can we eliminate end-user requests altogether? In most cases, this is not feasible. However, the number of requests may be greatly reduced by further automating processes in the Identity Management solution. Information from the Identity Management solution’s trusted source may be able to identify a number of roles applicable to a user.

This starts to form the basis for Attribute Based Access Control (ABAC) and the idea of birthright resources. Attributes of a user profile, specifying anything from a user’s position to the entire active user base, can be mapped to a set of roles. From this point, provisioning is carried out similarly to RBAC. This may, for example, further alleviate ‘Day One’ basic access requests for new and transferred employees. Roles that are provisioned via ABAC can be removed from the request system, reducing the choices available to end-users, while an RBAC approach can be utilized in parallel for the remaining roles.
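A birthright rule can be thought of as a predicate over profile attributes that grants roles automatically, with no request involved. A hypothetical sketch (the attribute and role names are made up):

```python
# Hypothetical ABAC birthright rules: each rule pairs a predicate over
# the user's profile with the role it grants when the predicate matches.
BIRTHRIGHT_RULES = [
    (lambda u: u.get("status") == "active", "base_access"),
    (lambda u: u.get("department") == "Finance", "finance_apps"),
    (lambda u: u.get("title", "").startswith("Manager"), "approver"),
]

def birthright_roles(user):
    """Return every role the user is entitled to purely by virtue of
    their profile attributes."""
    return [role for predicate, role in BIRTHRIGHT_RULES if predicate(user)]

new_hire = {"status": "active", "department": "Finance", "title": "Analyst"}
print(birthright_roles(new_hire))  # granted on 'Day One', no request needed
```

Roles granted this way can then be hidden from the request catalog entirely, leaving only the remaining RBAC roles for users to choose from.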

The methods described above aim to reduce what is available to end-users when requesting access. By doing so, end-users are less likely to request inappropriate access or have requests stalled in approval or manual provisioning workflows due to inaccurate request descriptions. It also lends itself to a better user experience by limiting the training required to make an employee effective at utilizing the IDM system.

As the system evolves and begins to build upon each of these methods, Identity Management solution administrators will begin to focus more heavily on certification and segregation of duty definitions to maintain the relationships among attributes, roles and target resources. 

 

Technology Blog Jacque Tesoriero

Confidentiality and Ethics: When Outside Consultants have Inside Access

With increasing security concerns in both consumer applications and large-scale enterprise deployments, it becomes even more critical as professional consultants to adhere to a code of ethics...

Ethics Through The Eyes Of An IAM Consultant

As Identity and Access Management (IAM) consultants, we spend a significant amount of time in differing client environments, often having access to databases, directories and applications containing very sensitive user data. 

For example, we might be at a client site with full access to their Human Resources application. These applications contain very sensitive user information, including home addresses, social security numbers, salaries, etc. I can personally recall several instances where I was on a project with all of this data accessible.  

With increasing security concerns in both consumer applications and large-scale enterprise deployments, it becomes even more critical as professional consultants to adhere to a code of ethics that maintains end-user privacy, preserves confidentiality and protects against information leaks.

A few things to keep in mind: 

  1. We have a responsibility to our clients and their user base to maintain privacy. User data, not just personally identifiable information, should always be respected. User data in Development and Quality Assurance environments is often copied directly from Production. This is a significant security risk, as non-Production environments are often less secure and accessible to more people within an organization, making them prone to misuse. Clients are advised to invest time in sanitizing data in these environments (e.g. scrambling SSNs or changing birth dates). With a bit of work, it is very much possible to maintain realistic, Production-like data in Test environments.

  2. While working on client projects, we have an obligation to keep information that we discover confidential. For instance, a consultant might have access to a client's IAM system and see a familiar employee in a 'Disabled' state with all access revoked. While it might be tempting to share this information with colleagues, it is highly unethical to do so. Often, we are asked to sign non-disclosure agreements; however, even if we are not, there is still a strong responsibility to keep private information private.

  3. We also have an obligation to report when confidentiality might be at risk. For example, if you received an improperly distributed spreadsheet containing very sensitive information, such as employee salaries, you should quickly realize the error and immediately inform someone who is able to intercede before that information is leaked. If not, extremely sensitive data could be severely compromised.
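The sanitization advice in point 1 can be sketched as a small transform applied while loading data into a non-Production environment. The field names and masking scheme below are illustrative only, not from any specific HR system:

```python
import hashlib
from datetime import date, timedelta

def sanitize_user(user):
    """Replace sensitive fields with realistic but fake values before
    loading a record into a non-Production environment."""
    clean = dict(user)
    # Deterministic digest so repeated loads yield the same fake values
    # without having to store a mapping back to the real data.
    n = int(hashlib.sha256(user["ssn"].encode()).hexdigest(), 16)
    clean["ssn"] = "900-00-%04d" % (n % 10000)  # 900-series is never issued
    # Shift the birth date by up to six months either way, keeping the
    # value plausible for testing age-based logic.
    born = date.fromisoformat(user["birth_date"])
    clean["birth_date"] = (born + timedelta(days=n % 365 - 182)).isoformat()
    clean["salary"] = None  # drop outright rather than scramble
    return clean
```

Because the masking is deterministic, every refresh of the Test environment produces the same sanitized values, so test cases keep working while the real SSNs, birth dates and salaries never leave Production.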

Computer systems can be used to violate the privacy of others. As consultants, we have an obligation to maintain confidentiality. In the end, it’s about being professional and respecting the value of privacy.

For further reading about this topic, please refer to the Software Engineering Code of Ethics and Professional Practice by the Association for Computing Machinery. 

Technology Blog Jacque Tesoriero

Four Tips for Integrating Your Identity Management System with Your Information Technology Service Management System

We have always had customers who wanted to integrate their identity management system with a custom user interface. Recently, we have been noticing...

We have always had customers who wanted to integrate their identity management system with a custom user interface. Recently, we have been noticing an increase in customers that want to integrate the identity management system (IDM) with an information technology service management system (ITSM). For those of you unfamiliar with the term, an ITSM is basically your trouble ticketing or IT request system. More organizations are using the ITSM as the central system for all user IT requests, such as for equipment and software. It’s part of a larger movement to centralize IT processes and measure the effectiveness of IT in providing those services. So if organizations are centralizing all IT requests, it seems only natural that they would want user requests for access to flow through the same system. This creates a “one stop shop” for all interactions between business users and IT.

Oracle Identity Manager (OIM) 11g R2 introduced a new request user interface that uses a more familiar metaphor, the shopping cart. System access is now something you search for in a catalog, add to your shopping cart and then “check out” to submit the request. This type of task-based UI is something users need little training to master because they use it all the time when they shop online; however, despite this tremendous leap forward in usability, some customers still want to move requests to the ITSM.

There is no out-of-the-box integration between OIM and any of the more popular ITSM systems. So this means a custom integration using the OIM API and the API of the ITSM is required. Here are four guidelines you should consider in your integration design:

  • Use the ITSM for requests only. While it may be tempting to hide the entire IDM system from end users, it’s unnecessary and will require you to re-engineer more than the request interface. Most users will understand that the ITSM is for service requests but things like password changes / resets happen elsewhere.

  • Keep access approvals in the IDM system. If your IDM system is like OIM, then it will be capable of supporting custom approval workflows. Use the IDM system for these approvals. Approvals may require the approver to do more than merely accept or reject the request. The approver may be asked to update fields in the request. Since this is something that is already happening on the IDM system, don’t reinvent the wheel. You also want your IDM system to be the single point of audit. This means that all data around the request should be collected and captured by the IDM system. If you have approvals occurring in the ITSM, you will need to pull data from the IDM and ITSM systems to get a complete picture for your auditors. By keeping the requests in the IDM system, you will simplify your ability to provide auditors with information.

  • Post status updates from the IDM to the ITSM. While users are going to be submitting access requests to the ITSM, they are also going to be checking on the status of those requests. It’s important to update the ITSM with the current status of the request from key points in the request workflow running in the IDM system.

  • Automatically synchronize the IDM catalog with the ITSM catalog. The catalog of requestable items in your IDM system is going to change constantly. You want to automate the synchronization of items from the IDM catalog into your ITSM catalog as much as possible. This is critical, as you don’t want to duplicate configuration in your IDM and ITSM for every change to your access catalog.

This is by no means an exhaustive list, but it’s a good start. Your requirements are going to drive much of the specific design of your integration.
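As a sketch of the catalog-synchronization guideline, the core of such a job is a diff between the two catalogs. The item shape and `key`/`display_name` field names below are assumptions for illustration; a real integration would use the OIM API and the ITSM's API to fetch the catalogs and apply the resulting changes:

```python
def sync_catalogs(idm_items, itsm_items):
    """Compute the adds, updates and removals needed to make the ITSM
    request catalog mirror the IDM catalog. Each item is a dict with a
    stable 'key' and a 'display_name'."""
    idm = {item["key"]: item for item in idm_items}
    itsm = {item["key"]: item for item in itsm_items}
    to_add = [idm[k] for k in idm.keys() - itsm.keys()]
    to_remove = [itsm[k] for k in itsm.keys() - idm.keys()]
    to_update = [idm[k] for k in idm.keys() & itsm.keys()
                 if idm[k]["display_name"] != itsm[k]["display_name"]]
    return to_add, to_update, to_remove
```

Running a diff like this on a schedule keeps the two catalogs aligned without anyone re-entering configuration in the ITSM every time an access item changes.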

If you have any questions or comments, feel free to contact me. I’d like to hear how you are planning your ITSM / IDM integration. We’ve created several ITSM / IDM integrations for our customers and if you’re considering it, we can help.

Email: steve@hubcitymedia.com       Twitter: @stevegio


CTO AND FOUNDER
