Scenario - Secure Analytics Platform


In this scenario we will design a secure analytics platform that ingests data from various external and internal sources and ETLs it into a data lake where analytics are performed. This is a scenario I am personally familiar with, as I worked on an internal project called Project Earth doing just this.

Given that the scope of this is very large and proprietary, only the security aspects of the platform will be discussed in this first iteration.

At a Glance

The following shows a high-level view of the solution layers and data flows:

Scenario - Secure Analytics Platform

Security Architecture

As mentioned in the ‘Security and Encryption’ section, security occurs at multiple levels, ranging from data to systems to people. In the following we will cover, at a high level, how we hardened each of these components.

Networking & Communication

When it comes to networking and communication, we needed to ensure that only known / secure enclaves were able to access the systems and that outbound data flows were restricted.

We achieved this using a few mechanisms working in combination.

The following shows the Flow policies for the dev/staging/production environments:

Scenario - Secure Analytics Platform - Flow Policies

Only specific ports / protocols were allowed between application tiers and inbound:

Scenario - Secure Analytics Platform - Flow Policy Detail

Categories were leveraged to specify app tiers and environments. Only certain ports were allowed between them:

Scenario - Secure Analytics Platform - Policy Categories
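To make the tier/category model concrete, here is a minimal sketch of how such default-deny, tier-to-tier allow rules can be modeled and checked. The tier names and ports are illustrative, not the actual policy from the screenshots:

```python
# Hypothetical allow rules in the spirit of the Flow policies above.
# Keys are (source tier, destination tier); values are permitted ports.
ALLOWED = {
    ("web", "app"): {8080},
    ("app", "db"): {5432},
    ("inbound", "web"): {443},
}

def is_allowed(src_tier, dst_tier, port):
    """Default-deny: a flow passes only if an explicit rule permits it."""
    return port in ALLOWED.get((src_tier, dst_tier), set())

print(is_allowed("web", "app", 8080))  # explicitly permitted
print(is_allowed("web", "db", 5432))   # no rule, so denied
```

The important property is the default-deny lookup: anything not explicitly listed between two tiers is blocked.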

Here’s a sample look at a Flow policy for dev which shows the allowed inbound sources. It also highlights the blocked connections which coincidentally were from an internal pentesting tool:

Scenario - Secure Analytics Platform - Flow Policy Detail

Systems and Configuration

When it comes to the stack, there were a few core layers.

The full stack was 100% automated using Puppet, Nutanix SCMA, and environment templates. This allowed us to ensure security / configuration baselines and adherence to them. It also allowed us to simply update packages whenever security vulnerabilities were found.

Within the Nutanix platform the native SCMA was leveraged (enabled by default), which ensures a STIG’d / secure configuration for the CVMs and hosts. Cluster lockdown mode was enabled (recommended) to force key-based access.
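The baseline-enforcement idea above can be sketched as a simple drift check: compare what is installed against the approved baseline and flag any divergence. The package names and versions here are made up for illustration:

```python
# Illustrative configuration-baseline check in the spirit of the automated
# enforcement described above; package names / versions are hypothetical.
BASELINE = {"openssl": "3.0.13", "openssh": "9.6"}

def drift(installed):
    """Return packages whose installed version differs from the baseline."""
    return {pkg: (installed.get(pkg), want)
            for pkg, want in BASELINE.items()
            if installed.get(pkg) != want}

# A host matching the baseline reports no drift; a stale package is flagged.
print(drift({"openssl": "3.0.13", "openssh": "9.6"}))
print(drift({"openssl": "3.0.13", "openssh": "9.3"}))
```

In practice a tool like Puppet performs both the detection and the remediation; the sketch only shows the detection half.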


With any platform that integrates with multiple systems, secret management is a very important item. Initially we started with encrypted YAML (eyaml) within Puppet, but eventually moved this to a more secure / manageable Hiera backend. There are multiple options here, such as HashiCorp Vault.
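One useful pattern when moving between secret stores is to hide the backend behind a small lookup interface, so callers never hard-code where a secret lives. This is a hedged sketch of that idea; the backend interface is hypothetical (a real deployment would plug in a Vault or Hiera client here), with environment variables standing in:

```python
import os

class EnvBackend:
    """Stand-in secret backend that resolves names from environment
    variables; a real backend (e.g. HashiCorp Vault) would go here."""
    def get(self, name):
        return os.environ.get(name)

def lookup(name, backends):
    """Try each configured backend in order; secrets never live in code."""
    for backend in backends:
        value = backend.get(name)
        if value is not None:
            return value
    raise KeyError(f"secret {name!r} not found in any backend")
```

Swapping eyaml for a different backend then only means changing the list of backends, not every consumer of a secret.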

Data Encryption

Data encryption is important to ensure an attacker can’t make any sense of the data if they were to steal it or otherwise come into possession of it.

Native Nutanix software-based encryption was leveraged to provide data encryption. In scenarios where a key manager isn’t available, the local key manager (LKM) can be leveraged. If using an external key manager (EKM), it is recommended to rotate keys; with the LKM this occurs yearly by default.

Scenario - Secure Analytics Platform - Data Encryption
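The yearly rotation cadence mentioned above reduces to a simple age check on each key. A minimal sketch, assuming a 365-day interval (matching the LKM default described in the text; the dates are examples):

```python
from datetime import date, timedelta

# Rotation interval mirroring the yearly LKM default mentioned above.
ROTATION_INTERVAL = timedelta(days=365)

def needs_rotation(key_created, today):
    """A data-encryption key is due for rotation once it reaches a year old."""
    return today - key_created >= ROTATION_INTERVAL

print(needs_rotation(date(2023, 1, 1), date(2024, 6, 1)))  # over a year old
print(needs_rotation(date(2024, 1, 1), date(2024, 6, 1)))  # still fresh
```

With an EKM the same check would drive an alert or an automated re-key rather than being manual.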

Data Scoping and RBAC

One of the most important things once the “stack” has been hardened is ensuring that only specified individuals get access to the data they should have access to, and that all access / requests / granting is fully auditable. From my perspective, the only way to accurately do this is through a fully automated system where any granting of access, approvals, etc. has to flow through the automation and nothing can be done manually. This is exactly what we did with Project Earth; even as admins, we can’t override access for users.

If we break this down, there are a few core stages: request, approval, role assignment, and validation.

For all requests and ticketing we leverage ServiceNow (SNOW). Within SNOW we created a custom catalog in which users could request access to specific types of data. The available types of “roles” / data available are automatically generated and published to SNOW whenever new data sources / roles are created.

Once requested, it would go to the user’s manager for approval, and then to two other approvers who must approve before any access is granted. This “dual key” approval ensured proper checks and balances. Another key point here is that access could only be requested for a limited time period and could be revoked at any time. Upon expiration / revocation, membership was automatically removed.

Once the request was approved the role assignment / group membership was fully automated.
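The approval rules described above (manager sign-off plus two additional approvers, with time-boxed membership) can be sketched as a single gate. The field names are hypothetical, not the actual SNOW schema:

```python
from datetime import datetime, timedelta

def grant(request, now):
    """Grant access only when fully approved ("dual key" plus manager);
    any granted membership carries an explicit expiry."""
    approved = request["manager_approved"] and len(request["approvers"]) >= 2
    if not approved:
        return None  # nothing is granted until every approval is in place
    return {
        "user": request["user"],
        "role": request["role"],
        "expires": now + timedelta(days=request["duration_days"]),
    }
```

Encoding the expiry at grant time is what lets the automation remove membership without any manual step.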

The following figure shows a high-level view of the flow:

Scenario - Secure Analytics Platform - RBAC / Role Assignment

For validation we have checks to ensure members of a group match the “approved” state in SNOW. All authentication / access requests are logged and stored in a central logging system. Using this system we could look for anomalous access or things out of the ordinary.
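The membership-validation check above is essentially a set comparison between actual group members and the approved state in SNOW. A minimal sketch, with illustrative names:

```python
def membership_drift(actual_members, approved_members):
    """Flag anyone in the group without approval, and anyone approved
    but missing from the group (the "approved" state is authoritative)."""
    actual, approved = set(actual_members), set(approved_members)
    return {
        "unapproved": actual - approved,  # should trigger removal / alert
        "missing": approved - actual,     # should trigger re-sync
    }
```

Running a check like this on a schedule catches both manual tampering and sync failures, which is what makes the SNOW record trustworthy as the source of truth.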

Change Control

A key to auditability is proper change control throughout all aspects of the system. All requests / approvals were stored in SNOW. All other items, whether Puppet, custom logic and code, etc., were kept in a hardened source control system to which only specific developers / keys have access. Any modifications / new code first went through a code review process / security validation. Once reviewed / approved, the changes were put into “purgatory” until validation in the dev / staging environments was complete. Any modifications to production systems are done using automation to ensure the potential for human error is minimized.
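The promotion flow above amounts to a gate that production deployments must pass. A hedged sketch of that gate, with hypothetical field names standing in for whatever the pipeline actually records:

```python
def can_promote_to_production(change):
    """A change may reach production only after code review / security
    validation and successful validation in both dev and staging."""
    return (change["reviewed"]
            and change["validated_in"].issuperset({"dev", "staging"}))

# A fully validated change passes; one stuck in "purgatory" does not.
print(can_promote_to_production(
    {"reviewed": True, "validated_in": {"dev", "staging"}}))
print(can_promote_to_production(
    {"reviewed": True, "validated_in": {"dev"}}))
```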