Securing The Land Registry in the Cloud
By James Bromberger
15th October 2017
The Scenario
For a broad set of reasons, the AWS Public Cloud was the only practical direction for the Land Registry. Akkodis prepared a tour de force, using the advanced security features of Amazon Web Services to exceed the customer's expectations.
The Solution
Any deployment to the Cloud requires the knowledge to secure the workload at all points. With the AWS platform, the physical security model became straightforward, but creating a highly secure solution still requires many security aspects and controls to be considered. Here is a sample of the measures that Akkodis uses in its AWS environments.
AWS Master Account Security
Each AWS account currently has a master user, known as the Root user. This user is not governed by IAM security policies, and its use in general operations is not recommended. As such, the Akkodis team always assigns a physical MFA token to this identity, and then deposits the token with the customer's information security team. From that point on, the account's Root credentials can only be accessed through a request to the customer's security team, and are used only in unforeseen emergencies. The account is further secured with multiple challenge-response security questions.
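As a hedged illustration only (not part of the team's actual runbooks), the IAM account summary can be used to confirm that the Root user has an MFA device assigned:

```python
import boto3

# Minimal sketch: check the account summary for the Root user's MFA status.
iam = boto3.client("iam")
summary = iam.get_account_summary()["SummaryMap"]

if summary.get("AccountMFAEnabled") == 1:
    print("Root user has an MFA device assigned")
else:
    print("WARNING: Root user has no MFA device assigned")
```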
IT Staff Identity Federation
Maintaining separate “islands of identity” across multiple service providers is a recipe for disaster. Too often, service providers offer only a completely stand-alone user management system for their service, requiring additional overhead to manage passwords (issuance, reset/recovery), attributes (email address, phone, even photo), and account removal. While AWS offers stand-alone authentication of users, along with Multi-Factor Authentication and location constraints by way of IP Address Conditions in policies, there are better ways of authenticating management staff.
As Active Directory is the authoritative source of identity credentials for the customer, the Akkodis team uses Active Directory Federation Services (ADFS) as a SAML 2.0 provider to bridge the divide between AD as the Identity Provider (IDP), and the AWS control plane as the Relying Party (RP).
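As a rough sketch of what this federation looks like at the API level (the ARNs are placeholders, and the base64-encoded SAML assertion would come from the AD FS sign-in response), the STS AssumeRoleWithSAML call exchanges the assertion for temporary AWS credentials:

```python
import boto3

# Placeholder: the SAML response posted by AD FS after the user signs in.
saml_assertion_b64 = "..."

# AssumeRoleWithSAML is an unsigned call; no long-lived AWS keys are needed.
sts = boto3.client("sts", region_name="ap-southeast-2")
response = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::111122223333:role/ADFS-Operations",        # hypothetical role
    PrincipalArn="arn:aws:iam::111122223333:saml-provider/ADFS",     # hypothetical IdP
    SAMLAssertion=saml_assertion_b64,
    DurationSeconds=3600,
)
creds = response["Credentials"]  # temporary AccessKeyId / SecretAccessKey / SessionToken
```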
This means staff maintain one set of credentials, in one authoritative directory. Any modification to this directory is highly controlled, and alerted on in one place. Group membership maps to roles, and existing HR procedures already cover staff departures.
AD FS also permits us to tie in Multi-Factor Authentication, providing a higher level of trust in the authenticated party. Furthermore, when assuming an IAM role, the team chose to lock access down to a select few trusted Internet address ranges (including IP ranges at primary sites and at DR facilities) by way of Condition elements in IAM policies.
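A minimal sketch of such a Condition follows; the role name and address ranges are illustrative, not the customer's actual values:

```python
import json
import boto3

# Deny all actions when the request does not originate from trusted ranges.
ip_restricted_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "NotIpAddress": {"aws:SourceIp": ["203.0.113.0/24", "198.51.100.0/24"]}
        },
    }],
}

boto3.client("iam").put_role_policy(
    RoleName="ADFS-Operations",                 # hypothetical role name
    PolicyName="DenyOutsideTrustedRanges",
    PolicyDocument=json.dumps(ip_restricted_policy),
)
```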
With the recent CloudFormation IAM Role support (Oct 6, 2016), user identities and roles that create CloudFormation Stacks can finally also be protected by IP address Conditions in IAM Policies, and this is what the Akkodis team has now implemented.
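A brief sketch of creating a stack under a CloudFormation service role (stack name, template location and role ARN are placeholders):

```python
import boto3

# The stack is created using the service role, while the calling identity
# remains subject to the IP-address Conditions in its own IAM policies.
boto3.client("cloudformation").create_stack(
    StackName="workload-web-tier",
    TemplateURL="https://s3.amazonaws.com/example-bucket/template.yaml",
    RoleARN="arn:aws:iam::111122223333:role/CloudFormationServiceRole",
    Capabilities=["CAPABILITY_NAMED_IAM"],
)
```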
API Logging in Escrow
The AWS cloud permits us to log many of the API operations against its services via a critical service called CloudTrail. The Akkodis team always sends this log, for all regions in a workload account, directly to a dedicated AWS account used exclusively for security logging and analytics. Operations teams have no access to this account; only a limited set of security users do, and processing within it is automated.
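A minimal sketch of enabling such an all-region trail from a workload account (the trail and bucket names are placeholders):

```python
import boto3

# Deliver the trail for every region to a bucket owned by the logging account.
ct = boto3.client("cloudtrail")
ct.create_trail(
    Name="org-security-trail",
    S3BucketName="central-security-cloudtrail-logs",
    IsMultiRegionTrail=True,
    IncludeGlobalServiceEvents=True,
)
ct.start_logging(Name="org-security-trail")
```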
It is easy to bring new Workload accounts into this logging and analytics regime. When a new Workload account is created, the logging bucket's S3 Bucket Policy is updated to permit the CloudTrail service in the new account to log to it under the new account identifier. The logs then fall under a prefix that is picked up by automatic log examination and analysis tools, and immediately come under the review of many rules.
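The Bucket Policy change might look roughly like the following sketch (the account ID and bucket name are placeholders; the statements follow the standard CloudTrail cross-account logging pattern):

```python
import json
import boto3

BUCKET = "central-security-cloudtrail-logs"   # placeholder logging bucket
NEW_ACCOUNT_ID = "444455556666"               # placeholder workload account

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # CloudTrail must be able to read the bucket ACL
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {   # CloudTrail may write only under the new account's prefix
            "Sid": f"AWSCloudTrailWrite-{NEW_ACCOUNT_ID}",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/AWSLogs/{NEW_ACCOUNT_ID}/*",
            "Condition": {
                "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        },
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```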
Additional rules per workload, environment or account also come to bear, providing a greater level of visibility around operations. For example, any access to the AWS control plane not originating from the customer’s IP ranges is seen as an interesting event, and duly included in near-real-time operational reporting.
More importantly, Akkodis configures the delivered logs to be subject to a Lifecycle Policy governing their retention. As the logs are stored on Amazon S3, which offers effectively unlimited capacity, there is never a need to prematurely delete API activity logs due to capacity limits.
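As a sketch, such a Lifecycle Policy might transition older log objects to Glacier and expire them after a retention period; the figures below are illustrative, not the customer's actual settings:

```python
import boto3

# Retention sketch: archive logs after 90 days, expire after roughly 7 years.
boto3.client("s3").put_bucket_lifecycle_configuration(
    Bucket="central-security-cloudtrail-logs",       # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "cloudtrail-retention",
            "Filter": {"Prefix": "AWSLogs/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 2555},
        }]
    },
)
```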
Virtual Private Cloud
The Akkodis team employs a VPC in Sydney, using RFC 1918 compliant private address space split across three Availability Zones. This permits a distributed approach to fault tolerance: in the rare event of a failure of any single Availability Zone, only a third of operational instances would be impacted (down from half in a two-AZ model), and AutoScaling would return to normal capacity by launching instances in the two surviving Availability Zones (in a two-AZ model, only one surviving AZ remains to provide capacity during a failure).
The Akkodis VPC design aligns subnets across Availability Zones that will be used for the same purpose, so that traditional on-premise firewall rules can reference simple, contiguous CIDR ranges. For example, a set of subnets for internal elastic load balancers may be (see the sketch after this list):
- AZ A: 10.0.0.0/26
- AZ B: 10.0.0.64/26
- AZ C: 10.0.0.128/26
- (Reserved for AZ D in future: 10.0.0.192/26)
- => Contiguous CIDR range for internal load balancers: 10.0.0.0/24
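The alignment above can be computed directly; a small illustrative sketch:

```python
import ipaddress

# Carve the /24 reserved for internal load balancers into four /26 blocks,
# one per current or future Availability Zone.
elb_range = ipaddress.ip_network("10.0.0.0/24")
for az, subnet in zip("ABCD", elb_range.subnets(new_prefix=26)):
    print(f"AZ {az}: {subnet}")
# AZ A: 10.0.0.0/26, AZ B: 10.0.0.64/26, AZ C: 10.0.0.128/26, AZ D: 10.0.0.192/26
```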
In order to simplify legacy on-premise firewall policies, only load balancers were deployed across this contiguous CIDR block, listening on their specific service ports (e.g., 443; see Elastic Load Balancer below).
Groups of EC2 instances are configured in Security Groups per role, permitting ingress from the role's shared load balancers on the relevant service ports. The application's Security Group is in turn granted inbound access to the relational databases where required (see RDS, below).
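As a sketch (the group IDs are placeholders), such a rule references the load balancer's Security Group rather than an IP range:

```python
import boto3

# Application-tier instances accept HTTPS only from the tier's shared ELB SG.
boto3.client("ec2").authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",            # application tier Security Group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "UserIdGroupPairs": [{"GroupId": "sg-0fedcba9876543210"}],  # ELB Security Group
    }],
)
```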
Occasionally the EC2 instances need to reach out across the Internet reliably to external APIs and data sources. Using an AWS NAT Gateway, egress-only Internet access is enforced for groups of EC2 instances (over and above the restrictive Security Groups and any Network ACLs).
Elastic Compute Cloud (EC2) Instances – Application Servers
The EC2 instances are launched by way of AutoScaling groups (across three Availability Zones). Each application server is launched with an IAM Instance Profile, associated with an IAM Role and thus IAM Policies, providing the AWS API credentials the application server requires to connect natively to S3, DynamoDB, SQS, and other services. These credentials rotate multiple times per day, and give each application server only the unique access it requires, such as to restricted prefixes in S3 Buckets, or to explicit namespaces for SQS Queues.
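A minimal sketch of what this looks like from the application's side: with an Instance Profile in place, the AWS SDK picks up the rotating credentials automatically, so no access keys are ever stored on the server (bucket and key names are placeholders):

```python
import boto3

# boto3 obtains short-lived credentials from the instance metadata service;
# the role's policies then scope access to, e.g., a restricted S3 prefix.
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="workload-data", Key="app/restricted-prefix/config.json")
print(obj["Body"].read())
```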
The EC2 instances themselves are generated from a custom Amazon Machine Image (AMI), itself generated on demand from a CloudFormation template. This custom AMI is refreshed periodically to include all Operating System updates available at the point of image creation. The image is customised to apply all updates upon boot, and to further apply pending critical security updates every day (outside of business hours). This helps reduce the window during which newly discovered vulnerabilities are present on the deployed EC2 instances.
Elastic Load Balancer (ELB) Configuration
Privacy is important for the Land Titles Registry system, even within its VPC and on premise networks. The project's preference is to use modern, strong cryptography in flight, with TLS (formerly SSL) connections for all protocols that support it. Unencrypted options are removed: rather than transparently redirecting HTTP to HTTPS, the preference is to ensure HTTPS is used end-to-end in the application's workflow.
Amazon’s Elastic Load Balancing service permits fine-grained control over the termination of TLS traffic. Various pre-baked SSL Policies permit the selection of protocols, ciphers and other options. A truly secure solution requires an informed position on the protocols and ciphers to be negotiated with clients, and the ability to adapt this over time becomes ever more important as the threat landscape changes.
In today’s world, encryption capability is not negotiable for corporate applications, and public-facing applications face serious tests as stronger compliance programs come into force. The Akkodis team only permits the use of TLS 1.2 on ELB (and manages the permitted protocols over time, looking forward to TLS 1.3 being available in ELB and supported in clients). Weaker bulk ciphers, such as 3DES, and older message digests, such as MD5 and SHA-1, are also disabled, while Perfect Forward Secrecy with ephemeral keys is permitted and server cipher-order preference is required. More importantly, these choices are managed and maintained on an ongoing basis, and will change with the threat landscape over time.
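As an illustrative sketch only (assuming an Application Load Balancer listener; the ARN is a placeholder), a listener can be pinned to a predefined TLS 1.2-only security policy:

```python
import boto3

# Restrict the HTTPS listener to the predefined TLS 1.2-only policy.
boto3.client("elbv2").modify_listener(
    ListenerArn="arn:aws:elasticloadbalancing:ap-southeast-2:111122223333:"
                "listener/app/web/50dc6c495c0c9188/f2f7dc8efc522ab2",
    SslPolicy="ELBSecurityPolicy-TLS-1-2-2017-01",
)
```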
Load balancers are configured to log access to S3, and this S3 Bucket is in turn configured with Object Versioning, Lifecycle Policies to expire data after a defined period, and Bucket Policies for policing encryption and other requirements.
Web Application Development
Web browser clients may accidentally use unencrypted protocols when a user types a URL into their browser by hand without specifying a protocol, as most web browsers default to unencrypted HTTP in this case. This can be avoided, for users who have previously used a given HTTPS endpoint successfully, by configuring web applications (or reverse proxies) to inject the HSTS header. This advisory HTTP header asks the client to remember that, having successfully negotiated a trusted SSL connection on this request, the browser should default to HTTPS for this site in future (up to some maximum timeout).
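A minimal sketch of injecting the header, assuming a Flask application (the same header can equally be set by a reverse proxy):

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_hsts_header(response):
    # Ask browsers to default to HTTPS for this site for the next year.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response
```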
SSL Certificate Management
Akkodis uses a combination of SSL certificates from multiple providers. For standard server-authenticated SSL connections (no X.509 client certificates), certificates issued by AWS Certificate Manager are used. These certificates have a strong chain of trust using SHA-256 signatures, and are renewed automatically before expiry, reducing operational overhead and the risk of expired certificates.
Amazon Key Management Service
Amazon KMS durably holds symmetric encryption keys, all within the same region as the workload is operating from, with visibility of key usage via the AWS CloudTrail service (see above for in-escrow logging). With some of these keys, Envelope Encryption is performed on large data exports.
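A sketch of the envelope encryption pattern (the key alias and file name are placeholders): KMS issues a data key under a master key, the plaintext copy of the data key encrypts the export locally, and only the wrapped copy of the key is kept alongside the ciphertext.

```python
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

large_export = open("export.dat", "rb").read()   # illustrative data to protect

# KMS generates a 256-bit data key under the named master key.
kms = boto3.client("kms")
data_key = kms.generate_data_key(KeyId="alias/export-key", KeySpec="AES_256")

# Encrypt locally with the plaintext data key; discard it after use.
nonce = os.urandom(12)
ciphertext = AESGCM(data_key["Plaintext"]).encrypt(nonce, large_export, None)

# Store only the wrapped key with the ciphertext; recover it later via kms.decrypt().
wrapped_key = data_key["CiphertextBlob"]
```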
S3 Object Encryption at Rest
Amazon S3 offers us multiple methods to encrypt data at rest:
- Encrypted before upload to S3 (i.e., from within the application servers)
- Encrypted after upload to S3, using a specific named key from the Amazon Key Management Service
- Encrypted after upload to S3, using a default key in the AWS account per service and per region
For some data, a combination of pre-S3-upload encryption and S3 server-side encryption is used. For some S3 buckets, the use of encryption is enforced by S3 Bucket Policy, thwarting possible attempts to upload data that does not meet the set requirements.
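As a sketch (the bucket, object key and KMS alias are placeholders), server-side encryption with a named KMS key is requested per upload; a Bucket Policy can additionally deny any PutObject request that omits the encryption header:

```python
import boto3

# Upload an object with SSE-KMS under a specific named key.
boto3.client("s3").put_object(
    Bucket="workload-data",
    Key="exports/2017-10-15/titles.csv.enc",
    Body=b"...",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/workload-data-key",
)
```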
S3 Buckets Locked to Specific VPCs
Further security is layered onto S3 to not only require authentication over strong TLS 1.2 HTTPS connections, but also to lock access down to specific Virtual Private Cloud networks by way of S3 Endpoints. This makes a bucket inaccessible – even for bucket management in the AWS Console – from everywhere except API requests originating from the permitted VPCs.
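A sketch of such a Bucket Policy (the bucket name and endpoint ID are placeholders), denying all access except requests arriving via the permitted S3 VPC Endpoint:

```python
import json
import boto3

# Deny every S3 action on the bucket unless the request came via the VPC Endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAccessOutsidePermittedVPCs",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::sensitive-config",
            "arn:aws:s3:::sensitive-config/*",
        ],
        "Condition": {"StringNotEquals": {"aws:sourceVpce": "vpce-0a1b2c3d"}},
    }],
}
boto3.client("s3").put_bucket_policy(Bucket="sensitive-config", Policy=json.dumps(policy))
```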
A use case for this additional level of security is to further protect sensitive data, such as security credentials, source code or highly sensitive documents.
Database Encryption at Rest and In Flight
Using the Amazon KMS service, the Amazon Relational Database Service (RDS) is configured to encrypt storage at rest for all primary data sets. This also protects the snapshots generated daily from the RDS instance.
A custom RDS Parameter Group is used to enforce SSL encryption in-flight for SQL traffic from the application servers to the database instance(s). This rejects any unencrypted connections, ensuring that end-to-end encryption requirements are met.
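A sketch of both controls (identifiers are placeholders; rds.force_ssl assumes the PostgreSQL engine):

```python
import boto3

rds = boto3.client("rds")

# Storage encryption is chosen at instance creation; snapshots of an
# encrypted instance are themselves encrypted.
rds.create_db_instance(
    DBInstanceIdentifier="registry-db",
    DBInstanceClass="db.r4.large",
    Engine="postgres",
    MasterUsername="registry_admin",
    MasterUserPassword="use-a-secret-store-not-a-literal",
    AllocatedStorage=100,
    StorageEncrypted=True,
    KmsKeyId="alias/rds-registry-key",
    DBParameterGroupName="registry-pg-force-ssl",
)

# The custom Parameter Group rejects unencrypted SQL connections.
rds.modify_db_parameter_group(
    DBParameterGroupName="registry-pg-force-ssl",
    Parameters=[{
        "ParameterName": "rds.force_ssl",
        "ParameterValue": "1",
        "ApplyMethod": "pending-reboot",
    }],
)
```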
Integration to On Premise Systems
Connectivity to the on premise systems is by way of private fibre connectivity and a set of backup VPNs over the Internet, each comprising dual IPsec (encrypted) tunnels. Each VPC presents its traffic to and from the on premise network into the relevant on premise virtual router, and that traffic must traverse additional on premise firewall(s).
Two further techniques were used to integrate on-cloud and on premise systems, depending on the capability of the on premise system. Those modern enough were configured to accept specific X.509 client certificates, making the integration a native, mutually verified TLS connection. For other integrations, a dedicated integration layer was deployed using Docker into the on premise environment to act as a proxy – itself requiring X.509 client certificates, and using other protocols on premise (e.g., username and password to talk to on premise databases).
AWS Shield and Web Application Firewall (WAF)
Protection from Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks is assisted by the AWS Shield service. The project deliberately chose to minimise its attack surface (exposed service ports), and additional pieces of the platform, such as CloudFront for CDN services and Route 53 for DNS services, are automatically protected by this as well.
Additionally, AWS Web Application Firewall lets the Akkodis team write rules to further reject traffic and protect exposed endpoints based on a number of conditions.
The Outcomes and Benefits
The set of security controls used has ensured that the Land Registry has operated smoothly, and has raised the bar on security for all operations in the jurisdiction.
It’s important to note that the state of security in the major Cloud providers is a constant journey, with improvements in posture being made available throughout the year.
A key differentiator in making the public cloud work for your organisation is the depth and breadth of knowledge that Akkodis brings to bear. Very few consultancies worldwide have the expertise that exists at Akkodis. Please contact us for more information.
NB: Not all security considerations are disclosed here, for operational reasons. Details such as protocols, ciphers, algorithms, and services used are subject to change, based upon changing threats over time, and innovations in architecture and services.