
Fix listener certificate handling #62

Merged

Conversation

JonRoma
Collaborator

@JonRoma JonRoma commented Feb 13, 2024

In the original design of this module, we assumed that each ECS task had its own listener certificate and that these resources could be applied and destroyed along with the ECS task. This works in most cases, but was broken by the special case wherein more than one ECS task shares a listener certificate because both tasks listen on the same host_header but are differentiated by different path_pattern values (and different priorities).

This assumption broke for services like authman, which meet these conditions by sharing a host_header. The original design allowed for creating a listener certificate independently (outside the apps directory), but naively assumed that separate listener certificates could be managed in the individual apps subdirectories.

The result is that running the Terraform in any of the apps subdirectories would apply or destroy the listener certificate regardless of whether other ECS tasks were using it.

To fix this problem, we add a manage_listener_certificate variable, which changes the behavior when applying or destroying an ECS task.

This variable is only meaningful when a load_balancer object is defined for the ECS task, and when a public load balancer is used. Tasks using a private load balancer do not need SSL certificates because intra-VPC traffic is deemed secure. Tasks not using a load balancer (such as a daemon process) don't have a listener at all.
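As a sketch of the common case (the module source and attribute names here are illustrative assumptions drawn from this description, not the module's documented schema), a task fronted by a public load balancer might look like:

```hcl
# Hypothetical ECS task configuration; names are assumptions.
module "task" {
  source = "../../modules/ecs-task" # illustrative path

  load_balancer = {
    host_header = "app.example.illinois.edu"

    # Defaults to true when a load_balancer block is defined, so the
    # listener certificate shares the ECS task's lifecycle.
    manage_listener_certificate = true
  }
}
```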

The following assumes that the Terraform for an ECS task specifies a load_balancer block:

  • If the manage_listener_certificate sub-object in the load_balancer
    block is true (which is the default when a load_balancer block is
    defined), the listener certificate is managed with the ECS task,
    and will have the same lifecycle.

  • If the manage_listener_certificate sub-object in the load_balancer
    block is false, the module assumes that a listener certificate is
    managed independently, in a separate configuration directory,
    using the terraform-aws-lb-listener-certificate module.

In the latter case, the listener certificate is not managed with the container, and persists beyond the lifetime of any of the individual ECS tasks that use it.
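A minimal sketch of that separate configuration directory, assuming the terraform-aws-lb-listener-certificate module takes the listener and certificate ARNs as inputs (its actual input variables may differ):

```hcl
# Standalone listener-certificate configuration, applied independently
# of any ECS task. Input names below are assumptions for illustration,
# not the module's documented interface.
module "listener_certificate" {
  source = "github.com/techservicesillinois/terraform-aws-lb-listener-certificate"

  listener_arn    = data.aws_lb_listener.public.arn
  certificate_arn = data.aws_acm_certificate.shared.arn
}
```

Because this configuration is applied and destroyed on its own, the certificate outlives any one task that routes through it.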

The intent of setting manage_listener_certificate to false is for use cases where multiple tasks share a host_header, and use path_pattern and priority sub-objects in the load_balancer block to distinguish the task to which traffic should be routed.
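For example (file paths and attribute names are assumed for illustration), two tasks sharing a host_header would each set manage_listener_certificate to false and route on distinct path_pattern/priority pairs:

```hcl
# apps/authman-web/main.tf (hypothetical)
load_balancer = {
  host_header  = "authman.example.illinois.edu"
  path_pattern = "/web/*"
  priority     = 10

  # The shared certificate is managed elsewhere, so destroying this
  # task leaves the certificate in place for the other task.
  manage_listener_certificate = false
}

# apps/authman-api/main.tf (hypothetical)
load_balancer = {
  host_header  = "authman.example.illinois.edu"
  path_pattern = "/api/*"
  priority     = 20

  manage_listener_certificate = false
}
```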

@JonRoma JonRoma added bug Something isn't working service:authman labels Feb 13, 2024
@JonRoma JonRoma self-assigned this Feb 13, 2024
@JonRoma JonRoma marked this pull request as ready for review February 13, 2024 22:03
@JonRoma JonRoma merged commit 0ad4207 into techservicesillinois:main Feb 14, 2024
1 check passed
@JonRoma JonRoma deleted the feature/fix-listener-cert branch February 14, 2024 03:10