
Duplicate logs: structured and unstructured #115

Closed · Nxtra opened this issue Aug 18, 2020 · 6 comments

Labels: bug (Something isn't working)

Comments


Nxtra commented Aug 18, 2020

My logs are appearing twice in CloudWatch: once in a structured format and once unstructured.
I'd only like to have the structured logs.
I didn't see this behavior before, so it might be something on my side.

Expected Behavior

Logs appear only once, in structured format.

Current Behavior

Logs appear twice: once structured and once unstructured.

Environment

  • Powertools version used: 1.1.2
  • AWS Lambda function runtime: Python
  • Debugging logs

Powertools is set up like this:

from aws_lambda_powertools import Logger, Tracer, Metrics
from aws_lambda_powertools.metrics import MetricUnit

logger = Logger()
tracer = Tracer()
metrics = Metrics()

@metrics.log_metrics
@logger.inject_lambda_context
@tracer.capture_lambda_handler
def handle(event, context):
    ...
    logger.info('validated')

I am seeing the logs twice:
[screenshot: the same log entry in CloudWatch, once structured (JSON) and once unstructured]

Seems like the root logger is still capturing and logging.

Why do I have duplicate logs? What am I missing here?

Nxtra added the bug (Something isn't working) and triage (Pending triage from maintainers) labels on Aug 18, 2020
heitorlessa (Contributor) commented Aug 18, 2020 via email

heitorlessa referenced this issue in heitorlessa/aws-lambda-powertools-python Aug 18, 2020
heitorlessa (Contributor) commented Aug 18, 2020

@Nxtra managed to replicate it. It's what I suspected: Lambda configures the root logger with a basic, unstructured handler (it shouldn't), and since you're configuring a new Logger, the message is emitted twice due to how logging works in Python (hierarchy: root logger -> your logger; a record traverses from your logger up to the root).
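
For anyone unfamiliar with that propagation mechanic, here is a minimal sketch using plain Python logging (not Powertools; all names are illustrative) that reproduces the duplicate output:

import logging

# Simulate what the Lambda runtime does: attach a basic, unstructured handler to the root logger.
logging.basicConfig(format="[ROOT] %(message)s", level=logging.INFO)

# Create "your" logger with its own (structured-ish) handler.
my_logger = logging.getLogger("my_service")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('{"message": "%(message)s"}'))
my_logger.addHandler(handler)
my_logger.setLevel(logging.INFO)

# propagate defaults to True, so the record is handled here AND by the root handler:
my_logger.info("validated")
# Output (two lines for a single call):
#   {"message": "validated"}
#   [ROOT] validated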

Why wasn't this happening before?

On 1.1.0, we introduced the Logger Inheritance feature, which also made our Logger implementation fully compliant with how Python logging works. However, that also means changes you make in your Parent Logger or a Child Logger propagate between them.
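
For context, the inheritance feature looks roughly like this; a minimal sketch, assuming the child=True flag that shipped with it (verify against your Powertools version), with an illustrative service name:

from aws_lambda_powertools import Logger

# Parent Logger, created once in the handler module; it registers its own handler.
logger = Logger(service="payment")

# Child Logger, created in a shared/helper module; with child=True it attaches to the
# parent's handler and configuration instead of registering a new one, so structured
# logging changes made on either side are reflected in the other.
shared_logger = Logger(service="payment", child=True)

shared_logger.info("emitted through the parent's handler")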

Solution

We remove the root logger handler set by Lambda, since you're using ours. An alternative solution would be to disable log propagation when we create a Logger for your app; however, that would remove the ability to propagate structured logging changes between the Parent and the Child, so I chose the former.

This fix will be released within the hour, in 1.1.3.
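
The fix described above amounts to something like this simplified sketch (illustrative only, not the actual Powertools code):

import logging

def remove_lambda_root_handlers() -> None:
    """Drop any handlers the Lambda runtime pre-registered on the root logger so that
    records propagating up from the Powertools Logger are not emitted a second time
    in an unstructured format."""
    root_logger = logging.getLogger()
    for handler in list(root_logger.handlers):
        root_logger.removeHandler(handler)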

heitorlessa added a commit that referenced this issue Aug 18, 2020
fix: remove root logger handler set by Lambda #115
heitorlessa (Contributor) commented Aug 18, 2020

1.1.3 is now live on PyPI -- could you please try it and let us know here, @Nxtra?

I've deployed a fresh function, ran it locally with the SAM CLI too, and managed to replicate the issue in tests, but... just in case.

Nxtra (Author) commented Aug 18, 2020

I updated my project to 1.1.3.
The issue has disappeared!
Thanks for elaborating on what caused the issue.
And wow, big thanks for delivering the fix on such short notice!

Nxtra closed this as completed Aug 18, 2020

viniciusao commented Nov 20, 2021

Just a note for everyone coming here who is using AWS Chalice:
You must set the configure_logs parameter to False when instantiating the Chalice() app object.

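In code, that looks roughly like this (a minimal sketch; configure_logs is Chalice's constructor parameter, the app name is illustrative):

from chalice import Chalice

# Disable Chalice's own logging setup so it doesn't configure the root logger
# and reintroduce the duplicate, unstructured log lines.
app = Chalice(app_name="my-app", configure_logs=False)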

heitorlessa removed the triage (Pending triage from maintainers) label on Feb 13, 2023

warvariuc commented Sep 3, 2023

I had a similar issue. In my project, the Powertools logger was configured before the main logging configuration. I ended up doing this:

import aws_lambda_powertools

pt_logger = aws_lambda_powertools.Logger(service="...")
# The Powertools Logger sets up its own handler to stdout and adds a filter to prevent
# duplicates in the root logger, but logging is not configured at this point, so the filter
# is not attached to any handler of the root logger; it's easier to just disable propagation.
pt_logger._logger.propagate = False  # noqa
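
A possible alternative, following the reasoning in the comment above and untested here: configure the standard logging machinery before instantiating the Powertools Logger, so its duplicate-prevention filter has a root handler to attach to and propagation can stay enabled (the dictConfig contents are illustrative):

import logging.config

import aws_lambda_powertools

# Configure standard logging first...
logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {"console": {"class": "logging.StreamHandler"}},
    "root": {"handlers": ["console"], "level": "INFO"},
})

# ...then create the Powertools Logger, so it sees the already-configured root handlers.
pt_logger = aws_lambda_powertools.Logger(service="my-service")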
