
Interceptor Proxy - OTEL Instrumentation #910

Closed
Tracked by #965 ...
zorocloud opened this issue Feb 2, 2024 · 6 comments · Fixed by #927
Labels
stale-bot-ignore All issues that should not be automatically closed by our stale bot

Comments

@zorocloud
Contributor

zorocloud commented Feb 2, 2024

Proposal

Currently the interceptor proxy does not emit any metrics or traces, which makes it difficult to understand how the service is performing. It should use OTEL instrumentation to provide observability, making it easier to monitor key performance attributes of the service in a real-world setting.
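For illustration, here is a minimal sketch in Go of how the proxy could record request metrics through the OTel SDK while still serving a Prometheus-format endpoint. This is not the add-on's actual implementation; the instrument names, attribute keys, and port are hypothetical.

```go
package main

import (
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus/promhttp"
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	otelprom "go.opentelemetry.io/otel/exporters/prometheus"
	"go.opentelemetry.io/otel/metric"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

func main() {
	// Bridge OTel metrics to a Prometheus-format /metrics endpoint.
	exporter, err := otelprom.New()
	if err != nil {
		panic(err)
	}
	provider := sdkmetric.NewMeterProvider(sdkmetric.WithReader(exporter))
	otel.SetMeterProvider(provider)

	meter := otel.Meter("interceptor-proxy")
	// Hypothetical instrument names, for illustration only.
	requestCount, _ := meter.Int64Counter("interceptor_request_count",
		metric.WithDescription("Total requests handled by the interceptor proxy"))
	requestDuration, _ := meter.Float64Histogram("interceptor_request_duration_seconds",
		metric.WithDescription("Request latency in seconds"))

	proxy := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		// ... forward the request to the upstream workload here ...
		w.WriteHeader(http.StatusOK)

		attrs := metric.WithAttributes(attribute.String("host", r.Host))
		requestCount.Add(r.Context(), 1, attrs)
		requestDuration.Record(r.Context(), time.Since(start).Seconds(), attrs)
	})

	mux := http.NewServeMux()
	mux.Handle("/metrics", promhttp.Handler())
	mux.Handle("/", proxy)
	_ = http.ListenAndServe(":8080", mux)
}
```

Instrumenting through the OTel meter and exporting via the Prometheus reader keeps a single instrumentation path while still exposing a scrapeable /metrics endpoint.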

Use-Case

Running the interceptor proxy in a production setting where having insight into the performance of your platform services is key.

Is this a feature you are interested in implementing yourself?

Yes

Anything else?

No response

@zorocloud changed the title from "Interceptor Proxy OTEL Instrumentation" to "Interceptor Proxy - OTEL Instrumentation" on Feb 2, 2024
@JorTurFer
Member

JorTurFer commented Feb 3, 2024

I think this is a really interesting feature!
In KEDA (core) we already support Prometheus, and we recently added experimental support for OTel (for metrics in both cases). I think that supporting both for metrics, plus OTel for traces, would be an awesome improvement.
If you are willing to help with that, I can assign the issue to you :)
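As a rough illustration of the traces half, here is a minimal sketch in Go (not KEDA's actual code) that wires an OTLP trace exporter and wraps the proxy handler with otelhttp so each proxied request produces a server span. The collector endpoint, service name, and port are assumptions.

```go
package main

import (
	"context"
	"log"
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	semconv "go.opentelemetry.io/otel/semconv/v1.21.0"
)

func main() {
	ctx := context.Background()

	// Export spans over OTLP/gRPC to a collector (address is an assumption).
	exporter, err := otlptracegrpc.New(ctx,
		otlptracegrpc.WithEndpoint("otel-collector:4317"),
		otlptracegrpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}

	tp := sdktrace.NewTracerProvider(
		sdktrace.WithBatcher(exporter),
		sdktrace.WithResource(resource.NewWithAttributes(
			semconv.SchemaURL,
			semconv.ServiceName("interceptor-proxy"),
		)),
	)
	defer func() { _ = tp.Shutdown(ctx) }()
	otel.SetTracerProvider(tp)

	proxy := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// ... forward the request to the upstream workload here ...
		w.WriteHeader(http.StatusOK)
	})

	// otelhttp starts a server span per request and records method, status, and duration.
	handler := otelhttp.NewHandler(proxy, "interceptor.proxy")
	log.Fatal(http.ListenAndServe(":8080", handler))
}
```

Because the metrics and trace pipelines are configured independently, this portion could land in a separate PR from the metrics work, which matches the split discussed below.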

@zorocloud
Contributor Author

Yes, I'm happy to do that. I actually know someone who is interested in contributing the trace portion of this; is it possible for both of us to take the issue and split it?

@JorTurFer
Member

Yes, I'm happy to do that. I actually know someone who is interested in contributing the trace portion of this; is it possible for both of us to take the issue and split it?

I think so. You can work together in a single fork if both parts are required to work, or in two PRs if they can work independently; in that case, the second PR will just rebase on the first one and validate that it works.

@zorocloud
Contributor Author

in two PRs if they can work independently; in that case, the second PR will just rebase on the first one and validate that it works

Sweet, will go with this approach :)

@zorocloud
Contributor Author

zorocloud commented Feb 20, 2024

@JorTurFer I have raised an initial PR for metrics support. Let me know your thoughts :)


stale bot commented Apr 21, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

stale bot added the "stale" label (All issues that are marked as stale due to inactivity) on Apr 21, 2024
@JorTurFer added the "stale-bot-ignore" label (All issues that should not be automatically closed by our stale bot) on Apr 21, 2024
stale bot removed the "stale" label on Apr 21, 2024