System.Diagnostics.TraceInternal.Listeners.get timed out #55696
Comments
Tagging subscribers to this area: @safern
@rajkumar-rangaraj @cijothomas @reyang, for reference.
Tagging subscribers to this area: @tarekgh, @tommcdon, @pjanotti
Thanks for the nice writeup @TimothyMothra! @gregg-miskelly - who is a good person for handling func-eval issues these days? When thread 5 tries to abort the func-eval, I assume the graceful abort fails because the thread is holding the event source monitor lock. In general, that placement for BreakpointWithDebuggerFuncEval() is awkward because it means every func-eval run from that point will always have a monitor lock held. One potential solution would be to not func-eval at that point and instead step out three frames, which would get the app clear of the lock, then do the func-eval.
I marked this as 7.0, but realistically the .NET milestone doesn't matter for issues resolved in Visual Studio. If it does wind up being something that needs a .NET change, we'll need to evaluate what work is involved.
So, we've found a workaround for this issue inside VS, and I'm working on a fix right now. As things go, it would probably be available in VS 2022. I don't know if we have a path to getting it fixed in 2019, though.
This issue has been fixed. It will be released with VS 2022, Preview 4. If there is a large need to have it in VS 2019, we can see about backporting the fix. Let us know.
I know I would love to see it get into 2019, if only because after 2019 I will no longer be able to use the load testing and related tools I rely on with my WCF project. I will therefore be developing in 2019 for quite a while, since we are unable to divert resources to rewrite our custom load testing. So the trace timeouts really would be something it would be nice not to have to worry about.
Thanks @delmyers, much appreciated!
Looking forward to having the fix in VS 2019 as well.
As long as the workaround won't break in VS 2021, I'm fine with no backports. (Is there a way to conditionalize project file properties on the VS version?)
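On the question of conditionalizing project file properties: MSBuild sets the `$(VisualStudioVersion)` property when building under Visual Studio, so a property group can be gated on it. A minimal sketch (the `VS2019` constant name here is just an illustrative example, not something the project defines):

```xml
<!-- Applies only when building under Visual Studio 2019 (VisualStudioVersion 16.0) -->
<PropertyGroup Condition="'$(VisualStudioVersion)' == '16.0'">
  <DefineConstants>$(DefineConstants);VS2019</DefineConstants>
</PropertyGroup>
```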
@argelj289 @haldiggs OK, we have fixed the issue in VS 2019 as well, version 16.11.2. It should be released in the next month or two as our servicing releases come out.
Thank you very much!
@delmyers @TimothyMothra - is there anything left that needs fixing being tracked by this issue, or can we close it?
@noahfalk Nothing that I know of. You can go ahead and close this as far as I'm concerned.
My team hasn't received any customer complaints since the fix was released. I think we can close this. :) |
I represent the Application Insights .NET SDK.
Our customers are reporting an issue with our TraceListener.
Related source: https://referencesource.microsoft.com/#System/compmod/system/diagnostics/TraceInternal.cs,30
Description
Per our investigation, we're seeing deadlocks involving CLR native code and System.Diagnostics.TraceInternal, and need help investigating.
Related issues:
Reproduction
We had a customer kindly share a repro here:
Investigation
We investigated using WinDbg and found evidence of deadlocks.
Thread 49 => clr code => Entered critsec => Executing code => Managed code => Waiting to enter a managed lock
Thread 5 => clr code => managed code => Entered lock => executing code => clr code => Waiting to enter critsec