
Optimize xTaskIncrementTick for configNUMBER_OF_CORES > 1 #1118

Merged

2 commits merged into FreeRTOS:main on Aug 19, 2024

Conversation

@cymizer cymizer (Contributor) commented Aug 13, 2024

Description

The original implementation only initializes the first element of the array. After executing xTaskIncrementTick, scheduling might not behave as expected.

Test Steps

N/A

Checklist:

  • I have tested my changes. No regression in existing tests.
  • I have modified and/or added unit-tests to cover the code changes in this Pull Request.

Related Issue

N/A

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@cymizer cymizer requested a review from a team as a code owner August 13, 2024 15:12

tasks.c (Outdated)

@@ -4699,7 +4699,7 @@ BaseType_t xTaskIncrementTick( void )
     BaseType_t xSwitchRequired = pdFALSE;

     #if ( configUSE_PREEMPTION == 1 ) && ( configNUMBER_OF_CORES > 1 )
-        BaseType_t xYieldRequiredForCore[ configNUMBER_OF_CORES ] = { pdFALSE };
+        BaseType_t xYieldRequiredForCore[ configNUMBER_OF_CORES ] = { [ 0 ... configNUMBER_OF_CORES ] = pdFALSE };

Member

This is not C89 syntax but a GNU extension, which we cannot use because FreeRTOS is used with a variety of compilers. Please use a loop to initialize the array.
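
For reference, a dedicated initialization loop in portable C89 style might look like the sketch below (illustrative only; the next comment suggests folding the initialization into an existing loop instead):

BaseType_t xYieldRequiredForCore[ configNUMBER_OF_CORES ];
BaseType_t xCoreID;

/* Set every element explicitly; this avoids the GNU-only range designator. */
for( xCoreID = 0; xCoreID < ( ( BaseType_t ) configNUMBER_OF_CORES ); xCoreID++ )
{
    xYieldRequiredForCore[ xCoreID ] = pdFALSE;
}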

@jefftenney jefftenney (Contributor) commented Aug 13, 2024

There is an existing loop where this change would fit nicely.

Existing Code:

for( xCoreID = 0; xCoreID < ( ( BaseType_t ) configNUMBER_OF_CORES ); xCoreID++ )
{
    if( listCURRENT_LIST_LENGTH( &( pxReadyTasksLists[ pxCurrentTCBs[ xCoreID ]->uxPriority ] ) ) > 1U )
    {
        xYieldRequiredForCore[ xCoreID ] = pdTRUE;
    }
    else
    {
        mtCOVERAGE_TEST_MARKER();
    }
}

Proposed Code:

for( xCoreID = 0; xCoreID < ( ( BaseType_t ) configNUMBER_OF_CORES ); xCoreID++ )
{
    if( listCURRENT_LIST_LENGTH( &( pxReadyTasksLists[ pxCurrentTCBs[ xCoreID ]->uxPriority ] ) ) > 1U )
    {
        xYieldRequiredForCore[ xCoreID ] = pdTRUE;
    }
    else
    {
        xYieldRequiredForCore[ xCoreID ] = pdFALSE;
    }
}

EDIT: Just reviewed the configuration options and found something unexpected. The code I propose above requires that we also resolve what seems to be a small inconsistency in the way we use configUSE_TIME_SLICING. The loop above runs only when configUSE_TIME_SLICING is 1, but the loop later that checks the array to induce the yields runs even if configUSE_TIME_SLICING is 0. That's fine of course, but the code I propose above requires that we add the extra condition (configUSE_TIME_SLICING == 1) on the later loop.
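
To make the EDIT concrete, the later loop could be guarded roughly as in the sketch below (illustrative only; the loop body is elided and the exact code in tasks.c may differ):

for( xCoreID = 0; xCoreID < ( ( BaseType_t ) configNUMBER_OF_CORES ); xCoreID++ )
{
    /* Only consult xYieldRequiredForCore when time slicing is enabled,
     * because the loop that fills the array only runs in that configuration. */
    #if ( configUSE_TIME_SLICING == 1 )
        if( ( xYieldRequiredForCore[ xCoreID ] != pdFALSE ) || ( xYieldPendings[ xCoreID ] != pdFALSE ) )
    #else
        if( xYieldPendings[ xCoreID ] != pdFALSE )
    #endif
        {
            /* Request a context switch for this core (body unchanged). */
        }
}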

@cymizer cymizer (Contributor, Author) commented:

Hi @aggarg, @jefftenney,
Thank you for your valuable suggestions. I have made the changes based on your recommendations. Kindly assist with the review.

@cymizer cymizer force-pushed the main branch 2 times, most recently from 269eda9 to c377beb on August 14, 2024 15:12
tasks.c Outdated
if( ( xYieldRequiredForCore[ xCoreID ] != pdFALSE ) || ( xYieldPendings[ xCoreID ] != pdFALSE ) )
#else
if( xYieldPendings[ xCoreID ] != pdFALSE )
#endif /* #if ( configUSE_TIME_SLICING == 1 ) */

@jefftenney jefftenney (Contributor) commented Aug 14, 2024

This looks fine to me now, but I can't help wondering whether we should eliminate xYieldRequiredForCore altogether. (Sorry for not seeing this earlier.) The code above that sets xYieldRequiredForCore[ xCoreID ] would just set xYieldPendings[ xCoreID ] instead, leaving the original test-coverage marker in the "else" case. Would you mind investigating that possibility? Any concerns, @aggarg?
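
A sketch of that elimination, based on the suggestion above and on the final commit message below (illustrative only; surrounding code in tasks.c is elided):

/* Time-slicing loop: set the shared yield-pending flag directly instead of
 * the local xYieldRequiredForCore array, keeping the coverage marker. */
#if ( ( configUSE_PREEMPTION == 1 ) && ( configUSE_TIME_SLICING == 1 ) )
    for( xCoreID = 0; xCoreID < ( ( BaseType_t ) configNUMBER_OF_CORES ); xCoreID++ )
    {
        if( listCURRENT_LIST_LENGTH( &( pxReadyTasksLists[ pxCurrentTCBs[ xCoreID ]->uxPriority ] ) ) > 1U )
        {
            xYieldPendings[ xCoreID ] = pdTRUE;
        }
        else
        {
            mtCOVERAGE_TEST_MARKER();
        }
    }
#endif

/* Later, inside the per-core loop that induces the yields, the check
 * reduces to the pending flag alone, with no time-slicing special case. */
if( xYieldPendings[ xCoreID ] != pdFALSE )
{
    /* Request a context switch for this core (body unchanged). */
}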

@cymizer cymizer changed the title from "Fix xTaskIncrementTick local varible initialization" to "Optimize xTaskIncrementTick for configNUMBER_OF_CORES > 1" on Aug 15, 2024
@cymizer cymizer force-pushed the main branch 2 times, most recently from 73c2277 to b7a7636 on August 15, 2024 15:32

Commit message:

The original implementation only initializes the first
variable. After executing xTaskIncrementTick, the schedule
might not behave as expected.

When configUSE_PREEMPTION == 1 & configUSE_TIME_SLICING == 1,
replace setting xYieldRequiredForCore[ xCoreID ] with setting
xYieldPendings[ xCoreID ].

And when configUSE_PREEMPTION == 1, simplify the check
condition to only check xYieldPendings[ xCoreID ].

Signed-off-by: cymzier <cymb103u@cs.ccu.edu.tw>

@jefftenney jefftenney (Contributor) left a comment

Looks good. This PR resolves the issue of the uninitialized array, and it slightly increases performance too. The only functional difference I see is for a task with preemption disabled (via vTaskPreemptionDisable()). For that task, a time-slicing yield is dropped before this PR and is delayed after this PR. We could see that as an improvement too.

@aggarg aggarg (Member) commented Aug 16, 2024

> For that task, a time-slicing yield is dropped before this PR and is delayed after this PR.

Thanks for pointing that out. It should be okay, as the scheduler will not evict a task for which preemption is disabled: https://github.com/FreeRTOS/FreeRTOS-Kernel/blob/main/tasks.c#L1230.
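
A purely conceptual sketch of that behaviour (simplified; the xPreemptionDisable field name is an assumption for illustration, and this is not the kernel code at the linked line):

/* Conceptual sketch only: when deciding whether a core must switch tasks,
 * the scheduler leaves the running task in place while its preemption is
 * disabled; the pending yield is deferred rather than dropped. */
if( pxCurrentTCBs[ xCoreID ]->xPreemptionDisable == pdFALSE )   /* assumed field name */
{
    /* Preemption allowed: a pending yield may evict this task now. */
}
else
{
    /* Preemption disabled via vTaskPreemptionDisable(): the task is not
     * evicted, but xYieldPendings[ xCoreID ] stays pdTRUE, so the time-slice
     * yield takes effect once preemption is enabled again. */
}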

@aggarg aggarg merged commit 294569e into FreeRTOS:main Aug 19, 2024
16 checks passed