Use a single segmented index for local and global iterations #2057
Conversation
This removes the need for a lot of the additional synchronization. It also means there are global iterations for all executors. For some of them the global iteration number will be *unique* among instances, but that doesn't necessarily mean that iterations with smaller numbers *have* run or *will* run in the same test run. This is similar to how it already works for the executors that had global iterations: nothing guarantees that a shared-iterations executor won't run out of time to run all the iterations on one of the instances, or that an arrival-rate executor won't have to drop iterations due to a lack of VUs to execute them.
Thanks @mstoykov, this is indeed simpler! 👏
I thought about using a single function for both iterations, but didn't consider using SegmentedIndex for it. I suppose that has a slight overhead, since before it was a single *uint64 and atomic, and now it's more complex, but since we're doing it for all executors it shouldn't matter.
Just left some minor comments, but LGTM. 👍
"go.k6.io/k6/js"
"go.k6.io/k6/js/common"
"go.k6.io/k6/js/modules"
"go.k6.io/k6/lib"
"go.k6.io/k6/lib/testutils"
"go.k6.io/k6/loader"
"go.k6.io/k6/stats"
"github.com/spf13/afero"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gopkg.in/guregu/null.v3"
Nit and unrelated to this PR, but this is still weirdly formatted. Did you add go.k6.io/k6 to goimports -local?
no I haven't ... and I am against it, as the whole idea of gofmt (and goimports respectively) is for them to work the same for everybody and to preferably have 0 knobs ...
Even if I decide to go through the trouble of doing this, everyone who contributes will likely not do it, so we will need to ask them to ... fix it, which IMO is a waste of everybody's time, as again you probably shouldn't go look at 20 lines of imports and care how they are ordered.
Don't get me wrong, if goimports decides to default to first putting the names of the module you are in 👍, but me telling it that it should do it is a totally different thing.
I am against it as the whole idea of gofmt (and goimports respectively) is for them to work the same for everybody and to preferably have 0 knobs
Yeah, but they don't 😄
I don't think we should insist on this for external contributors, but we can at least have that configured so that whenever we change a file it's updated. It's a minor thing, but it increases readability of imports substantially IMO.
I don't think we should insist on this for external contributors, but we at least can have that configured so that whenever we change a file it's updated.
And apparently, if it's not configured and someone contributes, it will move them ... so IMO this should just not be a thing
@@ -189,21 +188,11 @@ func (varr *RampingArrivalRate) Init(ctx context.Context) error {
	et, err := varr.BaseExecutor.executionState.ExecutionTuple.GetNewExecutionTupleFromValue(varr.config.MaxVUs.Int64)
	varr.et = et
	start, offsets, lcd := et.GetStripedOffsets()
-	varr.segIdx = lib.NewSegmentedIndex(start, lcd, offsets)
+	varr.iterationSegIndex = lib.NewSegmentedIndex(start, lcd, offsets)
Hmm, do we still need to redefine it here? Couldn't we use the instance created in NewBaseExecutor(), in which case we could get rid of this Init() entirely?
I do think it is needed, as this is based on the execution tuple created from MaxVUs, not just on the execution segment alone.
If I remember correctly, in some corner cases the wrong values would be returned otherwise:
execution segment 0:1/2,1/2:1 with 3 VUs will mean that one of the instances will have 2 VUs and the other 1 VU, so if we are doing an arrival rate of 3,
the iterations should be
1st instance: 1, 3, 4, 6, ...
2nd instance: 2, 5, 8, ...
but if you just base it on the original execution segments they will be
1st instance: 1, 3, 5, 7, ...
2nd instance: 2, 4, 6, 8, ...
It will still do the same number of iterations, but in the case that there needed to be 12 iterations (12 divides by both 2 and 3 ;) let's say the rate is 3 and we run for 4 seconds ;) ) at some point there will be ... holes:
1st instance (has to do 2/3 of 12, so 8): 1, 3, 5, 7, 9, 11, 13, 15
2nd instance (has to do 1/3 of 12, so 4): 2, 4, 6, 8
P.S. I wrote this quickly, so ... likely some of my math is slightly wrong, and I also index from 1 ;)
Yeah, because of differences like that, I'd prefer if we don't have NextIterationCounters() in BaseExecutor and instead have it as a helper. Maybe something like GetSegmentedCounterIterator(ExecutionTuple) func() (uint64, uint64) that essentially returns the next() closure?
I wanted to do this fast ... so I didn't do it, but IMO SegmentedIndex should have zero changes from the previous unexported version, and it is nextIterationCounters (or whatever replaces it) that should have the mutex and get the counters/indexes ...
I can do it in this PR if you want me to? WDYT @na-- @imiric
I think we should probably have 2 different types - a simple SegmentedIndex without any locking or synchronization, and a SynchronizedSegmentedIndex (or some better name...) that has locking and atomic operations.
I don't think a second type is needed, as we have one place we use it, and it's only for one method; it is literally going to be

func nextIterationsCounters() (uint64, uint64) {
	someMutex.Lock()
	defer someMutex.Unlock()
	segmentedIndex.next()
	return segmentedIndex.scaled, segmentedIndex.unscaled
}

I see no reason to make a whole new type for that ... but it definitely is better, so I am fine with that as well :)
btw I had this pending as a comment in the main PR, but probably worth mentioning here: SegmentedIndexResult seems unnecessary. We are never going to have more than the scaled and unscaled indexes in this type, so a tuple (i.e. 2 return values) seems a bit more idiomatic in Go. If we ever add more things on top of SegmentedIndex, we are likely to do it as another wrapper, not by changing the original (synchronized or unsynchronized), so we can add a struct then, if we need to.
(didn't see the previous comment when I wrote my last one...)
👍 @mstoykov I'm fine with having the locking only in the helper function where it's needed
This LGTM in principle, though as I said in the inline comment, I think we shouldn't add to BaseExecutor. I also haven't looked too closely at the details; we can fix any of my nitpicks in #1863, since there we can see the full diff with master.
Co-authored-by: Ivan Mirić <ivan@loadimpact.com>
Codecov Report
@@ Coverage Diff @@
## feat/1320-execution-api #2057 +/- ##
===========================================================
- Coverage 72.33% 71.96% -0.38%
===========================================================
Files 182 178 -4
Lines 14404 14323 -81
===========================================================
- Hits 10419 10307 -112
- Misses 3350 3363 +13
- Partials 635 653 +18
}
if u.getNextScGlobalIter != nil {
	u.scIterGlobal = u.getNextScGlobalIter()
	// TODO remove this
I don't think comments like this are helpful without indicating why. ;)
There is nothing stopping 5 VUs starting shared iterations at the same time and one of them getting to the actual execution "slower" than its iteration counter suggests.
LGTM 👏
* Use a single segmented index for local and global iterations

  This removes the need for a lot of the additional synchronization. It also means there are global iterations for all executors. For some of them the global iteration number will be *unique* among instances, but that doesn't necessarily mean that iterations with smaller numbers *have* run or *will* run in the same test run. This is similar to how it already works for the executors that had global iterations: nothing guarantees that a shared-iterations executor won't run out of time to run all the iterations on one of the instances, or that an arrival-rate executor won't have to drop iterations due to a lack of VUs to execute them.

* Update lib/executor/base_executor.go

  Co-authored-by: Ivan Mirić <ivan@loadimpact.com>

* unexport and comment nextIterationCounters
* Remove synchronization in SegmentedIndex
* Drop SegmentedIndexResult
* Fix race in TestSharedIterationsGlobalIters

  There is nothing stopping 5 VUs starting shared iterations at the same time and one of them getting to the actual execution "slower" than its iteration counter suggests.

Co-authored-by: Ivan Mirić <ivan@loadimpact.com>