
Incremental Delivery Suggestions #154

Closed
wants to merge 204 commits

Conversation

yaacovCR commented Feb 4, 2022

Some of these were discussed during the Feb 2022 WG; others are new and may be discussed in March :)

Execution:
= resolver functions for list fields should be able to return AsyncIterables of Iterables, rather than just AsyncIterables, to allow chunking (see the sketch below)
= change the AsyncGenerator returned by execute to yield an iterable of all available payloads instead of just the next payload
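
For example, a minimal sketch of such a chunking resolver (hypothetical field name; assumes the proposed support for AsyncIterables of Iterables lands):

// Hypothetical resolver for a list field: an async generator that yields
// Iterables (chunks) rather than individual items.
async function* scalarList(): AsyncGenerator<Iterable<string>> {
  yield ['apple', 'banana']; // first chunk: two items at once
  yield ['coconut'];         // final chunk: a single item
}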

@stream:
= change the stream payload to return an array for data, with a new atIndex property to specify the initial index of the array
= stream payloads currently act as error boundaries (as opposed to borking the entire stream and future payloads)
= add a maxChunkSize Int argument for streaming in chunks
= add a maxInterval Int argument to send a chunk before it reaches maxChunkSize if maxInterval ms have elapsed
= add an inParallel Boolean argument to allow sending items whenever ready, using atIndices instead of atIndex to specify the indices of the items being returned
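
Taken together, a rough sketch of the resulting stream payload shape, inferred from the example responses later in this thread (a TypeScript illustration, not a published type):

// Hypothetical shape of a stream payload under these suggestions.
interface StreamPayload {
  data?: ReadonlyArray<unknown> | null;  // chunk of items, or null if an error bubbled to the list
  path?: ReadonlyArray<string | number>; // path to the streamed list field
  atIndex?: number;                      // initial index of a sequential chunk
  atIndices?: ReadonlyArray<number>;     // item indices when streaming inParallel
  errors?: ReadonlyArray<unknown>;       // errors scoped to this chunk (error boundary)
  hasNext: boolean;                      // whether more payloads will follow
}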

Related:
graphql/defer-stream-wg#32
graphql/defer-stream-wg#23
graphql/defer-stream-wg#17

split executor from graphql-js
slightly more cross-platform
to be fixed with the deprecation of getOperationRootType
The codebase should refer to functions that execute the query, mutation, and/or subscription root fields as such, rather than as functions that execute the operations themselves. Executing a query or mutation returns a map of data and errors, while executing the root fields returns the data of the root fields.
The `subscribe` method calls `createSourceEventStream` and `execute`, each of which normalize the request properties separately. By normalizing requests within the `subscribe` function, we can pass the normalized arguments to the implementation for `createSourceEventStream` and `execute`, i.e. new functions `createSourceEventStreamImpl` and `executeQueryOrMutation`.
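
A sketch of the shape of this refactor (types and helper names are illustrative, not the actual graphql-executor API; only `createSourceEventStreamImpl` and `executeQueryOrMutation` are named above):

// Sketch only: normalize the request once in subscribe, then pass the
// normalized arguments to both internal implementations named above.
async function subscribe(args: SubscriptionArgs) {
  const normalized = normalizeRequest(args); // hypothetical helper
  const eventStream = await createSourceEventStreamImpl(normalized);
  // Map each source event through query-style execution of the root fields.
  return mapAsyncIterable(eventStream, (event) =>
    executeQueryOrMutation({ ...normalized, rootValue: event }),
  );
}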
The function is used only by `buildExecutionContext` and can be called there rather than exported.
* Move non-class Executor functions to end of file

* Move memoized collectSubFields adjacent to class functions

* Add buildPerPayloadExecutionContext function

* Add main exports to executor

...in preparation for wrapping as a class

* Refactor Executor as class
* Export internal Executor class for experimentation

This class, similar to the Parser class, is exported only to assist people in implementing their own executors without duplicating too much code, and should be used only as a last resort for cases such as experimental syntax or features that could not be contributed upstream.

It is still part of the internal API (so far), so any changes to it are never considered breaking changes. The `graphql-executor` package must therefore be pinned to a specific patch version, e.g. `0.0.4`.

* Fix README to reference graphql-executor instead of graphql-js

* To speed up CI, benchmarks irrelevant to execution can be retired
yaacovCR added a commit that referenced this pull request Mar 18, 2022
extracted from #154

relies on bundlers that can be used for batched streaming as well
yaacovCR and others added 12 commits March 18, 2022 16:12
When using `@stream`, the `data` property will be an array only when batching.

When/if graphql-js changes to always send data as an array for streamed results, graphql-executor will do so as well. Until then, data will be an array only when necessary, i.e. when batch streaming.
* Fix batched parallel stream error with defer, where the initial responseNode was not added to the list

Using identical helpers for all bundlers makes this code better tested all around.

* Add changeset
The Publisher and Bundler abstractions are such core parts of the execution algorithm that they probably belong within the execution folder (following the pattern of the map/flatten asyncIterable helpers).
To distinguish a single null item from a null item in a list of non-nullable items, where the null bubbles up to the list itself.
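
For illustration (hypothetical payloads following the atIndex format above; the non-nullable case matches the "non-nullable item returns null" example later in this thread):

// [Hero] — nullable items: the failed item is delivered as null itself
{ data: [null], path: ['heroes'], atIndex: 1, hasNext: true }
// [Hero!] — non-nullable items: the null bubbles up to the list chunk
{ data: null, path: ['heroes'], atIndex: 1, errors: [/* chunk-scoped errors */], hasNext: true }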
...rather than just AsyncIterables to facilitate chunking
@yaacovCR

Defer with fragments

Operation:

query HeroNameQuery {
  hero {
    id
    ...TopFragment @defer(label: "DeferTop")
  }
}
fragment TopFragment on Hero {
  name
  ...NestedFragment @defer(label: "DeferNested")
}
fragment NestedFragment on Hero {
  friends {
    name
  }
}

Response:

[
  {
    data: {
      hero: {
        id: '1',
      },
    },
    hasNext: true,
  },
  {
    data: {
      friends: [{ name: 'Han' }, { name: 'Leia' }, { name: 'C-3PO' }],
    },
    path: ['hero'],
    label: 'DeferNested',
    hasNext: true,
  },
  {
    data: {
      name: 'Luke',
    },
    path: ['hero'],
    label: 'DeferTop',
    hasNext: false,
  },
]

@yaacovCR

Defer with slow field in initial payload

Operation:

query HeroNameQuery {
  hero {
    id
    ...NameFragment @defer
  }
}
fragment NameFragment on Hero {
  slowField
  friends {
    ...NestedFragment @defer
  }
}
fragment NestedFragment on Friend {
  name
}

Response:

Results in two sets of payloads.

[
  {
    data: {
      hero: { id: '1' },
    },
    hasNext: true,
  },
],
[
  {
    data: {
      slowField: 'slow',
      friends: [{}, {}, {}],
    },
    path: ['hero'],
    hasNext: true,
  },
  {
    data: { name: 'Han' },
    path: ['hero', 'friends', 0],
    hasNext: true,
  },
  {
    data: { name: 'Leia' },
    path: ['hero', 'friends', 1],
    hasNext: true,
  },
  {
    data: { name: 'C-3PO' },
    path: ['hero', 'friends', 2],
    hasNext: false,
  },
]


yaacovCR commented Mar 30, 2022

Stream

Operation:

{
  scalarList @stream
}

OR

{
  scalarList @stream(initialCount: 1)
}

Response:

[
  {
    data: {
      scalarList: ['apple'],
    },
    hasNext: true,
  },
  {
    data: ['banana'],
    path: ['scalarList'],
    atIndex: 1,
    hasNext: true,
  },
  {
    data: ['coconut'],
    path: ['scalarList'],
    atIndex: 2,
    hasNext: false,
  },
]

@yaacovCR

Stream with chunks of size greater than 1

Operation:

{
  scalarList @stream(maxChunkSize: 2)
}

Response:

The first stream chunk fills to its maximum size; the remaining item follows in a final chunk.

[
  {
    data: {
      scalarList: [],
    },
    hasNext: true,
  },
  {
    data: ['apple', 'banana'],
    path: ['scalarList'],
    atIndex: 0,
    hasNext: true,
  },
  {
    data: ['coconut'],
    path: ['scalarList'],
    atIndex: 2,
    hasNext: false,
  },
]

@yaacovCR

Stream in correct order even when the first item is slow

Operation:

query { 
  asyncSlowList @stream(initialCount: 0) {
    name
    id
  }
}

Response:

Note that a slow first item pushes all of the items into the second set of payloads, since items are delivered in order.

[
  {
    data: {
      asyncSlowList: [],
    },
    hasNext: true,
  },
],
[
  {
    data: [
      {
        name: 'Luke',
        id: '1',
      },
    ],
    path: ['asyncSlowList'],
    atIndex: 0,
    hasNext: true,
  },
  {
    data: [
      {
        name: 'Han',
        id: '2',
      },
    ],
    path: ['asyncSlowList'],
    atIndex: 1,
    hasNext: true,
  },
  {
    data: [
      {
        name: 'Leia',
        id: '3',
      },
    ],
    path: ['asyncSlowList'],
    atIndex: 2,
    hasNext: false,
  },
]

@yaacovCR

Stream in parallel

Operation:

query { 
  asyncSlowList @stream(initialCount: 0, maxChunkSize: 1, inParallel: true) {
    name
    id
  }
}

Response:

When streaming in parallel, the first set of payloads includes everything except the slow item (id '1'), which arrives on its own in the second set.

[
  {
    data: {
      asyncSlowList: [],
    },
    hasNext: true,
  },
  {
    data: [
      {
        name: 'Han',
        id: '2',
      },
      {
        name: 'Leia',
        id: '3',
      },
    ],
    path: ['asyncSlowList'],
    atIndices: [1, 2],
    hasNext: true,
  },
],
[
  {
    data: [
      {
        name: 'Luke',
        id: '1',
      },
    ],
    path: ['asyncSlowList'],
    atIndices: [0],
    hasNext: false,
  },
]

@yaacovCR

Stream with asyncIterableList

Operation:

query { 
  asyncIterableList @stream {
    name
    id
  }
}

OR

query { 
  asyncIterableList @stream(maxChunkSize: 1) {
    name
    id
  }
}

OR

query { 
  asyncIterableList @stream(maxChunkSize: 2, maxInterval: 1) {
    name
    id
  }
}

Response:

With AsyncIterables, each item must arrive in its own set of payloads unless chunked (the default maxChunkSize is 1). Even when chunked, if maxInterval ms elapse before a chunk fills, whatever has accumulated is sent immediately. So a maxChunkSize greater than 1 can still produce smaller payloads whenever the maxInterval is exceeded; a sketch of this flushing logic follows the response below.

[
  {
    data: {
      asyncIterableList: [],
    },
    hasNext: true,
  },
  {
    data: [
      {
        name: 'Luke',
        id: '1',
      },
    ],
    path: ['asyncIterableList'],
    atIndex: 0,
    hasNext: true,
  },
],
[
  {
    data: [
      {
        name: 'Han',
        id: '2',
      },
    ],
    path: ['asyncIterableList'],
    atIndex: 1,
    hasNext: true,
  },
],
[
  {
    data: [
      {
        name: 'Leia',
        id: '3',
      },
    ],
    path: ['asyncIterableList'],
    atIndex: 2,
    hasNext: true,
  },
  {
    hasNext: false,
  },
]
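
A minimal sketch of the flushing logic described above (hypothetical helper, not the actual Bundler implementation; a real implementation would also flush on a timer rather than only when the next item arrives):

// Collect items from an AsyncIterable into chunks, flushing when a chunk
// reaches maxChunkSize or when maxInterval ms have elapsed since the chunk
// was started. Error handling elided.
async function* bundle<T>(
  items: AsyncIterable<T>,
  maxChunkSize: number,
  maxInterval: number,
): AsyncGenerator<T[]> {
  let chunk: T[] = [];
  let deadline = Infinity;
  for await (const item of items) {
    if (chunk.length === 0) {
      deadline = Date.now() + maxInterval; // start the interval for this chunk
    }
    chunk.push(item);
    if (chunk.length >= maxChunkSize || Date.now() >= deadline) {
      yield chunk;
      chunk = [];
    }
  }
  if (chunk.length > 0) {
    yield chunk; // flush any remainder when the source ends
  }
}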

@yaacovCR

Stream with asyncIterableList with chunk size > 1

Operation:

query { 
  asyncIterableList @stream(maxChunkSize: 2) {
    name
    id
  }
}

Response:

[
  {
    data: {
      asyncIterableList: [],
    },
    hasNext: true,
  },
  {
    data: [
      {
        name: 'Luke',
        id: '1',
      },
      {
        name: 'Han',
        id: '2',
      },
    ],
    path: ['asyncIterableList'],
    atIndex: 0,
    hasNext: true,
  },
],
[
  {
    data: [
      {
        name: 'Leia',
        id: '3',
      },
    ],
    path: ['asyncIterableList'],
    atIndex: 2,
    hasNext: true,
  },
  {
    hasNext: false,
  },
]

@yaacovCR

Stream where a non-nullable item returns null

Operation:

query { 
  asyncIterableNonNullError @stream(initialCount: 1) {
    name
  }
}

Response:

[
  {
    data: {
      asyncIterableNonNullError: [
        {
          name: 'Luke',
        },
      ],
    },
    hasNext: true,
  },
  {
    data: null,
    path: ['asyncIterableNonNullError'],
    atIndex: 1,
    errors: [
      {
        message:
          'Cannot return null for non-nullable field Query.asyncIterableNonNullError.',
        locations: [
          {
            line: 3,
            column: 9,
          },
        ],
        path: ['asyncIterableNonNullError', 1],
      },
    ],
    hasNext: true,
  },
],
[
  {
    data: [
      {
        name: 'Han',
      },
    ],
    path: ['asyncIterableNonNullError'],
    atIndex: 2,
    hasNext: true,
  },
  {
    hasNext: false,
  },
]

@yaacovCR

yaacovCR commented Oct 1, 2024

Returning to simple fork.

yaacovCR closed this Oct 1, 2024