
Pagination? #4

Closed
luisobo opened this issue Jul 2, 2015 · 13 comments

luisobo commented Jul 2, 2015

Hey facebook,

First of all, thanks for open sourcing this, I'm so excited I forgot how to even.

With that out of the way, I have a question: how is pagination implemented in the Facebook schema? There is no mention of pagination in the spec, so I assume it is implemented ad hoc using the primitives the spec provides. A hint on how to model this common use case would be really appreciated.

What I have in mind is some Page type, which yields the following question: is it possible to define generic types the same way that List is defined? I ask because ideally such a Page type would be Page&lt;T&gt;.

I'd also appreciate some advice on how to model querying for the count of a list property or a paginated property. Section 2.10.1 "Types of fragments" has the following example:

query FragmentTyping {
  profiles(handles: ["zuck", "cocacola"]) {
    handle
    ...userFragment
    ...pageFragment
  }
}

fragment userFragment on User {
  friends { count }
}

fragment pageFragment on Page {
  likers { count }
}

There is no schema attached to that example, but as far as I understand, those fragments are trying to get the total number of friends of a User and the total number of likers of a Page. It is hard to say without the schema, but if we assume that a User has a List of Users, the first fragment is really querying for the property count on each individual User in said list, and not the count of the list itself (which is arguably the intention). Any light on this topic would be very much appreciated.

Thanks for open sourcing this and for your time.

devknoll (Contributor) commented Jul 2, 2015

From what I've seen of GraphQL in the past, if you wanted pagination, one schema you could use to achieve it would be:

type FriendList {
  count: Int,
  edges: [User!]
}

type User {
  friends(first: Int, after: ID): FriendList
}

This should hopefully answer both of your questions: pagination would be implemented using the first and after field arguments on friends, and count would be accessed on the FriendList type, a wrapper over the list of Users.

I'm not entirely sure if this has been changed, so I would love to see @leebyron pitch in too 😄
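A minimal, library-agnostic sketch of what a friends(first, after) resolver could do on the server under this schema (the data and function names here are hypothetical, purely for illustration):

```javascript
// Hypothetical in-memory data; a real server would read from a datastore.
const allFriends = [
  { id: 'u1', name: 'Alice' },
  { id: 'u2', name: 'Bob' },
  { id: 'u3', name: 'Carol' },
  { id: 'u4', name: 'Dave' },
];

// Resolves FriendList { count, edges } for friends(first: Int, after: ID).
function resolveFriends({ first, after }) {
  // Skip everything up to and including the `after` id, if one was given.
  let start = 0;
  if (after != null) {
    const i = allFriends.findIndex(u => u.id === after);
    if (i >= 0) start = i + 1;
  }
  const edges = allFriends.slice(start, first != null ? start + first : undefined);
  // `count` is the total size of the list, not the size of this page.
  return { count: allFriends.length, edges };
}
```

So `resolveFriends({ first: 2, after: 'u1' })` would return Bob and Carol as edges while still reporting count: 4 for the whole list.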

luisobo (Author) commented Jul 2, 2015

@devknoll thanks for your response. Yup, I guess something along those lines is always possible. One caveat with that approach is that you'd have to duplicate the FriendList type for every type you want to paginate over (PostList, etc.).

This relates to the question I mentioned: whether or not it is possible to create generic types just like the native List.

leebyron (Collaborator) commented Jul 2, 2015

We don't support generic types like Page<T>, as representing them client-side can be really tricky. Is it a template type? Is it a generic interface? Lots of questions and complexity we chose to avoid, at least for now.

@devknoll, you're pretty much on the money with that answer. We have a convention that we use at Facebook that we call Connection. We have some helper functions on the server side that keep us from duplicating the work of implementing this convention and avoid mistakes. That gets you most of the value you would get from having some kind of template type or generics.

At Facebook, it looks like this:

type UserConnection {
  count: Int
  nodes: [User]
  edges: [UserConnectionEdge]
}

type UserConnectionEdge {
  node: User
  cursor: String
}

You can imagine replacing User with any type here and the type model still works. The nodes field is a shorthand for edges { node { ... } } when all you want are the nodes at the end of the connection. edges also has cursor as a utility for performing pagination.

Pagination in GraphQL at Facebook looks something like:

type User {
  # ...
  friends(after: String, first: Int, before: String, last: Int): UserConnection
  # ...
}

So here, the friends field on a User accepts a couple of arguments: after accepts a value that cursor gave you in a prior query, and first limits how many items are returned. before and last are the same idea, but for paginating from the back of a list towards the front.

So again, this is just a convention that Facebook uses; GraphQL itself doesn't know what pagination is, it just enables patterns like these. Underlying code is responsible for actually reading these arguments and applying them in a sensible way.
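As a concrete illustration of "underlying code reading these arguments", here is one way a server might resolve a UserConnection for friends(first, after). This is purely a sketch: the data and the base64 cursor encoding are made up (clients should treat cursors as opaque), not Facebook's actual implementation:

```javascript
// Hypothetical user list; a real server would query a database.
const users = ['ada', 'bob', 'cat', 'dan', 'eve'].map((name, i) => ({ id: i + 1, name }));

// Opaque cursors: here just a base64-encoded index (an assumption for this
// sketch; real implementations choose their own encoding).
const toCursor = i => Buffer.from(`idx:${i}`).toString('base64');
const fromCursor = c => Number(Buffer.from(c, 'base64').toString().slice(4));

function resolveUserConnection({ first, after }) {
  // Start just past the item the `after` cursor points at.
  const start = after != null ? fromCursor(after) + 1 : 0;
  const slice = users.slice(start, first != null ? start + first : undefined);
  const edges = slice.map((node, i) => ({ node, cursor: toCursor(start + i) }));
  return {
    count: users.length,            // total size, independent of the page
    edges,                          // [{ node, cursor }]
    nodes: edges.map(e => e.node),  // shorthand for edges { node }
  };
}
```

A client would request the first page with `{ first: 2 }`, then feed the last edge's cursor back in as `after` to fetch the next page.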


You could also imagine a much simpler pagination model that just uses offset and limit. There are some issues with this approach (for example, if the front of the list changes often, or when you are deep into a very long list), but its simplicity is compelling.

In this model, you might instead type things as:

type User {
  # ...
  friends(offset: Int, limit: Int): [User]
  # ...
}

Now there's no additional Connection type at all; I would suggest that offset and limit work here just as they do in SQL.
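The offset/limit variant is even shorter to implement; a sketch (hypothetical data, with slicing semantics matching SQL's OFFSET/LIMIT):

```javascript
// Hypothetical friend names standing in for User objects.
const friends = ['ada', 'bob', 'cat', 'dan', 'eve'];

// friends(offset: Int, limit: Int): [User] — slice like SQL OFFSET/LIMIT.
function resolveFriendsPage({ offset = 0, limit } = {}) {
  return friends.slice(offset, limit != null ? offset + limit : undefined);
}
```

Note the failure mode mentioned above: if a new friend is prepended to the list between two queries, the items at a given offset shift, so a client paging by offset can see duplicates or miss items.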

leebyron closed this as completed Jul 2, 2015
luisobo (Author) commented Jul 2, 2015

💃

@leebyron thanks! I'll play around with these ideas. Thank you!

graue commented Jul 24, 2015

@leebyron What do you think about documenting that pattern somewhere more visible than this issue?

dschafer (Contributor) commented

@graue Yep, we're definitely planning on doing so!

luisobo (Author) commented Aug 24, 2015

@leebyron I've been playing with this and it works great. As you mentioned, the duplication on the server is not a problem: a function that defines a connection type for a given type solves it just fine.

I have a couple of follow up questions and I was hoping you could shed some light:

type UserConnectionEdge {
  node: User
  cursor: String
}
  • Why do you put the cursor of every node along with that node? What's the use case behind this decision? Wouldn't it be enough to have before and after cursors (for the first and last items) at the connection level?
  • Why do you call it a connection? I'm just trying to understand the abstraction properly. I've been thinking about it in terms of pages, and I'm wondering if there are any caveats.
  • I'm thinking of implementing pagination like this, and I'd love feedback:
type UserPage {
  count: Int
  nodes: [User]
  after: String
  before: String
}

and the entry point would be the same

type User {
  # ...
  friends(offset: Int, limit: Int): [User]
  # ...
}

Thank you so much in advance.

RavenHursT commented

@leebyron Thanks for your informative reply. It's helped a lot! Sorry to revive this old thread, but I'm completely lost as to how to implement one piece of what you said:

after accepts a value that cursor gave you in a prior query

OK, and I've definitely gotten that to work by taking a look at the edge cursors in the /graphql response in my browser. The problem is: how do I make this work for the first "page" of items, when I have no "prior query" to work from?

I tried leaving after as undefined in my query, but I only got the following error:

Uncaught Invariant Violation: callsFromGraphQL(): Expected a declared value for variable, $curs.

It seems that if you define first and after in your container's fragment, they become required parameters. But I have no value for after, so how in the world does one initialize this?

Thanks!

leebyron (Collaborator) commented

@RavenHursT, ensure that your variable is not required.

For example, this query should error if no value is provided for $after since it's required to be a non-null string (String!).

query Example($after: String!) { 
  someConnection(after: $after) { count }
}

Whereas this example, which accepts null (just String), should not error if no value is provided for $after. This is considered best practice for using variables with pagination.

query Example($after: String) { 
  someConnection(after: $after) { count }
}

RavenHursT commented

@leebyron yeah, it turns out you have to explicitly set the default variable value to null, not just allow it to be undefined:

http://stackoverflow.com/questions/34406667/relay-pagination-how-to-initialize-after-value/34407181#34407181

Thanks.

leebyron (Collaborator) commented

That sounds like a bug, actually. What GraphQL library are you using? graphql-js?

RavenHursT commented

We're using express-graphql

caub commented Jan 31, 2018

We don't support generic types like Page<T>

Well, it works: https://launchpad.graphql.com/nxqjv8k4l7

With this Page template type for example:

// Assuming graphql-js constructors aliased as in the launchpad example:
// const {
//   GraphQLObjectType: ObjectType, GraphQLNonNull: NonNull,
//   GraphQLList: List, GraphQLString: String,
// } = require('graphql');

function Page(Type) {
  return new ObjectType({
    name: `Page${Type}`,
    description: `A simple pagination method
 - nodes contains the actual list of data
 - cursor is the end cursor, or falsy if there are no more pages`,
    fields: {
      nodes: { type: new NonNull(new List(new NonNull(Type))) },
      cursor: { type: String },
    },
  });
}

const QueryType = new ObjectType({
  name: 'Query',
  fields: {
    foos: {
      type: Page(Foo), // Foo, listArgs, and getFoos are defined elsewhere
      args: listArgs,
      resolve: (_, args) => getFoos(args),
    },
  },
});

fotoetienne pushed a commit to fotoetienne/graphql-spec that referenced this issue May 6, 2021
yaacovCR referenced this issue in yaacovCR/graphql-spec Nov 17, 2022
* streamline stream execution

Currently, these spec changes introduce a new internal function named `ResolveFieldGenerator` that is suggested to parallel `ResolveFieldValue`. This function is used during field execution such that, if the stream directive is specified, it is called instead of `ResolveFieldValue`.

The reference implementation, however, does not require any such function, simply utilizing the result of `ResolveFieldValue`.

With incremental delivery, collections completed by `CompleteValue` should be explicitly iterated using a well-defined iterator, such that the iterator can be passed to `ExecuteStreamField`. But this does not require a new internal function to be specified/exposed.

Moreover, introducing this function mixes concerns between the `ExecuteField` and `CompleteValue` algorithms: currently, if stream is specified for a field, `ExecuteField` extracts the iterator and passes it to `CompleteValue`, while if stream is not specified, `ExecuteField` passes the collection, i.e. the iterable, not the iterator. In the stream case, this shunts some of the logic checking the validity of resolution results into field execution. In fact, it exposes a specification "bug": in the stream case, no checking is actually done that `ResolveFieldGenerator` returns an iterator!

This change removes `ResolveFieldGenerator` and with it some complexity, and brings it in line with the reference implementation.

The reference implementation contains some simplification of the algorithm for the synchronous iterator case (we don't have to preserve the iterator on the StreamRecord, because no early close will be required, and we don't have to set isCompletedIterator, because we don't have to create a dummy payload to terminate the asynchronous stream). We could consider removing these bits as well, as they are an implementation detail of how our dispatcher manages its iterators, but that should be left for another change.

* run prettier
yaacovCR referenced this issue in yaacovCR/graphql-spec Jan 15, 2023
yaacovCR referenced this issue in yaacovCR/graphql-spec May 11, 2023
yaacovCR referenced this issue in yaacovCR/graphql-spec Nov 6, 2023
yaacovCR referenced this issue in yaacovCR/graphql-spec Nov 18, 2023