My question is: what is the best approach when implementing resolver methods? Calling the data repositories directly, or calling back the main resolver, i.e. the one implementing GraphQLQueryResolver
(provided it has the appropriate methods)? In other words (see the example below), is the DataFetchingEnvironment
properly adjusted/set when calling back the main resolver?
Note: if you're not familiar with how resolvers
work with GraphQL Java Tools, have a look at https://www.graphql-java-kickstart.com/tools/schema-definition/
Now the example.
In a Spring Boot app using GraphQL Java Tools (via the graphql-spring-boot-starter
dependency), let's have this schema:
type User {
    id: ID
    name: String
    company: Company
}
type Company {
    id: ID
    name: String
}
with the matching POJOs or entities (getters/setters omitted):
class User {
    private Long id;
    private String name;
    private Long idCompany;
}
class Company {
    private Long id;
    private String name;
}
and these resolvers (note: UserRepository and CompanyRepository are your usual DAO/repository classes, backed by Spring Data (JPA), something else, or your own custom implementation):
class QueryResolver implements GraphQLQueryResolver {
    @Autowired
    private UserRepository userRepository;
    @Autowired
    private CompanyRepository companyRepository;

    public User user(String id) {
        return userRepository.findById(id);
    }

    public Company company(String idCompany) {
        return companyRepository.findById(idCompany);
    }
}
class UserResolver implements GraphQLResolver<User> {
    @Autowired
    private CompanyRepository companyRepository;

    public Company company(User user) {
        return companyRepository.findById(user.getIdCompany());
    }

    // ...or should I do:
    @Autowired
    private QueryResolver queryResolver;

    public Company company(User user) {
        return queryResolver.company(String.valueOf(user.getIdCompany()));
    }
}
This makes (more) sense when a DataFetchingEnvironment environment
parameter is added at the end of each method and used before performing the calls to the various (data) repositories.
Continuing with the example above, would it be correct to do the following (i.e. would the DataFetchingEnvironment
be properly populated when passed on to the main QueryResolver)?
class UserResolver implements GraphQLResolver<User> {
    @Autowired
    private QueryResolver queryResolver;

    public Company company(User user, DataFetchingEnvironment environment) {
        return queryResolver.company(String.valueOf(user.getIdCompany()), environment);
    }
}
You can delegate your resolver calls to the service layer, but do not pass the DataFetchingEnvironment between resolvers/services: it would not be correctly populated.
Doing so is not safe and can lead to bugs that are difficult to pinpoint, and to data loss.
The DataFetchingEnvironment is populated from the GraphQL query/mutation being executed, and a resolver method expects the DataFetchingEnvironment it receives to be consistent with that method.
Consider the schema below:
type Movie {
    id: ID!
    title: String!
    rating: String
    actors: [Actor]
}
type Actor {
    id: ID!
    name: String!
    role: String
}
input ActorUpdateInput {
    id: ID!
    name: String
    role: String
}
type Query {
    #Search movies with the specified rating
    searchMovie(title: String, rating: String): Movie
    #Search R-rated movies
    searchRRatedMovie(title: String): Movie
}
type Mutation {
    #Update a movie and its actors
    updateMovie(id: ID!, title: String, actors: [ActorUpdateInput]): Movie
    #Update an actor
    updateActor(input: ActorUpdateInput!): Actor
}
query {
    searchRRatedMovie(title: "NotRRatedMovie") {
        title
    }
}
The movie "NotRRatedMovie" is not R-rated, so we can expect this query to return null.
Now, the implementation below passes the DataFetchingEnvironment from the searchRRatedMovie resolver to the searchMovie resolver implementation.
public class QueryResolver {
    @Autowired
    MovieRepository repository;

    public Movie searchRRatedMovie(String title, DataFetchingEnvironment environment) {
        return this.searchMovie(title, "R", environment);
    }

    public Movie searchMovie(String title, String rating, DataFetchingEnvironment environment) {
        if (!environment.containsArgument("rating")) {
            //the "rating" argument was omitted from the query
            return repository.findByTitle(title);
        } else if (rating == null) {
            //"rating" is an argument but was set to null (i.e. the user wants all the movies without any rating)
            return repository.findByTitleAndRating(title, null);
        } else {
            return repository.findByTitleAndRating(title, rating);
        }
    }
}
That looks good, but the query will not return null.
The searchRRatedMovie resolver calls this.searchMovie("NotRRatedMovie", "R", environment). Since the client invoked searchRRatedMovie, the environment does not contain a "rating" argument. When reaching the line if(!environment.containsArgument("rating")) {
the "rating" argument is absent, so execution enters the if branch and returns repository.findByTitle("NotRRatedMovie")
instead of the expected repository.findByTitleAndRating("NotRRatedMovie", "R").
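To make the failure concrete, here is a minimal, self-contained sketch of my own (not the original resolver code) that simulates the environment's argument map with a plain Map: the forwarded map was built for the searchRRatedMovie call, so it lacks the "rating" key even though the delegating call hard-codes a rating.

```java
import java.util.Map;

public class ForwardedEnvironmentDemo {
    // Simulates searchMovie: decides which repository call to make based on
    // whether the (possibly forwarded) argument map contains "rating".
    public static String searchMovie(String title, String rating, Map<String, Object> args) {
        if (!args.containsKey("rating")) {
            return "findByTitle(" + title + ")"; // the hard-coded "R" is silently ignored
        }
        return "findByTitleAndRating(" + title + ", " + rating + ")";
    }

    public static void main(String[] args) {
        // The client queried searchRRatedMovie for "NotRRatedMovie", so the
        // argument map built for that call only contains the movie-name argument.
        Map<String, Object> forwarded = Map.of("title", "NotRRatedMovie");
        // Delegation passes rating = "R", but the forwarded map hides it:
        System.out.println(searchMovie("NotRRatedMovie", "R", forwarded));
        // prints findByTitle(NotRRatedMovie), not findByTitleAndRating(...)
    }
}
```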
We can use the DataFetchingEnvironment arguments to implement partial updates in a mutation: when an argument is null, the DataFetchingEnvironment arguments tell us whether it is null because it was explicitly set to null (i.e. the mutation should update the underlying value to null) or because it was not provided at all (i.e. the mutation should leave the underlying value untouched).
public class MutationResolver {
    @Autowired
    MovieRepository movieRepository;
    @Autowired
    ActorRepository actorRepository;

    public Movie updateMovie(Long id, String title, List<ActorUpdateInput> actors, DataFetchingEnvironment environment) {
        Movie movie = movieRepository.findById(id);
        //Update the title if the "title" argument is set
        if (environment.containsArgument("title")) {
            movie.setTitle(title);
        }
        if (environment.containsArgument("actors")) {
            for (ActorUpdateInput actorUpdateInput : actors) {
                //Passing the environment along happens here
                this.updateActor(actorUpdateInput, environment);
            }
        }
        return movie;
    }

    public Actor updateActor(ActorUpdateInput input, DataFetchingEnvironment environment) {
        Actor actor = actorRepository.findById(input.getId());
        //We retrieve the "input" argument: a Map<String, Object> whose keys are the fields of ActorUpdateInput
        Map<String, Object> actorArguments = (Map<String, Object>) environment.getArguments().get("input");
        //Problem: if the environment was passed from updateMovie, it does not contain an "input" argument!
        //actorArguments is now null and the following code will fail
        //Update the actor name if the "name" argument is set
        if (actorArguments.containsKey("name")) {
            actor.setName(input.getName());
        }
        //Update the actor role if the "role" argument is set
        if (actorArguments.containsKey("role")) {
            actor.setRole(input.getRole());
        }
        return actor;
    }
}
Here the updateActor resolver expected an "input" argument (matching the updateActor mutation definition). Because we passed it a wrongly populated environment, the implementation broke.
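One straightforward fix (a sketch of my own, not the only option): read the raw argument maps in updateMovie, where the environment is correct, and pass plain data down instead of the environment. The sketch below is self-contained, with plain Maps standing in for the environment's arguments and for the persisted Actor; in the real resolver, the per-actor maps would come from environment.getArguments().get("actors") inside updateMovie.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PartialUpdateDemo {
    // Stand-in for a persisted Actor entity
    public static Map<String, String> actor = new HashMap<>();

    // updateActor now receives the raw per-actor argument map instead of the
    // DataFetchingEnvironment, so it no longer cares which mutation was called.
    public static void updateActor(Map<String, Object> args) {
        if (args.containsKey("name")) actor.put("name", (String) args.get("name"));
        if (args.containsKey("role")) actor.put("role", (String) args.get("role"));
    }

    public static void main(String[] unused) {
        actor.put("name", "Alice");
        actor.put("role", "Lead");
        // Simulates the list extracted from the environment in updateMovie
        List<Map<String, Object>> actorArgs = List.of(Map.of("role", "Villain"));
        for (Map<String, Object> args : actorArgs) {
            updateActor(args);
        }
        // "name" was absent from the argument map, so it is left untouched
        System.out.println(actor.get("name") + "/" + actor.get("role"));
        // prints Alice/Villain
    }
}
```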
Partial updates without the DataFetchingEnvironment
If you want to implement partial updates, you can do so without using the DataFetchingEnvironment, as I did in this comment: https://github.com/graphql-java-kickstart/graphql-java-tools/issues/141#issuecomment-560938020
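One common pattern for this (a sketch under my own naming; the linked comment may use a different mechanism) is to have the input POJO record which setters were invoked during deserialization, so "absent" and "explicitly set to null" can be told apart without touching the DataFetchingEnvironment:

```java
import java.util.HashSet;
import java.util.Set;

public class ActorUpdateInput {
    // Records every field whose setter was called during deserialization
    private final Set<String> setFields = new HashSet<>();
    private Long id;
    private String name;
    private String role;

    public void setId(Long id) { this.id = id; setFields.add("id"); }
    public void setName(String name) { this.name = name; setFields.add("name"); }
    public void setRole(String role) { this.role = role; setFields.add("role"); }

    // True if the field appeared in the mutation input, even with a null value
    public boolean isSet(String field) { return setFields.contains(field); }

    public Long getId() { return id; }
    public String getName() { return name; }
    public String getRole() { return role; }
}
```

The resolver can then write `if (input.isSet("role")) actor.setRole(input.getRole());` and treat an explicit null as a deliberate clear, with no environment involved.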
Rebuild the DataFetchingEnvironment before passing it to the next resolver
If you really need the DataFetchingEnvironment, you can still build a new one to pass to the next resolver. This will probably be more difficult and error-prone, but you can have a look at how the original DataFetchingEnvironment is created in ExecutionStrategy.java: https://github.com/graphql-java/graphql-java/blob/master/src/main/java/graphql/execution/ExecutionStrategy.java#L246