
Falcor - Deep nested references not cached


I'm seeing a problem in the Falcor client when I request a route that contains nested references.

Here is an example:

Consider the following JSONGraph response from the Falcor server on a model.get call:

{
  "todos": {
    "0": { "$type": "ref", "value": ["todosById", "id_0"] },
    "1": { "$type": "ref", "value": ["todosById", "id_1"] },
    "length": 2
  },
  "todosById": {
    "id_0": {
      "name": "get milk",
      "label": { "$type": "ref", "value": ["labelsById", "lbl_0"] },
      "completed": false
    },
    "id_1": {
      "name": "do the laundry",
      "label": { "$type": "ref", "value": ["labelsById", "lbl_1"] },
      "completed": false
    }
  },
  "labelsById": {
    "lbl_0": { "name": "groceries" },
    "lbl_1": { "name": "home" }
  }
}

When I call model.get with the following path, all of the JSONGraph result above should end up in the cache:

model.get(['todos', {from: 0, to: 1}, ['completed', 'label', 'name']])

However, when I manually inspect the cache, I can see that todos and todosById are cached, but labelsById is not.

I'm not certain, but it looks like labelsById is not cached because it's a second-level reference?
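
To convince myself, I put together a small standalone sketch (plain JS, no Falcor involved; it only mimics my understanding of how the cache walk follows references):

```javascript
// Standalone sketch: walk a JSONGraph along a path, following refs only when
// the walk must continue *past* them. The path ['todos', 0, 'label']
// terminates AT the label ref, so ['labelsById', 'lbl_0'] is never visited.
const graph = {
  todos: {
    0: { $type: 'ref', value: ['todosById', 'id_0'] },
    length: 1
  },
  todosById: {
    id_0: {
      name: 'get milk',
      label: { $type: 'ref', value: ['labelsById', 'lbl_0'] },
      completed: false
    }
  },
  labelsById: {
    lbl_0: { name: 'groceries' }
  }
};

// Walk `path` through `graph`, recording every branch key we touch.
function walk(graph, path, touched) {
  let node = graph;
  const keys = [...path];
  while (keys.length > 0) {
    const key = keys.shift();
    node = node[key];
    touched.add(String(key));
    // Follow a ref only if there are still keys left to consume.
    if (node && node.$type === 'ref' && keys.length > 0) {
      touched.add(node.value[0]);
      node = node.value.reduce((n, k) => n[k], graph);
    }
  }
  return node;
}

const touched = new Set();
walk(graph, ['todos', 0, 'completed'], touched); // follows todos ref into todosById
walk(graph, ['todos', 0, 'label'], touched);     // stops AT the label ref

console.log([...touched]); // todosById is in the set, labelsById is not
```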

Am I missing something here, or is this the expected behaviour of the Falcor cache? Is there any way to force labelsById into the cache, so that no additional DataSource request is made?

Any help is appreciated!

The problem can be reproduced in this small project: https://github.com/ardeois/falcor-nested-references-cache-issue

UPDATE

Thanks to @james-conkling's answer, the JSON graph can be cached by making the following model.get call:

model.get(
  ['todos', {from: 0, to: 1}, ['completed', 'name']],
  ['todos', {from: 0, to: 1}, 'label', 'name']
);

However, on the server side the Falcor Router will call the todos[{integers:indices}] route twice. This could have an impact on API or database calls to whatever backing service your Falcor server fronts.
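
The double call is easy to see with a toy dispatcher (plain JS, no falcor-router; the matching logic is deliberately simplified to a prefix check):

```javascript
// Minimal sketch: a router matches and dispatches each incoming pathSet
// independently, so a route matching 'todos[...]' runs once per pathSet.
let todosRouteCalls = 0;

// Stand-in for the handler registered for todos[{integers:indices}]...
function todosRoute(pathSet) {
  todosRouteCalls += 1;
  return []; // pathValues would go here
}

// Stand-in for the router: dispatch every incoming pathSet to its route.
function dispatch(pathSets) {
  for (const pathSet of pathSets) {
    if (pathSet[0] === 'todos') todosRoute(pathSet);
  }
}

// The two pathSets from the model.get above:
dispatch([
  ['todos', { from: 0, to: 1 }, ['completed', 'name']],
  ['todos', { from: 0, to: 1 }, 'label', 'name']
]);

console.log(todosRouteCalls); // 2 — one invocation per pathSet
```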


Solution

  • In the pathset ['todos', {from: 0, to: 1}, ['completed', 'label', 'name']], the paths ending with the completed and name keys terminate at an atom, but the path ending with the label key terminates at a ref. If you want to actually follow that ref, you'll have to include it as a second path:

    [
       ['todos', {from: 0, to: 1}, ['completed', 'name']],
       ['todos', {from: 0, to: 1}, 'label', 'name']
    ]
    

    In general, all paths should terminate at an atom, never at a ref. I'm not sure what the expected behavior is for paths that terminate at a ref, or even whether it's well defined (as your other question notes, the behavior changed from v0 to v1).

    The model.get(...paths) call can take multiple pathSet arrays, so rewriting the query should be as straightforward as

    model.get(
      ['todos', {from: 0, to: 1}, ['completed', 'name']],
      ['todos', {from: 0, to: 1}, 'label', 'name']
    );
    

    EDIT

    As noted in the comments below, because the router handlers can only resolve a single pathSet at a time, GET requests with multiple pathSets can result in multiple requests to your upstream backing service/db. Some possible solutions:

    use a single path

    Rewrite the request using a single path ['todos', range, ['completed', 'name', 'label'], 'name']. Technically, this request is asking for todos.n.completed.name and todos.n.name.name (which don't exist), in addition to todos.n.label.name (which does exist).

    However, if your router handler returns pathValues for paths that are shorter than the matched path, the shorter pathValues should be merged into your jsonGraph cache. E.g. when matching todos.0.completed.name, return { path: ['todos', 0, 'completed'], value: true }; when matching todos.0.label.name, return { path: ['todos', 0, 'label', 'name'], value: 'First TODO' }.

    This is probably the easiest approach, but means your queries aren't really semantically correct (you're knowingly asking for paths that don't exist).
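
A sketch of such a handler (a plain function standing in for the route's get; the in-memory todosDb and the labelName field are made up for illustration):

```javascript
// Sketch of a handler for the route 'todos[{integers:indices}][{keys:fields}].name'
// that returns SHORTER pathValues for the scalar fields.
const todosDb = [
  { name: 'get milk', completed: false, labelName: 'groceries' },
  { name: 'do the laundry', completed: false, labelName: 'home' }
];

function todosRouteGet(pathSet) {
  // pathSet looks like ['todos', [0, 1], ['completed', 'name', 'label'], 'name']
  const [, indices, fields] = pathSet;
  const out = [];
  for (const i of indices) {
    for (const field of [].concat(fields)) {
      if (field === 'label') {
        // The matched path todos[i].label.name exists: return the full path.
        out.push({ path: ['todos', i, 'label', 'name'], value: todosDb[i].labelName });
      } else {
        // todos[i].completed.name / todos[i].name.name don't exist: return a
        // SHORTER pathValue, todos[i][field], which merges into the cache.
        out.push({ path: ['todos', i, field], value: todosDb[i][field] });
      }
    }
  }
  return out;
}

const pathValues = todosRouteGet(['todos', [0, 1], ['completed', 'name', 'label'], 'name']);
console.log(pathValues.length); // 6 — 3 fields × 2 todos
```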

    batch upstream requests made by the router

    In your router, batch upstream requests to your backing service/db. This is not always straightforward. One possible approach is to use Facebook's DataLoader, written to solve an equivalent problem with GraphQL routers, but not necessarily tied to GraphQL. Another approach could use a custom reducer function to combine requests issued within the same tick (e.g. here).
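
The same-tick batching idea can be sketched without any dependency (names like makeBatcher are made up; DataLoader does essentially this, plus per-key caching):

```javascript
// Minimal tick-batcher sketch: load() queues a key, and all keys queued during
// the same tick are fetched with ONE call to batchFetch on the next microtask.
function makeBatcher(batchFetch) {
  let queue = null;
  return function load(key) {
    return new Promise((resolve, reject) => {
      if (queue === null) {
        queue = [];
        // Flush once, after the current tick's synchronous work finishes.
        queueMicrotask(() => {
          const batch = queue;
          queue = null;
          batchFetch(batch.map(item => item.key))
            .then(values => batch.forEach((item, i) => item.resolve(values[i])))
            .catch(err => batch.forEach(item => item.reject(err)));
        });
      }
      queue.push({ key, resolve, reject });
    });
  };
}

// Usage: two route handlers load in the same tick → one upstream call.
let upstreamCalls = 0;
const loadTodo = makeBatcher(async ids => {
  upstreamCalls += 1;
  return ids.map(id => ({ id, name: `todo ${id}` }));
});

Promise.all([loadTodo('id_0'), loadTodo('id_1')]).then(todos => {
  console.log(upstreamCalls); // both loads coalesced into one fetch
});
```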

    rewrite your schema

    So that all paths that need to be requested at the same time have the same length. This won't always be possible, though, so :shrug.
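
For this example, one such rewrite would denormalize the label name into each todo (a sketch; labelName is a made-up field, and it trades normalization for uniform path depth):

```json
{
  "todos": {
    "0": { "$type": "ref", "value": ["todosById", "id_0"] },
    "1": { "$type": "ref", "value": ["todosById", "id_1"] },
    "length": 2
  },
  "todosById": {
    "id_0": { "name": "get milk", "labelName": "groceries", "completed": false },
    "id_1": { "name": "do the laundry", "labelName": "home", "completed": false }
  }
}
```

With this schema, model.get(['todos', {from: 0, to: 1}, ['completed', 'name', 'labelName']]) retrieves everything in a single pathSet, at the cost of duplicating label names across todos.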