Tags: python, tensorflow, tensorflow-federated, federated-learning

How to use TFF APIs for custom usage?


I have read and studied the TFF guide and API pages carefully, but I am confused about some of the details.

For example, when I want to wrap/decorate a TF/Python function, I can use these two APIs:

1. tff.tf_computation()
2. tff.federated_computation()

I cannot find what the differences between them are or when I am allowed to use each one, especially if I want to use algorithms other than FedAvg or FedSGD. I wonder if you know:

  1. How can they be used to manipulate inputs? Do they work on @CLIENT or @SERVER?
  2. How can I use them for anything other than the output of tff.federated_mean or tff.federated_sum, where the value ends up on the server?
  3. How can I access the details of the data and metrics at @CLIENT and @SERVER?
  4. Why do we have to invoke tff.tf_computation() from tff.federated_computation()? In this link, there was no explanation of that.
  5. Do these APIs (e.g. tff.federated_mean or tff.federated_sum) modify the output elements of each @CLIENT and bring them to the @SERVER?

Could anyone help me understand the intuition behind these concepts?


Solution

  • A possible rule of thumb about the different function decorators:

    • tff.tf_computation is for wrapping TF logic. Think "tensors in, tensors out": usage should be very similar to tf.function, where the parameters and return values are tensors, or nested structures of tensors. TFF intrinsics (e.g. tff.federated_mean) cannot be used inside a tff.tf_computation, and tff.tf_computations cannot call tff.federated_computations. The type signature is always unplaced.

    • tff.federated_computation should be used to wrap TFF programming abstractions. Think "tensors here, tensors there": inside this context, a tff.tf_computation can be applied to tff.Values, and tff.Values can be communicated to other placements using the intrinsics. The type signature can accept federated types (i.e. types with placements). A minimal sketch combining the two follows right after these bullets.
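
To make the distinction concrete, here is a minimal sketch in the spirit of the custom federated algorithms tutorials: a tff.tf_computation holding plain TF logic, wrapped and placed by a tff.federated_computation. The names (add_half, add_half_and_average) and the literal inputs are illustrative only, and the exact type-constructor spelling may vary slightly between TFF versions.

```python
import tensorflow as tf
import tensorflow_federated as tff

# "Tensors in, tensors out": plain TF logic, no placements, no intrinsics.
@tff.tf_computation(tf.float32)
def add_half(x):
  return x + 0.5

# "Tensors here, tensors there": accepts a federated type and uses intrinsics.
@tff.federated_computation(tff.FederatedType(tf.float32, tff.CLIENTS))
def add_half_and_average(federated_x):
  shifted = tff.federated_map(add_half, federated_x)  # runs add_half on every client
  return tff.federated_mean(shifted)                  # aggregates the result to the server

# In simulation the client-placed values are just a Python list.
print(add_half_and_average([1.0, 2.0, 3.0]))  # -> 2.5
print(add_half_and_average.type_signature)    # ({float32}@CLIENTS -> float32@SERVER)
```

Note that add_half itself keeps the unplaced signature (float32 -> float32); only the federated wrapper introduces placements.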

    For your list of questions:

    1. Both can work on values placed at CLIENTS or SERVER. For example, a tff.tf_computation called my_comp can be applied to a value v of type int32@CLIENTS using tff.federated_map(my_comp, v), which will run my_comp on each client.
    2. tff.federated_map() supports applying a computation pointwise (across clients) to data that is not on the server. You can manipulate the metrics on each client using tff.federated_map. TFF isn't intended for separate operations on different clients; the abstractions do not support addressing individual clients. You may be able to simulate this in Python; see Operations performed on the communications between the server and clients.
    3. The values of placed data can be inspected in simulation simply by returning them from a tff.Computation, and invoking that computation. The values should be available in the Python environment.
    4. tff.tf_computations should be invokable from anywhere; if there is documentation that says otherwise, please point to it. I believe what was intended is that a tff.federated_computation may invoke a tff.tf_computation, but not vice versa.
    5. The tutorials (Federated Learning for Image Classification and Federated Learning for Text Generation) show examples of printing out the metrics in simulation. You may also be interested in the answer to how to print local outputs in tensorflow federated?
    6. tff.tf_computations can be executed directly if desired. This will avoid the federated parts of TFF and simply delegate to TensorFlow. To apply a computation to federated values and use it in combination with federated intrinsics, it must be called inside a tff.federated_computation (a short sketch tying these points together follows below).
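
Tying the numbered points together, here is a short sketch (again with illustrative names; the CLIENTS placement and int32 types are assumptions chosen for simplicity): a tff.tf_computation executed directly as plain TensorFlow (point 6), then mapped across client-placed data and summed onto the server (points 1, 2, and 5), with the returned value materializing in Python when invoked in simulation (point 3).

```python
import tensorflow as tf
import tensorflow_federated as tff

@tff.tf_computation(tf.int32)
def double(x):
  return x * 2

# Point 6: a tff.tf_computation can be executed directly; this is just TensorFlow.
print(double(3))  # -> 6

# Points 1, 2 and 5: apply it pointwise on CLIENTS, then aggregate to SERVER.
@tff.federated_computation(tff.FederatedType(tf.int32, tff.CLIENTS))
def double_then_sum(client_values):
  doubled = tff.federated_map(double, client_values)  # executed on each client
  return tff.federated_sum(doubled)                   # result placed at the server

# Point 3: in simulation, returned placed values are visible in ordinary Python.
print(double_then_sum([1, 2, 3]))      # -> 12
print(double_then_sum.type_signature)  # ({int32}@CLIENTS -> int32@SERVER)
```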