Tags: dart, grpc, envoyproxy, grpc-go, grpc-web

Using gRPC Web with Dart


I have a web application with the following stack:

  • UI: Flutter Web/Dart
  • Server: Go
  • Communication Protocol: gRPC/gRPC-Web

I have defined a few protobufs and compiled them into both Go and Dart successfully. When I run the Go server code, I am able to make gRPC calls successfully with Kreya; however, when I try making the same call from Flutter using grpc/grpc_web.dart, I keep running into the following error:

gRPC Error (code: 2, codeName: UNKNOWN, message: HTTP request completed without a status
(potential CORS issue), details: null, rawResponse: , trailers: {})

Here is my UI Code:

class FiltersService {
  static ResponseFuture<Filters> getFilters() {

    GrpcWebClientChannel channel =
        GrpcWebClientChannel.xhr(Uri.parse('http://localhost:9000'));

    FiltersServiceClient clientStub = FiltersServiceClient(
      channel,
    );

    return clientStub.getFilters(Void());
  }
}

Backend Code:

func StartServer() {
    log.Println("Starting server")
    listener, err := net.Listen("tcp", fmt.Sprintf(":%v", port))
    if err != nil {
        log.Fatalf("Unable to listen to port %v\n%v\n", port, err)
    }

    repositories.ConnectToMongoDB()

    grpcServer = grpc.NewServer()

    registerServices()

    if err = grpcServer.Serve(listener); err != nil {
        log.Fatalf("Failed to serve gRPC\n%v\n", err)
    }
}

// Register services defined in protobufs to call from UI
func registerServices() {
    cardsService := &services.CardsService{}
    protos.RegisterCardsServiceServer(grpcServer, cardsService)

    filtersService := &services.FiltersService{}
    protos.RegisterFiltersServiceServer(grpcServer, filtersService)
}

As mentioned, the API call succeeds when it is made from Kreya, but the same call from the Dart code keeps failing.

I have also tried wrapping the gRPC server in the gRPC-Web proxy wrapper, but that failed from both Dart and Kreya. Here is the code I tried:

func StartProxy() {
    log.Println("Starting server")
    listener, err := net.Listen("tcp", fmt.Sprintf(":%v", port))
    if err != nil {
        log.Fatalf("Unable to listen to port %v\n%v\n", port, err)
    }

    repositories.ConnectToMongoDB()

    grpcServer = grpc.NewServer()
    registerServices()
    grpcWebServer := grpcweb.WrapServer(grpcServer)

    httpServer := &http.Server{
        Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if r.ProtoMajor == 2 {
                grpcWebServer.ServeHTTP(w, r)
            } else {
                w.Header().Set("Access-Control-Allow-Origin", "*")
                w.Header().Set("Access-Control-Allow-Methods", "POST, GET, OPTIONS, PUT, DELETE")
                w.Header().Set("Access-Control-Allow-Headers", "Accept, Content-Type, Content-Length, Accept-Encoding, X-CSRF-Token, Authorization, X-User-Agent, X-Grpc-Web")
                w.Header().Set("grpc-status", "")
                w.Header().Set("grpc-message", "")
                if grpcWebServer.IsGrpcWebRequest(r) {
                    grpcWebServer.ServeHTTP(w, r)
                }
            }
        }),
    }

    httpServer.Serve(listener)

}

func StartServer() {
    StartProxy()
}

I am also aware of Envoy Proxy, which can be used in place of this gRPC-Web proxy. However, my understanding is that I would then be exposing the endpoints on Envoy as REST APIs, which would forward each request to the server as a gRPC call. From what I understand, this would require maintaining two versions of the data models: one for communication between the UI and Envoy (in JSON), and another for communication between Envoy and the server (as protobuf). Is this the correct understanding? How can I move past this?
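
To be clear about what I am after: I would like the UI to keep using the same generated protobuf stubs it uses today, with only the channel URL pointed at whatever the proxy exposes, along the lines of the sketch below (the port and method name are placeholders):

Future<Filters> getFiltersThroughProxy() {
  // Same generated stub and messages as the existing FiltersService code;
  // only the channel target changes, to the proxy's listener.
  GrpcWebClientChannel channel =
      GrpcWebClientChannel.xhr(Uri.parse('http://localhost:9001'));
  return FiltersServiceClient(channel).getFilters(Void());
}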

**EDIT:** As per the suggestion in the comments, I have tried using Envoy in place of the Go proxy. However, I'm still having trouble getting it to work: I now get "upstream connect error or disconnect/reset before headers. reset reason: overflow" when calling the port exposed by Envoy (9001), though I can still call the backend service directly from Kreya on port 9000.

Here is my envoy.yaml:

static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: host.docker.internal, port_value: 9001 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route:
                            cluster: greeter_service
                            max_stream_duration:
                              grpc_timeout_header_max: 0s
                      cors:
                        allow_origin_string_match:
                          - prefix: "*"
                        allow_methods: GET, PUT, DELETE, POST, OPTIONS
                        allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                        max_age: "1728000"
                        expose_headers: id,token,grpc-status,grpc-message
                http_filters:
                  - name: envoy.filters.http.grpc_web
                  - name: envoy.filters.http.cors
                  - name: envoy.filters.http.router
  clusters:
    - name: greeter_service
      connect_timeout: 0.25s
      type: logical_dns
      http2_protocol_options: {}
      lb_policy: round_robin
      # win/mac hosts: Use address: host.docker.internal instead of address: localhost in the line below
      load_assignment:
        cluster_name: cluster_0
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: host.docker.internal
                      port_value: 9000

Solution

  • I was able to resolve the issue by taking the suggestion in the comments and using Envoy as the proxy instead of the Go proxy, though the configuration from the linked post did not work purely out of the box.

    Here is the working envoy.yaml:

    admin:
      access_log_path: /tmp/admin_access.log
      address:
        socket_address:
          address: 0.0.0.0
          port_value: 9901
    static_resources:
      listeners:
        - name: listener_0
          address:
            socket_address: { address: 0.0.0.0, port_value: 9000 }
          filter_chains:
            - filters:
                - name: envoy.http_connection_manager
                  typed_config:
                    "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                    codec_type: auto
                    stat_prefix: ingress_http
                    route_config:
                      name: local_route
                      virtual_hosts:
                        - name: local_service
                          domains: ["*"]
                          routes:
                            - match: { prefix: "/" }
                              route:
                                cluster: greeter_service
                                max_grpc_timeout: 0s
                          cors:
                            allow_origin_string_match:
                              - prefix: "*"
                            allow_methods: GET, PUT, DELETE, POST, OPTIONS
                            allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                            max_age: "1728000"
                            expose_headers: custom-header-1,grpc-status,grpc-message
                    http_filters:
                      - name: envoy.filters.http.grpc_web
                      - name: envoy.filters.http.cors
                      - name: envoy.filters.http.router
      clusters:
        - name: greeter_service
          connect_timeout: 0.25s
          type: logical_dns
          lb_policy: round_robin
          http2_protocol_options: {}
          load_assignment:
            cluster_name: cluster_0
            endpoints:
              - lb_endpoints:
                  - endpoint:
                      address:
                        socket_address:
                          address: host.docker.internal
                          port_value: 9001
    

    The working Dockerfile:

    FROM envoyproxy/envoy:v1.20-latest
    COPY ./envoy.yaml /etc/envoy/envoy.yaml
    CMD /usr/local/bin/envoy -c /etc/envoy/envoy.yaml -l debug
    

    And the commands used to run Envoy:

    docker build -t envoy .

    The previous command has to be run in the same directory as the Dockerfile.

    docker run -p 9000:9000 -p 9901:9901 envoy

    Where 9000 and 9901 are the ports I want to expose and access externally (as listed in envoy.yaml).

    **NOTE:** Make sure to include http2_protocol_options: {}. Following some possible solutions online, I had removed it, which led to a connection reset due to a protocol error. I was stuck on that for hours, until I realised that without this setting Envoy forwards requests to the upstream cluster over HTTP/1.1, whereas gRPC requires HTTP/2; adding it back finally allowed me to make a gRPC call using the gRPC-Web client.
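
    With this setup, the Dart client from the question needs no JSON models at all: it keeps the same generated stubs and simply points its channel at the Envoy listener (9000 above), and Envoy's grpc_web filter translates the gRPC-Web call and forwards it over HTTP/2 to the Go server on 9001. Roughly, reusing the names from the question (the generated import path is illustrative):

    import 'package:grpc/grpc_web.dart';

    import 'filters.pbgrpc.dart'; // generated stubs; the actual file name may differ

    Future<Filters> getFilters() {
      // 9000 is the Envoy listener from envoy.yaml; Envoy handles the
      // gRPC-Web to gRPC translation, so the protobuf models are reused end to end.
      final channel = GrpcWebClientChannel.xhr(Uri.parse('http://localhost:9000'));
      final clientStub = FiltersServiceClient(channel);
      return clientStub.getFilters(Void());
    }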

    Hope this helps anyone else who comes across this issue.