I have a .NET Core 2.1 Web API (using 2.1.0-preview1-final) working fine locally with SignalR 1.0.0-preview1-final. The front end is an Angular app that uses the package "@aspnet/signalr": "1.0.0-preview1-final", so everything matches, and both the HTTP endpoints and the Hubs work as expected when I run the programs locally.
When I deploy to my virtual server, an Nginx reverse proxy forwards requests to all the applications behind it. I'm using Docker, and I haven't had any problems with this setup in other projects when deploying v1.0 of an entire ecosystem.
There are two differences in this particular scenario: this project uses SignalR, and I had to set the proxy_buffering off option in the Nginx configuration to get IdentityServer4 to work (following https://andrewlock.net/fixing-nginx-upstream-sent-too-big-header-error-when-running-an-ingress-controller-in-kubernetes/).

I'm capturing the logs of the API, and I can see that when I try to connect to the Hubs I get back:
info: Microsoft.AspNetCore.Cors.Infrastructure.CorsService[4]
Policy execution successful.
info: Microsoft.AspNetCore.Authentication.JwtBearer.JwtBearerHandler[2]
Successfully validated the token.
info: Microsoft.AspNetCore.Authorization.DefaultAuthorizationService[1]
Authorization was successful.
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2]
Request finished in 7.4652ms 200 application/json
So I'm assuming this part is correct. Now on the client side (the Angular app) I see this:
Error: Failed to start the connection. Error: No available transports found.
but if I inspect the response:
{"connectionId":"nHzKKYtp0ITwlEntjqLprA","availableTransports":[{"transport":"WebSockets","transferFormats":["Text","Binary"]},{"transport":"ServerSentEvents","transferFormats":["Text"]},{"transport":"LongPolling","transferFormats":["Text","Binary"]}]}
UPDATE
Compared with the response when running locally:
{"connectionId":"4ea7b1ea-8754-472b-baef-527073872d2a","availableTransports":["WebSockets","ServerSentEvents","LongPolling"]}
Does that mean there are no restrictions in terms of transfer formats? Not sure if that's relevant either... It's very weird; it's the same thing that happens here: SignalR no transport
------UPDATE END--------
So my questions are:
Have I broken SignalR connectivity by setting proxy_buffering off? If so, is there a way to get both IS4 and SignalR running behind the same Nginx instance? To make things more difficult, I'm using an Nginx template that is autogenerated with docker-gen.
If my Nginx changes shouldn't break SignalR, why is the connection not being established?
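For what it's worth, this is the kind of location block I'd expect SignalR to need behind Nginx. This is only a sketch, not my actual docker-gen output; the upstream name api and the path /hubs/ are placeholders:

```nginx
# Sketch only: the upstream name "api" and path "/hubs/" are placeholders.
location /hubs/ {
    proxy_pass http://api;

    # Needed for the WebSocket upgrade to pass through the proxy
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;

    # The IdentityServer4 fix; disabling buffering also helps the
    # ServerSentEvents transport stream responses promptly
    proxy_buffering off;
}
```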
Thanks!
UPDATE! FOUND THE ISSUE!!!
I'm writing this because I think it can be useful for someone else.
The problem was that I was using preview1 on both the client and the API, but back when I created the Dockerfile I couldn't get FROM microsoft/dotnet:2.1.0-preview1-aspnetcore-runtime to work because I had metadata issues (taken from the error), so I chose to use preview2 instead: FROM microsoft/dotnet:2.1.0-preview2-aspnetcore-runtime. And that was the problem. I quickly changed the client and the API to the preview2 versions of SignalR and the connection worked. Happy days! Hope this is helpful :) So not only do the client and the API need to match; the actual Docker image needs to be aligned as well.
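That also explains the two negotiate payloads above: between preview1 and preview2 the availableTransports format changed from plain strings to objects, so a preview1-era client parsing the preview2 server's response recognizes none of the transports. A simplified sketch of the mismatch (illustrative only, not the actual @aspnet/signalr code):

```typescript
// Illustrative sketch: why a client expecting the preview1 negotiate shape
// finds no transports in a preview2 response. Not the real library code.

// preview1-style negotiate response: transports are plain strings
const localResponse = {
  connectionId: "4ea7b1ea-8754-472b-baef-527073872d2a",
  availableTransports: ["WebSockets", "ServerSentEvents", "LongPolling"],
};

// preview2-style negotiate response: transports are objects
const serverResponse = {
  connectionId: "nHzKKYtp0ITwlEntjqLprA",
  availableTransports: [
    { transport: "WebSockets", transferFormats: ["Text", "Binary"] },
    { transport: "ServerSentEvents", transferFormats: ["Text"] },
    { transport: "LongPolling", transferFormats: ["Text", "Binary"] },
  ],
};

// A client matching transports by string equality (the preview1 shape)
// silently drops the preview2 objects, leaving nothing to connect with.
function usableTransports(available: unknown[]): string[] {
  const known = ["WebSockets", "ServerSentEvents", "LongPolling"];
  return available.filter((t): t is string => known.includes(t as string));
}

console.log(usableTransports(localResponse.availableTransports).length);  // 3
console.log(usableTransports(serverResponse.availableTransports).length); // 0 -> "No available transports found"
```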
So make sure you keep in sync your client version, your .NET Core SignalR version, and the runtime version in your Dockerfile when creating the image.
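For completeness, the aligned Dockerfile looks roughly like this. It's a sketch: MyApi.dll and the ./publish folder are placeholders for your own published output:

```dockerfile
# The runtime tag must match the ASP.NET Core / SignalR package versions
# used by the API and the client (preview2 everywhere, after the fix).
FROM microsoft/dotnet:2.1.0-preview2-aspnetcore-runtime
WORKDIR /app
COPY ./publish .
ENTRYPOINT ["dotnet", "MyApi.dll"]
```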