tl;dr: My website is served successfully on either localhost with https, or on my domain/IP with http, but not on my domain/IP with https. Code is at the bottom.
I have an A record for @ (gremy.co.uk) and for www at my registrar, and the domain resolves correctly when hosting over http (e.g. `.bind("0.0.0.0:80")`).
I have generated a certificate with `certbot certonly --standalone` for both gremy.co.uk and www.gremy.co.uk, and have pointed the `rustls::ServerConfig` at the resulting fullchain.pem and privkey.pem files.
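For reference, the invocation was roughly the following (the output paths shown are the Let's Encrypt defaults and may differ; `--standalone` spins up certbot's own temporary server, so nothing else can be bound to port 80 while it runs):

```shell
# Issue a single certificate covering both hostnames.
certbot certonly --standalone -d gremy.co.uk -d www.gremy.co.uk

# By default the files land under /etc/letsencrypt/live/gremy.co.uk/:
#   fullchain.pem  -> certificate chain (hand this to rustls as the cert)
#   privkey.pem    -> private key (PKCS#8)
```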
When running with `.bind("0.0.0.0:80")` I can visit my website as intended. Changing to `.bind_rustls("0.0.0.0:443", config)?` makes both https://gremy.co.uk and the IP pointing to it time out ("the server is taking too long to respond"). The console logs nothing; no connection is even attempted. There is an attempted connection when visiting https://localhost: the server logs a `BadCertificate` error (which is fair, since I have a cert for gremy.co.uk, not localhost).
If I instead use mkcert to generate a self-signed certificate for localhost, that connects and works as expected.
So it seems I can serve my website over https on localhost, but not on my server's actual public IP/domain. Is there perhaps something I'm missing in my actix-web configuration?
Code:
use actix_cors::Cors;
use actix_files::Files;
use actix_web::middleware::{Logger, NormalizePath};
use actix_web::{web, App, HttpResponse, HttpServer};
use anyhow::Context;
use once_cell::sync::Lazy;
use rustls::internal::pemfile::{certs, pkcs8_private_keys};
#[actix_web::main]
async fn main() -> anyhow::Result<()> {
// rustls 0.19-style setup: no client auth, one certificate chain + key.
let mut config = rustls::ServerConfig::new(rustls::NoClientAuth::new());
let cert_file = &mut std::io::BufReader::new(std::fs::File::open(&CONFIG.ssl.cert)?); // fullchain.pem
let key_file = &mut std::io::BufReader::new(std::fs::File::open(&CONFIG.ssl.key)?); // privkey.pem
let cert_chain = certs(cert_file).ok().context("no certs")?;
let mut keys = pkcs8_private_keys(key_file).ok().context("no private keys")?;
config.set_single_cert(cert_chain, keys.remove(0))?;
Ok(HttpServer::new(|| {
App::new()
.wrap(Logger::new("%s in %Ts, %b bytes \"%r\""))
.wrap(NormalizePath::default())
.wrap(Cors::permissive()) // seems to be necessary for chrome?
.service(Files::new("/public", "public"))
.default_service(web::route().to(reply))
})
// .bind("0.0.0.0:80")? // just works
.bind_rustls("0.0.0.0:443", config)? // just doesn't
.run().await?)
}
async fn reply() -> HttpResponse {
static REPLY: Lazy<String> = Lazy::new(|| {
let js = minifier::js::minify(&std::fs::read_to_string("public/wasm/client.js").expect("couldn't find the js payload"));
minify::html::minify(&format!(
include_str!("../response.html"),
js = js,
// or localhost/http as appropriate
path = "https://gremy.co.uk/public/wasm/client_bg.wasm",
))
});
HttpResponse::Ok().body(REPLY.as_str())
}
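Side note: when the server refuses to start or logs certificate errors, it can help to rule out a bad path or a non-PEM file before rustls ever parses it. A minimal std-only sketch (the `/etc/letsencrypt` paths are assumptions; substitute whatever `CONFIG.ssl.cert` / `CONFIG.ssl.key` point at):

```rust
use std::fs;
use std::io;

/// Returns Ok(true) if the file exists and contains a PEM block whose
/// BEGIN label includes `marker` (e.g. "CERTIFICATE" or "PRIVATE KEY").
fn looks_like_pem(path: &str, marker: &str) -> io::Result<bool> {
    let contents = fs::read_to_string(path)?;
    Ok(contents
        .lines()
        .any(|l| l.starts_with("-----BEGIN ") && l.contains(marker)))
}

fn main() -> io::Result<()> {
    // Hypothetical default certbot paths; adjust to your own layout.
    for (path, marker) in [
        ("/etc/letsencrypt/live/gremy.co.uk/fullchain.pem", "CERTIFICATE"),
        ("/etc/letsencrypt/live/gremy.co.uk/privkey.pem", "PRIVATE KEY"),
    ] {
        match looks_like_pem(path, marker) {
            Ok(true) => println!("{path}: looks like PEM"),
            Ok(false) => eprintln!("{path}: exists but has no '{marker}' PEM block"),
            Err(e) => eprintln!("{path}: {e}"),
        }
    }
    Ok(())
}
```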
Fun fact: if you're hosting a server, you still have to port-forward default ports like 443. After forwarding 443 in the router and, just in case, opening 443 through the Windows firewall, I can now serve over https on my public IP.
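This kind of timeout (as opposed to a TLS error) can be diagnosed with a plain TCP connect before involving rustls at all: if the socket never opens, the problem is routing/firewall, not the certificate. A small std-only sketch (run it from a machine *outside* your LAN, since NAT hairpinning can make the public IP unreachable from inside even when forwarding is correct):

```rust
use std::net::{TcpStream, ToSocketAddrs};
use std::time::Duration;

/// Attempt a raw TCP connection to host:port with a timeout.
/// A timeout here usually means a firewall or a missing port-forward.
fn port_reachable(host: &str, port: u16, timeout: Duration) -> bool {
    match (host, port).to_socket_addrs() {
        Ok(mut addrs) => addrs
            .next()
            .map(|addr| TcpStream::connect_timeout(&addr, timeout).is_ok())
            .unwrap_or(false),
        Err(_) => false, // DNS resolution failed
    }
}

fn main() {
    let ok = port_reachable("gremy.co.uk", 443, Duration::from_secs(5));
    println!("443 reachable: {ok}");
}
```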