

Secondary/Failover DNS or any other service that would be nice to have running when the main server is down for any reason.
On your first part, clarifying your intent: I think you are overcomplicating things by expecting traffic to reach the server via domain name (through the proxy) from the Router A network and via IP:Port from the Router B network. You can access everything, from anywhere, through domains and subdomains, and avoid using numbers.
If you can’t set up a DNS directly on Router A, you can set it per device that should access the server through Router B’s port forwarding, meaning the laptop uses itself as primary DNS and an external one as secondary, and any other device you want on that LAN does the same (laptop as primary). It is a bit tedious to do it per device, but still possible.
Wouldn’t this link to the 192.168.0.y address of router B pass through router A, and loop back to router B, routing through the slower cable? Or is the router smart enough to realize it’s just talking to itself and cut router A out of the traffic?
No, the request would stop at Router B and keep all traffic on the 10.0.0.* network; it would not change subnets or anything.
In other words, any device on 10.0.0.* will make a DNS request: it asks the router where the DNS server is, then the DNS query itself is sent directly to the server on port 53; once the DNS response is received, it queries the server again via the domain, but on port 80|443, and then receives the HTTP/HTTPS response.
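If you want to see that two-step flow by hand, here is a sketch with made-up names, assuming the server at 10.0.0.114 runs both the local DNS and the reverse proxy:
dig app1.home.internal @10.0.0.114   # the DNS query, answered on port 53 with 10.0.0.114
curl -vk https://app1.home.internal/   # the HTTP(S) request, hitting the reverse proxy on 443 (-k skips cert checks for internal/self-signed certs)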
Remember that all my advice so far is so you don’t have to use any IP or port anywhere and your experience is seamless on any device, using domains and subdomains. The only place where you would need to put an IP or port is on the reverse proxy itself, to tell anything reaching it where the specific app/service is; those services will run on different ports but be reached through the reverse proxy on the defaults 80 or 443, so you don’t have to put numbers anywhere else.
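As an illustration, a minimal nginx sketch of that idea; the subdomains, backend ports and certificate paths are assumptions, and only the proxy ever sees the IP:Port of each app:
server {
    listen 443 ssl;
    server_name app1.home.internal;
    ssl_certificate     /etc/ssl/home.internal.crt;   # hypothetical certificate paths
    ssl_certificate_key /etc/ssl/home.internal.key;
    location / {
        proxy_pass http://127.0.0.1:8080;   # app1 on its own port on the same host (adjust if it runs elsewhere)
    }
}
server {
    listen 443 ssl;
    server_name app2.home.internal;
    ssl_certificate     /etc/ssl/home.internal.crt;
    ssl_certificate_key /etc/ssl/home.internal.key;
    location / {
        proxy_pass http://127.0.0.1:8081;   # app2 on a different port
    }
}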
If you decide on running the secondary local DNS on the server on the Router B network, there is no need to loop back, as that DNS will keep domain lookups and requests on 10.0.0.x, all internal to the Router B network.
On Router B you would then have the server IP as primary DNS, and an external one like Cloudflare or Google as secondary.
You can still decide to put rules on the reverse proxy based on whether the origin IP is from 192.168.0.* or 10.0.0.*, if you see the need to differentiate traffic, but I don’t think that is necessary.
Do yourself a favor and use the default ports for HTTP (80), HTTPS (443) and DNS (53); you are not port forwarding to the internet, so there should be no issues.
That way, you can have URLs like https://app1.home.internal/ and https://app2.home.internal/ without having to add ports to anything outside the reverse proxy.
From what you have described, your hardware is connected something like this:
Internet -> Router A (192.168.0.1) -> Laptop (192.168.0.x), Router B (192.168.0.y) -> Server (10.0.0.114)
You could run only one DNS on the laptop (or another device) connected to Router A and point the domain to Router B: redirect, for example, the domain home.internal (I recommend <something>.internal, as it is the one intended for this by convention) to the 192.168.0.y IP, and it will redirect all devices to the server through port forwarding.
If Router B has port forwarding of ports 80 and 443 to the server 10.0.0.114, all the requests are going to reach it, no matter which LAN they come from. The devices connected to Router A will reach the server thanks to the port forwarding, and the devices on Router B can reach anything connected to the Router A network 192.168.0.*; they make an extra hop but still get there.
Both routers would have to point their primary DNS to the laptop IP 192.168.0.x (it should be a static IP), and the secondary to either Cloudflare 1.1.1.1 or Google 8.8.8.8.
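A minimal sketch of that single DNS, assuming dnsmasq on the laptop (Pi-hole or AdGuard Home have an equivalent DNS rewrite setting) and that Router B sits at 192.168.0.y, which you would replace with the real address:
# /etc/dnsmasq.conf on the laptop (192.168.0.x)
address=/home.internal/192.168.0.y   # answers home.internal and all its subdomains with Router B’s IP
server=1.1.1.1   # everything else is forwarded to an external resolver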
That setup would depend on having the laptop (or another device) always turned on and connected to the Router A network for that DNS to work.
You could run a second DNS on the server for only the 10.0.0.* LAN, but that one would not be reachable from Router A, the laptop, or any device on that outer LAN, only from devices directly connected to Router B; the only change would be to set the primary DNS on Router B to the server IP 10.0.0.114 so that secondary local DNS gets used as the primary.
Lots of information; be sure to read it slowly and separate the steps to handle them one by one, but this should be the final setup, considering the information you have given.
You should be able to set up the certificates and the reverse proxy using subdomains without much trouble, only using IP:PORT on the reverse proxy itself.
Most routers, or devices, let you set up at least a primary and secondary DNS resolver (some let you add more), so you could have your local one as primary and an external one like Google or Cloudflare as secondary. That way, if your local DNS resolver is down, the device will go straight to the external one and still resolve.
Still. Thanks for the tips. I’ll update the post with the solution once I figure it out.
You are welcome.
Should not be an issue to have everything internally; you can set up a local DNS resolver, and configure the device that handles your DHCP (router or other) to set that as the default/primary DNS for any device on your network.
To give you some options if you want to investigate, there are: dnsmasq, Technitium, Pi-hole, AdGuard Home. They can resolve external DNS queries, and also do domain rewrite/redirection to handle your internal-only domain and redirect it to the device with your reverse proxy.
That way, you can have a local domain like domain.lan or domain.internal that only works and is managed on your internal network, and you can use subdomains as well.
I’m sorry if I’m not making sense. It’s the first time I’m working with webservers. And I genuinely have no idea of what I’m doing. Hell. The whole project has basically been a baptism by fire, since it’s my first proper server.
Don’t worry, we all started almost the same and gradually learned more and more. If you have any questions, a place like this is exactly for that; just ask.
Not all services/apps work well with subdirectories through a reverse proxy.
Some services/apps have a config option to add a prefix to all paths on their side to help with it, some others don’t have any config and always expect paths after the domain to not be changed.
But if you need to do some kind of path rewrite only on the reverse proxy side to add/change a segment of the path, there can be issues if all path changes are not going through the proxy.
In your case, transmission internally doesn’t know about the subdirectory, so even if you can get to the index/login from your first page load, when the app itself changes paths it redirects you to a path without the subdirectory.
Another example of this is PWAs: when you click a link that should change the path, they don’t reload the page (the action that would force a load through the reverse proxy and thus trigger the rewrite), but instead use JavaScript to rewrite the path locally and manipulate the DOM without triggering a page load.
To be honest, the best way out of this headache is to use subdomains instead of subdirectories; it is the standard used these days precisely to avoid path rewrite magic that doesn’t work in a bunch of situations.
Yes, it can be annoying to handle SSL certificates if you don’t want to or can’t issue wildcard certificates, but if you can get a cert with both maindomain.tld and *.maindomain.tld, then you don’t need to touch that anymore and can use the same certificate for any service/app you could want to host behind the reverse proxy.
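For example, with certbot and a DNS-01 challenge (wildcards require the DNS challenge; the domain is a placeholder, and many DNS providers have a plugin that automates the TXT record instead of --manual):
certbot certonly --manual --preferred-challenges dns -d maindomain.tld -d '*.maindomain.tld'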
If your concern is IoT devices, TVs, and the like sniffing on your local traffic, there are alternatives, and some of them are:
The simplest (really the simplest) would be to do a git init --bare in a directory on one machine; that way you can clone, push or pull from it, using the directory path as the URL on the same machine and SSH from the other (you could put this bare repo inside a container, but that would really be overcomplicating it). You would have to init a new bare repo, in a new directory, per project.
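A quick sketch of that, with made-up paths, a hypothetical user/host, and a project called myproject:
# on the “server” machine
mkdir -p /srv/git/myproject.git
git init --bare /srv/git/myproject.git
# clone on the same machine, using the path as the URL
git clone /srv/git/myproject.git
# clone from another machine over ssh
git clone user@server:/srv/git/myproject.git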
If by a self-hosted server you mean something with a web UI to handle multiple repositories, with pull requests, issues, etc., like your own local GitHub/GitLab, the answer is Forgejo (this link has the instructions to deploy with Docker), and if you want to see what that looks like, there is an online public instance called Codeberg where the Forgejo code is hosted, alongside other projects.
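If it helps, a rough docker-compose sketch of what such a deployment tends to look like; the image tag, ports and data path are assumptions, so check the Forgejo docs for the current recommended compose file:
services:
  forgejo:
    image: codeberg.org/forgejo/forgejo:1.21   # assumed tag, pin whatever the docs recommend
    restart: unless-stopped
    volumes:
      - ./forgejo-data:/data   # repositories and app data
    ports:
      - "3000:3000"   # web UI
      - "2222:22"   # git over ssh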
I don’t know if SoftEther has an option to not tunnel everything and just use the virtual LAN IPs for games, file transfers, etc.
And I don’t know your actual technical level or that of the people you play with, but for people who can go as far as opening ports, installing a server on their own machine, and getting others to connect to it, I would suggest Headscale (the free self-hosted version of Tailscale) as a next step, or, if you are inclined to learn something a bit more hands-on, WireGuard.
With those you can configure it so only the desired traffic goes through (like games or file sharing using the virtual LAN IPs) and the rest goes out normally, or configure exit nodes so that, if/when desired, all traffic is tunneled like what you have now.
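For reference, in plain WireGuard the split-tunnel part is just the AllowedIPs setting; a minimal client config sketch with placeholder keys, an assumed endpoint, and 10.10.0.0/24 standing in for the virtual LAN subnet:
[Interface]
PrivateKey = <client-private-key>
Address = 10.10.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = your-host.example.com:51820
AllowedIPs = 10.10.0.0/24   # only the virtual LAN goes through the tunnel
# AllowedIPs = 0.0.0.0/0, ::/0   # use this instead to tunnel everything, like an exit node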
If you have any questions about Headscale you could ask in !selfhosted@lemmy.world
This would be my choice as well, as I went with Dockge exactly because it works with your existing docker-compose files, and there are no issues if you manage with either Dockge or with the terminal.
If you add Ntfy or Gotify then you should be set.
But I think I’m understanding a bit! I need to literally create a file named “/etc/radicale/config”.
Yes, you will need to create that config file on one of those paths so you can then continue with any of the configuration steps in the documentation; you can do that Addresses step first.
A second file for the users is needed as well; I would guess the best location would be /etc/radicale/users
For the Authentication part, you will need to install the apache2-utils package with sudo apt-get install apache2-utils to use the htpasswd command to add users.
So the command to add users would be htpasswd -5 -c /etc/radicale/users user1, with your username instead of user1.
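For example (the usernames are placeholders; -c creates the file, so omit it when adding more users later, and you may need sudo to write under /etc):
sudo htpasswd -5 -c /etc/radicale/users user1   # first user, creates the file
sudo htpasswd -5 /etc/radicale/users user2   # additional users, without -c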
And what you need to add to the config file for it to read your user file would be:
[auth]
type = htpasswd
htpasswd_filename = /etc/radicale/users
htpasswd_encryption = autodetect
Replacing the path with the one where you created your users file.
I’m trying to follow the tutorial on the radicale website but am getting stuck in the “addresses” part.
From reading the link you provided, you have to create a config file in one of two locations if it doesn’t exist:
“Radicale tries to load configuration files from /etc/radicale/config and ~/.config/radicale/config”
After that, add what the Addresses section says to the file:
[server]
hosts = 0.0.0.0:5232, [::]:5232
And then start/restart Radicale.
After that, you should be able to access it from another device using the IP of the Pi and the port.
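A quick way to start it manually and test it, assuming the config path above and a made-up Pi address of 192.168.1.50 (use your service manager instead if you set Radicale up as a service):
python3 -m radicale   # picks up /etc/radicale/config automatically
curl http://192.168.1.50:5232   # from another device; should return the Radicale login page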
Yeah, I started the same, hosting LAN parties with Minecraft and Counter Strike 1.6 servers on my own Windows machine at the time.
But what happens when you want to install some app/service that doesn’t have a native binary installer for your OS? You will not only have to learn how to configure/manage said app/service, you will also need to learn one or multiple additional layers.
I could have said “simple bare metal OS and a binary installer” and for some people it would sound alien, while others would be nitpicky about it, as they are with me saying Docker (not seeing that the terminology I used was not for a newbie but for them). If the apps you want to self-host are offered with things like YunoHost or CasaOS, that’s great, and there are apps/services that can be installed directly on your OS without much trouble, which is also great. But there are cases where you will need to learn something extra (and for me that extra was Docker).
XKCD 2501 applies in this thread.
I agree, there are so many layers of complexity in self-hosting that most of us tend to forget them, when the most basic thing would be a simple bare metal OS and Docker.
you’ll probably want to upgrade the ram soon
His hardware has a max RAM limit of 4 GB, so the only probable upgrade he could do is a SATA SSD. Even so, I’m running around 15 Docker containers on similar specs, so as a starting point it is totally fine.
I get your point, and I know it has its merits; I would actually recommend Proxmox for a later stage, when you are familiar with handling the basics of a server, and if you have hardware that can properly handle virtualization. For OP, who has a machine that is fairly old and low-spec and is also a newbie, I think fewer layers of complexity are a better starting point, so they don’t get overwhelmed and just quit, and in the future they can build on top of that.
I have a Dell Inspiron 1545 with similar specs to yours, running Debian with Docker and around 15 services in containers, so my recommendation would be to run Debian server (with no DE), install Docker, and start from there.
I would not recommend Proxmox or virtual machines to a newbie, and would instead recommend running stuff on a bare metal installation of Debian.
There are a bunch of alternatives to ease the management of apps that you could choose from, like YunoHost, CasaOS, Yacht, Cosmos Cloud, Infinite OS, Cockpit, etc., that you can check out and use on top of Debian if you prefer, but I would still recommend spending time learning how to do stuff yourself directly with Docker (using docker compose files), and you can use something like Portainer or Dockge to help you manage your containers.
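To give an idea of what that looks like, a minimal docker-compose.yml sketch; nginx is just a stand-in service and the port mapping is arbitrary:
services:
  hello:
    image: nginx:alpine   # any container image works the same way
    restart: unless-stopped
    ports:
      - "8080:80"   # host port 8080 -> container port 80
# brought up with: docker compose up -d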
My last recommendation would be that, when you are testing and trying stuff, don’t put your only copy of important data on the server; if something breaks you will lose it. Invest time in learning how to properly back up/sync/restore your data, so you have a safety net and a way to recover in case something happens.
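As one simple starting point, a sketch with made-up paths and host; rsync is only one of many options, and testing the restore matters as much as the backup:
rsync -avh --delete /srv/important-data/ user@other-machine:/backups/server/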
I have no experience with this app in particular, but most of the time an issue like this, where you can’t reach an app or any path besides the index, happens because the app itself doesn’t work well with path redirection into subfolders, meaning the app expects paths to be something like domain.tld/index.html instead of domain.tld/subfolder/index.html for all its routes.
Some apps let you add a prefix to all their routes so they can work this way; you then have to configure not only nginx but also the app itself to use the same subfolder.
Other apps will work with the right configuration in nginx if they do a new full page load every time the page changes its path/route.
If it is a PWA that doesn’t do a page load every time the path is changed, it’s not going to work with subfolders, as it never does a page refresh that goes through nginx and just rewrites the visible URL in the browser.
What I can recommend is to switch to a subdomain like 2fa.domain.tld instead of a subfolder and test if it works, as subdomains are the modern standard for this kind of thing these days, precisely to avoid this type of issue.
Edit: looking at the app demo, it seems to be a vue.js PWA that doesn’t do any full page refreshes on a path change, so as stated you will probably have to switch to a subdomain to make it work.
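A minimal nginx sketch of that subdomain approach; the upstream port 8000 and the certificate paths are assumptions, so reuse your existing SSL settings:
server {
    listen 443 ssl;
    server_name 2fa.domain.tld;
    ssl_certificate     /etc/ssl/domain.tld.crt;   # hypothetical certificate paths
    ssl_certificate_key /etc/ssl/domain.tld.key;
    location / {
        proxy_pass http://127.0.0.1:8000;   # the app on its own port, no subfolder rewriting needed
    }
}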
but isn’t the DE (or WM if you abhor DEs) more important?
Yes, the most important thing is actually the DE/WM; most mainstream distros are pretty much the same in how they are structured/configured, and the variations, if you ignore the DE/WM, come mostly from the package manager, the same packages with different names, and the different default apps/configuration installed.
You don’t need to back up all your 24 TB of data; you can have a copy of a subset of your important data on another device, and if possible the best would be a 3-2-1 approach.
“RAID is not a backup”, is something that is mentioned a lot, as you can still lose data on a RAID setup.