The only externally accessible service is my WireGuard VPN. Anything else is unreachable unless you’re on my LAN or you VPN back into it.
This is the way.
Funnily enough it’s exactly the opposite of where the corporate world is going, where the LAN is no longer seen as a fortress and most services are available publicly but behind 2FA.
Corporate world, I still have to VPN in before much is accessible. Then there’s also 2FA.
Homelab, ehhh. Much smaller user base and within smackable reach.
Oh right. The last three businesses I’ve worked in have all run fully public services; assume the intruder is already in the LAN, so don’t treat it like a barrier.
Can I ask your setup? I’d like to get this for myself as well.
Try PiVPN. It’s meant to run on a Raspberry Pi, but it should work on most Ubuntu- and Debian-based distributions.
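For reference, the whole install is basically their documented one-liner (it pipes a script to bash, so read it first if you’re cautious):

```sh
# PiVPN's installer; it walks you through choosing WireGuard or OpenVPN
curl -L https://install.pivpn.io | bash

# afterwards, adding a device is one command per client
pivpn -a
```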
Not OP but… I have an old PC as a server, WireGuard in a Docker container, a port forward on the router, and that’s it.
Which image? I’ve seen a few wireguard options on docker hub
Linuxserver
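For anyone searching later, the linuxserver image boils down to roughly this much docker-compose (a sketch based on their docs; SERVERURL, timezone and peer names are placeholders you’d adjust):

```yaml
services:
  wireguard:
    image: lscr.io/linuxserver/wireguard:latest
    cap_add:
      - NET_ADMIN
      - SYS_MODULE              # only needed if the host kernel lacks the wireguard module
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - SERVERURL=vpn.example.com   # placeholder: your DDNS name or public IP
      - SERVERPORT=51820
      - PEERS=phone,laptop          # a config + QR code is generated per peer
      - PEERDNS=auto
    volumes:
      - ./config:/config
      - /lib/modules:/lib/modules:ro
    ports:
      - 51820:51820/udp             # the single port you forward on the router
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    restart: unless-stopped
```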
Not OP, but I just use ZeroTier for this since it’s dead simple to set up and free. I’m sure there are some 100% self-hosted solutions, but it’s worked for me without issue.
Sorry, haven’t logged on in a bit. I use OPNsense on an old PC for my firewall with the WireGuard package installed.
Then I use the WireGuard client on my family’s phones/laptops, set to auto-connect whenever they’re NOT on my home wifi. That way media playback, AdGuard Home DNS and everything else acts as seamless as possible even when away, while still keeping all ports blocked.
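The client side is just a normal WireGuard tunnel config; something like this (a sketch, keys and addresses are placeholders) pushes all traffic and DNS through home, which is what keeps AdGuard working on the go:

```ini
[Interface]
PrivateKey = <phone private key>
Address = 10.0.8.2/32
# placeholder: the LAN IP of AdGuard Home
DNS = 192.168.1.53

[Peer]
PublicKey = <OPNsense WireGuard public key>
# placeholder hostname/port
Endpoint = vpn.example.com:51820
# full tunnel, so everything behaves as if you were at home
AllowedIPs = 0.0.0.0/0, ::/0
PersistentKeepalive = 25
```

The “only when not on home wifi” part is handled by the mobile app’s on-demand/auto-connect settings (where available), not by the config file itself.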
Everything is behind a WireGuard VPN for me. It’s mostly because I don’t understand how to set up HTTPS, and at this point I’m afraid to ask, so everything is just HTTP.
I’ve been using YunoHost, which does this for you, but I’m thinking of switching to a regular Linux install, which is why I’ve been searching for stuff to replace YunoHost’s features. That’s how I came across Nginx Proxy Manager, which lets you easily configure that stuff with a web UI. From what I understand it also does certificates for you for HTTPS. Haven’t had the chance to try it out myself tho because I only found it earlier today.
NPM is the way. SSL without ever needing to edit a config file.
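If anyone wants to try it, the quick start from the NPM docs is roughly this compose file (ports 80/443 for proxied traffic, 81 for the admin UI; paths are just examples):

```yaml
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    restart: unless-stopped
    ports:
      - 80:80       # HTTP, also used for Let's Encrypt HTTP-01 challenges
      - 443:443     # HTTPS
      - 81:81       # admin web UI
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
```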
NPM is nice and easy to use.
It’s not hard really, and you shouldn’t be afraid to ask; if we don’t ask then we don’t learn :)
Look at Caddy webserver, it does automated SSL for you.
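To give an idea of what “automated SSL” means in practice, a Caddyfile like this (hostname and backend port are placeholders) gets a Let’s Encrypt certificate fetched and renewed for you automatically, as long as ports 80/443 are reachable and DNS points at the box:

```
jellyfin.example.com {
    reverse_proxy 127.0.0.1:8096
}
```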
Thank you. It was mostly meant as a joke tho. I’m not actually afraid to ask, but more ignorant, because it’s all behind the VPN and that’s just so much easier and safer, and I know how to do it, so less effort. HTTPS is just magic for me at the moment and I like it that way. Maybe one day I’ll learn the magic spells, but not today.
Careful with Caddy as it’s had a few security issues.
All software has issues, such is the nature of software. I always say if you selfhost, at least follow some security-related websites to keep up to date about these things :)
Do you have any suggestions for reputable security related websites?
Too many :) Here is a snippet of my RSS feed; save it as an XML file and most RSS readers should be able to import it :) https://pastebin.com/q0c6s5UF
A few days late here, but that pastebin had some really good feeds 🙏 I noticed the OPML file was labeled FreshRSS, and I also use FreshRSS. So I fixed up the feeds and configured FreshRSS to scrape the full articles (when possible) and bypass ads, tracking and paywalls.
I figured I’d pay it forward by sharing my revised OPML file.
I also included some of my other feeds that are related (if you or anyone else is interested).
Some of the feeds are created from scratch since a few of these sites don’t offer RSS, so if the sites change their layout the configs may need to be adjusted a bit, but in my experience that rarely happens.
I had to replace some of the urls with publicly hosted versions of the front-ends I host locally and scrape, but feel free to change it up however you like.
https://gist.akl.ink/Idly9231/22fd15085f1144a1b74e2f748513f911
Thank you :)
Everything is accessible through VPN (Wireguard) only
Same. Always-on VPN on my phone for on-the-go ad blocking via Pi-hole.
Same here. Taught my wife how to start WireGuard on her android phone and then access any of the services I run. This way I only have one port open and don’t have to worry too much.
How about running your wireguard server on a VPS and then connecting to the same interface as clients from your mobile and home network? No ports open on your side!
That’s what I do. The beauty of wireguard is that it won’t respond at all if you don’t send the right key. So from the outside it will appear as if none of your ports are open.
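Roughly, the VPS just runs one interface with both the phone and the home box as peers (a sketch; keys and addresses are placeholders, and the subnet is just an example):

```ini
# /etc/wireguard/wg0.conf on the VPS
[Interface]
Address = 10.0.8.1/24
ListenPort = 51820
PrivateKey = <VPS private key>

# phone
[Peer]
PublicKey = <phone public key>
AllowedIPs = 10.0.8.2/32

# home server; it dials out to the VPS, so no ports are open at home
[Peer]
PublicKey = <home server public key>
AllowedIPs = 10.0.8.3/32
```

The home server’s own config would set PersistentKeepalive (e.g. 25) towards the VPS so its NAT mapping stays open and the phone can always reach it.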
Nothing I host is internet-accessible. Everything is accessible to me via Tailscale though.
I had everything behind my LAN, but published things like Nextcloud to the outside after finally figuring out how to do that even without a public IPv4 (being behind DS-Lite by my provider).
I knew about Cloudflare Tunnels but I didn’t want to route my stuff through their service. And using Immich through their tunnel would be very slow.
I finally figured out how to publish my stuff using an external VPS that’s doing several things:
- being an OpenVPN server
- being a cert server for OpenVPN certs
- being a reverse proxy using nginx with certbot
Then my servers at home just connect to the VPS as VPN clients so there’s a direct tunnel between the VPS and the home servers.
Now when I have an app running on port 8080 on my home server, I can set up nginx so that the domain points to the VPS’s public IPv4 and IPv6, and the VPS routes the traffic through the VPN tunnel to the home server and its port, using the home server’s IPv4 inside the tunnel. The clients are configured with a static IPv4 inside the VPN tunnel when connecting to the VPN server.
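On the nginx side it’s an ordinary reverse-proxy server block; a trimmed sketch (hostname, tunnel IP 10.8.0.2 and port 8080 are placeholders for my actual values):

```nginx
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name app.example.com;

    # certbot-managed certificate on the VPS
    ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        # 10.8.0.2 is the home server's static IP inside the VPN tunnel
        proxy_pass http://10.8.0.2:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```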
Took me several years to figure out but resolved all my issues.
What benefit does it have instead of getting a dynamic DNS entry and port forwarding on your internet connection?
With DS-Lite you don’t have a public IPv4: not a static one, but not a dynamic one either. The ISP just gives you a public IPv6; your IPv4 address is shared with other users, which lets the ISP stretch its pool of IPv4 addresses. But without a public IPv4 of your own, DynDNS etc. is simply not possible.
You could publish your stuff via IPv6 only but good luck accessing it from a network without IPv6.
You could also spin up SSH tunnels between a public server and the private one (yes, SSH can do stuff like that), but that’s very hard to manage with many services, so you’re better off building a setup like mine.
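For completeness, the SSH variant is a remote forward per service, something like this (ports, user and host are placeholders):

```sh
# expose the home box's local port 8080 on the VPS's port 8080,
# initiated from the home side so nothing is opened at home
ssh -N -R 8080:localhost:8080 user@vps.example.com
```

By default that port only binds to localhost on the VPS (unless GatewayPorts is enabled), so you’d still front it with nginx there, and you need one tunnel per service, which is exactly the management pain mentioned above.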
Thanks for the great explanation!
I’m interested in why you’re terminating TLS on your VPS instead of doing it on your home network
100% is lan only cause my isp is a cunt
Tailscale with the Funnel feature enabled should work for most ISPs, since it’s set up via an outbound connection. Though maybe they’re Super Cunts and block that too.
Prompt: Super Cunt, photorealistic, in the style of Jill Greenberg.
Ah, CG-NAT, is it? There are workarounds
NAT to extremes… it’s Starlink, so I think I’m pretty much hidden from the internet entirely.
Quite frankly I don’t really host anything that needs to be accessible from the general Internet, so I never bothered with workarounds.
I had the same issue. Wrote another comment here explaining my setup to solve my ISP issue.
I currently keep everything LAN-only because I haven’t figured out how to properly set up outside access yet.
(I would like to have Home Assistant available either over the Internet or via VPN so that automations keyed off people’s location outside the home would work.)
I have used DuckDNS and Nginx to get Home Assistant outside but it was horrible, just constantly breaking. Around Christmas time I bought myself a domain name for a few years and use Cloudflare to access it, and it’s been night and day since.
Sure it cost me money but it was far cheaper than a Nabu Casa account.
Tailscale plugin for HA works flawlessly for me.
Yeah, same, except I tunneled HA out via that Cloudflare daemon. Kinda janky because I cannot use the app with it to do locations, but I can check in on the pets from anywhere.
I’m planning to set up a legit VPN sometime soon.
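For anyone curious, the cloudflared side is only a few lines of config once the tunnel is created (a sketch; the UUID and hostname are placeholders, 8123 is Home Assistant’s default port):

```yaml
# ~/.cloudflared/config.yml
tunnel: <tunnel UUID>
credentials-file: /home/user/.cloudflared/<tunnel UUID>.json

ingress:
  - hostname: ha.example.com
    service: http://localhost:8123   # Home Assistant
  - service: http_status:404         # catch-all for anything else
```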
Why can’t you use the app to do locations?
I cannot get the app to connect to my HA with the current setup. I have Cloudflare doing email verification, and the app doesn’t understand how to collect the cookies to make that possible.
Just recommended something to someone else here that could help you too.
There’s a wide range of opinions on this. Some people only access their services via a tunnel; some people open most of their services up to the internet, as long as they’re authenticated. One useful option for HTTPS services is to put them behind a reverse proxy that requires OAuth authentication, which allows you to have services over the internet without increasing your attack surface. But that breaks apps like Nextcloud and Lemmy, so it’s not a universal option.
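As a concrete (hypothetical) example of that pattern, nginx’s auth_request module can gate a service behind something like oauth2-proxy; roughly, with ports and paths as placeholders:

```nginx
# inside the server block that proxies the service
location /oauth2/ {
    proxy_pass http://127.0.0.1:4180;   # oauth2-proxy handles login/callback
}
location = /oauth2/auth {
    proxy_pass http://127.0.0.1:4180;
    proxy_set_header Content-Length "";
    proxy_pass_request_body off;        # the auth check only needs headers/cookies
}
location / {
    auth_request /oauth2/auth;          # every request must carry a valid session
    error_page 401 = /oauth2/sign_in;   # otherwise bounce to the login flow
    proxy_pass http://127.0.0.1:8080;   # placeholder: the actual service
}
```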
Available to the internet via reverse proxy:
- Jellyfin
- Navidrome
- Two websites
- matrix chat server
- audiobookshelf
LAN only:
- homepage
- NGINX Proxy Manager
- Portainer
There’s more in both categories but I can’t remember everything I have running.
What is homepage? I’m testing Homarr right now (assuming it’s similar) but haven’t settled on it yet.
I believe it’s this
I’ve been eyeing it myself
Woo thank you!
That it is!
It’s another dashboard like homarr. I set up homarr and homepage side by side to pick one and landed on homepage. No specific reason, I just gravitated to it over homarr.
Thanks, I’ll check it out :D
You’re welcome!
All of it is LAN only except Wireguard and some game servers.
Everything exposed except NFS, CUPS and Samba. They absolutely cannot be exposed.
Like, even my DNS server is public, because I use DoT for ad blocking on my phone.
Nextcloud, IMAP, SMTP, Plex, SSH, NTP, WordPress, ZoneMinder are all public facing (and mostly passworded).
A fun note: All of it is dual-stacked except SSH. Fail2Ban comparatively picks up almost zero activity on IPv6.
Unlike most here, I’m not as concerned with opening things up. The two general guidelines I use are: 1. Is it built by a big organization with the intent of being exposed? 2. What’s the risk if someone gets in?
All my stuff is in Docker, so it’s compartmentalized with little risk of breaking out of the container. Each service is on its own Docker network to the reverse proxy, so there’s no cross-container communication unless containers are part of the same stack.
So following my rules, I expose things like Nextcloud and Mediawiki, and I would never expose Paperless, which has identity documents (I access it remotely via Tailscale). I have many low-risk services I expose on demand. E.g. when going away for a weekend, I might expose FreshRSS so I can access the feed, but I’d remove it once I got home.
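The network isolation bit, in compose terms, looks something like this generic sketch (service and network names are made up):

```yaml
networks:
  nextcloud_net: {}
  wiki_net: {}

services:
  proxy:
    image: nginx:alpine
    ports:
      - 443:443
    networks:            # the proxy joins every app network...
      - nextcloud_net
      - wiki_net
  nextcloud:
    image: nextcloud:apache
    networks:
      - nextcloud_net    # ...but each app only sees the proxy, not the other apps
  mediawiki:
    image: mediawiki
    networks:
      - wiki_net         # no published ports; only reachable through the proxy
```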
Doesn’t Nextcloud running in Docker want the socket exposed?
I googled around for an example: https://book.hacktricks.xyz/linux-hardening/privilege-escalation/docker-security/docker-breakout-privilege-escalation.
Ignore me if you’ve already hardened the containers.
I’ve never known a reason to expose the Docker socket to Nextcloud. It’s certainly not required; I’ve run Nextcloud for years without ever granting it socket access.
Most of the things on that linked page seem to be for Docker rather than Nextcloud, and relate to non-standard configuration. As someone who is not a political target, I’d be pretty happy that following Nextcloud’s setup guide and hardening guide is enough.
I also didn’t mention it, but I geoblock access from outside my country as a general rule.
I was looking into setting up Nextcloud recently and the default directions suggest exposing the socket. That’s crazy. I checked again just now. I see it is still possible to set it up without socket access, but that set of instructions isn’t as prominent.
I linked to the Docker material specifically because if Nextcloud has access to the socket and hackers find some automated exploit, they could easily escalate out of the Docker container. It sounds like you have it more correctly isolated.
Was it Nextcloud or Nextcloud All in One? I’ve just realised that the Nextcloud docker image I use is maintained by Docker, not Nextcloud. It’s this one: https://hub.docker.com/_/nextcloud/
I use Docker-compose and even the examples there don’t have any socket access.
The all-in-one image apparently uses Traefik, which seems weird: why use an auto-configuring reverse proxy for an all-in-one image where you already know the lay of the land? Traefik requires access to the Docker socket for auto-configuration, but you can proxy the socket requests to limit access to only what it needs if you really want to use it.
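For comparison, the plain image’s compose example boils down to something like this (trimmed; passwords and the published port are placeholders), and there’s no docker.sock mount anywhere:

```yaml
services:
  db:
    image: mariadb:10.11
    restart: always
    command: --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW
    environment:
      - MYSQL_ROOT_PASSWORD=change-me
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=change-me
    volumes:
      - db:/var/lib/mysql

  app:
    image: nextcloud:apache
    restart: always
    ports:
      - 8080:80
    environment:
      - MYSQL_HOST=db
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=change-me
    volumes:
      - nextcloud:/var/www/html
    depends_on:
      - db

volumes:
  db:
  nextcloud:
```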
What I was looking at was the All in One, yes. I didn’t realize there was a separate maintained image, thank you! I’d much rather have a single image without access to the socket at all, I’ll give that a shot sometime.
One warning: in my experience, you cannot jump two major versions. It’s not just that it won’t work; if you try it, everything will break beyond repair and you’ll be restoring from a backup.
Two major versions can sometimes be a matter of a few months apart, so make sure you have a regular update schedule!
(Also, people say never update to an X.0 release; the first version of a major release often has major bugs.)
TL;DR don’t take too long to update to new releases, and don’t update too quickly!
Also, the docker image is often a day or so behind the new release, so Nextcloud tells you an update is available but you often then need to wait until the next day to get the updated docker image. I guess this is because (as I’ve just learnt) the image is built by Docker, not Nextcloud.
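One way to stay in control of that upgrade path is to pin the major version tag in compose and bump it one major at a time (version numbers here are just examples):

```yaml
  app:
    # upgrade 28 -> 29 -> 30, never skip a major; recreate the container at
    # each step and let Nextcloud finish its migration before the next bump
    image: nextcloud:29-apache
```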
Nothing outside the LAN. Just Tailscale installed on my Synology NAS, on HomeAssistant and on all my machines.