Since Google is getting rid of my unlimited Gdrive and my home internet options are all capped at 20 megabits up, I have resorted to colocating the 125 terabyte Plex server currently sitting in my basement. Right now it is in a Fractal Define 7 XL, but I have ordered a Supermicro 826 2U chassis to swap everything over to.

This being my first time colocating, I’m not quite sure what to expect. I don’t believe I will have direct access since it is a shared cabinet. Currently it is running Unraid, but I’m considering switching to Proxmox and virtualizing TrueNAS. The facility’s remote hands service is quite expensive, so I’d like to have my server as ready to go as possible. I’m not even sure how my IP will be assigned: is DHCP common in data centers, or will I need to configure my IP addresses before dropping the server off?

If anyone has any lessons learned or best practices from colocating I would be really interested in hearing them.

  • themoonisacheese@sh.itjust.works · 1 year ago

    I’m sorry for the non-answers in advance, but here goes:

    If you won’t have easy access, consider server motherboards with KVM over IP capabilities. They really can get you far.
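
    For a sense of what that buys you once the box is racked, here’s the kind of thing a standard IPMI BMC lets you do remotely with ipmitool (host, user, and password below are placeholders; exact feature support varies by board):

    ```
    # Check and hard-reset the box even if the OS is completely wedged
    ipmitool -I lanplus -H 192.0.2.10 -U admin -P 'changeme' chassis power status
    ipmitool -I lanplus -H 192.0.2.10 -U admin -P 'changeme' chassis power cycle

    # Attach to the serial-over-LAN console to watch it boot
    ipmitool -I lanplus -H 192.0.2.10 -U admin -P 'changeme' sol activate
    ```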

    IP assignment is generally managed DHCP, but I have seen DCs that just tell you your IP, wish you good luck, and leave the rest to the honor system. Some of them will even let you announce your own IP blocks over BGP.
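
    If it does turn out to be static assignment, it’s the usual drill on the server side; on Linux it’s roughly this, with whatever address and gateway the DC hands you (the values below are documentation placeholders), made persistent via netplan/interfaces/NetworkManager as appropriate:

    ```
    ip addr add 203.0.113.10/29 dev eno1    # address/prefix assigned by the DC
    ip route add default via 203.0.113.9    # their gateway
    ```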

    Basically best practices boil down to:

    Data centers are businesses, and as a customer they should be answering your questions about their operating policies. If they aren’t, consider a different DC.

    Don’t be a dick to them, and don’t be a dick to your network neighbors.

    You’re no longer behind a home router with a firewall that has sensible rules, so it is now up to you to avoid getting pwned and footing the power bill. It is also up to you to avoid spamming out stray traffic.

    • Max-P@lemmy.max-p.me · 1 year ago

      Never colocated, but I did rent bare metal from OVH back when they didn’t have any KVM and all you could do was wipe/reinstall, reboot, or boot into a Debian recovery environment that was 2-3 releases old.

      Definitely seconding the KVM remote access part: you really, really want that, or at least some way to hard reset your server if it crashes. I can’t stress this enough. Even if you think you’ll never need it, you never know when you’ll have a kernel panic or need to do some boot troubleshooting, even just to run fsck. It’s absolutely nerve-wracking to reboot a server you have no way to access other than SSH, staring at that ping window for 2-5 minutes while the thing boots back up and wondering if it will come back online or not.

      If you don’t have IPMI and can’t have some sort of KVM for your server, I highly recommend putting at least a PiKVM or something similar in there so you can do remote troubleshooting. Ideally (if there’s no IPMI) I’d also set up some sort of preboot environment you know will reliably boot (maybe something living entirely in the initramfs) that brings up the network and listens for SSH for a couple of minutes before chainloading back into the main OS, so that you can at least turn off the firewall or reset the network to a known-good state. Anything that gives you remote access independently of your main OS.
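
      One common way to get that kind of early-boot SSH access on Debian/Ubuntu-based systems is dropbear-initramfs. It isn’t exactly the pause-then-chainload setup described above, but it gives you an SSH shell inside the initramfs (handy for unlocking encrypted disks or rescuing a bad boot). Rough sketch only; the package name and paths differ slightly between releases, and the IP values are placeholders:

      ```
      apt install dropbear-initramfs
      # Put your public key where the initramfs copy of dropbear looks for it
      cat ~/.ssh/id_ed25519.pub >> /etc/dropbear/initramfs/authorized_keys
      # Static IP for the initramfs: client:server:gateway:netmask:hostname:device:autoconf
      echo 'IP=203.0.113.10::203.0.113.9:255.255.255.248::eno1:off' >> /etc/initramfs-tools/initramfs.conf
      update-initramfs -u
      ```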


      At least I had access to the recovery environment from OVH, but even then, it took a full boot cycle to get into it, plus some more time for them to deliver the credentials by email (which had better not be hosted on that box itself), then change a config file and reboot again. Legit 10-15 minutes between each attempt, and little to no way of knowing what happened until you booted the recovery again. It was horrifying; can’t recommend.

      IPMI saved my ass a few times and I’m never getting another box without it.

      • themoonisacheese@sh.itjust.works · 1 year ago

        Tbh, I worked on a campus where we had totally free access to our bays in the local DC (like 5 minutes away by car); even in the dead of night we just had to make a call so we wouldn’t get stopped at the door. Even then, IPMI is still just so much more convenient than sitting on the floor with your laptop, a VGA screen, and a PS/2 keyboard among your tools, in a loud DC with mandatory earplugs and one eye on the nitrogen fire suppression that really has no reason to trigger, but could, and that is terrifying.

        Or you could have IPMI and be sitting at your desk with coffee, listening to music. Your choice really; I wonder why iLO licenses are so expensive :P

        • Notorious@lemmy.link (OP) · 1 year ago

          I have a spare Pi 4 sitting around the house that I could pretty cheaply turn into a PiKVM. There are some slick HATs that mount in a PCIe slot so I don’t end up with a Pi and a bunch of wires hanging loose in the chassis, so it looks like I’ll be going that route. Just need to figure out how to power it (they all seem to require external 5 V or PoE).

          • themoonisacheese@sh.itjust.works · 1 year ago

            Consumer motherboards have some USB ports with standby power at 2 A, or the power supply has a 5VSB rail as well; that’s where that standby power comes from.

      • Notorious@lemmy.link (OP) · 1 year ago

        Does the IPMI or KVM go on a private network of some sort? Surely you wouldn’t want to expose that to the internet.

        • themoonisacheese@sh.itjust.works · 1 year ago

          Usually you define a VLAN dedicated to your IPMI devices, only reachable through an access-controlled path (usually a VPN served by the firewall, but don’t do that if you’re virtualizing the firewall, for obvious reasons). The DC might offer a VPN of their own specifically for this purpose, or you can pay them for more space to install a physical firewall, but that’s a more significant investment.
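
          As a rough sketch of that on a Linux firewall (the interface names here are made up: wg0 is the VPN, vlan40 is the IPMI VLAN), the whole policy is basically “VPN in, nothing else”:

          ```
          iptables -A FORWARD -i wg0 -o vlan40 -j ACCEPT    # VPN clients may reach the IPMI VLAN
          iptables -A FORWARD -o vlan40 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
          iptables -A FORWARD -o vlan40 -j DROP             # everything else stays out
          ```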

          Ultimately, best practice says not to expose the IPMI to the internet, but if you really have no choice and your firmware is up to date, then you mainly need to fear 0-days and brute-force attacks; the login pages are usually pretty secure, since access is equivalent to physical access. You will attract a lot of parasite traffic probing for flaws, though.

        • Max-P@lemmy.max-p.me · 1 year ago

          Usually, yes. That’s something to discuss with the datacenter: see what they offer in that regard; some will give you a VPN to reach it. But I don’t have experience with that; my current servers came with IPMI, and I can download a Java thing from OVH to connect to it.

    • Notorious@lemmy.link (OP) · 1 year ago

      This is good info! I’ll follow up with the provider. Unfortunately, even though I live in a large city, of the two dozen or so places I contacted, only two would consider anything less than a half rack.

      consider server motherboards with KVM over IP capabilities

      I had not considered this. My plan was initially to just swap the consumer-grade stuff I have over to the 826 since it supports ATX, but now I’ll reconsider. Remote KVM has come in handy a few times with my dedicated servers over the years, so lacking that would suck pretty bad. I don’t know that I won’t have access, but several of the other providers stated on their websites that shared cabinets don’t come with physical access (which I honestly would prefer, since I’ll have several thousand dollars in hardware sitting in there).

      Data centers are businesses, and as a customer they should be answering your questions about their operating policies. If they aren’t, consider a different DC.

      Great point and I totally agree! Just didn’t want to walk in like a complete noob asking a bunch of dumb questions if I could prevent it.

      You’re no longer behind a home router with a firewall that has sensible rules, so it is now up to you to avoid getting pwned and footing the power bill. It is also up to you to avoid spamming out stray traffic.

      Thankfully I’ve got quite a bit of experience hardening servers exposed directly to the internet. *knocks on wood* So far I’ve managed to not get pwned by turning on automatic security updates, keeping open ports limited to SSH (with password and root login disabled), and reverse proxying everything. If I need access to something that doesn’t need to be exposed, I just port-forward it through SSH.
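
      For anyone following along, that mostly boils down to a couple of sshd_config lines plus tunneling anything private over SSH instead of opening more ports (the host and port below are placeholders):

      ```
      # /etc/ssh/sshd_config
      PasswordAuthentication no
      PermitRootLogin no

      # Reach an admin UI bound only to 127.0.0.1:8443 on the server through an SSH tunnel,
      # then browse to https://localhost:8443 locally
      ssh -L 8443:127.0.0.1:8443 user@203.0.113.10
      ```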

  • ZeroCooler@lemmy.world · 1 year ago

    This really depends on the services you’re paying for from the colo.

    Assuming they offer internet services, you can probably choose between a static or dynamic IP for your WAN IP. For your internal network, you would be responsible for DHCP or static assignment.

    You’ll also need a security device like a firewall or router that can perform NAT for your internal addresses.
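
    If you end up rolling that part yourself on a Linux router instead of a firewall appliance, the NAT piece is essentially one masquerade rule (the interface and subnet here are placeholders):

    ```
    # RFC 1918 LAN hidden behind the colo-assigned public IP on eth0
    iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -o eth0 -j MASQUERADE
    ```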

    This info assumes a lot; I’m not sure whether you’re paying for a service that includes the WAN networking component, or just for power and real estate.

    Happy to help with any more info if you have specific questions.

    Also, you should be able to physically access your gear yourself so you’re not paying for smart hands. I would ask the colo if their access hours are anything other than 24/7.

    • Notorious@lemmy.link (OP) · 1 year ago

      You’ll also need a security device like a firewall or router

      This is one of the major reasons I’m moving to Proxmox. I’m going to virtualize OPNsense or pfSense and put everything behind that. I guess I should have said that I’ve hosted multiple dedicated servers over the decades, so from a security standpoint I’m pretty familiar with this. Really just trying to focus on the hardware side, since this is the first time I will actually be responsible for managing and maintaining the hardware.
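
      For what it’s worth, the Proxmox side of that is roughly a VM with two virtio NICs, one on the WAN bridge and one on the LAN bridge; something like the sketch below, where the VM ID, bridge names, storage, and ISO filename are all assumptions for your setup:

      ```
      qm create 101 --name opnsense --memory 4096 --cores 2 \
        --net0 virtio,bridge=vmbr0 \
        --net1 virtio,bridge=vmbr1 \
        --scsi0 local-lvm:32 --scsihw virtio-scsi-pci \
        --cdrom local:iso/opnsense.iso --ostype other
      qm start 101
      ```

      The colo uplink then lands on vmbr0 and everything else hangs off vmbr1 behind the firewall VM.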

      • ZeroCooler@lemmy.world · 1 year ago

        Ok, cool. If you’re just paying for the rack space and power, make sure you know what the rates are for going over your power allotment (and bandwidth, if it includes burst; some ISPs might still charge extra if you burst above the bandwidth you’re paying for). Confirm whether you’ll have access to 120 V, 240 V, or both, and what power cables you’ll need for your PDU or servers if they’re providing the PDU.

  • Brkdncr@kbin.social · 1 year ago

    You want to use as little space as possible to save on cost.

    A server with IPMI is ideal.

    A hardware VPN firewall is a good idea.

    Do you need to provide your own router or switch?