  • I just built a NAS not too long ago, so I’ll just say what I would have changed in my build. Maybe it will help you.

    1. Get a server you can manage out-of-band over something like IPMI. Look at ASRock Rack or Supermicro.
    2. Try to get something with plenty of PCIe lanes, or at least bifurcation support. That way you can expand and use more NVMe drives.
    3. Go with NVMe drives first if you can. How many you should start with depends partly on the filesystem you choose.
    4. If you go for 10GbE, skip copper Ethernet unless it’s onboard; go with an SFP+ card and switch instead. It runs cooler.
    5. Try to find something that takes RDIMMs. They’re cheaper than ECC UDIMMs and easier to find.
    6. Don’t forget a UPS. Protect your investment.

    One thing to watch out for: some of these server motherboards expect smaller, higher-RPM fans, which means you may have to fiddle to get the fan curve right with normal fans.

    If using something like ZFS, you may want to start with a bigger RAIDZ vdev. Otherwise you can do mirrored vdevs and combine them, but that can get a bit costly, since you need two drives every time you want to expand and you won’t get as much usable space as something like RAIDZ2 (two parity drives).
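
    Roughly what the two layouts look like as zpool commands (pool name and device paths here are made up; check them against your own drives first):

      # Mirrored vdevs: start with one pair, grow two drives at a time
      zpool create tank mirror /dev/nvme0n1 /dev/nvme1n1
      zpool add tank mirror /dev/nvme2n1 /dev/nvme3n1

      # RAIDZ2: start bigger, six drives with two of them parity
      zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

    Four drives in mirrors gives you roughly half the raw space; six drives in RAIDZ2 gives you roughly two thirds.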

    NVMe drives run cooler, last longer, and are faster. They also take up way less space.

    Make sure you have backups. You could probably use your Synology for that, or some old computer parts you have lying around. If you do use something like TrueNAS, it makes backups very easy. The backup server can use platter drives.

    16GB of memory is not enough. If you want to run workloads on there, and especially if you use something like TrueNAS, you want as much memory as you can get for caching.
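
    On a Linux box with OpenZFS you can see how much RAM the ARC (ZFS’s read cache) is actually using, and cap it if you need to (the 16 GiB figure is just an example):

      # Current ARC size and ceiling, in bytes
      awk '$1 == "size" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats

      # Optionally cap the ARC at 16 GiB
      echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max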

    Also, just buy used. You can find cheap servers online, or just the parts, on eBay or even AliExpress.

    Good luck!


  • There is no one answer that fits all. Where cloud will always be cheaper is data storage.

    If you were to host everything on-prem, that would be a lot of capex, and it would cost money to maintain as well. For on-prem, you have to think more about electricity, redundancy, backups, security, and so on: everything you would need to build out a data center. Once you have it set up, though, yes, it would be cheaper.

    For tech companies, this is already a non-starter, as they want to scale and scale fast. They also can’t just spend all their investors’ money, so they convert capex into opex instead.

    Also, historically, IT is slow. Very slow. This is why DevOps exists: developers became increasingly frustrated with how slowly infrastructure was provisioned for them. To fix this you could hire more people, but again, that’s an extra expense you can now offset to the cloud instead.

    With cloud you can set up something in multiple data centers within minutes. On-prem, you would need multiple physical locations of your own.
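
    To make “minutes” concrete, here’s a rough Terraform sketch of the same instance stood up in two AWS regions at once (the AMI IDs and names are placeholders):

      provider "aws" {
        alias  = "us"
        region = "us-east-1"
      }

      provider "aws" {
        alias  = "eu"
        region = "eu-west-1"
      }

      resource "aws_instance" "app_us" {
        provider      = aws.us
        ami           = "ami-00000000000000000"  # placeholder
        instance_type = "t3.micro"
      }

      resource "aws_instance" "app_eu" {
        provider      = aws.eu
        ami           = "ami-11111111111111111"  # placeholder
        instance_type = "t3.micro"
      }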

    Another option is to rent space in a data center (colocation). Then you just buy your own hardware and don’t have to worry about 80% of what goes into building a data center. You would still need to set up those systems in a way that can scale for future use, which means more capex up front.

    At the end of the day, there is no one-size-fits-all. As you mentioned, most businesses could benefit in the long run by hosting their own stuff. I will say, though, that managing something like your own email server has become a nightmare; that is just a lot easier to let someone else manage. The same goes for data storage, which is easier and cheaper to host in the cloud with something like Google Workspace or M365.

    To put it another way: go to your boss and say you need $2,000,000 up front for IT hardware, or $250,000 a year for the same services in the cloud. The cloud bill wouldn’t catch up to that hardware cost for eight years, so what do you think they will go with?

    I do hate that it’s come to this, though, because I feel like people are losing knowledge. Soon only the people who build data centers will have that IT knowledge, and you have people who can no longer tinker like we used to.


  • This is not coming from me, btw.

    VAERS is not designed to assess cause and effect, VAERS reports alone cannot be used to determine if a vaccine caused or contributed to an adverse event (AE) or illness. Some reports may contain information that is incomplete, inaccurate, coincidental, or unverifiable. Most reports to VAERS are voluntary, which means they are subject to bias. Data from VAERS reports should be interpreted with these limitations in mind.

    VAERS reporting can be done by anyone: Vaccine providers, other health care professionals, local public health agencies, vaccine recipients, patients, and/or family members of people who have received a vaccine are encouraged to submit a VAERS report when an adverse event occurs after vaccination.

    Seems pretty clear to me. I’m not sure how you can sit there and say this is a reliable data source. It may help detect certain issues and patterns, but it’s not something you can call a source of truth. Someone would need to further investigate each report to make a better determination.

    You say it’s against the law to make false statements, but that doesn’t matter when there are no controls and the statements can be made with complete bias. It’s not that people are lying; they believe what they are saying, but that doesn’t mean the information is actually true.

    There are plenty of other studies on vaccines that show the risks you mentioned, but they also show that the benefits far outweigh the downsides. We are talking about facts and science; I don’t understand how politics has anything to do with this.

    I honestly don’t understand what you are even arguing for. You just sound like an old guy yelling at the kids to get off his lawn.


  • In the tech world, some companies want you to do a take-home project. They say things like “this will only take an hour,” yet it normally takes 4+ hours for most people. Anyway, I finally got an interview like this, and I will not work for a company that pulls this crap.

    I decided to go through with it and just post it online for anyone to see. I did the project, sent them the GitHub URL, and did everything exactly as they asked. They responded with: can you make that a private repo and give permissions to the people who need it? I told them I would rather keep it public and that I wouldn’t be doing that. I never signed an NDA, so there was no way for them to force me to take it down.

    Fuck all these places that do this type of shit.


  • I have a loan right now at about 3%. I could pay it off now if I wanted to, but the rate is so low that I can easily make more money by parking that cash somewhere it collects interest. It would be kind of nice to get a new car, but with where this country is headed, it’s not worth it. I feel bad for the younger generations and what lies ahead for them.
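
    (Made-up numbers to illustrate the math: a $20,000 balance at 3% costs about $600 a year in interest, while $20,000 sitting in a ~5% high-yield savings account earns about $1,000, so keeping the loan nets you roughly $400 a year.)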


  • You will get different answers. Some people like Proxmox with ZFS, where you can run VMs and LXC containers pretty easily. Some people like running everything in containers with Podman or Docker. Some people like to raw dog it and install everything on bare metal (I don’t recommend this approach, though).

    My current setup is three servers: one for compute, which is where I run all my services; one for storage; and one for backup storage.

    The compute server is set up with an NFS share that connects to the storage server. They all have 10GbE NICs on a 10GbE switch.
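
    If you haven’t set up NFS before, it’s roughly this (pool path, hostname, and subnet are made up; adjust them to your network):

      # On the storage server, in /etc/exports:
      /tank/appdata 10.0.0.0/24(rw,sync,no_subtree_check)

      # Apply the export, then mount it from the compute server:
      exportfs -ra
      mount -t nfs storage.lan:/tank/appdata /mnt/appdata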

    If I could go back and redo this setup, I would make a few changes. I do have a few NVMe drives in my storage server backing the NFS share. That share holds the compute server’s user home directories, as well as the persistent files for the containers that have volumes, which makes it easy for me to back that data up to the other server as well.

    With that said, I kinda wish I had gone with less platter storage and built the server out mostly with NVMe drives. My mobo doesn’t do bifurcation on its x16 slots, so I can only get one NVMe per slot, which is a waste. NVMe drives can run somewhat hot, but they’re smaller and easier to cool than platters. Plus a rebuild is faster if something happens, so you could probably get away with a single parity drive.

    I would still need a few big drives for my media, but that data is not as critical to me if I were to lose something there.

    What I would look for in a storage system is the following:

    • mobo that takes RDIMM memory
    • PCIe slots with bifurcation so you can add adapter cards for NVMe drives, or lots of NVMe slots on the mobo
    • if doing 10GbE, SFP+ NICs and an SFP+ switch (runs cooler); then you just get DAC/SFP cables instead of Cat6/6a
    • a management port (IPMI)
    • as much memory as you can afford

    With those requirements in mind, something like an ASRock Rack server motherboard with an AMD EPYC would normally fit the bill. I have seen bundles go for about $600–700 on AliExpress.

    As far as the OS: I treat the storage server as an appliance and run TrueNAS on it. That’s also the reason I have a separate compute server, as it makes it easier to manage services the way I want without trying to hack at the TrueNAS box. It also makes replication to my backup easy, since that box is TrueNAS too. I take snapshots every hour and those get backed up, and I also push a cloud backup of critical data every hour.

    Last but not least, I have a VPS so I can access my services from the internet. It uses a WireGuard tunnel and forwards traffic from the VPS to the compute server.
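
    The rough shape of that setup, if you want to replicate it (keys, IPs, and the forwarded port are all placeholders, and the forwarding rules assume iptables on the VPS):

      # /etc/wireguard/wg0.conf on the VPS
      [Interface]
      Address = 10.8.0.1/24
      ListenPort = 51820
      PrivateKey = <vps-private-key>

      [Peer]
      # the compute server at home
      PublicKey = <compute-server-public-key>
      AllowedIPs = 10.8.0.2/32

      # Forward inbound HTTPS from the VPS to the compute server over the tunnel
      iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 10.8.0.2:443
      iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE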

    For the compute server, I manage almost everything with Saltbox, which uses Ansible and Docker containers for most services.
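
    If you’d rather hand-roll it instead of using Saltbox, a single service looks something like this (an illustrative compose file, not Saltbox’s actual output; the config volume lives on the NFS share so it gets snapshotted and replicated):

      # docker-compose.yml
      services:
        jellyfin:
          image: jellyfin/jellyfin:latest
          restart: unless-stopped
          ports:
            - "8096:8096"
          volumes:
            - /mnt/appdata/jellyfin:/config
            - /mnt/media:/media:ro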

    No matter what you choose, I highly recommend ZFS for your data. Good luck!


  • I decided instead to use ZFS. It gives better protection than just letting something sit there. Your backups are only as good as your restores, so if you are not testing your restores, those backups may be useless anyway.

    ZFS with snapshots, replicated to another ZFS box. The replicated copy keeps the snapshots too, and they are read-only. I have snapshots running every hour.
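
    Outside of a NAS UI, the moving parts on plain ZFS look roughly like this (pool, dataset, and hostname are made up):

      # Take an hourly snapshot (usually driven by cron or a tool like sanoid)
      zfs snapshot tank/data@hourly-2024-01-01_0000

      # Send it incrementally to the backup box
      zfs send -i tank/data@hourly-prev tank/data@hourly-2024-01-01_0000 | \
          ssh backup.lan zfs recv backuppool/data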

    I have full confidence that my data is safe and recoverable.

    With that said, you could always use M-DISC.


  • Any reason why that board? I’m not 100% sure what you are trying to do, but it seems like an expensive board for a home NAS; I feel like you could get more value out of other hardware. Again, you don’t need a RAID controller these days. They are a pain to deal with and provide less protection than modern software RAID. It looks like the x16 slot on that board can be split 8/8, so if needed you can add an adapter card to add two NVMe drives.

    You can also just get an HBA card and hang a bunch of drives off that if you need more drive ports.

    I would recommend doing a bit more research on hardware and trying to figure out what you need ahead of time. Something like an ASRock Rack motherboard might be better in this case. The EPYC CPU is fine, but maybe get something that takes RDIMM memory. I would just make sure it has a management port, like the IPMI on the Supermicro.