• 0 Posts
  • 24 Comments
Joined 1 year ago
Cake day: July 7th, 2023

  • exi@feddit.de to Selfhosted@lemmy.world · Selfhosted Monitoring Tools (+4/−1) · 1 year ago

    For a handful of servers, try Zabbix. Every distribution packages a Zabbix agent, and it has everything: a web UI, auto-discovery with a bit of setup, nice graphs, alerting, LDAP user management if you need it, and a way to define per-person/group alerting and notification schedules. The community is big enough that many common services (fail2ban, Postfix, MySQL, etc.) have premade monitoring scripts. Adding your own metrics is also very easy.
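To illustrate how easy custom metrics are: a metric is a single `UserParameter` line in the agent config. This is a minimal sketch; the `mysql.threads` key and the `mysqladmin` pipeline are hypothetical examples, not a shipped template:

```ini
# /etc/zabbix/zabbix_agentd.conf.d/custom.conf
# UserParameter=<item key>,<command whose stdout becomes the item value>
UserParameter=mysql.threads,mysqladmin status | awk '{print $4}'
```

After restarting the agent, you create an item with key `mysql.threads` in the web UI and Zabbix starts collecting it on the normal polling schedule.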





  • exi@feddit.de to Linux@lemmy.ml · What Filesystem? (+4/−2) · 1 year ago

    Not really. You can still layer dm-integrity under a normal RAID and get checksumming at near-normal performance, which is better and faster than using btrfs. (dm-verity is the read-only variant; for a writable array you want dm-integrity.)

    But in any case, I’d recommend just going with zfs because it has all the features and is plenty fast.


  • exi@feddit.de to Linux@lemmy.ml · What Filesystem? (+7/−1) · 1 year ago

    From the Arch wiki:

    > Disabling CoW in Btrfs also disables checksums. Btrfs will not be able to detect corrupted nodatacow files. When combined with RAID 1, power outages or other sources of corruption can cause the data to become out of sync.

    No thanks


  • exi@feddit.de to Linux@lemmy.ml · What Filesystem? (+11/−2) · 1 year ago

    If you are planning to run any kind of database with regular random writes, stay away from btrfs. It’s roughly 4-5x slower than zfs and will slowly fragment itself to death.

    I’m migrating a server from btrfs to zfs right now for this very reason. I have multiple large MySQL and SQLite tables on it and they have accumulated >100k file fragments each and have become abysmally slow. There are lots of benchmarks out there that show that zfs does not have this issue and even when both filesystems are clean, database performance is significantly higher on zfs.

    If you don’t want a CoW filesystem, then XFS on LVM RAID for databases, or ext4 on LVM for everything else, is probably fine.


  • Let’s do the math.

    Crude oil is about 85% carbon; dry wood is about 50% carbon. Average worldwide oil production over the last 20 years is about 73 million barrels per day, or 26.6 billion barrels per year.

    So in oil alone, we produced around 26.6 × 20 = 532 billion barrels. At a weight of 136 kg/barrel, that equals 72 billion tonnes of oil, and at 85% carbon, 61.5 billion tonnes of pure carbon.

    Storing that carbon as wood (only 50% carbon) would require 123 billion tonnes of wood. At an average density of 650 kg/m³ for oak, which grows reasonably fast, that equals 189 billion cubic meters of wood. That’s a solid, 1-meter-thick square slab of wood with an edge length of 434 km that we would need to STORE indefinitely to offset just the crude oil of the last 20 years. That’s 53% of the surface area of Germany in 1 meter of wood, and every year we’d additionally need to grow, harvest, and store enough wood to cover 46% of Wales in 1 meter of solid dry wood.

    Seems doable 🤣
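The arithmetic above can be sanity-checked in a few lines of Python. The inputs are the ones assumed in the comment (85% carbon in crude, 50% in dry wood, 136 kg/barrel, oak at 650 kg/m³), plus the standard land areas of Germany (~357,588 km²) and Wales (~20,779 km²):

```python
import math

# Assumptions taken from the comment above
BARRELS_PER_YEAR = 26.6e9      # ~73 million barrels/day
YEARS = 20
KG_PER_BARREL = 136
OIL_CARBON_FRACTION = 0.85
WOOD_CARBON_FRACTION = 0.50
WOOD_DENSITY = 650             # kg/m³, oak

oil_kg = BARRELS_PER_YEAR * YEARS * KG_PER_BARREL
carbon_kg = oil_kg * OIL_CARBON_FRACTION
wood_kg = carbon_kg / WOOD_CARBON_FRACTION
wood_m3 = wood_kg / WOOD_DENSITY

# Edge length (km) of a 1 m thick square slab holding all that wood
edge_km = math.sqrt(wood_m3) / 1000

# Area comparisons: 1 m thickness means wood_m3 m³ covers wood_m3 m² of land
germany_fraction = (wood_m3 / 1e6) / 357_588          # 20 years of oil
wales_fraction = (wood_m3 / YEARS / 1e6) / 20_779     # one year of oil

print(f"{edge_km:.0f} km edge, {germany_fraction:.0%} of Germany, "
      f"{wales_fraction:.0%} of Wales per year")
```

The numbers land where the comment says: a ~435 km square slab, about 53% of Germany in total, and about 46% of Wales of new wood every year.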