• 8 Posts
  • 40 Comments
Joined 1 year ago
Cake day: June 23rd, 2023

  • A few weeks late to pitch in now, but I can +1 docker-mailserver.

    It has almost everything included and the configuration files are quite straightforward and flexible enough that you can drop little edits into the individual services if you need to tweak something.

    My setup is very close to what you want: I use fetchmail to pull in from my old Gmail and Yahoo inboxes. I also have my own domain, so I configured the MX records so that email goes straight to my server, with a fallback to my email provider (any mail that doesn’t reach my mailserver directly still gets pulled into my inbox by fetchmail when it comes back online).
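
    For anyone copying the fallback idea, it's just two MX records with different priorities (the lower number is tried first). A minimal sketch using a placeholder domain:

      example.com.  IN MX 10 mail.example.com.      ; self-hosted docker-mailserver
      example.com.  IN MX 20 mx.provider.example.   ; provider fallback, drained later by fetchmail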

    Docker-mailserver lets you relay outgoing mail through your provider’s SMTP server. This is important because it means they handle all the reputation stuff so that your emails actually get delivered (and neither my home ISP nor my VPS provider allows sending over port 25 anyway).
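
    In docker-mailserver the relay is just a few environment variables on the container. A minimal compose sketch, with placeholder provider values:

      services:
        mailserver:
          image: ghcr.io/docker-mailserver/docker-mailserver:latest
          environment:
            - RELAY_HOST=smtp.provider.example   # provider's submission server (placeholder)
            - RELAY_PORT=587
            - RELAY_USER=me@example.com          # the paid-for username
            - RELAY_PASSWORD=changeme            # better sourced from a secret in practice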

    So when I need to connect a new client (like Thunderbird) to my email, I don’t need to configure anything manually: docker-mailserver serves all the autoconfig responses, so it’s really seamless. At the same time my risk is low, because even if my server is off, my provider will still receive anything on my behalf. I can only send using the username I pay my provider for, and switching between Gmail and Yahoo is not possible without rewriting configs and restarting services, but that’s not something I want anyway. On the receiving side I can have any number of aliased usernames that will all be received by my server (but only while it’s on, so I use them rarely and for disposable addresses).
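
    (If a setup doesn’t serve autoconfig, the standard fallback is RFC 6186 SRV records, which most clients also check. A sketch with a placeholder domain:)

      _imaps._tcp.example.com.      IN SRV 0 1 993 mail.example.com.
      _submission._tcp.example.com. IN SRV 0 1 587 mail.example.com.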

    Big downsides are:

    • Backups are now my problem, as I don’t keep duplicates anywhere else.
    • I route my traffic via a VPS+VPN to get a static public IP address. It was a headache to get all the little details just right, but it’s stable now (there’s a sketch of the port-forwarding part after this list).
    • I have to host my own webmail separately, although I’m mostly using my phone with K-9 Mail and Thunderbird (for webmail I use Roundcube).
    • Getting server-side filtering rules (Sieve) working was also annoying, and so far I still have to add new rules through Roundcube (there was a plugin for Thunderbird, but I don’t want to open the additional ports it requires). There’s a Sieve example after this list too.
    • !!! Spam !!! Docker-mailserver ships with great default Rspamd settings out of the box, so it’s actually fine, but now I have to manage all the additional rules, and it’s not super intuitive, especially because I’m doing all this just for myself (yay!) while the tools are clearly meant for managing a fleet of inboxes, so everything takes me longer to figure out.
    • Integrating contacts is not included and might be important for your experience (again, I was able to add this as a Roundcube plugin … eventually).
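
    On the VPS+VPN point: the general trick is a WireGuard tunnel from the home server to the VPS, with the VPS DNAT-ing the mail ports down the tunnel so its static IP fronts everything. A rough sketch of the VPS side (wg-quick config; the keys, interface names, and 10.8.0.x addresses are placeholders, and it assumes net.ipv4.ip_forward=1):

      [Interface]
      Address = 10.8.0.1/24
      ListenPort = 51820
      PrivateKey = <vps-private-key>
      # forward inbound mail ports from the public interface to the home peer
      PostUp = iptables -t nat -A PREROUTING -i eth0 -p tcp -m multiport --dports 25,143,465,587,993 -j DNAT --to-destination 10.8.0.2
      # masquerade so replies route back through the tunnel
      PostUp = iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
      PostDown = iptables -t nat -D PREROUTING -i eth0 -p tcp -m multiport --dports 25,143,465,587,993 -j DNAT --to-destination 10.8.0.2
      PostDown = iptables -t nat -D POSTROUTING -o wg0 -j MASQUERADE

      [Peer]
      PublicKey = <home-server-public-key>
      AllowedIPs = 10.8.0.2/32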
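
    And on the filtering point: server-side rules here are Dovecot Sieve scripts, so a “file this mailing list into its own folder” rule looks something like this (the header value and folder name are just examples):

      require ["fileinto"];

      # file list traffic into its own folder instead of the inbox
      if header :contains "list-id" "selfhosted" {
          fileinto "Lists/selfhosted";
      }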

    Most of my complaints stem from the fact that I’m not very good at this, but in the end it has been very satisfying to drop the occasional: “I host my own email BTW”

    Good luck! Let us know how you get along!



  • I’m trying to do a 3-2-1, but instead I’m doing a 4-3-0. The original is on an SSD, with scheduled backups to two separate HDDs, so I have 3 copies on two different media (if SSD + HDD counts as distinct enough). On top of that I added BD-R as an infrequent, manual 4th copy for my most irreplaceable data (and I’m very strict about what counts as irreplaceable, so the total is just over 100 GB at this point). Eventually I need to get a copy of the disks off-site, but for now they are in the basement.
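
    (For the scheduled part, a minimal sketch of what the jobs can look like, assuming rsync, a /data source, and the two HDDs mounted at /mnt/backup1 and /mnt/backup2: all hypothetical paths.)

      # /etc/cron.d/backups : nightly mirrors to each HDD
      0 2 * * *  root  rsync -a --delete /data/ /mnt/backup1/data/
      0 3 * * *  root  rsync -a --delete /data/ /mnt/backup2/data/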

    I have no illusions about how long the BD-Rs will last (estimates seem to range anywhere from 100 days to 100 years). My aim is just to have another copy that is distinct from magnetic or flash storage. My plan is to burn new, updated copies over time, so that any data on an old disk gets burned to a newer disk at some point. Maybe in ten years I’ll abandon this approach, but for now it makes me feel better.



  • I had a similar idea: could search engines be broken up and distributed, instead of being just a couple of monoliths?

    Reading the HN thread, the short answer is: NO.

    Still, it’s fun to imagine what it might look like, if only…

    I think the OP is looking for an answer to the problem of Google having a monopoly that gives them the power to make themselves impossible to challenge. The cost to replicate their search service is so astronomical that it’s basically impossible to replace them. Would the OP be satisfied if we could make cheaper components that all fit together into a competing but decentralized search service? Breaking down the technical problems is just the first step; the basic stages for me are:

    Crawling -> Indexing -> Storing/hosting the index -> Ranking

    All of them are expensive because the internet is massive! If each of these were isolated but still interoperable, we would get some interesting possibilities: you could have many smaller, specialized companies, each focusing on, say, a better ranking algorithm.

    • What if crawling was done by the owners of each website, who then submit the results to an index database of their choice? This flips the model around, so things like robots.txt might become less relevant. Bad actors and spammers, however, would no longer need SEO tricks to flood a database or misrepresent their actual content: they could just submit whatever they like! These concerns feed into the next step:
    • What if there were standard indexing functions, similar to how there are many standard hash functions? How a site is indexed plays an important role in how ranking will work (or not) later. You could have a handful of popular general-purpose index algorithms that most sites would produce and submit (e.g. keywords, images, podcasts, etc.), combined with many more domain-specific indexing algorithms (e.g. product listings, travel data, mapping, research). Also, if the functions were open standards, a browser could run the index function on the current page and compare the result to the submitted index listing, and warn users that the page they are viewing is probably either spam or misconfigured in some way that makes the index not match what was submitted (there’s a toy sketch of this after the list).
    • What if the stored indexes were hosted in a distributed way, similar to DNS? Sharing the database would lower individual costs. Companies with bigger budgets could replicate the database to provide their users with a faster service, while companies with fewer resources could use the publicly available indexes and still be competitive.
    • Enabling more competition between different ranking methods would hopefully reduce the effectiveness of SEO gaming (or maybe make it worse, as the same content gets repackaged for each and every index/rank combination). Ranking could even happen locally (probably not efficient at all, but the fact that it might be possible is quite a novel thought).
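
    To make the “standard indexing function” idea concrete, here is a toy sketch in Python: a deterministic keyword-index function that both the site owner (when submitting) and a browser (when verifying) could run, plus the digest comparison that would flag mismatches. Everything here is hypothetical and only illustrates the shape of the idea:

      import hashlib
      import re
      from collections import Counter

      def keyword_index(page_text: str, top_n: int = 20) -> dict:
          """A 'standard' index function: top keywords plus a digest of them."""
          words = re.findall(r"[a-z]{3,}", page_text.lower())
          top = sorted(Counter(words).most_common(top_n))
          digest = hashlib.sha256(repr(top).encode()).hexdigest()
          return {"keywords": dict(top), "digest": digest}

      def browser_verify(page_text: str, submitted_digest: str) -> bool:
          """What a browser could do: recompute the index and compare digests."""
          return keyword_index(page_text)["digest"] == submitted_digest

      # the site owner computes and submits the index ...
      page = "decentralized search engines for everyone who wants search"
      submitted = keyword_index(page)
      # ... and a browser later re-checks the live page against it
      assert browser_verify(page, submitted["digest"])
      assert not browser_verify("unrelated spam spam spam content", submitted["digest"])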

    Sigh, enough daydreaming already…



  • I should have prefaced my situation better: I live in a country where the ISP censors certain websites and online services, and the closest Linode is not on my continent (so the latency is noticeable). So my need to be connected to the WireGuard VPN really depends on what I’m doing. Having a split-DNS setup is seamless, and I only activate the VPN manually as needed (both at home and when I’m out). Otherwise I would have just asked my ISP for a static IP, opened some ports, and installed Tailscale for everything else.
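
    (For reference, the split-DNS part can be as small as one line in dnsmasq: answer queries for the self-hosted domain locally and let everything else resolve normally. A sketch with placeholder names and addresses:)

      # dnsmasq: point the self-hosted domain at the tunnel/LAN address,
      # while every other query goes to the usual upstream resolvers
      address=/home.example.com/10.8.0.2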



  • I recently made the switch to Vaultwarden after reading a series of articles making predictions about passkeys and how they are lining up to replace passwords. Bitwarden apparently is ready to implement whatever standard becomes most popular, and I had FOMO about being left behind if I stuck with KeePass only. Previously I was using various KeePass-compatible apps and syncing the KDBX database with my Nextcloud. (Vaultwarden is an unofficial self-hosted implementation of the Bitwarden server.)