Half of these exist because I was bored once.

The Windows 10 and macOS ones have GPU passthrough enabled and are what I occasionally use when I have to run a Windows or Mac application. Windows 7 also has GPU passthrough, but that one is more a nostalgia thing than anything.

I think my PopOS VM was originally installed for fun, but I’ve used it, along with my Arch Linux, Debian 12, Debian Testing (I run Testing on the host, but I wanted a fresh environment and was too lazy to spin up a Docker container or chroot), Ubuntu 23.10, and Fedora VMs, to test various software builds and bugs, since I don’t like touching regular Ubuntu unless I must.

The Windows Server 2022 one I recently spun up to mess with Windows Docker containers (I have to port an app to Windows and was looking at those for CI). That all became moot when I found out GitHub’s CI doesn’t support Windows Docker containers despite supporting Windows runners (the organization I’m doing the port for uses GitHub, so I have to use it).

  • olympicyes@lemmy.world · 55 minutes ago

    I have about that many. Looks good to me! I have two Windows VMs: one for work and presentations, one for games and Adobe. There’s a bunch of random Linux VMs from trying to get a FireWire card to work, and a Windows 7 VM for the same reason. I’ve also got several Linux VMs for trying out new versions of Fedora, Ubuntu, or Debian, plus a couple of servers. Almost none of them are ever turned on, because my real virtualized workloads run in Docker or LXC! I never could get a Mac VM to work, but I have an AMD CPU and a MacBook, so it’s not too high a priority.

  • Raccoonn@lemmy.ml · 5 hours ago

    GPU passthrough has always been one of those exciting ideas I’d love to dive into one day. My current GPU, being a little older, has only 4GB of VRAM. Oh, the joys of being a budget PC user. Thankfully it’s more of a “would be nice” than an “actually need”…

    • olympicyes@lemmy.world · 47 minutes ago

      Very few people need it, but it’s awesome, a lot of fun, and lets you spend more time in Linux rather than dealing with Windows. The VFIO subreddit and the Arch wiki are great resources. I have GPU, USB, and Ethernet passthrough on my Ubuntu machine and it works great, but I needed the Arch wiki to really figure out what I was doing wrong when I first set it up. Level1Techs is also a good resource, on YouTube and their forums, because they are big into VFIO and SR-IOV.

      Next time you get a PC, look for more PCIe lanes and bifurcation support on your motherboard. Gen 4 is a great option because it generally has enough lanes, and the RAM and SSDs are much cheaper than Gen 5. GPU choice doesn’t matter much, but if you’ve got AMD, watch out for the reset bug: you can start a VM, but once you quit it, the card’s state is unavailable for further use (e.g. a second VM session, or reopening your DE if you’re using a single-GPU setup) unless you restart the host. There are some workarounds, but personally I’d avoid it if possible. Onboard graphics (Intel Iris or an AMD APU) are recommended. Older hardware gets cheap, so good luck saving up if this is something you want to do!
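      If you want to see where you stand, the usual first sanity check before attempting passthrough is listing your IOMMU groups (the GPU you want to pass through should sit in its own group, or share it only with its own audio function). Here’s a rough Python rendition of the shell script the Arch wiki suggests for this; it’s just a sketch, and it assumes lspci is installed:

      ```python
      #!/usr/bin/env python3
      # List each IOMMU group and the PCI devices in it. The GPU you want
      # to pass through should not share a group with unrelated devices.
      from pathlib import Path
      import subprocess

      groups = Path("/sys/kernel/iommu_groups")
      if not groups.exists():
          raise SystemExit("No IOMMU groups: enable VT-d/AMD-Vi in firmware and the kernel.")

      for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
          for dev in sorted((group / "devices").iterdir()):
              # lspci -nns prints the device description plus vendor:device IDs
              desc = subprocess.run(
                  ["lspci", "-nns", dev.name],
                  capture_output=True, text=True,
              ).stdout.strip()
              print(f"IOMMU group {group.name}: {desc}")
      ```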

  • billwashere@lemmy.world · 5 hours ago

    Well, I do, but I have a machine with 3/4 of a terabyte of memory in it.

    Work scraps are great sometimes.

    How are you running the macOS VMs? The machine I have is a cheese grater, so that makes it easier.

    • olympicyes@lemmy.world · 45 minutes ago

      Are you running macOS or Linux as your host? My MacBook is an M1, and I found the performance running ARM Windows and ARM Fedora via UTM (QEMU) to be pretty good.

      • billwashere@lemmy.world · 31 minutes ago

        On the cheese grater (2019 Mac Pro) it’s a little convoluted. During COVID times it was my single-box lab, since it had so much memory (768GB). So I was running nested ESXi hosts and then VMs under those. I also have an M1 MacBook Pro where I had Parallels running ARM VMs (mostly macOS, Windows, and a couple of Debian installs, I think).

        I’ve been looking at VMware alternatives at work, so I’ve been playing around with the hypervisors.

        I do this stuff for a living, but I also do it at home for fun and profit. Ok, not so much profit. Ok, no profit, but definitely for the fun. And because I love large electric bills.

        • olympicyes@lemmy.world · 19 minutes ago (edited)

          That’s a beast of a Mac. Wake-on-LAN is your friend; I have the same problem with my Threadripper. I wrote a script that issues a WOL command to start or unsuspend my Ubuntu machine, so I can turn it off when it’s not in use. It’s probably a $70/month difference for me. Most of my virtualization is on Linux, but I’ve moved away from VMware because QEMU/KVM has worked so well for me. You should check out UTM on the Mac App Store and see if that solves any of your problems.

          ETA: https://mac.getutm.app/
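          For the curious, a WOL script like the one mentioned above can be just a few lines; here’s a minimal sketch in Python (the MAC address is a placeholder for the target machine’s NIC):

          ```python
          import socket

          def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
              """Send a Wake-on-LAN magic packet: six 0xFF bytes followed
              by the target MAC address repeated 16 times, via UDP broadcast."""
              mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
              if len(mac_bytes) != 6:
                  raise ValueError(f"invalid MAC address: {mac}")
              packet = b"\xff" * 6 + mac_bytes * 16
              with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
                  sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
                  sock.sendto(packet, (broadcast, port))

          # Placeholder MAC -- substitute the address of the machine to wake.
          send_wol("aa:bb:cc:dd:ee:ff")
          ```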

      • billwashere@lemmy.world · 1 hour ago

        Ok, I’ll have to try this. The weird thing is my little test Proxmox server is a 2013 trashcan, so this would be like a Hackintosh running on Mac hardware. Would that technically be a Hackintosh? I’m not really sure. According to the Apple license you can virtualize macOS if it’s running on Mac hardware; I’m not sure if that requires macOS as the hypervisor. Regardless, this is not something I knew about. Very cool, thanks for the info.

  • veroxii@aussie.zone · 10 hours ago

    Not VMs, but I have way more Docker containers. I run most things as containers, which keeps the base OS nice and clean and free from dependency hell.

  • wulf@lemmy.world · 9 hours ago

    I run a different LXC on Proxmox for every service, so it’s a bunch. There’s probably a better way to do it, since most of those just run a Docker container inside them.

    • WasPentalive@lemmy.one · 9 hours ago

      Why mix Docker and VMs? Isn’t Docker sort of like a VM, an application-level VM maybe? (I obviously do not understand Docker well.)

      • olympicyes@lemmy.world · 39 minutes ago

        I have a real use case! I have commercial server software that can run on Ubuntu or on RHEL-compatible distributions. My entire environment is Ubuntu. They also allow the server software to run in a Docker container, but the container must be running RHEL. Furthermore, their license terms require me to build the Docker container myself in order to accept the EULA, and the Docker image must be built on RHEL! So I have an LXC container running Rocky Linux with Docker installed, for the purpose of building RHEL 8-based Docker images. It’s a total mess, but it works! You do have to configure nested security, because this doesn’t work by default (a minimal sketch of that step follows the link below).

        Instructions here: https://ubuntu.com/tutorials/how-to-run-docker-inside-lxd-containers#1-overview
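        As a rough sketch of that nesting step, here it is in Python, wrapping the LXD CLI from the linked tutorial (the container name docker-builder is a placeholder):

        ```python
        import subprocess

        CONTAINER = "docker-builder"  # placeholder name for the LXD container

        # Docker-in-LXD needs nesting enabled; it is off by default.
        subprocess.run(
            ["lxc", "config", "set", CONTAINER, "security.nesting", "true"],
            check=True,
        )
        # Restart the container so the changed config takes effect.
        subprocess.run(["lxc", "restart", CONTAINER], check=True)
        ```

        (On Proxmox, the equivalent is enabling the container’s nesting feature flag.)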

      • Kovukono@pawb.social · 7 hours ago

        Serious answer: I’m not sure why someone would run a VM just to run a single container inside it, aside from the VM providing volumes (directories) to the container. That said, VMs are perfectly capable of running containers, and of running multiple containers without issue. For work, our GitLab instance has runners that are VMs that just run containers.

        Fun answer: have you heard of Docker in Docker?

      • lazynooblet@lazysoci.al · 8 hours ago

        I like to run a hypervisor host as just that: a hypervisor host. The host being stable is important, and running nothing else on it also reduces the attack surface.

        An LXC per service is somewhat overkill; a single Docker host running in one LXC could likely run all of the Docker containers.

        • olympicyes@lemmy.world · 37 minutes ago

          I mentioned it above, and not to spam, but there can be a use case that requires a different host distribution. Network isolation might be another reason. For 90% of use cases, you’re correct.