Just some Internet guy

He/him/them 🏳️‍🌈

  • 0 Posts
  • 31 Comments
Joined 2 years ago
Cake day: June 25th, 2023

  • The main issue you’ll run into is more niche proprietary software being hard to install, but that’s what containers are for (see the sketch below). The main one I see is needing to install some proprietary VPN client, which gets annoying, but since you’ll be running a VM anyway you can do some network trickery. My work’s antivirus only works on Ubuntu and RHEL (proprietary kernel modules), so it’s got to be at least one of those kernels.

    Linux is Linux, nothing’s impossible to solve even with Bazzite’s immutability. Worst comes to worst, you make your own images, and it’s not that hard: you basically just fork it on GitHub and let the CI do its thing.

    But do you have time to fiddle with it to make it work and take the risk, or do you want to play it safe? How confident are you with Bazzite’s more advanced topics?
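
    For the proprietary-software case, the usual tool on Bazzite-type images is a distrobox container. A rough sketch, where the image and package names are just placeholders for whatever the vendor ships:

    distrobox create --name work --image registry.access.redhat.com/ubi9/ubi
    distrobox enter work
    # inside the container, use its native package manager as usual
    sudo dnf install ./vendor-vpn-client.rpm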


  • Max-P@lemmy.max-p.me to Linux@lemmy.ml · *Permanently Deleted* · edited 3 months ago

    In that specific context I was still thinking about how you need to run mysql_upgrade after an update, not the regular post-upgrade scripts. And Arch does keep those relatively simple. As I said, Arch won’t restart your database for you, and it also won’t run mysql_upgrade, because it doesn’t preconfigure a user for itself to do that. It doesn’t initialize /var/lib/mysql for you upon installation either. Arch only does maintenance tasks like rebuilding your font cache, creating system users, and reloading systemd. And if those scripts fail, it just moves on; it’s your job to read the log and fix it. It doesn’t fail the package installation, it just tells you to go figure it out yourself.

    Debian distros will bounce your database and run the upgrade script for you, and if you use unattended upgrades it’ll even randomly bounce it in the middle of the night because it pulled a critical security update that probably doesn’t apply to you anyway. It’ll bail out mid dist-upgrade and leave you completely fucked, because it couldn’t restart a fucking database. It’s infuriating. I’ve even managed to get apt to be incapable of deleting a package (or reinstalling it) because it wanted to run a pre-remove script that I had corrupted in a crash. Apt completely hosed, dpkg completely hosed, it was a pain in the ass.

    With the Arch philosophy I still need to fix my database, but at least the rest of my system gets updated perfectly and I can still use pacman to install the tools I need to fix the damn database. I have all those issues with Debian because apt tries to do way too fucking much for its own good.

    The Arch philosophy works. I can have that automated if I ask for it and set up a hook for it.
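
    For reference, that kind of hook is just a file dropped in /etc/pacman.d/hooks/. A rough sketch, assuming MariaDB and that root can log into the database over the unix socket (the filename and exact commands are whatever you want them to be):

    [Trigger]
    Operation = Upgrade
    Type = Package
    Target = mariadb

    [Action]
    Description = Restarting MariaDB and running mariadb-upgrade
    When = PostTransaction
    Exec = /bin/sh -c 'systemctl restart mariadb && mariadb-upgrade'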


  • Pacman just does a lot less work than apt, which keeps things simpler and more straightforward.

    Pacman is as close as it gets to just untar’ing the package to your system. It does have some install scripts but they do the bare minimum needed.

    Comparatively, Debian does a whole lot more under the hood. It’s got a whole configuration management layer that generates config files and such, all of which can go wrong, especially if you’ve overwritten them. Debian just assumes apt can log into your MySQL database, for example, to update your tables after updating MySQL. If any of it goes wrong, the package is considered to have failed to install and you get stuck in a weird dependency hell. Pacman does nothing and assumes nothing; its only job is to put the files in the right place. If you want it to start, you start it. If you want to run post-upgrade tasks, you’ve got to do it yourself.

    Thus you can yank an Arch system 5 years into the future, and if your configs are still valid or default, it just works. It’s technically doable with apt too, but it’s so much more fragile. My Debian updates always fail because NGINX isn’t happy, Apache isn’t happy, MySQL isn’t happy, and that just results in apt getting real unhappy and stuck. And AFAIK there’s no easy way to gaslight it into thinking the package installed fine either.
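
    The least-bad escape hatch I know of is neutering the broken maintainer script by hand, something like this, with <pkg> standing in for whatever package is stuck:

    # dpkg keeps the maintainer scripts here
    ls /var/lib/dpkg/info/<pkg>.*
    # replace the broken prerm with a no-op so dpkg can get unstuck
    printf '#!/bin/sh\nexit 0\n' | sudo tee /var/lib/dpkg/info/<pkg>.prerm
    sudo dpkg --remove <pkg>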



  • I also wanted to put an emphasis on how working with virtual disks is very much the same as working with real ones. The same well-known utilities for copying partitions work perfectly fine. Same cgdisk/parted and dd dance as you’d do otherwise.

    Technically, if you install the arch-install-scripts package on your host, you can even install Arch Linux into a VM exactly as if you were in the archiso, with the comfort of your desktop environment and browser. Straight up pacstrap it directly into the virtual disk (rough sketch at the end of this comment).

    Even crazier: NBD (Network Block Device) is generic, so it’s not even limited to disk images. You can forward a whole-ass drive from another computer over WiFi and do what you need on it, even pass it to a VM and boot it up.

    With enough fuckery you could even wrap the partition in a fake partition table, boot the VM off the actual partition, and make it bootable by both the host and the VM at the same time.
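
    The pacstrap-into-an-image idea looks roughly like this; the image name, sizes and filesystems are just placeholders:

    sudo modprobe nbd
    qemu-img create -f qcow2 arch.qcow2 20G
    sudo qemu-nbd -c /dev/nbd0 arch.qcow2
    sudo parted /dev/nbd0 -- mklabel gpt mkpart ESP fat32 1MiB 513MiB set 1 esp on mkpart root ext4 513MiB 100%
    sudo mkfs.fat -F32 /dev/nbd0p1 && sudo mkfs.ext4 /dev/nbd0p2
    sudo mount /dev/nbd0p2 /mnt && sudo mkdir -p /mnt/boot && sudo mount /dev/nbd0p1 /mnt/boot
    sudo pacstrap /mnt base linux
    genfstab -U /mnt | sudo tee -a /mnt/etc/fstab
    # arch-chroot /mnt to finish setup (bootloader etc.), then qemu-nbd -d /dev/nbd0 when done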


  • What you’re trying to do is called a P2V (physical-to-virtual) migration. You want to copy the partition directly, because going through a file share via Linux will definitely strip some metadata Windows wants on those files.

    First, make a disk image that’s big enough to hold the whole partition and 1-2 GB extra for the ESP:

    qemu-img create -f qcow2 YourDiskImageName.qcow2 300G
    

    Then you can make the image behave like a real disk using qemu-nbd:

    sudo modprobe nbd
    sudo qemu-nbd -c /dev/nbd0 YourDiskImageName.qcow2
    

    At this point, the disk image behaves like any other disk at /dev/nbd0.

    From there, create a partition table. You can use cgdisk or parted, or even the GParted GUI will work on it.
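
    A rough parted version, assuming roughly 1 GB for the ESP (Windows normally also wants a small MSR partition, so mirroring the source disk’s layout with cgdisk may be safer):

    sudo parted /dev/nbd0 -- mklabel gpt
    sudo parted /dev/nbd0 -- mkpart ESP fat32 1MiB 1GiB
    sudo parted /dev/nbd0 -- set 1 esp on
    sudo parted /dev/nbd0 -- mkpart windows ntfs 1GiB 100%
    sudo mkfs.fat -F32 /dev/nbd0p1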

    And finally, copy the partition over with dd:

    sudo dd if=/dev/sdb3 of=/dev/nbd0p2 bs=4M status=progress
    

    You can copy the ESP/boot partition over as well so the bootloader works.
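
    Same dd dance, assuming the ESP is /dev/sdb1 on the source disk:

    sudo dd if=/dev/sdb1 of=/dev/nbd0p1 bs=4M status=progress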

    Once you’re done with the disk image, disconnect it:

    sudo qemu-nbd -d /dev/nbd0
    

  • I think it counts. You always have the option of taking your data with you and going elsewhere, which is one of the main points of self-hosting: being in control of your data. If they jack up the prices or whatever, you just pack up; you never have to pay or else.

    Also, hosting an email server at home would be an absolute nightmare. It took me 10+ years to build that IP reputation and I’m holding on to it as long as I can.

    I have a mix of both: private services run at home, public ones run on a bare-metal server I rent. I still get the full benefits of having my own NextCloud and all. Ultimately, even at home I’d still be renting an Internet connection, unless it’s a local-only server.




  • It’s hard to give concrete advice without knowing the specs or the software you want to run on this, but for tiny Linux systems there’s Buildroot, which lets you compile just the bare minimum you need and not use a distro at all (unless you count Buildroot as a distro). It’s what OpenWRT uses to build all the router firmwares, among other things.
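
    The workflow looks roughly like this; the board defconfig is just an example, pick whatever matches your hardware:

    # grab Buildroot from buildroot.org, then in its source tree:
    make list-defconfigs            # boards supported out of the box
    make raspberrypi4_defconfig     # example target
    make menuconfig                 # pick packages, init system, your app
    make                            # firmware image ends up in output/images/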

    For something that would go in a car, that seems pretty ideal to me. Skip initializing things you won’t use, make something that boots to the GUI in 3 seconds. When you want to update the software, you flash a new firmware image; no on-device installing or anything.

    Depending on what you run, ideally you’d skip Xorg/Wayland and use the framebuffer directly. But if you need to run a more standard environment, that’s what things like Cage are designed for. Single app, always full screen. It’s called a kiosk environment.
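
    A minimal sketch of that kind of setup, where the app names are just placeholders:

    # launch a single full-screen app under the Cage Wayland compositor
    cage -- your-dashboard-app
    # or a browser-based dashboard in kiosk mode
    cage -- chromium --kiosk http://localhost:8080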