the f*ck rants about stuff

sysadmin

Latest posts related to:



  1. How to clone a server using just rsync

    In the past I needed more space on the server, so I had to upgrade it to a more expensive option, with no way of going back

    Now the basic server option is cheaper and is enough for me. Plus there were some Black Friday discounts :)

    So I decided to move the server with all my services to a cheaper option and save 75% of what I was spending, with more or less the same features

    Unfortunately, this is not supported by default and there's no one-button way to do it. Fortunately, it is very easy to do using Linux!

    People fighting over products in Black Friday fashion

    This is how I did it in 6 easy steps:

    Step 1

    • Reboot both machines using a live image and make sure a working SSH server is running on the target server
    • Mount the server disk on both servers on /mnt

    Step 2

    • rsync -AHXavP --numeric-ids --exclude='/mnt/dev' --exclude='/mnt/proc' --exclude='/mnt/sys' /mnt/ root@ip.dest.server:/mnt/

    Step 3

    • SSH into the target server. Bind-mount /proc, /dev and /sys into /mnt and chroot into it (see the sketch after this list)
    • grub-install /dev/sdb && update-grub
    • ack ip.orig.server /etc/ and change it where appropriate
    • reboot
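
    This is only a sketch of what Step 3 looks like in practice, assuming the cloned disk is mounted on /mnt and the target boot disk is /dev/sdb as above; adapt the names to your layout:

    # on the target server, still booted from the live image
    mount --bind /dev  /mnt/dev
    mount --bind /proc /mnt/proc
    mount --bind /sys  /mnt/sys
    chroot /mnt /bin/bash

    # now inside the chroot: reinstall the bootloader and hunt for the old IP
    grub-install /dev/sdb
    update-grub
    ack ip.orig.server /etc/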

    Step 4

    • Change DNS

    Step 5

    • ????

    Step 6

    • Profit!

    Conclusion

    A couple of hours to do the whole thing, including buying the new server, and everything seems to be working as if nothing happened. Copying directly from server to server helped with the downtime too. Ain't Linux wonderful?
    
  2. Get a nearly fresh Debian install without reinstalling

    I was recently asked: how do you get rid of old and unused packages without having to reinstall?

    Debian has the mechanisms to deal with this and more. Unfortunately for new people, it's less automated and a little more obscure than I would like

    Anyway, here's what I would do: list what is marked as manually installed, mark anything you don't recognize as automatically installed, purge what you do recognize but no longer want, and let autoremove clean up the rest:

    # apt-mark showmanual
    # apt-mark auto <packages you don't recognize>
    # apt purge <packages you recognize but don't want anymore>
    # apt autoremove --purge
    
  3. Destructive git behaviour

    fun with git

    I destroyed all the work I had done on a project for the last 2 months

    tl;dr:
    Git doesn't consider the files in .gitignore important and will happily replace them

    I'm pretty careless with my local git commands

    I've been trained by git to be this careless. Unless I use --force on a command, git will always alert me if I'm about to do something destructive. Even then, worst-case scenario, you can use git reflog to go back in time after a bad merge or something else not easily accessible with a normal git flow

    What happened?

    I had a link to a folder in my master branch. I branched off to do some work and decided to replace the link with the actual folder to untangle some other mess, and I added it to .gitignore to avoid git complaining about it

    Then I happily worked on it for 2 months

    I was ready to merge, so I made a final commit and checked out master

    So far, a pretty normal git flow… right?

    But wait, something was wrong. My folder was missing!

    Wait, what?! What happened?!

    The folder existed as a symlink on master, so git happily replaced my folder with a now-broken symlink

    It seems git doesn't consider files under .gitignore important

    You can see for yourself and reproduce this behaviour by typing the following commands. It doesn't matter that the link's target doesn't exist:

    [~/tmp]
    $ mkdir gitdestroy/
    
    [~/tmp]
    $ cd gitdestroy/
    
    [~/tmp/gitdestroy]
    $ cat > file1
    hi, im file1
    
    [~/tmp/gitdestroy]
    $ ln -s nofile link
    
    [~/tmp/gitdestroy]
    $ ll
    total 48K
    drwxr-xr-x. 26 alberto alberto  36K Jan 29 15:18 ..
    -rw-r--r--   1 alberto alberto   13 Jan 29 15:19 file1
    lrwxrwxrwx   1 alberto alberto    6 Jan 29 15:19 link -> nofile
    drwxr-xr-x   2 alberto alberto 4.0K Jan 29 15:19 .
    
    [~/tmp/gitdestroy]
    $ git init
    Initialized empty Git repository in /home/alberto/tmp/gitdestroy/.git/
    
    [~/tmp/gitdestroy (master #%)]
    $ git add -A
    
    [~/tmp/gitdestroy (master +)]
    $ git status
    On branch master
    
    No commits yet
    
    Changes to be committed:
      (use "git rm --cached <file>..." to unstage)
    
        new file:   file1
        new file:   link
    
    
    [~/tmp/gitdestroy (master +)]
    $ git commit -m "link on repo"
    [master (root-commit) 5001c61] link on repo
     2 files changed, 2 insertions(+)
     create mode 100644 file1
     create mode 120000 link
    
    [~/tmp/gitdestroy (master)]
    $ git checkout -b branchwithoutlink
    Switched to a new branch 'branchwithoutlink'
    
    [~/tmp/gitdestroy (branchwithoutlink)]
    $ git rm link 
    rm 'link'
    
    [~/tmp/gitdestroy (branchwithoutlink +)]
    $ mkdir link
    
    [~/tmp/gitdestroy (branchwithoutlink +)]
    $ cat >link/file2
    hi im file2
    
    [~/tmp/gitdestroy (branchwithoutlink +%)]
    $ cat > .gitignore
    link
    
    [~/tmp/gitdestroy (branchwithoutlink +%)]
    $ git status
    On branch branchwithoutlink
    Changes to be committed:
      (use "git reset HEAD <file>..." to unstage)
    
        deleted:    link
    
    Untracked files:
      (use "git add <file>..." to include in what will be committed)
    
        .gitignore
    
    
    [~/tmp/gitdestroy (branchwithoutlink +%)]
    $ git add -A
    
    [~/tmp/gitdestroy (branchwithoutlink +)]
    $ git commit -m "replace link with folder"
    
    [branchwithoutlink 2cfb06c] replace link with folder
     2 files changed, 1 insertion(+), 1 deletion(-)
     create mode 100644 .gitignore
     delete mode 120000 link
    
    [~/tmp/gitdestroy (branchwithoutlink)]
    $ ll
    total 60K
    drwxr-xr-x. 26 alberto alberto  36K Jan 29 15:18 ..
    -rw-r--r--   1 alberto alberto   13 Jan 29 15:19 file1
    drwxr-xr-x   2 alberto alberto 4.0K Jan 29 15:21 link
    drwxr-xr-x   4 alberto alberto 4.0K Jan 29 15:22 .
    -rw-r--r--   1 alberto alberto    5 Jan 29 15:22 .gitignore
    drwxr-xr-x   8 alberto alberto 4.0K Jan 29 15:22 .git
    
    [~/tmp/gitdestroy (branchwithoutlink)]
    $ git checkout master
    Switched to branch 'master'                                        <--- NO ERROR???
    
    [~/tmp/gitdestroy (master)]
    $ ll
    total 52K
    drwxr-xr-x. 26 alberto alberto  36K Jan 29 15:18 ..
    -rw-r--r--   1 alberto alberto   13 Jan 29 15:19 file1
    lrwxrwxrwx   1 alberto alberto    6 Jan 29 15:22 link -> nofile    <--- WHAT
    drwxr-xr-x   8 alberto alberto 4.0K Jan 29 15:22 .git
    drwxr-xr-x   3 alberto alberto 4.0K Jan 29 15:22 .
    
    [~/tmp/gitdestroy (master)]
    $ git checkout branchwithoutlink 
    Switched to branch 'branchwithoutlink'
    
    [~/tmp/gitdestroy (branchwithoutlink)]
    $ ll
    total 56K
    drwxr-xr-x. 26 alberto alberto  36K Jan 29 15:18 ..
    -rw-r--r--   1 alberto alberto   13 Jan 29 15:19 file1
    -rw-r--r--   1 alberto alberto    5 Jan 29 15:23 .gitignore
    drwxr-xr-x   8 alberto alberto 4.0K Jan 29 15:23 .git
    drwxr-xr-x   3 alberto alberto 4.0K Jan 29 15:23 .
    

    Aftermath

    I analyzed what git was doing underneath, in the hope of gaining some insight into how to recover the files. It seems git calls unlinkat(2) on every file and finally rmdir(2) on the folder

    By contrast, rm(1) just uses unlinkat(2) on every file and folder

    Not sure what difference that makes, but the knowledge was quite useless. I tried some ext undelete tools to recover the missing files, but everything was gone

    Actually, I was able to undelete some files I had removed 3 years ago that I didn't need :/

    Future

    This directory was under git as well and remotely hosted, but my last push was 2 months ago. I will be more careful in the future

    Recently there has been some discussion in the git project about something that could prevent this behaviour: they are introducing the concept of “precious” ignored files

    But for me the damage was done

    This was unexpected behaviour for me. Maybe it was for you too. Be safe out there!

  4. Copy list of packages installed to another Debian machine

    In this day and age, reading Debian forums, I still see $ dpkg --get-selections recommended as the way to copy the list of packages installed on one machine in order to install the same packages on another machine

    This list misses vital information… such as which of those packages were automatically installed as dependencies!!!

    If you don't want to break your new installation so early on, use $ apt-mark showmanual instead to get the list of packages. It will show only the packages that you installed manually; you should get the rest as dependencies
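
    A minimal sketch of the full round trip, assuming a scratch file called manual-packages.txt (the name is made up; any file will do):

    # on the old machine
    apt-mark showmanual > manual-packages.txt

    # on the new machine, after copying the file over
    xargs -a manual-packages.txt apt-get install -y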

  5. No more bash

    bash logo crossed

    I recently stopped my (imho bad) habit of starting shell scripts in bash, no matter how small the task at hand originally feels

    I had an epiphany

    The number of bash scripts that grew out of control was just too damn high

    I've been told that

    it's a difficult balance
    

    But is it really?

    It's always the same story

    1. Well, I only have to run the same handful of commands multiple times in different directories; a shell script will do
    2. Except sometimes it fails when… / this special case if… / oh, I never considered this… I'll just add a couple more lines and fix it
    3. The script explodes and gets rewritten in Python

    It rarely had exceptions for me. Almost every .sh (if it's intended to automate something) had to do sanity checks, error control/recovery and probably handle special-case scenarios… eventually
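
    As an illustration of that creep, here is a hypothetical example (directories and commands are invented): the "run a couple of commands in a few directories" script after a few rounds of "I will just add a couple more lines":

    #!/bin/bash
    # hypothetical example: the once-tiny script after a few special cases showed up
    set -euo pipefail

    if [ "$#" -lt 1 ]; then
        echo "usage: $0 dir [dir...]" >&2
        exit 1
    fi

    for dir in "$@"; do
        if [ ! -d "$dir" ]; then
            echo "skipping $dir: not a directory" >&2
            continue
        fi
        # the two commands this script was originally written for
        if ! (cd "$dir" && make clean && make); then
            echo "build failed in $dir" >&2
        fi
    done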

    I'm aware that if you are well versed in bash you can do a lot. It has arrays and all kinds of (imho weird) string mangling for advanced use of variables, but it always felt like bash is full of pitfalls that you have to learn to route around

    Writing the same thing in Python takes about the same time. Maybe a couple more lines to write a few imports

    I'm aware that Python comes with its own pitfalls, but at least you can actually scale it when needed. And you save the rewriting part

    This is really hard for me to say

    I grew to love long one-liners of pipes that solve complex problems. Also, most of the time you only seem to want to run a couple of commands in tandem

    But I think it's time for me to say goodbye. In the same way I said goodbye to Perl (and that's a rant for another day :))

    No more shell scripts

    No matter how small

  6. Backup fixes!

    A year ago I put together an automated backup solution. A very basic approach, but it got the job done

    It started to fail randomly, so I had to take a look. I fixed it and took the opportunity to add a few features while debugging it

    Overall, resilience is improved. Now it can recover from most errors and report properly when it cannot

    Changelog:

    • FIX: Backup file getting corrupted in email transit. It seems Google was mangling .gpg files
    • FIX: Add a clean-up section to ensure the resources are consumed. systemd.path works like a spool. It also needs to sync at the end, because otherwise systemd relaunches the service on the same file as soon as the script is done; the OS didn't even have time to write the deletion to disk
    • FIX: Clean-up service on restart that auto-removes the mail lock, which gets created and never removed if the computer loses power in the middle of sending
    • FIX: systemd.path starts processing as soon as the path is found. I had to ensure the file was completely written before processing it
    • FIX: systemd forking instead of oneshot. The script leaves a process lingering so the pop-up windows can finish; this is what Type=forking is for

    • FEAT: Checksums included in the backup, to auto-verify integrity when recovering and to fail properly when the IN and OUT files are different

    • FEAT: Add proper systemd logging, including checksums
    • FEAT: Show pop-ups to the final users on start/stop of the service and to notify them of errors
    • FEAT: Add arguments to ease local debugging, including a --quiet option for debugging remotely without showing pop-ups (see the example after this list)
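
    For example (the path and user are invented), with a test copy of database.mdb sitting in /tmp/backup-test you can exercise it like this:

    # local dry run: use a test copy, keep the output file, don't actually email it
    ./backup.py --path /tmp/backup-test --user alberto --keep --no-mail

    # remote debugging over ssh: same thing, but without the zenity pop-ups
    ./backup.py --path /tmp/backup-test --keep --no-mail --quiet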

    No repo! But here's the code so you can take a peek or reuse it. The pop-ups are in Spanish

    code
    backup.py
    
    #!/usr/bin/env python3
    
    from datetime import datetime, timedelta
    from os import path, remove, fork, _exit, environ
    from subprocess import run, CalledProcessError
    from sys import exit, version_info
    from systemd import journal
    from hashlib import md5
    import argparse
    
    
    def display_alert(text, wtype="info"):
        journal.send("display: {}".format(text.replace("\n", " - ")))
        if(not args.quiet):
            if(not fork()):
                env = environ.copy()
                env.update({'DISPLAY': ':0.0', 'XAUTHORITY':
                            '/home/{}/.Xauthority'.format(USER)})
                zenity_cmd = [
                    'zenity', '--text={}'.format(text), '--no-markup', '--{}'.format(wtype), '--no-wrap']
                run(zenity_cmd, env=env)
                # let the main thread do the clean up
                _exit(0)
    
    
    def md5sum(fname):
        cs = md5()
        with open(fname, "rb") as f:
            for chunk in iter(lambda: f.read(4096), b""):
                cs.update(chunk)
        return cs.hexdigest()
    
    
    # Args Parser init
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "-q", "--quiet", help="dont show pop ups", action="store_true")
    parser.add_argument("-u", "--user", help="user to display the dialogs as")
    parser.add_argument("-p", "--path", help="path of the file to backup")
    parser.add_argument("-t", "--to", help="who to send the email")
    parser.add_argument(
        "-k", "--keep", help="keep output file", action="store_true")
    parser.add_argument(
        "-n", "--no-mail", help="dont try to send the mail", action="store_true")
    args = parser.parse_args()
    
    # Globals
    USER = 'company'
    if(args.user):
        USER = args.user
        journal.send("USER OVERWRITE: {}".format(USER))
    
    TO = "info@company.com"
    if(args.to):
        TO = args.to
        journal.send("EMAIL TO OVERWRITE: {}".format(TO))
    BODY = "mail.body"
    FILENAME = 'database.mdb'
    PATH = '/home/company/shared'
    if(args.path):
        PATH = args.path
        journal.send("PATH OVERWRITE: {}".format(PATH))
    
    if(args.quiet):
        journal.send("QUIET NO-POPUPS mode")
    
    FILE = path.join(PATH, FILENAME)
    FILEXZ = FILE + ".tar.xz"
    now = datetime.now()
    OUTPUT = path.join(PATH, 'backup_{:%Y%m%d_%H%M%S}.backup'.format(now))
    CHECKSUM_FILE = FILENAME + ".checksum"
    
    error_msg_tail = "Ejecuta $ journalctl -u backup.service para saber más"
    
    LSOF_CMD = ["fuser", FILE]
    XZ_CMD = ["tar", "-cJC", PATH, "-f", FILEXZ, FILENAME, CHECKSUM_FILE]
    GPG_CMD = ["gpg", "-q", "--batch", "--yes", "-e", "-r", "backup", "-o", OUTPUT, FILEXZ]
    
    error = ""
    
    
    # Main
    display_alert('Empezando la copia de seguridad: {:%Y-%m-%d %H:%M:%S}\n\n'
                  'NO apagues el ordenador todavia por favor'.format(now))
    
    # sanity file exists
    if(path.exists(FILE)):
        journal.send(
            "New file {} detected. Trying to generate {}".format(FILE, OUTPUT))
    else:
        exit("{} not found. Aborting".format(FILE))
    
    # make sure file finished being copied
    finished_copy = False
    while(not finished_copy):
        try:
            run(LSOF_CMD, check=True)
            journal.send(
                "File is still open somewhere. Waiting 1 extra second before processing")
            run("sleep 1".split())
        except CalledProcessError:
            finished_copy = True
        except Exception as e:
            display_alert(
                "ERROR\n{}\n\n{}".format(e, error_msg_tail), "error")
            exit(0)
    
    filedate = datetime.fromtimestamp(path.getmtime(FILE))
    
    # sanity date
    if(now - timedelta(hours=1) > filedate):
        error = """El fichero que estas mandando se creó hace más de una hora.
    fecha del fichero: {:%Y-%m-%d %H:%M:%S}
    fecha actual     : {:%Y-%m-%d %H:%M:%S}
    
    Comprueba que es el correcto
    """.format(filedate, now)
    
    # Generate checksum file
    csum = md5sum(FILE)
    journal.send(".mdb md5: {} {}".format(csum, FILENAME))
    
    with open(CHECKSUM_FILE, "w") as f:
        f.write(csum)
        f.write(" ")
        f.write(FILENAME)
    
    # Compress
    if(path.isfile(FILEXZ)):
        remove(FILEXZ)
    
    journal.send("running XZ_CMD: {}".format(" ".join(XZ_CMD)))
    run(XZ_CMD)
    csum = md5sum(FILEXZ)
    journal.send(".tar.xz md5: {} {}".format(csum, FILEXZ))
    
    # encrypt
    journal.send("running GPG_CMD: {}".format(" ".join(GPG_CMD)))
    run(GPG_CMD)
    csum = md5sum(OUTPUT)
    journal.send(".gpg md5: {} {}".format(csum, OUTPUT))
    
    remove(FILEXZ)
    
    # sanity size
    filesize = path.getsize(OUTPUT)
    if(filesize < 5000000):
        error += """"El fichero que estas mandando es menor de 5Mb
    tamaño del fichero en bytes: ({})
    
    Comprueba que es el correcto
    """.format(filesize)
    
    subjectstr = "Backup {}ok con fecha {:%Y-%m-%d %H:%M:%S}"
    subject = subjectstr.format("NO " if error else "", now)
    body = """Todo parece okay, pero no olvides comprobar que
    el fichero salvado funciona bien por tu cuenta!
    """
    if(error):
        body = error
    
    with open(BODY, "w") as f:
        f.write(body)
    
    journal.send("{} generated correctly".format(OUTPUT))
    try:
        if(not args.no_mail):
            journal.send("Trying to send it to {}".format(TO))
            MAIL_CMD = ["mutt", "-a", OUTPUT, "-s", subject, "--", TO]
    
            if(version_info.minor < 6):
                run(MAIL_CMD, input=body, universal_newlines=True, check=True)
            else:
                run(MAIL_CMD, input=body, encoding="utf-8", check=True)
    except Exception as e:
        display_alert(
            "ERROR al enviar el backup por correo:\n{}".format(e), "error")
    else:
        later = datetime.now()
        took = later.replace(microsecond=0) - now.replace(microsecond=0)
        display_alert('Copia finalizada: {:%Y-%m-%d %H:%M:%S}\n'
                      'Ha tardado: {}\n\n'
                      'Ya puedes apagar el ordenador'.format(later, took))
    
    finally:
        if(not args.keep and path.exists(OUTPUT)):
            journal.send("removing gpg:{}".format(OUTPUT))
            remove(OUTPUT)
    
    unbackup.py
    #!/usr/bin/env python3
    
    from os import path, remove, sync, fork, _exit, environ
    from subprocess import run, CalledProcessError
    from glob import glob
    from sys import exit
    from systemd import journal
    from hashlib import md5
    import argparse
    
    
    def display_alert(text, wtype="info"):
        if(not args.quiet):
            if(not fork()):
                env = environ.copy()
                env.update({'DISPLAY': ':0.0', 'XAUTHORITY':
                            '/home/{}/.Xauthority'.format(USER)})
                zenity_cmd = [
                    'zenity', '--text={}'.format(text), '--no-markup', '--{}'.format(wtype), '--no-wrap']
                run(zenity_cmd, env=env)
                # Let the main thread do the clean up
                _exit(0)
    
    
    def md5sum(fname):
        cs = md5()
        with open(fname, "rb") as f:
            for chunk in iter(lambda: f.read(4096), b""):
                cs.update(chunk)
        return cs.hexdigest()
    
    
    # Args Parser init
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "-q", "--quiet", help="dont show pop ups", action="store_true")
    parser.add_argument("-u", "--user", help="user to display the dialogs as")
    parser.add_argument("-p", "--path", help="path of the file to unbackup")
    parser.add_argument(
        "-k", "--keep", help="keep original file", action="store_true")
    args = parser.parse_args()
    
    # Globals
    USER = 'company'
    if(args.user):
        USER = args.user
        journal.send("USER OVERWRITE: {}".format(USER))
    
    PATH = '/home/rk/shared'
    if(args.path):
        PATH = args.path
        journal.send("PATH OVERWRITE: {}".format(PATH))
    
    if(args.quiet):
        journal.send("QUIET NO-POPUPS mode")
    
    OUTPUT_FILE = 'database.mdb'
    error_msg_tail = "Ejecuta $ journalctl -u unbackup.service para saber más"
    CHECKSUM_FILE = OUTPUT_FILE + ".checksum"
    
    
    # Main
    try:
        input_file = glob(path.join(PATH, 'backup*.backup'))[0]
    except IndexError as e:
        display_alert("ERROR\nEl fichero de backup no existe:\n{}\n\n{}".format(
            e, error_msg_tail), "error")
        exit(0)
    except Exception as e:
        display_alert(
            "ERROR\n{}\n{}".format(e, error_msg_tail), "error")
        exit(0)
    else:
        display_alert(
            "Se ha detectado {}. Empiezo a procesarlo".format(input_file))
    
        output_path = path.join(PATH, OUTPUT_FILE)
        output_pathxz = output_path + ".tar.xz"
    
        LSOF_CMD = ["fuser", input_file]
        GPG_CMD = ["gpg", "--batch", "-qdo", output_pathxz, input_file]
        XZ_CMD = ["tar", "-xf", output_pathxz]
    
    # make sure file finished being copied. Systemd triggers this script as soon as the file name shows
    try:
        finished_copy = False
        while(not finished_copy):
            try:
                run(LSOF_CMD, check=True)
                journal.send(
                    "File is still open somewhere. Waiting 1 extra second before processing")
                run("sleep 1".split())
            except CalledProcessError:
                finished_copy = True
            except Exception as e:
                display_alert(
                    "ERROR\n{}\n\n{}".format(e, error_msg_tail), "error")
                exit(0)
    
        csum = md5sum(input_file)
        journal.send(".gpg md5: {} {}".format(csum, input_file))
    
        if(path.exists(output_pathxz)):
            journal.send("{} detected. Removing".format(output_pathxz))
            remove(output_pathxz)
    
        journal.send("running GPG_CMD: {}".format(" ".join(GPG_CMD)))
        run(GPG_CMD, check=True)
    
        csum = md5sum(output_pathxz)
        journal.send("tar.xz md5: {} {}".format(csum, input_file))
    
        journal.send("running XZ_CMD: {}".format(" ".join(XZ_CMD)))
        run(XZ_CMD, check=True)
    
    # Check Checksum
        with open(CHECKSUM_FILE) as f:
            target_cs, filename = f.read().strip().split()
        actual_cs = md5sum(filename)
        journal.send(".mdb md5: {} {}".format(actual_cs, filename))
        if(target_cs == actual_cs):
            journal.send("El checksum interno final es correcto!")
        else:
            display_alert("ERROR\n"
                          "Los checksums de {} no coinciden"
                          "Que significa que el fichero esta dañado"
                          .format(filename), "error")
    
    except Exception as e:
        display_alert("ERROR\n{}\n\n{}"
                      .format(e, error_msg_tail), "error")
        exit(0)
    else:
        display_alert("{} generado con exito".format(output_path))
    finally:
        if(not args.keep and path.exists(input_file)):
            journal.send("CLEAN UP: removing gpg {}".format(input_file))
            # make sure the file is not open before trying to remove it
            sync()
            remove(input_file)
            # sync so systemd dont detect the file again after finishing the script
            sync()
    
    backup.path 
    [Unit]
    Description=Carpeta Compartida backup
    
    [Path]
    PathChanged=/home/company/shared/database.mdb
    Unit=backup.service
    
    [Install]
    WantedBy=multi-user.target
    
    backup.service
    [Unit]
    Description=backup service
    
    [Service]
    Type=forking
    ExecStart=/root/backup/backup.py
    TimeoutSec=600
    
    unbackup.path
    [Unit]
    Description=Unbackup shared folder
    
    [Path]
    PathExistsGlob=/home/company/shared/backup*.backup
    Unit=unbackup.service
    
    [Install]
    WantedBy=multi-user.target
    
    unbackup.service
    [Unit]
    Description=Unbackup service
    [Service]
    Type=forking
    Environment=DISPLAY=:0.0
    ExecStart=/root/company/unbackup.py
    
  7. Gmail mangles .gpg files

    Why?

    I don't know

    If you change bytes in a .gpg, somebody is bound to notice. Right?

    I'm using a third party to send a .gpg to a Gmail account, and the checksums before and after simply don't match
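
    This is roughly how you can check it yourself (the file name is invented): checksum the file before attaching it, then download the attachment back from the Gmail web interface and checksum that copy. The two sums should match; in my case they didn't:

    $ md5sum backup_20180129.gpg
    $ md5sum ~/Downloads/backup_20180129.gpg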

    I don't really want to assume evilness, since modifying bytes in attachments seems pretty sketchy

    Maybe I'm doing something wrong

    The fact that the checksums are okay if I send it to my personal server using the same third-party mail provider is a little suspicious, though

    I've been told that

    [...], email and gmail are different things.  So I wouldn't be surprised if they are not 100% compatible ;-)
    

    Funny, or is it? Google has most of my email because it has all of yours. They have a lot of leverage to define the email experience

    In the end, just renaming the .gpg to something else fixed it (What??)

    And while we are still ranting about Google, let's finish with a pet peeve of mine

    I hate that they virtually remove the concept of a domain or an email address. As if they weren't the f*cking anchor points of security

    Security and, you know, knowing where the f*ck you are going or who you are trying to get in contact with

    Instead they hide this info as much as possible behind labels, so you learn to trust them instead of learning to trust something as simple and as ubiquitous as a domain or an email address

  8. Virus, Qubes-OS and Debian

    Computer problems that people attribute to viruses don't overlap with the real problems caused by viruses

    This is the virus Venn diagram. It's pretty accurate, and many people, including people who get along with technology, are oblivious to it. Voluntarily installing crap by running random programs you just googled hardly counts as a virus

    Sometimes they overlap, though, in what I call “trawling viruses”: take some very old exploit that should hardly work on anybody and spam it, and you can still catch lots of people who never update. In this case you don't care about anything; you just try to get a quick profit and you don't really care if you slow down the target machine

    But by and large, viruses try to be as invisible as possible, do their business and go undetected for as long as possible. If they can make an optimization to your system, like patching the hole they got in through, they will

    Using Debian is one way to protect yourself… but it still falls short, because it still uses a very old authorization model

    The authorization model in computers is old

    It's no secret that the authorization model in computers is really old

    Qubes OS is a system that tries to mitigate that problem quite successfully. Qubes OS 4.0 rc1 was released recently. I'm currently testing it on my media box, and will probably use it on my main machine soon

    Holger gave a talk a few weeks ago named “Using Qubes OS from the POV of a Debian developer”. In DebConf fashion, you can watch it online

  9. Automate wildcard cert renewal

    problem definition

    I host an instance of Sandstorm. I'd like to use my own domain AND HTTPS

    Sandstorm uses a new, unguessable, throw-away host name for every session as part of its security strategy, so in order to host your own instance under your own domain you need a wildcard DNS entry and a wildcard cert for it (a cert for *.yourdomain that will be valid for all your subdomains)

    I use certbot (aka letsencrypt) to generate my certificates. Unfortunately, they have stated that they will not issue wildcard certificates. Not now and, very likely, not in the future

    Sandstorm offers a free DNS service using sandcats.io with batteries included (a free wildcard cert). But this makes the whole site look like it is not running under your control when you share a link with a third party (even though that is not true). Since that control is one of the main points of running my own instance, this solution is not suitable for me

    For reasons that deserve their own rant, I will not buy a wildcard cert

    This only left me the option of running Sandstorm on a local port, having Apache proxy the requests and present the right certs. I will be using the sandcats.io DNS + wildcard cert for the WebSockets, which are virtually invisible to the final user

    The certbot cert renewal is easy enough to automate, but I also need to automate the renewal of the sandcats.io cert, which lasts for 9 days

    solution

    A service will run weekly to renew the cert. For this, it will temporarily switch to a configuration that pretends to use one of those free sandcats.io certs so that Sandstorm renews the cert, then parse the new cert and tell Apache to use it

    shortcomings

    Disclaimer: this setup is not officially supported by Sandstorm

    The reason is that some apps don't work well due to some browsers' security policies. Just like the Sandstorm guys, I had to make a compromise. The stuff I use works for me, and I have to test anything new before I start using it :)

    code
    updatecert.py
    
    #!/usr/bin/env python3
    import json
    from subprocess import call,check_call
    from glob import glob
    from shutil import copy
    from time import sleep
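    # note: 'timeout' is not in the stdlib; presumably a small local helper module
    # providing a decorator that aborts the call after the given number of seconds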
    from timeout import timeout
    
    TIMEOUT = 120
    
    SSPATH = '/opt/sandstorm'
    CONF = SSPATH + '/sandstorm.conf'
    GOODCONF = SSPATH + '/sandstorm.good.conf'
    CERTCONF = SSPATH + '/sandstorm.certs.conf'
    CERTSPATH = SSPATH + '/var/sandcats/https/server.sandcats.io/'
    APACHECERT = '/etc/apache2/tls/cert'
    APACHECERTPUB = APACHECERT + '.crt'
    APACHECERTKEY = APACHECERT + '.key'
    
    RESTART_APACHE_CMD = 'systemctl restart apache2'.split()
    RESTART_SS_CMD = 'systemctl restart sandstorm'.split()
    
    @timeout(TIMEOUT, "ERROR: Cert didnt renew in {} secs".format(TIMEOUT))
    def check_cert_reply(files_before):
        found = None
        print("waiting for new cert in" + CERTCONF, end="")
        while not found:
            print(".", end="", flush=True)
            sleep(5)
            files_after = set(glob(CERTSPATH + '*.response-json'))
    
            found = files_after - files_before
        else:
            print("")
        return found.pop()
    
    def renew_cert():
        files_before = set(glob(CERTSPATH + '*.response-json'))
        copy(CERTCONF, CONF)
        call(RESTART_SS_CMD)
        try:
            new_cert = check_cert_reply(files_before)
        finally:
            print("Restoring sandstorm conf and restarting it")
            copy(GOODCONF, CONF)
            call(RESTART_SS_CMD)
            print("Restoring done")
        return new_cert
    
    def parse_cert(certfile):
        with open(certfile) as f:
            certs = json.load(f)
    
        with open(APACHECERTPUB, 'w') as cert:
    
            cert.write(certs['cert'])
    
            ca = certs['ca']
            ca.reverse()
            for i in ca:
                cert.write('\n')
                cert.write(i)
    
        copy(certfile[:-len('.response-json')], APACHECERTKEY)
    
    if __name__ == '__main__':
        new_cert = renew_cert()
        parse_cert(new_cert)
        try:
            check_call(RESTART_APACHE_CMD)
        except:
        # one reason for apache to fail is that the json was parsed before it was completely written
            # try once again just in case
            print("failed to restart apache with the new cert. Trying once more")
            sleep(1)
            parse_cert(new_cert)
            call(RESTART_APACHE_CMD)
    
    updatecert.service
    
    [Unit]
    Description=tries to renew ss cert
    OnFailure=status-email-admin@%n.service
    
    [Service]
    Type=oneshot
    ExecStart=/root/updatecert.py
    
    updatecert.timer
    
    [Unit]
    Description=runs ss cert renewal once a week
    
    [Timer]
    Persistent=true
    OnCalendar=weekly
    Unit=updatecert.service
    
    [Install]
    WantedBy=default.target
    
  10. Small automatic backup using Python

    EDIT: Newer version available

    problem definition

    An automatic backup of a database file that lives inside a legacy Windows virtual machine without internet access.

    The client doesn't have a dedicated online machine for backups, and the backup should “leave the building”

    solution

    A shared folder using VirtualBox's shared folder facilities. The database will be copied there once a day

    Outside the VM, systemd will monitor the copy and launch the backup script

    The backup will be compressed using XZ and encrypted using GPG with an asymmetric key

    Finally, it will be sent for storage to a mail account where they can check whether the backup was made

    code
    backup.py
    
    #!/usr/bin/env python3
    
    from datetime import datetime, timedelta
    from os import path, remove
    from subprocess import run
    from sys import exit
    
    now = datetime.now()
    
    TO = "info@company.com"
    BODY = "mail.body"
    FILENAME = 'database.mdb'
    PATH = '/home/company/shared'
    FILE = path.join(PATH, FILENAME)
    FILEXZ = FILE + ".xz"
    OUTPUT = path.join(PATH, 'backup_{:%Y%m%d_%H%M%S}.gpg'.format(now))
    
    XZ_CMD = "xz -k {}"
    GPG_CMD = "gpg -q --batch --yes -e -r rk -o {} {}"
    MAIL_CMD = "mutt -a {} -s '{}' -- {} < {}"
    
    error = ""
    
    # sanity file exists
    if path.exists(FILE):
        print("New file {} detected. Trying to generate {}".format(FILE, OUTPUT))
    else:
        exit("{} not found. Aborting".format(FILE))
    
    
    filedate = datetime.fromtimestamp(path.getmtime(FILE))
    
    # sanity date
    if now - timedelta(hours=1) > filedate:
        error = """The file you are sending was created 1+ hour ago
    file date   : {:%Y-%m-%d %H:%M:%S}
    current date: {:%Y-%m-%d %H:%M:%S}
    
    Please check if its the correct one
    """.format(filedate, now)
    
    # Compress
    if path.isfile(FILEXZ):
        remove(FILEXZ)
    
    run(XZ_CMD.format(FILE).split())
    
    # encrypt
    run(GPG_CMD.format(OUTPUT, FILEXZ).split())
    remove(FILEXZ)
    
    # sanity size
    filesize = path.getsize(OUTPUT)
    if filesize < 5000000:
        error += """"The size of the file you are sending is < 5Mb
    File size in bytes: ({})
    
    Please, Check if its the correct one
    """.format(filesize)
    
    subjectstr = "Backup {}ok with date {:%Y-%m-%d %H:%M:%S}"
    subject = subjectstr.format("NOT " if error else "", now)
    body = """Everything seems okay, but dont forget to check
    manually if the saved file works okay once in a while!
    """
    if error:
        body = error
    
    with open(BODY, "w") as f:
        f.write(body)
    
    print("{} generated correctly. Trying to send it to {}".format(OUTPUT, TO))
    run(MAIL_CMD.format(OUTPUT, subject, TO, BODY), shell=True)
    remove(OUTPUT)
    

    Inside the VM using the scheduler

    backup.bat
    
    @echo off
    xcopy /Y C:\program\database.mdb z:\
    

    mutt conf file

    .muttrc
    
    set sendmail="/usr/bin/msmtp"
    set use_from=yes
    set realname="Backup"
    set from=backup@company.com
    set envelope_from=yes
    

    systemd files

    shared.service
    
    [Unit]
    Description=company backup service
    
    [Service]
    Type=oneshot
    ExecStart=/root/backup/backup.py
    
    shared.path
    
    [Unit]
    Description=shared file
    
    [Path]
    PathChanged=/home/company/shared/database.mdb
    Unit=shared.service
    
    [Install]
    WantedBy=multi-user.target
    
  11. Look at that nice looking FreedomBox!

    I'm rebuilding my home server and decided to take a look at the FreedomBox project as the base for it

    Version 0.6 was recently released, and I wasn't aware of how advanced the project already is!

    They have a VirtualBox image ready for a quick test. It took me longer to download it than to start using it

    Here's a pic of what it looks like, to entice you to try it :)

    freedombox snapshot

    All this is already in Debian right now, and you can turn any Debian sid installation into a FreedomBox just by installing a package

    The setup generates all the private bits on the first run, so even the VirtualBox image can be used as the final thing

    They use Plinth (Django) to integrate the applications into the web interface. More info on how to help integrate more Debian packages here

    A live demo is going to be streamed this Friday and a hackathon is scheduled for this Saturday

    Cheers!

    Original post at Laura Arjona’s Blog on 30 October 2015. Thanks for first hosting it!
