Tag: linux

  • How-to: Host web services out of your residence

    Why?

    Because it's super fun and super educational! It teaches you a little bit about a lot of different things. It also paves the way for taking back control of your own data by hosting the services you use on your own equipment. Let's get started!

    You will need

    • Access to network equipment provided by your ISP (modem web interface)
    • A computer, Raspberry Pi, whatever
    • Some Linux install media
    • Some dynamic DNS service
    • A domain (optional, but very helpful)

    STEP 1: Install Linux on something

    Something can be literally anything that can run Linux. Ideally, that something should have an Ethernet port, but that’s not absolutely required. Old laptops are great because they’re small, quiet, and don’t consume much power. Starting out, specs aren’t so important.

    Headless (no desktop) is best, but if you're not super comfortable with only having the command line available, install with a desktop. It is also possible to install with a desktop but disable it at startup. This way, you have a desktop available when you need it, but it's not consuming resources when you don't.

    I’m partial to Ubuntu, especially for those starting out because of the wealth of documentation available for doing various things.

    At a minimum, I usually start with SSH (default port 22) so I can conveniently access my server remotely, either from within my home network or elsewhere. To increase security, disable root login via ssh and disable password authentication for all users (allow only key-based auth). If you do keep password authentication, at least use a strong password for your non-root user.
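
    A minimal sketch of the relevant directives in /etc/ssh/sshd_config on Ubuntu:

    ```
    # /etc/ssh/sshd_config -- harden remote access
    PermitRootLogin no          # no direct root logins over ssh
    PasswordAuthentication no   # key-based auth only
    PubkeyAuthentication yes
    ```

    After saving, apply the change with sudo systemctl restart ssh (the service is named "ssh" on Ubuntu). Make sure your key is installed and working (ssh-copy-id) BEFORE disabling password auth, or you can lock yourself out.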

    Once you have your base system installed, note the IP address assigned to your server by your DHCP server. Ideally, your server should be connected directly to the modem provided by your ISP.
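
    You can check the assigned address from the server itself with either of these:

    ```shell
    # Show the IPv4 address your DHCP server handed out
    hostname -I
    # More detail, per interface:
    ip -4 addr show
    ```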

    STEP 2: Port-forwarding and DHCP reservation

    For this you will need access to the web configuration interface of the cable/DSL/whatever modem provided by your ISP. If you do not know the username and password for your modem's web interface, try searching online for "<modem model number> <ISP name> username/password." Usually these are generic across all modem models provided by your ISP, but not always. If not, you may need to call your ISP to get your credentials.

    Port-forwarding allows traffic arriving at your public IP to be sent on to your new server. Sometimes this feature is called port-forwarding, sometimes virtual servers; the name varies by device manufacturer. If you are unsure how to configure it, run a search query like "<your modem model number> <your isp> port forwarding" and you should find what you need.

    To configure port forwarding, you'll need the inside port, the outside port, the protocol (TCP, UDP, or both), and the internal IP of your server. In the case of SSH, the default port is 22; HTTP is 80, and HTTPS is 443.

    I do NOT recommend setting your new Linux server as the default host or DMZ. Only forward the ports you want exposed.

    I also recommend configuring DHCP reservations. A DHCP reservation causes your DHCP server to always give the same IP to your Linux server. This prevents the problem of your Linux server getting a different IP and breaking port-forwarding.

    Alternatively, you may set up a static IP on your Linux server. If you do this, be sure to choose an address outside of the range the DHCP server assigns from.
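
    On newer Ubuntu releases (18.04 and later) a static address is set through netplan; older releases use /etc/network/interfaces instead. Everything below (interface name, addresses, gateway) is an example you would adapt to your own network:

    ```yaml
    # /etc/netplan/01-static.yaml -- example values only
    network:
      version: 2
      ethernets:
        eth0:                            # find your interface name with `ip link`
          dhcp4: false
          addresses: [192.168.1.200/24]  # pick an address outside the DHCP pool
          gateway4: 192.168.1.1          # your modem/router
          nameservers:
            addresses: [1.1.1.1, 8.8.8.8]
    ```

    Apply the change with sudo netplan apply.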

    STEP 3: Dynamic DNS

    Assuming your residential connection is like most, the IP assigned to your modem will change periodically. Normally this is not a problem, because you usually just need to get from inside your LAN out to the public Internet.

    This presents a problem if you’re trying to go the other direction, that is, accessing your new Linux server inside your LAN from somewhere outside on the public Internet.

    You could pay for a static IP, but that's a bit unnecessary when there are plenty of free dynamic DNS options available.

    Dynamic DNS works by running an agent somewhere inside your network that reports the public IP of your modem to a DNS service, which updates an A record (a resource record associating a domain with an IP). As long as everything is working correctly, your dynamic DNS domain will always point to whatever IP your modem currently has, so you can always reach your residential connection.

    I personally use No-IP. afraid.org is another popular option, as is DynDNS. Agent setup is usually very simple.

    STEP 4: Test

    Now it’s time for us to test traffic flow to make sure we can actually hit our server from the outside.

    Assuming you set up ssh as your first service, try to ssh to your server using your dynamic DNS domain as the host. If you get a login prompt, congratulations! You did it!

    Troubleshooting

    If not, don’t fret. Go through the steps again and double check your port-forwarding settings.

    • Does the IP in the port-forwarding configuration match the IP currently assigned to your server?
    • Are there any intermediate networking devices between your modem (where you setup port-forwarding) and your server?

    If all your port-forwarding configuration looks good, check the ssh service.

    • Is ssh actually running?
    • Is there a firewall rule blocking ssh? (by default there shouldn’t be)
    • While logged into the server via the console, try to ssh into localhost. If you get a login prompt, ssh is running.
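
    The checks above, as commands (the service is named "ssh" on Ubuntu; adjust for your distro):

    ```shell
    # Is anything listening on the ssh port?
    ss -tln | grep ':22 ' || echo "nothing listening on 22"
    # Is the sshd process alive?
    pgrep -x sshd >/dev/null && echo "sshd is running" || echo "sshd is not running"
    # From the server console, a loopback login proves sshd answers:
    #   ssh localhost
    ```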

    If ssh is running properly, then test to make sure that your dynamic DNS service is working.

    • Check the administrative console for your dynamic DNS provider. Somewhere, it should tell you what IP your dynamic DNS record is currently set to. If it shows an IP, it should be working. Make sure that it's NOT something like 192.168.x.x or some other private, internal IP; the IP shown in your dynamic DNS console should be the one assigned to your modem by your ISP.
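
    If you want to sanity-check an address from the shell, a simplified RFC 1918 test looks like this (is_private_ip is just a helper name for this post):

    ```shell
    # Rough check: is this IPv4 address in a private (RFC 1918) range?
    is_private_ip() {
      case "$1" in
        10.*|192.168.*)                          return 0 ;;
        172.1[6-9].*|172.2[0-9].*|172.3[0-1].*)  return 0 ;;
        *)                                       return 1 ;;
      esac
    }

    is_private_ip 192.168.1.50 && echo "private -- dynamic DNS is NOT set up right"
    is_private_ip 203.0.113.7  || echo "public -- looks OK"
    ```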

    STEP 5: Use your own domain with the dynamic DNS domain

    I highly recommend using your own domain with your services running on your server. You can accomplish this by creating CNAME records under your own domain pointing to the A record provided by your dynamic DNS service.

    When I first set up my residential server in Taiwan, I created the dynamic DNS domain “keelung1.ddns.net.” While I already had my own domain, travnewmatic.com, I did not initially use it with my apartment server in Keelung. Instead, I added services as subdirectories under keelung1.ddns.net (keelung1.ddns.net/wordpress, keelung1.ddns.net/plex, keelung1.ddns.net/mastodon, etc.). This is a BIG PAIN IN THE BUTT because it requires a LOT of messy configuration in the webserver (nginx, apache, etc.). Some web applications really don’t like to be served out of a path like that.

    A MUCH EASIER method is to create CNAME records for each of your services. Currently I’m running a Mastodon instance in my apartment. Its domain is nangang.travnewmatic.com. That record is a CNAME pointing to my dynamic DNS A record keelung1.ddns.net.
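
    In zone-file terms, the chain described above looks like this (the A record lives in No-IP's zone and is kept current by the update agent; the IP shown is a placeholder):

    ```
    ; in the travnewmatic.com zone:
    nangang.travnewmatic.com.  IN  CNAME  keelung1.ddns.net.

    ; in No-IP's zone:
    keelung1.ddns.net.         IN  A      203.0.113.7   ; placeholder public IP
    ```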

    You do not have to do this starting out, but as your collection of services hosted in your residence grows, creating subdomains for each of your webservices makes life WAY EASIER.

    STEP 6: More services

    Think about what services you use on a regular basis. I’m particularly fond of RSS; one of my most heavily used self-hosted services is FreshRSS. Nextcloud is a very popular self-hosted web application that provides features similar to Dropbox, as well as some groupware features. You may be surprised how many quality self-hosted alternatives there are for the services you currently use.

    Email is not something you can, or would want to, self-host from a residential IP. Email is ridiculously complicated, and most major email providers (Gmail, Yahoo!, etc.) block email traffic from residential IPs because they’re so often the source of spam. If you would like to provide some sort of messaging service, I recommend something like XMPP/Jabber, Matrix, Mattermost, or RocketChat.


  • Docker

    Life before Docker

    After ages of musing about learning Docker, I have finally made the switch, and now I can never go back. I’d previously played with Docker on some of my Raspberry Pi 3s and a surplus laptop, but I wasn’t happy with the pace of my learning. I figured that the best way to learn Docker quickly was to nuke my existing setup so that I’d have no choice but to rebuild.

    I backed up the data for my most important services, namely Mastodon and Synapse, and ferried it off of the apartment server. I reformatted and installed the most recent Ubuntu release, which happened to be 19.04 Disco. Then I installed the usual No-IP update client so I could reliably SSH in from outside, installed Docker via the convenience script, and I was off to the races.

    tt-rss → FreshRSS

    Tiny Tiny RSS had been a favorite of mine for a long time, and it’s something that I used regularly. It’s also reasonably simple: PHP stuff, a database, a webserver, and a daemon that periodically scrapes feeds for new articles. Its utility and simplicity made it a good first Docker project. After some searching around, I landed on a now-removed tt-rss image from linuxserver.io. It was a great first success, and it provided good experience with docker commands and docker-compose syntax. Recently there seems to have been a bit of drama between the tt-rss maintainer and the linuxserver.io people, and the tt-rss image has been removed from Docker Hub. As there is no official tt-rss image, and the docker solution that is provided is a bit messy and involves building an image, I’ve switched to FreshRSS. It has a nearly perfect-for-me docker-compose.yml that took just a few minutes to get running. Highly recommended.
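
    A minimal docker-compose sketch along those lines; the image name is the one on Docker Hub, but the port mapping, volume path, timezone, and cron schedule here are assumptions to adapt:

    ```yaml
    # docker-compose.yml -- minimal FreshRSS (sketch, adjust paths/ports)
    version: "3"
    services:
      freshrss:
        image: freshrss/freshrss
        ports:
          - "8080:80"        # web UI on host port 8080
        volumes:
          - ./data:/var/www/FreshRSS/data
        environment:
          - TZ=Asia/Taipei
          - CRON_MIN=1,31    # scrape feeds twice an hour
        restart: unless-stopped
    ```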

    traefik

    Early on I adopted traefik as a reverse proxy to handle incoming connections to my containers. The timing was a bit interesting, as traefik v2 had just been released, and most of the examples online used the old v1 configuration. For my simple purposes, the tutorial and the Let’s Encrypt guide were enough to get me going.
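
    For reference, the v2 style attaches routing rules to containers as labels rather than in a central config file. A hedged sketch (the domain, router name, and entrypoint/resolver names are examples that must match your traefik static configuration):

    ```yaml
    # compose fragment: expose one container through traefik v2 (sketch)
    services:
      whoami:
        image: containous/whoami   # tiny demo web service
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
          - "traefik.http.routers.whoami.entrypoints=websecure"
          - "traefik.http.routers.whoami.tls.certresolver=myresolver"
    ```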

    gitea

    I also knew that I should be managing my stuff in a git repo. I’d looked at GitLab and tried to set it up a few times, unsuccessfully. I eventually settled on Gitea, which is similar to tt-rss in terms of complexity. Gitea was easy to get going, and it also provided an opportunity to become familiar with version control (beyond just ‘git clone’). All of my dockerized services are in Gitea repositories. I guess every professional programmer is already familiar with version control, but as I’m not a developer by trade, I’m only now realizing its benefits. Along with managing changes to code, I also like that I can leave notes for myself (open issues). Having a place to chronicle and document my services has been wonderful.

    watchtower

    Watchtower automatically keeps your containers up to date: it periodically checks for new versions of the image your containers are using, downloads the new image, and restarts your containers using it. There are some risks associated with having your containers update automatically, but this gets into the ‘art’ of doing docker things. From what I’ve read, if stability is more important than newness, don’t use the ‘latest’ image tag. Use the image tagged with the specific version of the software you want; that way you won’t be surprised when a new ‘latest’ comes out, your configuration doesn’t work, and all of your sites go down (see: traefik v2 release). In some of my configurations I specify a version, in others I do not. I like to live on the edge, and I’m the only one who uses the services on my server. So far, no surprises... yet.
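
    The tag-pinning point, in compose terms (the version number is illustrative):

    ```yaml
    services:
      traefik:
        # image: traefik:latest  # convenient, but a major release can break you
        image: traefik:v2.1      # pinned: watchtower only pulls rebuilds of this tag
    ```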

    portainer

    Portainer is a web interface for managing docker hosts. It can manage images, containers, volumes, and networks, and it can show graphs of CPU, RAM, and network usage. Everything you would do with docker commands and docker-compose can be done through the Portainer interface. Currently I have Portainer running on my apartment server, but I’ve also exposed the docker API on my Dallas server (following this guide), so I can manage multiple docker hosts from a single interface. While it is possible to use Portainer to create multi-container services (stacks), I prefer to write docker-compose.yml files because they can be managed in version control.

    Conclusion

    Now that I am setting up services this way, there is no going back. Everything is tidier and easier to manage. Services are isolated in their own networks, and configurations are managed in version control. The difference is night and day. I had grown restless with my apartment server; it was doing everything I needed, but after a while I was too afraid to do anything with it because I’d forgotten how anything worked. This new combination of docker and version control has given me more confidence to manage my server. The problem of “how the hell did I do that” when troubleshooting an old service has largely been alleviated.

    I believe that I was able to become familiar with docker reasonably quickly because I’d already spent so much time doing things “the old way.” Docker provides a layer of abstraction over the usual administration of services. I was already familiar with how these services are supposed to be set up; I just needed to learn how to make docker do it. I already knew wordpress needs the PHP source, a webserver, something to process the PHP, and a database. That part wasn’t new; just the docker bits were. Previous knowledge accelerated the process immensely.

    If you haven’t taken the docker plunge, do it now. If you want to learn quickly, start by tearing down what you have so that you have to rebuild. Start with a simple service you use regularly. Take your time, bang on it, and eventually you’ll get it.


  • KDE Neon on ZFS

    Based on this excellent guide.

    WARNING: You’re going to be messing with hard drives, and you’ll probably be doing most of this as root. As the old adage goes, “The good thing about Linux is that it does exactly what you tell it to do. The bad thing about Linux is that it does exactly what you tell it to do.” I strongly recommend disconnecting/removing any drives you don’t want touched during this procedure, so you don’t inadvertently wipe a disk you don’t want wiped.

    Note: at the time of this writing, KDE Neon is based on Ubuntu 16.04 Xenial.

    The whole process looks something like this:

    • Boot into Xenial Desktop live environment
    • Install ZFS packages in live environment
    • Prepare array
    • Debootstrap base system into array
    • Chroot into array
    • Install the rest of KDE Neon
    • Tweak
    • Reboot into new KDE Neon system on ZFS


  • Chinese Input + KDE Neon


    Managed to get Chinese input on KDE Neon using fcitx. These are the packages I have installed:

    sudo apt install fcitx fcitx-bin fcitx-config-common fcitx-config-gtk fcitx-data fcitx-frontend-all fcitx-frontend-gtk2 fcitx-frontend-gtk3 fcitx-frontend-qt4 fcitx-frontend-qt5 fcitx-module-dbus fcitx-module-kimpanel fcitx-module-lua fcitx-module-x11 fcitx-modules fcitx-ui-classic im-config libfcitx-core0 libfcitx-gclient0 libfcitx-qt5-1 libgeoclue0 libgettextpo0 libjavascriptcoregtk-4.0-18 libpresage-data libpresage1v5 libtinyxml2.6.2v5 libwebkit2gtk-4.0-37 libwebkit2gtk-4.0-37-gtk2 presage zenity zenity-common fcitx-libpinyin

    To start at login:

    sudo cp /usr/share/fcitx/xdg/autostart/fcitx-autostart.desktop /etc/xdg/autostart/

    I was having an issue where the fcitx configurator window would pop up at login, which I did not want. I edited the Exec= line as shown, which seems to have alleviated the problem:

    ...
    #Exec=fcitx-autostart
    Exec=fcitx
    ...

    Not sure why fcitx-autostart also opens fcitx-configtool at login. If I close everything and run fcitx-autostart from xterm, it does start the little systray applet thing, but not the configurator. From what I can tell, it only opens the configurator when it runs at login.

    Fcitx Homepage

  • KDE Neon System Settings Workspace Theme Picker Missing

    sudo apt update
    sudo apt dist-upgrade
    sudo apt install qml-module-org-kde-kcm

    Source: https://muhdzamri.blogspot.tw/2018/01/problem-with-kde-look-and-feel-and.html

    More info:

    # apt show qml-module-org-kde-kcm
    Package: qml-module-org-kde-kcm
    Version: 5.42.0-0neon+16.04+xenial+build54
    Priority: optional
    Section: libs
    Source: kdeclarative
    Maintainer: Neon CI <[email protected]>
    Installed-Size: 63.5 kB
    Depends: libkf5declarative5 (>= 5.42.0-0neon+16.04+xenial+build54), qml-module-org-kde-kirigami2, libc6 (>= 2.14), libkf5quickaddons5, libqt5core5a (>= 5.9.3+dfsg), libqt5qml5 (>= 5.9.3), libstdc++6 (>= 4.1.1)
    Homepage: https://projects.kde.org/projects/frameworks/kdeclarative
    Download-Size: 15.6 kB
    APT-Manual-Installed: yes
    APT-Sources: http://archive.neon.kde.org/user xenial/main amd64 Packages
    Description: provides integration of QML and KDE Frameworks - kconfig
     This import contains KDE extras that are visually similar to Qt Quick
     Controls.
     .
     This package contains the QML files used by libkf5declarative.
  • Local git repo + ssh keys

    Handy little guide on setting up git repos locally.

    If you’ve got PasswordAuthentication set to no in your sshd_config, you’ll need to re-enable it temporarily so that ssh-copy-id git@localhost can copy the key as the user you normally use. Once the key is successfully copied, you will no longer be prompted for a password, and you can disable password authentication again.
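
    The local-repo half of this is just a bare repository that you clone over ssh. A sketch with throwaway paths (the git@localhost: form assumes the key setup described above):

    ```shell
    # On the server: create a bare repository to push and pull against
    workdir=$(mktemp -d)
    git init --bare "$workdir/myproject.git"

    # A plain local clone behaves the same way a clone over ssh would;
    # over ssh it would be: git clone git@localhost:/path/to/myproject.git
    git clone "$workdir/myproject.git" "$workdir/checkout"
    ```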

  • Apartment and keelung1.ddns.net Maintenance

    Geeked out pretty hard this weekend!  Bit of a convoluted process but I’m happy with the way things are going.  Laptop is free to run Windows for vidya.  Raspberry Pi 3 does my dinky Linux server stuff.

    Re-partitioned the drive in the Windows laptop to have a bit more room for the break-in-case-of-emergency Linux partition (which I still need to install).

    Bought a Raspberry Pi (and related party supplies) near GuangHua Tech Plaza in the underground market.  Split off everything but /boot onto a 500GB external HDD so as not to kill the MicroSD card.

    While the laptop is a less-than-ideal piece of gaming hardware, it is sufficient for my purposes, especially now that it doesn’t need to do Linux duties anymore.

    Still need to iron out Bluetooth connectivity with the PS4 controller.

  • Stuck on the last step of Openfire installation?

    update `ofUser` set plainPassword = 'admin', encryptedPassword = null where username = 'admin';

    See if this helps 😉

  • ELK stack made

    Following this guide, I set up an ELK stack in a VM. Currently it is only getting logs from my www server, but I will add others later. I also need to mess with grok.