[ ] DHCP reservation for X230 server (if not already configured)
[ ] DHCP reservation for Xperia Z3 Compact
[ ] SSH port forwarding for Z3C (find appropriate port)
[ ] Zabbix agent + logwatch + sendmail config for Z3C
[ ] ???
[ ] Profit!
Managed to get this set up in a VM very easily. I think it's really neat that it sets up so many different VPN protocols in one fell swoop. I also like the added touch of it fetching the clients for you, in the event you can't trust the publicly available clients in your country.
Managed to connect to L2TP/IPsec via Android.. once. Not sure what the deal is, but for some reason I wasn't able to connect again. Haven't attempted much troubleshooting, but likely will later. Did establish a connection from Kubuntu 16.10 via WireGuard. Stupid easy, that. Kind of amazed how simple the WireGuard connection is. Also having difficulty fully establishing a connection via OpenVPN: it gets super close to connecting, then fails. Again, haven't done much troubleshooting, but will do more later. UPDATE: OpenVPN works if you're not an idiot.
I get that they want this to be an easy community thing that is meant to be shared, but it would be nice (I realize this might be a tall order) if there were user management. From what I can see, it only makes one user per VPN protocol. Considering how much it sets up, I could see how it would be difficult to maintain users across that many different protocols/services, but again, it would be nice to share a hand-rolled VPN VPS among a group of friends.
Pondering setting up a pfSense box for the apartment. Would be interesting to get OpenVPN running on pfSense.. and use the OpenVPN client functionality in pfSense to connect to the VPN server. VPN daisy chain of sorts.
This was my first experience with Ansible. Curious to read through the Ansible code and try to re-create even one of the services by hand the way Streisand sets it up (on a Pi, maybe?).
Will update further as I test, but so far, super intriguing project.
Initially discovered via The impossible task of creating a “Best VPNs” list today
Source: GitHub – jlund/streisand
In this article, we will show you how to disable or prevent directory listing of files and folders on your Apache web server using .htaccess file.
Source: Disable Apache Web Directory Listing Using .htaccess File
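For reference, the core of that article is a one-liner. A minimal sketch, assuming the server config permits `AllowOverride Options` (or `All`) for the directory in question:

```apache
# .htaccess in the directory you want to protect
# Dropping the Indexes option makes Apache return 403 Forbidden
# instead of an auto-generated file listing
Options -Indexes
```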
Got my LSI 9207 delivered and installed today!
Here is the approximate sequence of events:
Curious to see if this will remedy the strange hangs my server has been experiencing lately.
The whole process took a handful of hours, but most of that was spent waiting on things (booting into live media, etc.). Very simple procedure. We'll see how things go from here!
In preparation for the arrival of my HBA, I’m creating a backup of my server. As things currently stand:
Filesystem      Size  Used Avail Use% Mounted on
udev             79G     0   79G   0% /dev
tmpfs            16G   86M   16G   1% /run
rpool/root      6.9T  1.4T  5.5T  21% /
tmpfs            79G   28K   79G   1% /dev/shm
tmpfs           5.0M  8.0K  5.0M   1% /run/lock
tmpfs            79G     0   79G   0% /sys/fs/cgroup
cgmfs           100K     0  100K   0% /run/cgmanager/fs
tmpfs            16G     0   16G   0% /run/user/1000
I'm using 1.4T. That's less than a formatted 2T hard drive. That'll definitely fit!
Except it doesn’t. Left the rsync running overnight, got to work today, and the drive was full at approximately 1.8T.
Why? Because apparently ZFS compression is doing its job..
That was a question I had about measuring disk usage with ZFS compression enabled. du output is (surprise) how much space is used on the disk, not how much data you actually have. In my case:
root@tnewman0:~# zfs get all rpool | grep compressratio
rpool  compressratio     1.17x  -
rpool  refcompressratio  1.00x  -
1.17 x 1498796032 kilobytes is 1753591357 kilobytes, or roughly 1.8T. Tight fit. I probably could have done a bit of slimming down and squeezed it in, but where's the fun in that?
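As a quick sanity check on that math (the used figure comes from the df output above, the ratio from `zfs get compressratio`):

```shell
# On-disk usage of rpool/root in KB, times the pool compressratio,
# gives the logical (uncompressed) data size rsync has to move
logical_kb=$(awk 'BEGIN { printf "%.0f", 1498796032 * 1.17 }')
echo "$logical_kb"   # 1753591357 KB, i.e. the ~1.8T the drive filled up at
```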
My solution:
root@tnewman0:~# zpool status
  pool: backup
 state: ONLINE
  scan: none requested
config:

	NAME                              STATE     READ WRITE CKSUM
	backup                            ONLINE       0     0     0
	  wwn-0x5000cca22de70c5e-part1    ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 2h15m with 0 errors on Mon Jan  9 22:03:44 2017
config:

	NAME                              STATE     READ WRITE CKSUM
	rpool                             ONLINE       0     0     0
	  mirror-0                        ONLINE       0     0     0
	    scsi-22f6baa1200d00000-part1  ONLINE       0     0     0
	    scsi-22f4b9a2e00d00000-part1  ONLINE       0     0     0
	  mirror-1                        ONLINE       0     0     0
	    scsi-22f4be2f000d00000-part1  ONLINE       0     0     0
	    scsi-22f5b32bc00d00000-part1  ONLINE       0     0     0
	  mirror-2                        ONLINE       0     0     0
	    scsi-22f5b92a900d00000-part1  ONLINE       0     0     0
	    scsi-22f5bc2a900d00000-part1  ONLINE       0     0     0
	  mirror-3                        ONLINE       0     0     0
	    scsi-22f6b1ee800d00000-part1  ONLINE       0     0     0
	    scsi-22f6b5eb900d00000-part1  ONLINE       0     0     0
	logs
	  mirror-4                        ONLINE       0     0     0
	    scsi-22f7b0a1900d00000        ONLINE       0     0     0
	    scsi-22f7b4a0d00d00000        ONLINE       0     0     0
	cache
	  scsi-22f7bda1b00d00000          ONLINE       0     0     0
	spares
	  scsi-22f4b4ac400d00000-part1    AVAIL

errors: No known data errors
Make a compression enabled pool on the external!
Aaaand now we wait for rsync to do its business..
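For the record, the setup was along these lines. This is a sketch, not the exact commands I ran: the device id comes from the backup pool in the zpool status above, and the rsync flags and exclude list are illustrative:

```shell
# Create a single-disk pool on the external drive and turn on
# compression before any data lands on it
zpool create backup /dev/disk/by-id/wwn-0x5000cca22de70c5e
zfs set compression=lz4 backup

# Copy the root filesystem over, preserving permissions, ACLs,
# and extended attributes, skipping pseudo-filesystems
rsync -aAXv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*"} / /backup/
```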
UPDATE: Interesting change in I/O wait time between filesystems. When going from ZFS pool to EXT4, the average I/O wait percentage is ~13.14%. When going from ZFS pool to ZFS pool, the I/O wait percentage is ~6.58%.
Didn’t take a screencap, but I now have XMPP and video chat functionality using Openfire with the Openfire Meetings plugin.
A few things I discovered.. During installation, you do need to create the database (MariaDB in my case) using:
mysql> CREATE DATABASE openfire CHARACTER SET utf8 COLLATE utf8_general_ci;
Simply 'create database openfire;' was not sufficient. The application could clearly connect to the database, but it couldn't finish the rest of its business setting things up.
Also, read what it says during the setup wizard. The last step has you create an administrative user; the fields are 'Email', 'Password', and 'Confirm Password' (or something like that). It then takes you to the login screen for the first time. The username is 'admin' and the password is whatever you supplied in that previous step. I had trouble logging in because I thought the username was my email. It isn't. It's 'admin'. Of course, you're free to create other administrative users once you're logged in.
The other bump I ran into had to do with SSL. I'm a fan of free, and Let's Encrypt works very well. I put in the key like I was supposed to, put in the full chain for the cert (left 'Password' blank since the key was created without one), and whacked save. But when I clicked on the newly added cert to view information about it (like you can with the two existing self-signed certs), it threw up a bunch of Java errors. systemctl restart openfire solved that problem.
Still need to do a bit more research into certs, though. When I installed the Let's Encrypt cert, there were three listed (Let's Encrypt + 2 self-signed), and I'm not sure exactly how to tell Openfire which cert to use. My low-tech solution was to remove the two self-signed certificates (after making a backup of the keystore and doing a test restoration) so that only my Let's Encrypt cert remained. systemctl restart openfire, and the Let's Encrypt cert was used for everything that wanted a cert, including the administrative web interface and XMPP (I checked the cert presented in Pidgin, and it was the same one the browser was seeing). I'm sure there's a way to tell Openfire, 'Of the certificates you have in your store, use X cert for Y service,' I'm just not sure where that is in the control panel.
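If you'd rather prune the keystore from the command line, keytool can do it. A sketch only: the keystore path varies by install (this assumes a Debian-style package layout), 'changeit' is the stock Java keystore password, and the alias name is hypothetical (take the real one from the -list output):

```shell
# Back up the keystore first, then inspect what's in it
cp /etc/openfire/security/keystore /etc/openfire/security/keystore.bak
keytool -list -v -keystore /etc/openfire/security/keystore -storepass changeit

# Remove a self-signed entry by its alias (alias is an example)
keytool -delete -alias example-self-signed -keystore /etc/openfire/security/keystore -storepass changeit
systemctl restart openfire
```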
UPDATE: confirmed Openfire Meetings can be used to transmit hand farts over the internet
For whatever reason, the maas-image-builder script for CentOS 7 does not like our local mirror. We'll investigate further. The problem is that it leaves the QEMU VM in a half-baked state: the process fails, but it doesn't clean up after itself. No matter, as we can just as easily do that ourselves.
root@tnewman3:~/maas-image-builder# ./bin/build centos amd64 --centos-edition 7
Formatting '/tmp/img-builder-3Eef5T/disk.img', fmt=raw size=5368709120
WARNING  KVM acceleration not available, using 'qemu'
ERROR    Guest name 'img-build-centos7-amd64' is already in use.
Traceback (most recent call last):
  File "./bin/build", line 45, in <module>
    sys.exit(main(args))
  File "/root/maas-image-builder/builder/main.py", line 108, in main
    osystem.build_image(args)
  File "/root/maas-image-builder/builder/osystems/centos.py", line 101, in build_image
    super(CentOS, self).build_image(params)
  File "/root/maas-image-builder/builder/osystems/__init__.py", line 158, in build_image
    extra_args=extra_arguments)
  File "/root/maas-image-builder/builder/virt.py", line 76, in install_location
    subprocess.check_call(args)
  File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '[u'virt-install', u'--name', u'img-build-centos7-amd64', u'--ram', u'2048', u'--arch', u'x86_64', u'--vcpus', u'1', u'--os-type', u'linux', u'--os-variant', u'rhel5', u'--disk', u'path=/tmp/img-builder-3Eef5T/disk.img,format=raw', u'--network', u'bridge=virbr0,model=virtio', u'--location', u'http://yum.tamu.edu/centos/7/os/x86_64', u'--initrd-inject=/root/maas-image-builder/contrib/centos/centos7/centos7-amd64.ks', u'--extra-args=console=ttyS0 inst.ks=file:/centos7-amd64.ks text inst.cmdline inst.headless', u'--noreboot', u'--nographics', u'--force']' returned non-zero exit status 1
root@tnewman3:~/maas-image-builder# virsh list --all
 Id    Name                           State
----------------------------------------------------
 4     img-build-centos7-amd64        running

root@tnewman3:~/maas-image-builder# virsh undefine img-build-centos7-amd64
Domain img-build-centos7-amd64 has been undefined

root@tnewman3:~/maas-image-builder# virsh list --all
 Id    Name                           State
----------------------------------------------------
 4     img-build-centos7-amd64        running

root@tnewman3:~/maas-image-builder# virsh destroy img-build-centos7-amd64
Domain img-build-centos7-amd64 destroyed

root@tnewman3:~/maas-image-builder# virsh list --all
 Id    Name                           State
----------------------------------------------------

root@tnewman3:~/maas-image-builder#
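In short, the cleanup boils down to two virsh commands. A sketch of the tidier order (destroy the running domain first, then remove its definition; undefining a still-running domain, as above, leaves it in the list until it stops):

```shell
# Stop the leftover half-built guest, then remove its libvirt definition
virsh destroy img-build-centos7-amd64
virsh undefine img-build-centos7-amd64
virsh list --all   # the guest should no longer appear
```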
Now we can set about the task of building the image once more.
Ubuntu recently announced support for other operating systems to be deployed with their Metal as a Service provisioning utility. The hosting company that I work for is primarily a CentOS and Windows shop, so I was curious to experiment with this utility. While it is still in a testing phase for us, this shows great promise as a replacement for our existing PXE server.
I just used MaaS to successfully install CentOS on one of the nodes in my test network about 30 minutes ago, so bear with me. This will likely be a working document.
My test setup consists of:
Installing and getting MaaS up and running is straightforward enough, and Ubuntu's documentation is sufficient. Initially, I tried to model my setup after the topology diagram on their site, with a 'Region Controller' managing satellite 'Cluster Controllers'. I may come back to that (my setup was torn down and rebuilt a few times during this whole process), but for our testing purposes, one MaaS server to start.
I did a couple of Ubuntu installs just to make sure all the vanilla MaaS functionality was working as it should. Again, for our purposes, an Ubuntu-only provisioning system would not be practical. The focus then shifted to how to use MaaS to provision CentOS.
I suspect that because this feature is relatively new, there is not a lot of documentation on this subject.
The kind folks over at #maas on freenode directed me to a script on GitHub. I was unable to use the script directly, but on close examination discovered something interesting:
the command 'sudo ./bin/build centos amd64 --centos-edition 7'
HMMMM. So I got to googling.. and found this: maas-image-builder. So I downloaded it and poked around a bit. From the README:
root@tnewman3:~/maas-image-builder# cat README
Automated building system for tarball images used by the curtin installer.
Supported Operating Systems:
– CentOS 6 (i386, amd64)
– CentOS 7 (amd64)
– RHEL 7 (amd64)
Well that seems really flipping promising. So, using that previously mentioned script as a template, I issued the following command to install the necessary dependencies:
./bin/build --install-deps
I let that complete, then for the magical part:
./bin/build centos amd64 --centos-edition 6
I should note that we doctored /maas-image-builder/builder/osystems/centos.py to use our own mirror at http://dist1.800hosting.com/centos/. This sped up the process considerably.
Basically, an image gets installed into a QEMU VM, a snapshot is taken, and that is the image that MaaS pushes to provision servers.
In the script, the author jjasghar suggests grabbing a coffee. On the hardware I was using, it took a very, very long time, though I can't say exactly how long, because I left the image to bake overnight. It was at two hours when I left for the day.
When I came back in the morning, I checked to see if the process had completed, and sure enough:
root@tnewman3:~/maas-image-builder# ls -al build-output/
total 540132
drwxr-xr-x 2 root root      4096 Feb 20 10:17 .
drwxr-xr-x 7 root root      4096 Feb 19 22:02 ..
-rw-r--r-- 1 root root 276538776 Feb 19 22:02 centos6-amd64-root-tgz
-rw-r--r-- 1 root root 276538776 Feb 20 10:17 centos6-amd64-root-tgz.bak
(I immediately made a backup of the image just to be safe)
Wonder of wonders, the process completed!
Referring back to centos.rb, the next step is to add the newly created image to the MaaS service itself. This was achieved by:
root@tnewman3:~# maas root boot-resources create name=centos/centos6 architecture=amd64/generic content@=/root/maas-image-builder/build-output/centos6-amd64-root-tgz
Again, this is slightly modified from the version in the centos.rb script, to use the MaaS user ('root') that I had created and to reflect our choice of CentOS 6 rather than CentOS 7.
From there, I went into the MaaS options in the web interface, specified 'CentOS' as the 'Default operating system used for deployment' and 'CentOS 6.0' as the 'Default OS release used for deployment', and went about commissioning and provisioning a server as usual!
I logged in with:
root@tnewman3:~# ssh [email protected]
No password is required, as it uses the SSH key that it would normally use for any other provision.
I hope that this write-up of my MaaS CentOS experience is helpful to others and I welcome feedback! I will update this post with tweaks and modifications as we tailor MaaS to fit into our existing workflow.
TL;DR – It is quite possible to provision CentOS with Ubuntu’s MaaS service.
Non-update Update 4-2-15:
When I was working on this, I got kind of consumed by it. Very intense work on it.. and then.. nuthin. I'm sure we will come back to this at some point. Thank you very much for all the interest in my post, but at least as of today, this post is not being maintained. I do hope that Ubuntu/Canonical adds some serious documentation as MaaS matures.