IT Infrastructure and Software Development from the Customer's Perspective
Way back in time, someone thought it would be a good idea for the Java run-time to cache DNS look-ups itself. Once it has an IP address for a name, it doesn’t look up the name again for the duration of the Java run-time process.
Fast forward a decade, and the Java run-time is the foundation of many web sites. It sits there running, and caches DNS lookups as long as the web site is up.
On my current project, we’re changing the IP address of every device we move, which is typical for a data centre relocation. We have a number of Java-based platforms, well integrated (read: interconnected) with the rest of our environment, and we’re finding we have to take an outage to restart the Java-based platforms far too often.
In hindsight, it would have been far simpler to change the Java property to disable DNS caching before the move. Run that way for a while in the old environment to be sure there are no issues (highly unlikely, but better safe than sorry). Then you can start moving and changing IPs of other devices, knowing your Java-based applications will automatically pick up the changes you make in DNS.
In case the link above goes stale, the four properties you want to look at are:
<pre style="background-color: white; font-family: 'Lucida Console', 'Courier New', Courier, monospace; font-size: 12px; line-height: 16px; overflow-x: auto; overflow-y: auto;">networkaddress.cache.ttl
networkaddress.cache.negative.ttl
sun.net.inetaddr.ttl
sun.net.inetaddr.negative.ttl
</pre>Look them up in your Java documentation and decide which caching option works best for you. (Normally I’d say how to set the parameters, but I’ve never done Java and I fear I’d say something wrong.)
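For what it’s worth, here’s a sketch of how these are usually set; verify it against your own Java documentation before relying on it. The `networkaddress.*` pair are security properties (set in the JRE’s `java.security` file), while the `sun.net.inetaddr.*` pair are system properties you can pass on the command line. The jar name below is a placeholder.

```shell
# Security properties: edit $JAVA_HOME/jre/lib/security/java.security
# and give the cache a finite TTL (in seconds) instead of caching forever:
#   networkaddress.cache.ttl=60
#   networkaddress.cache.negative.ttl=10

# Alternatively, the sun.net.* system properties can be passed at
# start-up (myapp.jar is a placeholder for your application):
java -Dsun.net.inetaddr.ttl=60 -Dsun.net.inetaddr.negative.ttl=10 -jar myapp.jar
```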
In 2006, I was project manager on a VMware implementation for a health care organization. We virtualized 200 servers in six weeks, after a planning phase of about 2 months. Out of that experience I wondered, “Did virtualization have anything to offer a smaller business?” So I set up a box at home and converted my home “data centre” into a virtualized data centre using VMware’s Server product, which was the free product at the time.
After five years it’s been an interesting experience and I’ve learned a lot. At the end of the day, I’m pretty convinced that the small business with a few servers running in a closet in their office doesn’t have a lot to gain from virtualizing within the “closet”. (I’m still a big fan of virtualization in a medium or large organization.) I’m going to switch back to running all the basic services (backup, file sharing, DNS, DHCP, NTP) on a single server image.
I had one experience where the VM approach benefited me: As newer desktops and laptops came into the house, the version of the backup client installed on them by default was newer than the backup master on my backup server (I use Bacula). Rather than play around with installing and updating different versions of the backup client or master, I simply upgraded the backup master VM to a new version of Ubuntu and got the newer version of Bacula. I didn’t have to worry about what other parts of my infrastructure I was going to affect by doing the upgrade.
The downside was that I spent a lot of time fooling around with VMware to make it work. Most kernel upgrades required a recompile of the VMware tools on each VM, which was a pain. I also spent a fair bit of time working through a timekeeping issue between the guests and the VMware host that periodically caused my VMs to slow to a crawl.
Connecting to the web management interface and console plug-in always seemed to be a bit of a black art, and it got worse over time. At the moment, I still don’t think modern versions of Firefox can connect to a running VM’s console, so I have to keep an old version around for when I need to do something with a VM’s console (before ssh comes up).
My set-up wasn’t very robust in the face of power failures. When the power went off, the VMs would leave their lock files behind. Then, when the power came back, the physical machine would restart but the VMs wouldn’t. I would have to go in by hand and clean up the lock files. And often I wouldn’t even know there’d been a power failure, so I’d waste time trying to figure out what was wrong. I should have had a UPS, but that wouldn’t have solved all the cases where something crashed and left a lock file behind.
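Had I stuck with it, the by-hand cleanup could at least have been scripted. A rough sketch; the VM directory path is an assumption, so point it at wherever your .vmx files actually live:

```shell
#!/bin/sh
# Stale lock files left behind by a crash or power failure keep
# VMware Server from restarting the VMs automatically.
# VMDIR is an assumed location; adjust it to your VM directory.
VMDIR="${VMDIR:-/var/lib/vmware/Virtual Machines}"

# VMware leaves *.lck files (sometimes directories) next to the
# .vmx/.vmdk files; remove them before restarting the VMs.
find "$VMDIR" -name '*.lck' -exec rm -rf {} +
```

Run from a boot-time script (after checking no VMs are actually running), this would have saved the manual poking around after every power failure.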
All in all, and even if I had automated some of that, the extra level of complexity didn’t buy me anything. In fact, it cost me a lot of time.
Some of these problems would have been solved by using the ESX family of VMware products, but the license fees guarantee that the economics don’t work for a small business.
I originally started out planning to give Xen a try, but it turned out not to work with the current (at the time) version of Ubuntu. Today I would try KVM. I played around with it a bit last year and it looked fine for a server VM platform. I needed better USB support, so I switched to VirtualBox. VirtualBox worked fine for me to run the Windows XP VM I used to need to run my accounting program, but it has the free version/enterprise version split that makes me uncomfortable for business use.
So my next home IT project will be to move everything back to a simpler, non-virtualized platform. I’ll still keep virtualization around for my sandbox. It’s been great to be able to spin up a VM to run, say, an instance of Drupal to test upgrades before rolling them out to my web site, or to try out WordPress, or anything else I need to try.
My blog posts about interesting steps along the virtualization road are here.
I’m not a network expert by any stretch of the imagination, but I’ve occasionally solved problems by poking around a bit with Wireshark.
Of course, if my network is down I’m not going to be able to download Wireshark. Fortunately, I remembered to re-install Wireshark on my new computer before I needed it. I installed it using the Ubuntu Software Centre.
A new feature of Wireshark that I didn’t know about: If you add yourself to the “wireshark” group, you can do live captures without running Wireshark as root.
sudo adduser --disabled-login --disabled-password wireshark
sudo chgrp wireshark /usr/bin/dumpcap
sudo chmod 754 /usr/bin/dumpcap
sudo setcap 'CAP_NET_RAW+eip CAP_NET_ADMIN+eip' /usr/bin/dumpcap
Now add yourself to the wireshark group and log out. When you log back in you should be able to do live captures without root privileges. To add yourself to the wireshark group in a terminal, type:
sudo adduser your-user-name wireshark
The Wireshark documentation for this is here (scroll down a bit).
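A quick way to confirm the setup took; note that the group membership only shows up after you log back in, and `getcap` comes from the libcap tools (package `libcap2-bin` on Ubuntu):

```shell
# Show the capabilities granted to dumpcap by the setcap step above:
getcap /usr/bin/dumpcap

# Check whether the current session is in the wireshark group:
if id -nG | grep -qw wireshark; then
    echo "in wireshark group"
else
    echo "not in wireshark group (log out and back in?)"
fi
```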
My new laptop had a way-too-sensitive touchpad. So much so that I installed Touchpad Indicator so I could turn it off. Interestingly, I couldn’t use its “turn off touchpad when mouse plugged in” feature, because it seemed to always think the mouse was plugged in.
That led me to realize that I also didn’t have the touchpad tab in the mouse control panel. Googling, I found that this was a common problem with ALPS touchpads, like the one I had.
The fix is here: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/550625/comments/492. It’s an updated driver that lets you get at the touchpad controls in the mouse control panel. Download the .deb file, then double-click it and wait a bit for the Software Centre to run. Click install, enter your password, wait, then restart, and you’ll have the touchpad tab in the mouse control panel. On that tab, you can turn off mouse clicks while typing, and suddenly typing isn’t a pain.
I have to resist the urge to rant a bit. I bought an Ubuntu-certified laptop. This is the sort of pissing around fixing that I was hoping to avoid. Sigh!