TCLP 2012-03-04 Combating Geek Fatigue

This is a feature cast, an episode of The Command Line Podcast.

Listener feedback this week is from Eric, Bryan and Will all in some way responding to the last feature about switching my wife back to Linux.

The hacker word of the week is “for free”.

The feature this week is a further discussion of a topic I’ve visited before, geek fatigue, this time focusing more on combating it, especially the form that arises from experiences like my return to Linux.


View the detailed show notes online. You can grab the FLAC-encoded audio from the Internet Archive.


This work is licensed under a Creative Commons Attribution-Share Alike 3.0 United States License.

2 Replies to “TCLP 2012-03-04 Combating Geek Fatigue”

  1. Tom,

    I know you were focused on geek fatigue in this episode, but you did focus a lot on system upgrades. It seems to me that there is an elephant in the room that you forgot to mention: virtualization.

    In brief, virtualization allows you to run one operating system inside another, giving it access to some subset of the overall shared hardware on a single machine. I ran Virtual PC “way back when” on my G4 & G5 Macs, and today I run VirtualBox, VMware, and Parallels Desktop on my Core i7-based Mac.

    Many of the geek fatigue issues resulting from system upgrades with Linux could be contained, and their effects on your critical path eliminated, if you used some method of virtualization. Simply put, if you virtualized your critical path system, you could make an exact copy of it and run the copy alongside your operational system. On one copy you could keep performing your critical path work (podcasting, paying bills, playing Angry Birds, etc.). On the other, you could begin down the path of upgrading various parts of your Linux OS, all the while taking notes. Then you could stop at any point you started to feel fatigue and set that virtual machine (VM) aside.

    Should you wreck the viability of a VM, you could simply trash it and re-clone your operational VM. This would save you quite a bit of time trying to get the trashed system operational again and thereby alleviate the source of some major geek fatigue.
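
    For example, with VirtualBox the whole clone-experiment-revert cycle can be driven from the command line with VBoxManage. This is only a sketch of the idea; the VM names are hypothetical:

        # Clone the known-good operational VM (while it is powered off)
        # and register the copy
        VBoxManage clonevm "critical-path" --name "upgrade-test" --register

        # Optionally pin the experimental clone to a single CPU so the
        # operational VM keeps the rest of the machine
        VBoxManage modifyvm "upgrade-test" --cpus 1

        # Boot the clone and experiment on it, taking notes as you go
        VBoxManage startvm "upgrade-test"

        # If the upgrade wrecks the clone, throw it away and re-clone
        VBoxManage unregistervm "upgrade-test" --delete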

    I understand that newer hardware virtualizes better than older hardware, but if I remember correctly, you have a dual G5 system. I expect you should be able to dedicate one entire processor to your operational VM while sharing the other with the host operating system and your experimental VM.

    This would work even better for someone running one of the newer Core i3, i5, or i7 processors. These machines can deliver 2, 4, or 8 very capable physical or logical processor cores for not a lot of money. Over a year ago I helped spec out a new desktop machine for a customer. We paid extra for a hefty graphics card. The systems had 8 GB of RAM, a 500 GB HDD, and a 2.4 GHz Core i7 processor with 4 physical cores and 8 logical threads. The total cost of each machine was around $1300.

    A quick look at the Dell Small Business website (the cheapest way to buy a Dell) shows an i7-equipped machine available for only $750.

    I understand some people don’t have the resources for newer machines like these, but even an older dual-core Core 2 Duo machine should provide sufficient power to make virtualization a real option.

    I’d love to hear your thoughts on how virtualization might help to constrain or even eliminate some geek fatigue.

    Sincerely,
    Paul Fischer
    Host, The Balticon Podcast

    1. This is certainly an interesting take and similar to a suggestion that another friend made, to use RAID for a similar purpose. The suggestion was to break a mirror before upgrading; then, if the upgrade goes badly, restore the array from the known good half as a quick way to revert.
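
      On Linux, that mirror-break idea amounts to something like the following with mdadm. Again, only a sketch; the array and partition names are hypothetical:

          # Detach one half of the RAID 1 mirror before upgrading
          mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

          # ...upgrade the system on the now-degraded array...

          # If the upgrade goes badly, the detached partition still holds
          # the pre-upgrade system to rebuild from; if it goes well,
          # re-add the partition and let the mirror resync
          mdadm /dev/md0 --add /dev/sdb1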

      The problem with these in general is that there is an incremental drain in the form of digging into LVM/RAID/etc. and virtualization. The tools for both are getting easier, but the learning curve is still a non-zero cost. It also requires forethought, like my simpler take of having a backup/restore plan.

      More specifically, though, I’d be incredibly hesitant to virtualize my critical path system since the key piece is my audio toolchain. It was hard enough to sort out the drivers, modules and configs on a bare metal OS. I have *no* idea how any of this would translate, if at all, to a VM. Worse, even with a kernel compartment approach, I fear the effect on the latency of my toolchain. There is more than just the processor involved here.

      For folks not doing media production or something else so hardware dependent, I think your suggestions are excellent, assuming hardware resources sufficient to run a VM at good enough speed.
