Search results for: “feed”

  • Data Recovery | Hard Drive Failure Monitoring | Data Recovery and Rescue Tools

Recovering Data from a Dying Hard Drive. This is something that I do at least once a month. Is it really possible to recover data from a hard drive that is failing? It depends: on the state of the drive, and on what kind of tools you have at hand. Many people move their data over to USB drives, not realizing that those are just as likely to fail as their internal drive, and then they face the prospect of USB data recovery. The one thing better than hard drive crash data recovery is a good, well-implemented backup strategy.

The following suggestions and procedures work just as well for a desktop hard drive as for a notebook hard drive, or even a portable external hard drive.


    Hard Drive Failure Monitoring

To me this is the first line of defense against hard drive failure. If you can get even 12 hours' warning that a drive is about to fail, isn't that worthwhile? Absolutely. That's why I have set up many of the systems I administer with hard drive monitoring and the ability to send a scripted email when they "don't feel well."

I've found that if you can catch the failure early, using something like SmartMonTools to warn you when the drive is not healthy, you can usually replace a drive before it is too far gone. So, do I wait until the S.M.A.R.T. monitoring tells me that the drive is failing its health assessment? No, I have a fairly low threshold for tolerating S.M.A.R.T. errors. Even if it's a simple failure to complete the S.M.A.R.T. self-test, I will usually start planning for the drive replacement. On the Windows machines I administer, I tend to install blat along with smartmontools and configure smartd to email failure reports to me. Now, admittedly I see false alarms: one pending sector that can't be read, if it resolves itself in a few days, is probably not enough to make me replace a drive. If that number starts to increase, though, or if reports of ATA errors start to go up, then it's time to swap the drive.

So, that's what I use for early reporting. On my linux machines I always have a mailer installed so they can email me problem reports as well. It's worth configuring smartd to report trouble, and testing (at least once) that you can actually receive the warning messages. Learn a little bit about the types of error reports you may see and you'll get your own feel for which messages you should be concerned about.
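As a sketch of what that smartd monitoring setup can look like on a linux box (the device, email address and test schedule here are placeholders; smartd.conf lives elsewhere on Windows installs of smartmontools):

```
# /etc/smartd.conf - minimal monitoring sketch (values are examples)
# -a        monitor all SMART attributes and error logs
# -m        email this address when trouble is detected
# -M test   send one test mail at smartd startup so you know delivery works
# -s        schedule: short self-test daily at 2am, long test Saturdays at 3am
/dev/sda -a -m admin@example.com -M test -s (S/../.././02|L/../../6/03)
```

The -M test directive is the easy way to satisfy the "test at least once that you can receive the warning messages" advice above.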

    Cloning a Hard Drive

Sometimes you want a good clean image of a drive, like a snapshot in time. Having a snapshot of the working system after you've installed all of your software is a great restore solution to quickly get you back up to speed. It may not replace your data, but I highly encourage you to take a system snapshot of a newly provisioned machine. It's really easy to accomplish. I use Clonezilla for hard drive cloning. Here's what's great about Clonezilla:

1) It's open source. Freely available and redistributable, and you can download it for free.

2) It can clone disk to disk or disk to file. That means that if you want to just make a FILE on a memory stick or other storage device with the disk image, you can.

3) It supports many different network communication types. If you have an ftp server, samba server (windows file and print sharing) or ssh server on your network, you can use those to store your disk image.

4) It's fast if it understands the filesystem. If it's a supported filesystem (NTFS, Fat32, ReiserFS, Ext2, Ext3 and most other linux filesystems), it only copies the data; it doesn't bother cloning the empty space. This makes it fast and keeps the image files relatively small. If it can't recognize the filesystem it still works, it just reverts to a bit-for-bit copy of the disk.

I've seen 10 minute disk imaging OVER the network. I've also done an image to a USB thumb drive for a particular custom-built kiosk machine. It's possible to automate the process of restoring an image from a boot cd as well.
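For the curious, Clonezilla's menus ultimately drive a command-line tool, so a whole-disk save can be sketched roughly like this (the image name is a placeholder, and the exact flags vary between Clonezilla versions, so treat it as illustrative rather than gospel):

```
# run from a booted Clonezilla live environment
# savedisk = save the whole disk to an image; "my-backup" is a placeholder name
# sda is the source disk (no /dev/ prefix, per Clonezilla convention)
/usr/sbin/ocs-sr -q2 -j2 -z1 -p true savedisk my-backup sda
```

This is also what makes the automated restore-from-boot-cd idea mentioned above possible: the same tool can be scripted with restoredisk instead of savedisk.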

    Yes, but that’s before your disk fails, what if it’s failing?

    Recovering Data from a Sick Disk

If the disk is starting to give early warning signs of failure through something like SmartMonTools, or you see an increasing number of bad blocks in your chkdsk scans, my first tool of choice is still Clonezilla. I've had good luck with it if I can catch the failure of the drive early. If it fails to rescue your data, though, you will need to move to the next step.

    Recovering Data from a VERY Sick Disk

Ghost4Linux has been a somewhat controversial project. It was originally copied almost directly from Ghost4Unix without attribution (a big faux pas in the open source world, and most any other). However, it has since been handed off to a new developer and rewritten from scratch. I used to use the direct disk-to-disk copy function of Ghost4Linux for healthy disks, or those caught in the early stages of failure. That was before I discovered Clonezilla; now I reach for Ghost4Linux only when the disk is beyond Clonezilla's ability to copy. When a disk fails, there will often be parts of the disk that simply cannot be read, which can cause some copy processes to freeze or stop altogether. Under the Utilities section of Ghost4Linux, though, there is a tool called dd_rescue. dd_rescue essentially starts a full bit-for-bit copy of the old drive to the new one, but it expects to find places on the disk it can't read from. Instead of freezing or bailing out entirely, it skips to the end of the disk and starts working backwards. When it finds another part it can't read, it skips again to another area, continually narrowing down the unreadable region of the disk. Obviously, if there are parts of the disk that can't be read, there is a chance you're going to lose data, but I've had pretty good success with this method.
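If you'd rather run this outside of Ghost4Linux, GNU ddrescue (a close cousin of dd_rescue) implements the same skip-the-bad-spots-and-narrow-them-down strategy as a standalone tool. A two-pass rescue might be sketched like this (the device names are placeholders; be very careful to get source and destination the right way around):

```
# pass 1: -n skips the slow scraping of bad areas, grabbing the easy data first
ddrescue -f -n /dev/sdb /dev/sdc rescue.map
# pass 2: -d uses direct disk access, -r3 retries each bad sector 3 times
ddrescue -f -d -r3 /dev/sdb /dev/sdc rescue.map
```

The map file is what lets you stop and resume without re-reading areas that were already rescued, which matters a lot on a drive that's actively deteriorating.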

    In some cases I’ve had success at least getting the data to a stable drive where it can then be copied from without fear that it’s crumbling during every file copy process.

Cloning effects on Windows

Windows will likely still boot after a Clonezilla copy of a disk, provided it booted before the copy. A new hard drive is not something Windows "freaks out" about. (Put the drive in a different computer, though, and it will panic and cause all sorts of pain.) But if the drive had started to fail and taken data with it, causing either a Windows boot problem before the cloning or forcing you to use a tool like dd_rescue, then you will likely have to reinstall Windows if you want to boot from that disk.

If the challenge and expense, in time and money, of installing everything from scratch is too much and you just want to move the data to a new PC, you can of course use the tools above to move your data to a "stable" drive and then copy it over at your leisure. In that situation, you may need to run chkdsk on the copied data a few times to make sure the filesystem is in good shape.
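That chkdsk pass, from an elevated Windows Command Prompt, might look like this (E: is a placeholder for wherever the stable drive ends up mounted):

```
REM /f fixes filesystem errors; /r also locates bad sectors and recovers data
chkdsk E: /f /r
```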

    Lifeboat off a dying Hard Drive

In some cases I've had dd_rescue fail to complete a run too. I don't know why, and with a drive failing at a quick pace I haven't taken the time to figure it out. For this I usually punt to my absolute last chance: hook the drive up via a USB adapter and try to copy the data using a linux system.

Why linux? Because when I hook a USB drive up to a Windows machine, it scans the contents of the entire drive to figure out what options to give me (play music, view pictures, browse folders, etc.). I've seen one iffy drive pushed over the edge by this sudden burst of activity. Linux systems and boot cds don't do the same quick ransack of the disk contents, and I've had much better luck with them just connecting the drive and surgically retrieving a few data files.
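When I do this, the trick is to mount the sick drive read-only so nothing writes to it, then pull out only what's needed. A sketch (the device name, partition and paths are placeholders for your own):

```
# mount the failing drive read-only so no writes ever hit it
sudo mkdir -p /mnt/rescue
sudo mount -o ro,noatime /dev/sdb1 /mnt/rescue
# surgically copy just the important files; rsync can resume if interrupted
rsync -av --progress /mnt/rescue/Documents/ ~/rescued-files/
sudo umount /mnt/rescue
```

Mounting read-only is the whole point: it's the opposite of the Windows behavior described above, where the drive gets hammered before you've copied a single file.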

    Advanced Data Rescue Techniques

If it's still possible to read from the drive but the filesystem is trashed, you could try a hex editor to read the data directly from the drive and reconstruct files by hand. Fair warning: if that sounds like fun, you've probably also enjoyed reconstructing paper run through a crosscut shredder.

If the drive fails to spin up, it's sometimes possible to put it in the freezer overnight and get a last bit of life out of it the next day. (I've seen it work once or twice.) Make sure you take steps to seal up the drive so that humidity isn't a problem.

    Data Recovery Companies

If all the above fails and you're down to the last line of defense, it's really up to the data recovery companies. Everything above will probably cost you less than $400, even if you pay someone to try it for you. Data recovery companies can cost well over $1000, and many start around $2000, so they're out of reach for many people. If your data is worth that much, you don't have time to waste, and you have the money, then your best bet is to skip most of the above suggestions and get the drive to a data recovery company before more damage is done. Ask yourself whether it's worth it to use one of these data recovery services; if it's irreplaceable data, it may be.

Yes, every moment can count, and if the drive is failing quickly you might make their job easier (read: cheaper) if you can get it to them before it's been making that strange grinding noise for two weeks.


After you've got your system running again, make backing up your data a regular routine, so that the next time a hard drive fails it won't be a matter of great emergency to get something off of THAT drive. Yes, it's more convenient if you can recover the drive even when you have backups, but you'll at least have the peace of mind that if there's no way any data can come off of it, you still have a plan B.

    [xls]

  • BBPress

    BBPress is forum software written by the makers of WordPress. It can share user registration databases with wordpress, supports rss feeds and can be nice and lean. I’ve used it for a couple sites and will use this page to keep track of the official site, support forums, themes and extensions.

    bbpress.org – Official bbpress.org site.

    bbpress forums – Official forums at bbpress.org

    bbpress plugin directory – Official bbpress plugin directory

    bbshowcase forums – bbshowcase forums. This is a great source for ideas and help with bbpress and also a good resource for extensions or plugins for bbpress.

    Forum thread on post-notification plugin.

  • Firefox Extensions

One of the things I like most about the web browser Mozilla Firefox is the flexibility of using extensions to add features. Some of my current favorites are the Google Calendar extension, which shows your calendar events in the status bar, and the Gmail notifier. It's the kind of thing that saves me time in my daily routine by putting information in plain view without having to open a new tab, log in, and check. It helps me use my time better.

I've toyed around with a few ideas for my own firefox extensions and, who knows, this may be the page where I post them in the future. For now, though, I wanted to collect my references here on how to go about writing a firefox extension.

    From lifehacker: how to write a firefox extension. Good general overview with links to a few other resources. Includes a great tip to manage separate profiles to do your extension developing so you don’t trash your default profile. (Plan for the worst, hope for the best…)

This how-to is probably quite dated now that we're up to firefox 3.0, but it was one of the first "how to create a firefox extension" guides out there and may still have some use.

    Mozilla of course has a “building an extension” page. This really should be THE authoritative source…

    There is a great extension wizard online that will help you construct a skeleton for a new extension which should make getting a new one started a bit quicker than a manual construction of the folder/file structure.

There's also an extension developer's extension, which packages some handy tools for anyone writing firefox extensions.

    Here is a link to XUL Planet. XUL is the XML markup flavor that firefox uses to build the menu items and dialog boxes that you’ll see in the browser.

    Mozillazine’s extension development tutorials.

    Mozillazine also has a tutorial on getting started with extension development.

    Some people have put together their lists of top firefox extensions.

    [xls]

  • Openvpn

    I make use of openvpn almost on a daily basis when I’m out in the world and use my laptop to connect to the internet. I’ve done several projects related to openvpn which I’ll detail in this page.

    For starters:

openvpn.net and their howto. If you're not familiar with openvpn, it is an open source, cross-platform vpn implementation. I've had good success with it, and it's fairly easy to set up TLS authentication.
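For reference, a minimal TLS-mode client config looks something like this (the server hostname and file names are placeholders; the real directives are all covered in the howto above):

```
# client.ovpn - minimal TLS client sketch; hostname and files are placeholders
client
dev tun
proto udp
remote vpn.example.com 1194
# certificates and keys generated with easy-rsa or similar
ca ca.crt
cert client.crt
key client.key
# verify we're talking to a server certificate, not another client
remote-cert-tls server
persist-key
persist-tun
```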


    Update 6-17-10 ….

Big openvpn/dd-wrt project lately that has taken a lot of time, but it has solved an issue that I'm sure a lot of network admins have run into. When designing networks and looking to bridge offices with openvpn, network admins are advised to pick unique subnets, so that 192.168.1.1 in one office can route cleanly over the vpn to 192.168.2.1 in the other office. If both networks (or multiple networks) use 192.168.1.0/24, there is a network address collision: packets get lost and things don't work. Well, with the right setup it is possible to do NAT on the packets traveling over the vpn. Why? Let's say you're a client of this 192.168.1.0 office network and you're out at a wifi hotspot that also happens to use 192.168.1.0. You can't exactly make them change their addressing to avoid conflicts with your business network, and migrating an established business network can be a big task. Of course, you could start your network design by choosing a different subnet, and I've used that approach several times, but it's really just a matter of time until you stumble across someone else with the same subnet who needs to vpn into the network, and then you run into the hairy address conflict problem.

So, we've designed a box based on the dd-wrt openvpn edition. This box has a vpn "personality": a client key and configuration to connect to a server out on the internet (a linux vps is the hub of the wheel for our topology and our openvpn server). That server identifies the box by its certificate and gives it an address of 10.111.1.254. It also pushes a route to 10.111.2.0/24 with 10.111.2.254 as the gateway, and pushes 10.111.1.0/24 with a gateway of 10.111.1.254 to our second box, which is given the 10.111.2.254 address. On each device, in addition to the vpn personality, there is a special brew of firewall rules handling the packet rewriting, such that any device attached to either of our two vpn boxes is accessible from the other side, even though internally both sides can share the same 192.168.2.0/24 network. So each client has its own lan address (192.168.34.1, say) and its vpn address (10.111.1.1). This has worked well. It took a lot of time to design initially, but we've now rolled out two installs of it. (Not bad considering it's all done with ~$60 router hardware.) In the future I may provide more details on the setup here, because as I researched this I found NO ONE explaining step by step how to design this kind of setup. At this point the only negative with our setup is that two devices behind the same box will not see each other via their vpn addresses (10.111.1.1/10.111.1.2), only their lan addresses (192.168.34.1/192.168.34.2). This plan also allows for mobile vpn clients that aren't "behind the box"; they register in the 10.111.0.0/24 subnet, and everything is screened by the server so that anything in 10.111.0.0/16 is pingable from each vpn subnet.
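The "special brew of firewall rules" boils down to 1:1 subnet translation, which iptables can do with the NETMAP target. A sketch of the idea on one box (the interface name and subnets here are assumptions chosen to match the example addresses above, not our exact rules):

```
# traffic arriving over the vpn for this box's 10.111.1.0/24 range has its
# destination mapped 1:1 onto the real 192.168.1.0/24 lan behind the box
iptables -t nat -A PREROUTING -i tun0 -d 10.111.1.0/24 -j NETMAP --to 192.168.1.0/24
# replies leaving over the vpn get their source mapped back the other way,
# so the far side only ever sees the collision-free 10.111.x.x addresses
iptables -t nat -A POSTROUTING -o tun0 -s 192.168.1.0/24 -j NETMAP --to 10.111.1.0/24
```

NETMAP preserves the host part of the address, which is why a client at .1 on the lan shows up as .1 in its vpn subnet.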

    As I said, it’s been a big project and I may be detailing it here, but want to wait until all the dust settles on our setup.

    [xls]

  • WordPress

WordPress is the software I use for this and several other sites. It's traditionally considered blogging software, but it can be used as a CMS (Content Management System) as well. It's a powerful platform with a wide variety of publicly available themes and extensions to extend its functionality. It's also dead easy to use and makes it very easy to add content to your sites (as the thousands of posts/pages here are testament to). (Of course, it can't help you with writing your blog content....)

    In short if you’re wondering “how do I start blogging?” Then wordpress is a good starting point. In fact, wordpress makes it extremely easy to create a blog.

    I’ll use this page to get a round up of wordpress related links (main site/forums/themes/plugins).

    For starters, you can download and install wordpress yourself from WordPress.org.

Do you want to create wordpress templates? Then you might be interested in Artisteer, a tool for designing wordpress templates.

Calendar Plugin – this displays a calendar on a page and lets you insert events through the admin panel.

On wordpress.com they've integrated the polldaddy plugin into all wordpress.com blogs, which enables you to add polls and surveys from polldaddy.com (pretty nice looking, as polls go).

I've never been big on web site polls, mainly because if it's a poll on a controversial topic people can easily game it and tell all their friends to go and vote, but... that can have its advantages too... (traffic).


Here's one that took me a bit of searching: I wanted a page with a slightly different template file. I copied the template to a new name, made my changes, and thought that would be enough, but it wasn't. It still didn't show up in the drop-down list of templates on the page editing screen. So I found this:

    After you’ve done all that, go to the very top of the bio.php page and insert the following code above the call for the header:

    <?php
    /*
    Template Name: Bio
    */
    ?>

    Once you get this code at the top of the template, save your file and upload it to your server.

    From this site.

    WordPress Plugins

    About the same time I was looking to replace the built in wordpress search with Google’s custom search.

I found this good guide. That guide suggests the mightysearch plugin. Essentially, I made a new page for the search results (stripping out contextual ads to make room for google's "stuff") and put the search box code from google in the searchform "page" of my template. I then installed and configured mightysearch. (You need to add the search box and results form script in the correct places.) Then you can insert it easily in your template with the following:

<!--mightysearch--> – to call both the search form and search results on the page

<!--mightysearch_form--> – to call only the search form on the page

<!--mightysearch_result--> – to call only the search results on the page

I've also found a great list-subpages plugin. It's called Xavin's List Subpages. I don't know why, but many theme authors seem to devalue pages. I love pages because they exist outside the date-based post hierarchy, and I feel they have better visibility. I say many theme makers devalue pages because their templates only leave room for 3 or 4 pages before the theme breaks and requires a bit of surgery, or they limit the menu to only one level of pages. My problem with that has always been: how are people supposed to find the sub-pages?

So... what this plugin does is let you put a left bracket, xls, and a right bracket in any page, and it will automatically generate a list of sub-pages. This is a great timesaver for me, because I have many pages with many subpages, and each time I create a new sub-page I would have to go back and add a link into the parent page. This does it automatically, even if you forget, which is nice. I still like to put a link in the context of a paragraph in the parent page, or at least write out a description, but this way I can filter the page list to one or two levels deep and won't have an orphaned page in my layout (as long as I remember to insert the subpage code in each page). It is also possible to use code in the template to do the same thing, but I prefer it in the page content so that it's more easily found.

    Along the same lines, here is the homepage of the plugins author with more detail on the subpage plugin. Also, more info on the wp_list_pages function in wordpress. The sub pages plugin supports all of the options of wp_list_pages with one additional option.

    You might also be interested in this listing of top wordpress plugins.

    If you’re looking for How to Market Your Blog, you might be interested in some of my SEO Tools for traffic building.

    Utility Links:

    There’s a great walkthrough of moving your wordpress blog to a new domain that may come in handy someday.

    Of course there’s also this tutorial on the wordpress codex for restoring your database from a backup.

    And this page on backing up your wordpress database.

    Finally here is a guide for installing/setting up a wordpress database and database user using the mysql cli client (command line interface client.)
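The gist of that guide can be sketched in a few statements (the database name, user and password below are placeholders; pick your own):

```
# from a shell with the mysql cli client installed; you'll be prompted
# for the mysql root password
mysql -u root -p <<'SQL'
CREATE DATABASE wordpressdb;
CREATE USER 'wpuser'@'localhost' IDENTIFIED BY 'change-this-password';
GRANT ALL PRIVILEGES ON wordpressdb.* TO 'wpuser'@'localhost';
FLUSH PRIVILEGES;
SQL
# and a one-line backup for later (pairs with the backup links above)
mysqldump -u wpuser -p wordpressdb | gzip > wp-backup-$(date +%F).sql.gz
```

A dedicated user like this keeps wordpress walled off from any other databases on the server.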

    I also ran across this interesting WordPress Remix that makes wordpress look more like a standard cms than blog.

    If you’re looking for a wordpress magazine theme that link has quite a few highlighted that look fairly slick.

    If you’re looking to make money with adSense on your wordpress site you may want to look into a theme like bluesense.

    Displaying RSS in a WordPress post or page

There are a number of ways to draw from rss feeds and display their output in your wordpress site. Feedforall is a way to do that with any php page (including wordpress). There is a built-in method, though. Here's a good writeup on it. It's called the wp_rss function, and if I recall there is a newer method as well. (As I said, there are several ways.) fetch_rss is another method, and this writeup details the use of both.

    Autoblogging Plugins

    Autoblogging can be a controversial topic – but there are good uses for autoblogging plugins (autoblogging is not necessarily a scraper site.)

    wp-robot – cost varies depending on what features you are looking for.

    wp-o-matic – free plugin with a good range of features.

    [xls]

  • Virus, Spyware and Malware Removal Toolkit

Articles on this site from the rogue antivirus category:

    Virus Removal

The absolute BEST way to make certain that a system is clean is to wipe it fresh and reinstall Windows XP. Unfortunately, many people lose their Windows XP install cds, or never received one from the pc manufacturer (which is worth a several-page RANT all of its own!) So for them the options are: 1) clean up the existing install, 2) buy another windows xp license and disc, or 3) buy a new computer. Usually the cheapest option turns out to be #1.

Since I have to do a lot of virus cleanup, I've got a collection of favorite virus removal tools that I make use of. Why so many utilities? They each have their strengths. Antivirus software of course specializes in cleaning out viruses, but in recent years has been doing better at spyware removal too. Antispyware software usually has better spyware coverage, but the landscape is so diverse that no single product seems to be a bulletproof answer. Also, I like to think of it as getting two or three opinions on whether the system is clean. If I can get each of these tools to report the system clean, it's a pretty good bet that we're finally cleaned out.

    Virus Removal Tools

First off, an essential: a freely available antivirus download. If I'm working on a person's home computer, the first choice is usually the free AVG antivirus. If it's a business, I usually steer people toward the pay version of AVG. If they already have antivirus that they want to keep, that's fine, but it usually has to be reinstalled or renewed, because many pests disable or maim your existing antivirus. Version 8.0 of AVG does a good job identifying and removing the myriad trojans and "possibly unwanted programs" that do sneak installs, which previous antivirus versions didn't seem to blink an eye at.

    Spybot Search and Destroy – this is another good removal tool in the arsenal. This is good to cover the classic spy and adware programs. This includes many rogue antivirus and rogue anti-spyware programs.

Malwarebytes Anti-Malware is a nice recent addition to my toolkit for cleaning up systems. It seems able to pick up some things I've missed with spybot and/or AVG. In recent months this has become my virus removal tool of choice.

SuperAntiSpyware is another good malware and virus removal tool. They have a portable scanner which is saved under a new random filename each time it's downloaded and includes all their latest definitions. So, download it on a clean pc to a flash drive, boot into safe mode if necessary, and clean. SuperAntiSpyware also has a standard installable free edition available for download. If it were me, for a system cleanup I would opt for the portable edition, since the random filename makes it more likely to run on an infected machine.

For some specific bugs it can be handy to have a dedicated removal tool.

    CWShredder is the tool of choice to remove CoolWebSearch.

    To the surprise of many, the antivirus companies typically have standalone FREE virus removal tools for various viruses and baddies out there:

    Symantec FREE virus removal tools

    McAfee Free virus removal tools (this may not have been updated in a while.)

    They also have a removal tool called Stinger for a variety of bugs.

    Kaspersky has a raft of removal tools too for free download.

    As does Grisoft (AVG).


    As time allows I’ll be adding more of the handy virus removal utilities I use here. A good place to start for general system utilities though is the sysinternals utilities which are currently owned by Microsoft.

    Free Online Virus Scans

I've not typically been very enthused about "online virus scans" because of some fundamental drawbacks with them, but from time to time I make use of Trend Micro's HouseCall (an online java/web-based malware scanner) as a quick first or second opinion on a system's status.

    Here’s Panda Antivirus ActiveScan – only cleans out what it finds after registration. (And some things are not cleaned out by the online scan, but by the paid software.)

    Kaspersky Virusscanner – another good online scanner.

    F-secure has an online scanner.

    SuperAntiSpyware has a free home edition.

    Other Virus Removal Tools

    Other tools are SDFix and Combofix.

    Another useful tool for finding hidden registry entries and the like (possible rootkit activity) is RootkitRevealer.

    Finally, a very powerful tool for finding running processes:

    Process Explorer (link to sysinternals download.)

    Virus Removal Toolkit

Now, since many rogue antivirus or malware infestations will prevent you from downloading from the websites of legitimate security tools, you will want to build a toolkit in one of a number of ways. First, you may wish for a cd. For many years this was my favorite method: I could keep a folder on my desktop with the current versions of the security tools listed above (or whatever I was using most at the time), even script updates of them, and then burn a fresh cd to take out with me.

    The other option and probably the better choice today is the USB flash drive. They are cheap and most of these utilities are fairly small. For $20 you can get a 2GB memory stick to put all of your virus removal tools and even have room left to copy data off for forensic analysis (whether it’s log files or other suspicious files that your removal tools did not detect.)
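That "script updates of them" idea is simple enough to sketch. The download URLs below are placeholders (the real locations for these tools change often, so fill in your own):

```
#!/bin/sh
# refresh a folder of removal tools before burning a cd or loading a stick
TOOLDIR="$HOME/virus-toolkit"
mkdir -p "$TOOLDIR"
# wget -N re-downloads only when the remote copy is newer than ours
wget -N -P "$TOOLDIR" "https://example.com/downloads/stinger.exe"
wget -N -P "$TOOLDIR" "https://example.com/downloads/cwshredder.exe"
```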

    Subpages of this page that may have more detail on some of the tools listed:

    [xls]

    Virus and malware removal in the news:

  • WordPress Fancy Permalinks not working Giving 404 error

I spent the better part of a Friday night sorting this out. I had just launched 4 new wordpress blogs that were secondary installs on their servers. (I use a VPS; /home/domain/www/ was the primary wordpress install, and the secondary installs lived at /home/domain/www/secondsite.) The problem: I switched on the fancy permalinks in the wordpress control panel, and after that nothing worked but the main page. Setting things back to the default, I could see posts, but not the feed or anything else that relied on mod_rewrite.
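For reference while debugging this kind of symptom, the standard wordpress rewrite block for a subdirectory install looks like the following (the /secondsite/ path matches the example layout above; note that it only takes effect if Apache's AllowOverride permits .htaccess files):

```
# .htaccess in /home/domain/www/secondsite/
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /secondsite/
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /secondsite/index.php [L]
</IfModule>
```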

    (more…)

  • Universal Translator Plugin for WordPress

I've started using a "universal translator" plugin for wordpress across most of my sites lately. In the past I've seen many people using google or Babelfish translation to view pages (according to the logs). I thought it would be a great convenience, and it might open up my posts to searches by readers of other languages. I've studied Spanish for quite some time and previously studied German and French. Since the universal translator plugin is based on google's translation engine, I know it's not going to be perfect. But I did a bit of proofreading in the languages I could, and from what I could see it was more or less intelligible. (Obviously there were some problems, but I've seen MUCH worse translations of Japanese news stories into English.)

    (more…)

  • Ubuntu Linux

    I thought I’d forgo the idea of posting several separate posts and make just one long page to talk about the various Ubuntu installs that I have. I hope the details may be useful to anyone installing on similar hardware or with a similar setup.

    In the last week I’ve upgraded or installed Ubuntu 8.04 on 4 machines and wanted to give some overall impressions, success and frustrations. I will likely install 8.04 Hardy Heron on a few other machines before long and may update this to reflect their progress.

First off, I've installed various flavors of Windows over the years with varying levels of frustration and... occasional anguish. Likewise, I've installed (or tried to install) various flavors of linux over the years on a number of machines, going back to my first successful linux install, which was Caldera linux (2.2?)

    Anyway…

The first system up was my desktop machine. Originally this was a Windows 98 machine, before my switch to Mandrake Linux 8.0. When I got linux up and running I left windows behind on this system. I've upgraded the hardware two or three times since then, kept the data migrating forward, and had moved up to Mandriva 2007 as the main operating system. One of the things I've loved about linux is its ability to adapt. My hardware upgrades were typically: buy a new barebones case, shut down the old machine, plug the hard drive into the new one, boot up. Sometimes I'd have to figure out why networking hadn't come up and modprobe the driver for it, but usually it "just worked." (Try that with windows and check back with me in a couple weeks when you get it stabilized.)

Anyway, the new desktop motherboard had built-in SATA (which does not read 3.0Gb/s sata drives, only 1.5Gb/s), and I have a SATA add-on pci card (Rosewill, 4 port + 2 external; it seems the external ports are an either/or proposition with one of the internal pairs, but it's still a good controller card). I'd really been wanting to try software raid on the desktop (my server has been running software raid for a couple years), and this seemed like the perfect time. So, with two 400GB sata drives and the 8.04 alternate install cd, off I go. I configured the software raid array with little trouble, but the second of the two drives seemed to cause the system to freeze for a few minutes. I rebooted and tried the installer again. Same freeze when trying to add the second drive to the raid array. So, I pulled the drive out and tested it in another machine. The SMART test I started failed partway through, so I tried a disc wipe (which also failed partway through). I had ordered 3 identical drives, and unfortunately one of them was bad from the start. So, back to the ubuntu install. I decided to proceed with the software raid (RAID1, mirroring) on just one disc and add in the second when I get the RMA replacement drive. All went fantastically well. When I logged in, I was notified that some of my hardware (my NVIDIA card) could perform better with proprietary drivers; it downloaded and installed those with just a few clicks. Really a smooth install. The Compiz 3d "eye candy" is really nice on this machine. (Even some of the touches that aren't really 3d accelerated are nice for the overall look and feel.)

    So now all that’s left on this one is the continued migration of data from the old hard drive. I had ~10 years of collected (and scattered) data accumulated there. Not everything was neatly cordoned off in the /home/user tree. Also, I’m saving the old drive for the many configuration files that were there. (Openvpn settings for instance, etc.) Those I’m moving over “on demand.”

    Some other notes on the desktop. It’s running VMWare server for a few desktop operating system testboxes. (Windows XP/98 as well as a bsd or linux or two. I think I’ve got a few DOS VM’s there as well.) Vmware server had one issue after reinstalling from VMWare’s distribution package. There were errors trying to launch the vmware console. It wasn’t able to find a couple of files. Symbolic links solved the problem…

    sudo ln -sf /lib/libgcc_s.so.1 /usr/lib/vmware/lib/libgcc_s.so.1/libgcc_s.so.1
    sudo ln -sf /usr/lib/libpng12.so.0 /usr/lib/vmware/lib/libpng12.so.0/libpng12.so.0

    Next up is the Server. The server was my first Ubuntu install in the house and has been running Dapper Drake 6.06 (with upgrades) since then. It is running VMWare server and has a guest system that’s a time server and another guest that’s my mailserver. This Ubuntu machine has software raid (RAID1) in two arrays. The first array holds the filesystem itself and the virtual machines, and the second array is for file shares on the local network (audio/video/boot cd’s/etc./etc.) For this one I was going to be ambitious and try the upgrade through the update-manager.

    As far as upgrades of an operating system go… I’ve had some fair and some worse experiences. It seems there are always some problems to work out post install. All in all, this upgrade could have been disastrous – booting from the software raid device could have failed. It didn’t…. in fact everything went very smoothly with the exception of video acceleration. This one has an NVIDIA 6400 card – apparently not their top of the line card – that had previously been using NVidia’s distributable driver (because the precompiled one from Ubuntu for 6.06 had strange artifacts.) Before I go on – YES, it is a server, but I have the gui on it for occasional administration and to serve up the uber cool NOC look with Google Earth on one screen (with a weather radar overlay), Etherape on another, security web page on another….

    So… the restricted drivers manager made no mention of the NVIDIA card. I tried various iterations of installing ubuntu nvidia packages/the nvidia driver/etc. Finally I removed everything, installed NVidia’s own driver, and things worked. So… it wasn’t a perfect install, but considering all the hair pulling moments I’ve had in windows installs hunting for drivers for EVERY LAST piece of hardware to get it working…. this is not too bad at all. (I’ll note that video was working prior to the driver install, just not 3d accelerated video.) On this machine I also had to do a manual edit of /etc/X11/xorg.conf because although it detected the correct resolutions for the card and monitor, something in the config made a “virtual display” larger than the chosen actual display size… so the login box was not centered and I couldn’t see the options box. (This was after installing the NVidia driver – things seemed okay, if I recall, before I did that install.) But the quick edit of the file fixed the resolution/screen space issue and it was on to the VMWare install.

    Just as on the desktop, the VMware install went without a real hitch until trying to start the vmware console application, but the aforementioned soft links solved the issue and vmware was up and running. (The virtual machines were up and running through the entire upgrade process, by the way.)

    It’s funny, these two upgrades took place on days that I also had appointments out. So, one evening I finished up the desktop install, the next day I REMOTELY kick started the server upgrade via VNC, checking in from time to time to answer questions. That evening I finished up the details with the video card. I remember Windows upgrades/installs/rebuilds taking the better part of a weekend… certainly not a full weekend of constant attention, but lots of waiting, answering a question here and there and then lots of hunting for drivers.

    Anyway, next up was the laptop. This one I saved for the weekend because it’s the machine I use the most during the week. It had been running Gutsy 7.10 from Ubuntu (it has a Windows XP partition as well.) The laptop in question is an IBM Thinkpad T23 and I’m using a Belkin Wireless card. ( F5D7010 ) The upgrade took place wired, for speed. Most of the day was spent downloading packages; the install was an in-place update-manager upgrade and went all right. Post upgrade there was a bit of a problem with the wireless. Again the restricted drivers manager failed to recognize the possibility of helping this device out. (Strangely enough it seemed to on the original 7.10 install.) I installed b43-fwcutter and was able to get the firmware for the card downloaded and installed to make the broadcom card work. (b43 driver).

    Next up is a small Sony Vaio pcg-sr17k – a neat lightweight notebook, P3 700ish with about 256mb of memory – that was last running gutsy (7.10). (That was an update-manager upgrade from feisty 7.04.) This machine has no cdrom. The pc-card slot works for about 30 seconds at a time (overheating issue(?)) and it has usb 1, which I use for a wireless usb stick network adapter. For the original install of feisty I used a usb adapter for the notebook hard drive and had the drive attached to another machine. (Because it seems to lack the BIOS ability to boot from USB.) For these reasons I was compelled again to follow the update-manager route, and instead of the wireless usb network adapter I opted for a usb wired adapter (which gave impressive speeds btw.)

    I don’t recall if it was the sony vaio or the IBM thinkpad where the gui crashed during the upgrade process. I was able to log back in and restart the update-manager, though, so it was really a non-event. (It could have been frightening for a new user though.) This little laptop has never shut down properly (I’ve always been obligated to slide the power button to make it shut down), but I discovered that it may yet be possible to have shutdown and suspend/hibernate work. It appeared as though the kernel option acpi=off was being passed (maybe as a side effect of the way I originally installed to this hard drive?) I did a test boot with acpi=force and indeed I was able to hibernate/standby and even have it shut down properly. Now, it might be able to work without the acpi=force, but I got tired of testing boot options. The only real problem I have with this device is that the wireless usb stick I use doesn’t seem to be able to scan wireless networks until it is first told to connect to one. For example, there are three networks that typically show at one location I visit. When I browse networks with this device, nothing shows. I can tell it to “connect to other wireless network” and it will (but I have to enter the essid and the passphrase each time.) After it connects, then I can see all three wireless networks that I normally would. This stick is based on the prism chipset if I recall. (Netgear MA111) It’s not a great usb stick (just 802.11b), but it works. I sometimes use a different wireless stick on this one, based on the rt2500 chipset, but I’ve had trouble getting it to connect (with or without encryption) to access points running dd-wrt.

    Here are some other notes:


    WOL (wake on lan) I wanted to do some power saving on the desktop, so I’ve set it to go to sleep after a certain period of time. I do want to be able to access it from the network still, so I wanted to be able to do wake on lan. I enabled it in BIOS (I seem to recall it being called wake on PME in my BIOS – unfortunately the terminology varies and so does the implementation – it seems some believe in just allowing wake from standby/etc.). So…. I did that and then tested suspending and waking up from another machine. Here’s what’s needed on the other machine: wakeonlan or etherwake (might install both just in case… apt-get install wakeonlan etherwake)

    etherwake requires root privileges, but wakeonlan doesn’t. The simplest usage of wakeonlan is “wakeonlan TH:IS:IS:TH:EM:AC” – replace TH:IS:IS:TH:EM:AC with the mac address of the machine you’re trying to wake up. The only problem is that by default this didn’t wake up the machine out of sleep or standby (suspend) mode OR out of hibernate. I was able to wake it from a full shutdown state (I still wanted resume from suspend though.)

    To get to this point I had installed ethtool, which lets you set (in software) the wake on lan status of a network card. I followed instructions found at the ubuntu forums to create a file in /etc/init.d/ called wakeonlanconfig. It includes:

    #!/bin/bash
    ethtool -s eth0 wol g
    exit

    A very simple setup.
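    For the script to actually run at boot it also needs to be executable and hooked into the init sequence. Assuming it was saved as /etc/init.d/wakeonlanconfig as described, something like this should do it:

```shell
# make the init script executable
sudo chmod +x /etc/init.d/wakeonlanconfig
# register it to run at the default runlevels
sudo update-rc.d wakeonlanconfig defaults
```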

    I also edited /etc/init.d/halt and towards the end find the following line:

    halt -d -f -i $poweroff $hddown

    I removed the -i, which tells halt to shut down all the network interfaces; leaving it out keeps the interfaces up and running after a system halt (a must to make WOL work from a system shutdown.)

    So now it reads

    halt -d -f $poweroff $hddown

    Okay. So after that I was able to wake on LAN after a true shutdown, but not from sleep or hibernate.

    Here are the changes I made to make WOL from standby (S3) and Hibernate work.

    I edited one of the hibernate/sleep scripts:

    sudo pico /usr/lib/pm-utils/sleep.d/50modules

    and commented out (put a # in front of) the lines that say unload_network and invoke-rc.d networking restart. (This prevents networking from being shut down at suspend/standby/sleep or hibernate – and if we keep it from being shut down, we shouldn’t need to restart it.)
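    After the edit, the relevant part of 50modules presumably ends up looking something like this (the surrounding lines vary by version – the two directives are the ones quoted above):

```shell
# in /usr/lib/pm-utils/sleep.d/50modules -- commented out so networking
# stays up across suspend/hibernate (and so doesn't need restarting on resume)
#unload_network
#invoke-rc.d networking restart
```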

    I also took a look at /proc/acpi/wakeup

    Device S-state Status Sysfs node
    PS2K S4 disabled pnp:00:08
    PS2M S4 disabled pnp:00:09
    USB0 S4 disabled pci:0000:00:02.0
    MAC S5 disabled pci:0000:00:05.0
    USB1 S4 disabled pci:0000:00:02.1
    USB2 S4 disabled pci:0000:00:02.2
    P0P1 S4 disabled pci:0000:00:0e.0

    It turns out that MAC is the identifier for my network card (I used lspci and matched the PCI id)…. so now I sudo su and…

    echo MAC > /proc/acpi/wakeup
    which makes the file look like this…

    Device S-state Status Sysfs node
    PS2K S4 disabled pnp:00:08
    PS2M S4 disabled pnp:00:09
    USB0 S4 disabled pci:0000:00:02.0
    MAC S5 enabled pci:0000:00:05.0
    USB1 S4 disabled pci:0000:00:02.1
    USB2 S4 disabled pci:0000:00:02.2
    P0P1 S4 disabled pci:0000:00:0e.0

    So…. I put the system into suspend and once again try…

    wakeonlan my:ma:ca:dd:re:ss and voila…. it wakes up. I tested hibernate as well and that works too. The only change left is to add the magic enabling into the wakeonlanconfig script that runs at startup.

    (Not as easy as it could have been, but it is now working.)
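    Putting it together, the finished wakeonlanconfig script presumably ends up looking something like this. The eth0 interface and the “MAC” device name are from my machine – check /proc/acpi/wakeup for yours. One thing to watch: echoing the device name toggles its state, so I guard it to avoid flipping an already-enabled entry back to disabled:

```shell
#!/bin/bash
# Enable magic-packet wake in the NIC itself...
ethtool -s eth0 wol g
# ...and in ACPI. Echoing the name toggles the state, so only do it when disabled.
grep -q '^MAC.*disabled' /proc/acpi/wakeup && echo MAC > /proc/acpi/wakeup
exit 0
```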

    Standby/suspend/sleep/hibernate I’ve never really thought about the power saving modes that much before. I always remember treating standby or hibernate as if they didn’t exist and insisted on a shutdown if I was getting the laptop ready to move. (Yes I know, welcome to the 21st century, the dark ages must have been fascinating!) I’ve seen so many screwy problems from resuming from sleep or hibernation that I had long ago written them off as unreliable. (And I have tested them a few times since I moved to linux.) This time around, though, sleep and hibernate work. They work quite well too. I don’t know if it’s some of the extra work that Canonical has put into ubuntu 8.04 or if I should be thanking the kernel developers… but power saving modes and resumes now seem VERY reliable. The only unusual thing I’ve seen is a green artifact on the screen WHILE it’s resuming and when it starts to shutdown. (Like the video adapter needs to be re-initialized.) I’ve seen this now on two machines resuming from hibernate (both with the savage video driver.)

    Gnome – I’ve been a LONG time user of KDE. In fact I toyed with gnome when I first switched to linux, but I’ve used KDE since then. Again, I don’t know if it’s the team at canonical or Gnome’s developers that should get the credit, but it’s so well polished and integrated that I’ve been using gnome now on ALL of the machines. I still have KDE installed (and may take a look at KDE4.) I also have XFCE installed for when I want something with a bit leaner footprint.

    Power Management With ACPI now – I was surprised to see this kind of detail from my battery(!)

    $ cat /proc/acpi/battery/BAT0/info
    present: yes
    design capacity: 48000 mWh
    last full capacity: 37640 mWh
    battery technology: rechargeable
    design voltage: 10800 mV
    design capacity warning: 2400 mWh
    design capacity low: 480 mWh
    capacity granularity 1: 1 mWh
    capacity granularity 2: 1 mWh
    model number: IBM-02K7028
    serial number: 170
    battery type: LION
    OEM info: LG

    as well as ….

    $ cat /proc/acpi/battery/BAT0/state

    present: yes
    capacity state: ok
    charging state: charging
    present rate: 14716 mW
    remaining capacity: 23250 mWh
    present voltage: 12122 mV

    Now where it says last full capacity: 37640 mWh – it was actually lower at first, but I did an (almost) full drain cycle, then recharged and got about 10000 mWh back – which led me to some good battery maintenance research. (I LOVE the level of detailed information there though.)

    senao wireless card I’ve had a senao wireless card that is a longer range card for b/g wireless (prism54 chipset (?) – I think I recall it loading the prism54 driver.) ( SL-3054CB Plus ) The only problem I had was that I couldn’t access WPA networks. Today I successfully connected to a WPA network (but failed at a WPA2 network.) I may research this and try again, but it may just be a limitation of this card. (I’ll try to remember to update model numbers.)

    drivers in general The biggest problem I hear people talk about with linux is that it’s hard to find drivers for hardware. The only problems I’ve run into have typically been with NVIDIA video cards. I haven’t attempted to get fakeraid running (you know, those system boards that claim they have raid, but it turns out to be driver-assisted raid as opposed to true hardware raid.) All of the devices on my system boards always seem to be detected just fine. I notice I usually have bluetooth “stuff” loaded by default even on systems that don’t have the built in hardware. I have tested some of that with a bluetooth usb adapter I have though and it seems to work. Wireless cards I have (for one) needed to download the firmware using a utility made for it.

    Comparing this to the driver experience of a windows install is vastly different though. Recently I had a pair of windows laptops that someone wanted me to look at. They had a handful of dell disks and didn’t know which disk had come with which laptop. One laptop had received the typical windows flush and reinstall and needed drivers for EVERYTHING. It took some educated guessing going through the discs, but I finally found the right one. Going online to get drivers wasn’t an option because even the onboard lan hadn’t been detected and matched with a driver… so that was my first priority. I managed to get the LAN working and then worked through the other devices. It’s funny though because there were three devices that were mysteries – just unrecognized devices according to windows. I found a “ricoh driver” on the disk which turned out to be the driver for all three. They were some sort of media card reader. She told me the reason she had the system wiped and reinstalled was because of those three “unrecognized devices” to begin with. BTW, we also spent time trying to UPDATE the wireless driver due to unreliable behaviour from the wireless. (It would work one boot, but not after a reboot. It might work for a few minutes and then show no signal/etc.) That was a whopping 20MB download that was projected to take about a half hour… she was comfortable with completing the install when the download was finished. I told her what her options were if that didn’t solve the problem with the built in wireless (an add-on card) and she was content to proceed on her own.

    I just LOVE the fact that with linux, when I install/reinstall I don’t have to go hunting for 20 drivers here and there online, or scratch for an old driver disk to get the process started. I also love the fact that I don’t then have to dig out all my software reinstall disks, but that’s another story altogether.

    Updates In the windows world it sometimes drives me up the wall trying to keep up with software updates for people. I’ve hoped for YEARS for some sort of open source windows workalike to aptitude to come along. It shouldn’t be that hard a concept: create an RSS feed of the “latest release,” tag it as a security fix if that’s the case or as a new software version, and include the download link. Then have one application that acts as the aggregator and can manage the download links. Google updater is the closest FREE (as in cost) application that does some of this, with a limited number of applications. I’ve heard of others that are pay applications. See… that’s one thing about windows that will ALWAYS make it a challenge – installing software can be tedious, and installing software updates can be tedious. Under linux I have an icon that pops up in the system tray to notify me when there are updates; I click on it and it lists both the system security updates and the software application updates. Depending on my defined “sources” for software I may get notification of feature upgrades as well. (And it notifies me when a new release of ubuntu comes out, so IF I want to I could upgrade to it.) Then I type in my password and off it goes. Unless there are configuration file changes with the new version of software I don’t even have to click again. Oh and YES, there are other ways to install updates in ubuntu – command line/scripted/automatic – but don’t let that scare the beginner away, because there are nice gui ways to do it too.

    Software Raid vs. Hardware Raid

    I’ve always been a fan of things that are easy to replace in order to get back up and running. I hate tape drives for a number of reasons – one of which is that if the tape hardware is gone on an old one, you are hard pressed to find compatible replacements. That’s why for backups I prefer CD/DVD or removable USB drives of some sort… any sort as long as it’s not tape. I remember scsi hard drive controller cards – there were several different flavors, and if your card died, finding something that would just work was a challenge. You couldn’t go out to the nearest big box store and get a replacement; you had to look online and wait several days.

    Hardware raid – to me it is too fragile a situation for me to use. Let me put it this way – how compatible is one hardware raid approach with another? If the raid controller fails, will I be able to get my data off the disks? Your answer may be “it depends” – it probably depends on the brand of controller, the way THEY implemented raid, and the newness of the controller (to get an exact match.) I just see too many ways that can go wrong. Yes, the controller should last for years, but it’s inevitable that it will fail.

    Software raid – the linux kernel does very good software raid. The devices can be SATA/IDE/USB, it doesn’t matter (for RAID1 you just need your partition sizes to be equal.) So… what are the risks? Data corruption I think may be the biggest risk, and that’s a possibility with a hardware controller as well. System board fails? Move the drives to another machine and boot – the boot order may have changed, but that’s fixable, and your data is still readable. It just seems to me like I have more options in the case of a failure. So, for me (and RAID1), software raid seems like the best option. And, by the way, the thing that interests me most about raid is the potential for redundancy. (As opposed to large storage/speed improvements/etc.)
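    For keeping an eye on an array like this, a couple of quick checks go a long way (the array name /dev/md0 is my assumption – use whatever /proc/mdstat shows):

```shell
# [UU] in the output means both halves of the mirror are active;
# [U_] means the array is running degraded
cat /proc/mdstat

# more detail, including the state of each member disk and any failures
sudo mdadm --detail /dev/md0
```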

    T23 No Sound after Suspend – I just discovered that sound didn’t work on the IBM Thinkpad T23 after resume. According to this thread the problem is that when the thinkpad suspends to ram it mutes all the sound levels (the sound mixer doesn’t seem to reflect this when it powers back up though.) A fix… is found here in the form of a script that can be placed in your resume scripts directory and solved the issue for me.

    Netgear MA111 – revisited I had the opportunity to test this stick out on a fresh install of Hardy… it didn’t auto detect the internet – I did a search to find a suggestion for setup. I added the following to /etc/modprobe.d/aliases

    alias wlan0 prism2_usb

    on the next boot it came up just fine and was able to scan for access points. (I didn’t have to manually enter an ssid…) So, on my usual laptop for this stick I looked and found no such entry in modprobe.d/aliases, and I also found that I had installed linux-wlan-ng at some point. I uninstalled that, added the alias, rebooted, and wireless browsing works as it should. I wonder if it would have just worked if I had kept the stick in during a fresh install….

    command line distribution update

    I have a few virtual machine images that are running Ubuntu 6.06 with updates (so really 6.06.2 now)… and many of these are without a gui – cli only. I was pleased to see that there are fairly easy instructions. (I know it’s possible to change the apt/sources.list but…. ) Anyway, according to the Ubuntu wiki:

    If you run an Ubuntu server, you should use the new server upgrade system.

    1. Enable the “dapper-updates” repository.

    2. Install the new “update-manager-core” package – dependencies include python-apt, python-gnupginterface and python2.4-apt.

    3. Run “sudo do-release-upgrade -p” in a terminal window.

    The -p flag is needed because hardy is not being offered as an lts-upgrade until 8.04.1.

    Nice…. it took about an hour or so all told (discounting the time it was sitting waiting for me to answer a question – 2 hours if you count that.)
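    For reference, the three steps boil down to something like the following on the server. The sed line for enabling dapper-updates is just one hypothetical way to uncomment the entry in sources.list – editing the file by hand works just as well:

```shell
# 1. enable the dapper-updates repository (hypothetical sed; or edit by hand)
sudo sed -i 's/^# *\(deb .*dapper-updates\)/\1/' /etc/apt/sources.list
sudo apt-get update

# 2. install the upgrade tool
sudo apt-get install update-manager-core

# 3. kick off the release upgrade (-p because 8.04.1 isn't out yet)
sudo do-release-upgrade -p
```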



  • DD-WRT

    dd-wrt is an alternative firmware for a variety of routers. The most popular (and one of the first) that can be used with dd-wrt is the Linksys-Cisco WRT54GL Wireless-G Broadband Router.


    So, why would you want alternative firmware on your router? The stability can be improved, for one, but there are also a large number of features that you can enable and configure through alternative firmware. The device that was originally just a wireless router can become a repeater or a client bridge, can serve separate subnets from each port, can broadcast multiple SSIDs (and BSSIDs) with different encryption levels, and can act as a captive portal/hotspot system. These are just a few of the really interesting features that you can get from v24 of dd-wrt.

    Here’s another take on why you should consider an option like dd-wrt:

    I’ve been buying and installing consumer grade routers from Linksys, Dlink, netgear and others for about 10 years. Every new purchase usually means a new model number, a new version of proprietary firmware, and a new package deal of features, bugs and limitations.

    Every couple of years, the hardware gets more capable. In the consumer market, the hardware features and not the software differentiate one product from another. Within a feature set, these products are commodities. If Linksys is out of stock at officemax, then dlink or netgear is in. One is not preferred over another because of its software. Every manufacturer has bugs. If you read the product reviews, -every- manufacturer has reviews that read: “worked on the first try” – followed by “couldn’t make it work.” The criteria is not “who is best”. It is – “can I make this model work, FOR ME?” and, which manufacturer’s software has been the most annoying…lately?

    Here is the problem. EVERY manufacturer has bugs in their firmware, which are gradually reduced over time with upgrades and workarounds. Those firmware bugs ALWAYS RETURN in the next –hardware- generation. It is as though each hardware design commissions a blank slate rewrite of the proprietary operating system software. The user interface may have cosmetic consistency across the product family – but features that worked in the last generation of hardware have been seen to fail in the version -1- software of the replacement equipment.

    This is why I am ready to make the jump to Linux based open source routers for the next and foreseeable future generations. I will standardize on a router –software- platform. The platform will continue to mature over time, regardless of the hardware that it runs on. New features will be integrated into the stable code base as hardware becomes more powerful, and the configuration and management of these routers will (at last) be something that can be simplified.

    Ron Parker
    Operation Improvement Inc.

    Are you looking to use the Linksys WRT54GL with a dual lan connection? This page in the dd-wrt wiki explains how to setup dual wan with failover. This may well be my next dd-wrt and wrt54gl project!

    So far I have configured a number of linksys wrt54gl boxes with dd-wrt.

    1) Router/captive portal for a large wireless installation. Modified with scripts to monitor the devices status and send daily email reports. (Used v23sp2) (NOT serving wireless directly – just captive portal via the LAN ports. Other devices deal with the wireless. As many as 20-30 users/day maximum. (NOT WIRELESS – JUST ROUTER))
    2) Simple Encrypted (WPA) router with QOS bandwidth limitation on one port of the LAN switch (to cascade to a second open access point.) (Additional firewalling rules on selected LAN port as well to prevent access from 2nd open ap into protected wpa network.) (v23sp2) (3-5 users/day)
    3) Open captive portal for public use (nocatsplash) light use public hotspot (averages a couple visitors a day.) (v23sp2)
    4) Dual SSID broadcast (one WEP for legacy support/one WPA2(TKIP)). (V24)
    5) Repeater/client bridge for one WPA encrypted network (V24)
    6) Test bed duplicate of #1 above
    7) Spare on hand replacement of #1 above
    8) Openvpn server box to be placed directly on a network and serve up openvpn for the outside world. (So remote clients can vpn into the internal LAN.)

    One issue I’ve run into with dd-wrt running on the linksys wrt54gl is with the nocatsplash process. On the two boxes I have, the splash process occasionally dies, which leaves the box up and running but unable to handle new clients. (The splash process is a way of authenticating users in our situation – getting people to click on a terms/agreements acceptance.) So, we get sporadic reports of the internet being down when the splash process (splashd) has simply been overwhelmed. (Our best guess at this point is that this can happen when one machine opens up a gazillion internet connection attempts at once, but I don’t have any proof of that yet.)

    So, I scripted a way to monitor the nocatsplash manager once a minute and if it’s no longer running restart the splash process. That’s taken care of the biggest stability problem I’ve seen in the field with a dd-wrt box. (Both implementations I have in the field with nocatsplash are using dd-wrt v. 23-sp2 because at the time of the solution build nocatsplash was not working in newer releases (v. 24 was still in the RC process.))

    Anyway – here are some of the scripts I’ve used. On one system I have enough space on the jffs that I’ve been able to put the scripts there permanently, with the requisite path changes in the script below. In another installation I didn’t have /jffs space free, so I had to save it as a startup script. For what I’ve detailed here you can copy and paste into the web interface on the command page and then click save startup, or you can login via telnet/ssh and paste these after typing nvram set rc_startup="

    After pasting the text, type another " to close the quotes and press enter. Then you can type nvram get rc_startup to verify that everything copied as desired, and if so, type nvram commit. I have added several backslashes to “escape” certain things that I found I could not get to paste otherwise via ssh – instead of pasting directly they were substituted. So, when you verify what was pasted you should NOT see those backslashes \ …

    What the following does is create a /tmp/myscripts directory; then it echoes a bunch of text into a file we call monitor_splash that checks to see if there are 1 or more instances of splashd running. If there are 0 instances of the splashd process running, it restarts it. After writing the script, we need to make it executable. Then, we echo a line into our crontab to tell the scheduler to run this script every minute to check for the presence of splashd. Finally, we tidy things up with a restart of the cron service. It’s not much, but it’s saved us several service calls and several reboots of the box over the course of a few months.

    mkdir /tmp/myscripts
    /bin/echo '#!/bin/sh
    status=`/bin/ps | /bin/grep splashd | /bin/grep -v grep | /bin/wc -l`
    #echo $status

    if [ $status = 0 ];
    then
    /usr/sbin/splashd >> /tmp/nocat.log 2>&1 &
    else
    exit
    fi' >> /tmp/myscripts/monitor_splash
    /bin/chmod +x /tmp/myscripts/monitor_splash

    /bin/echo '* * * * * root /tmp/myscripts/monitor_splash' >> /tmp/crontab

    # restart cron daemon
    stopservice cron && startservice cron

    Here’s another of my dd-wrt recipes. I found much of the setup for this in the dd-wrt wiki, which is a MUST read if you’re setting up dd-wrt on a router. This recipe is a startup script that allows the router to act as an internal openvpn server. Basically you need to create all your keys on another machine (client, server and ca keys, as well as your secret and your dh key.) If all of this is making your eyes glaze over, you need to read about setting up openvpn with full blown certificate based encryption.

    Anyway, the box is given an ip address on the local network and a port is forwarded from the firewall. Clients can then connect in and browse the internet through the tunnel. So… to the internet they look as though they are on the network that the vpn box is on. (They also have an ip address in the local net.) Make sure you change addressing to your local subnet for this.

    192.168.100.9 is the address that the box is configured on, and I’ve used the vpn version of dd-wrt v. 24. The settings below could be changed to allow connections on port 53 UDP (the default is 1194), as that could be more likely to get past draconian firewalls when you’re out in the world. Another good idea is to change it to tcp-server and use port 443. You’ll need to make sure to forward the appropriate port on the firewall. I actually prefer using tcp port 443 because most firewalls would expect encrypted data to be going to port 443 (https). This is the setup I use when I have my laptop out at a hotspot. It puts me on my home network and allows me to browse the internet through my HOME connection while I’m out in the world.

    cd /tmp
    openvpn –mktun –dev tap0
    brctl addif br0 tap0
    ifconfig tap0 0.0.0.0 promisc up

    echo "
    # Tunnel options
    mode server # Set OpenVPN major mode
    ifconfig-pool 192.168.100.190 192.168.100.199 255.255.255.0
    push \"route-gateway 192.168.100.9\"
    push \"dhcp-option DNS 192.168.100.1\"
    proto udp # Setup the protocol (server)
    port 1194 # TCP/UDP port number
    dev tap0 # TUN/TAP virtual network device
    keepalive 15 60 # Simplify the expression of --ping
    daemon # Become a daemon after all initialization
    verb 3 # Set output verbosity to n
    comp-lzo # Use fast LZO compression

    # OpenVPN server mode options
    client-to-client # tells OpenVPN to internally route client-to-client traffic
    duplicate-cn # Allow multiple clients with the same common name

    # TLS Mode Options
    tls-server # Enable TLS and assume server role during TLS handshake
    ca ca.crt # Certificate authority (CA) file
    dh dh1024.pem # File containing Diffie Hellman parameters
    cert server.crt # Local peer's signed certificate
    key server.key # Local peer's private key
    tls-auth statickey 0 # a bit of extra security during the handshake -- clients substitute 1 for 0
    " > openvpn.conf

    echo "
    -----BEGIN CERTIFICATE-----
    this space is where the CA certificate should go.
    -----END CERTIFICATE-----
    " > ca.crt
    echo "
    -----BEGIN RSA PRIVATE KEY-----
    your private key for the server
    -----END RSA PRIVATE KEY-----
    " > server.key
    chmod 600 server.key
    echo "
    -----BEGIN CERTIFICATE-----
    here's where your server cert goes.
    -----END CERTIFICATE-----
    " > server.crt
    echo "
    -----BEGIN DH PARAMETERS-----
    insert the contents of your DH here...
    -----END DH PARAMETERS-----
    " > dh1024.pem
    echo "-----BEGIN OpenVPN Static key V1-----
    Insert your key here
    -----END OpenVPN Static key V1-----
    " > statickey
    sleep 5
    ln -s /usr/sbin/openvpn /tmp/myvpn
    /tmp/myvpn --config openvpn.conf

    The firewall requires a small change to make this work as well. Make sure to change this to reflect the correct destination port and protocol.

    /usr/sbin/iptables -I INPUT -p udp --dport 1194 -j ACCEPT
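    On the client side, a matching configuration would look roughly like the sketch below (the hostname is hypothetical; use your own public IP or dynamic-DNS name, and note that tls-auth takes direction 1 on clients where the server uses 0):

```shell
# client.ovpn -- sketch of a client config matching the server above
client
dev tap
proto udp
remote your.home.example.com 1194   # hypothetical; your forwarded public address
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert client1.crt
key client1.key
tls-auth statickey 1    # direction 1 on clients, 0 on the server
comp-lzo
verb 3
```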


    You can see that there is some overlap in these items. I haven’t used anything but the wrt54gl for this, but there are some devices with host USB support, which allows for much more interesting things (network storage, print serving, etc.). This page will be updated as time permits.

    Update 7-24-09 —-

    There is a remote exploitable root access vulnerability with dd-wrt – see here for details and workarounds/fixes.

    Update 6-17-10 ….

    Big openvpn/dd-wrt project lately that has taken a lot of time, but it has solved an issue that I’m sure a lot of network admins have run into. When bridging offices with openvpn, network admins are advised to pick unique subnets so that 192.168.1.1 in one office can route cleanly over the vpn to 192.168.2.1 in the other office. If both (or several) networks use 192.168.1.0/24, there is an address collision: packets get lost and things don’t work. Well, with the right setup it is possible to do NAT on the packets traveling over the vpn. Why bother? Let’s say you’re a client of this 192.168.1.0 office network and are out at a wifi hotspot that also happens to be 192.168.1.0. You can’t exactly make them change their addressing to avoid conflicts with your business network, and renumbering an established business network is a big task. Of course, you could start your network design by choosing a less common subnet, and I’ve used this approach several times, but it’s really just a matter of time until you stumble across someone else with the same subnet who needs to vpn into the network, and then you run into the hairy address-conflict problem.

    So, we’ve designed a box based on the dd-wrt openvpn edition. This box has a vpn “personality”: a client key and a configuration to connect to a server out on the internet (a linux vps is the hub of the wheel for our topology and our openvpn server). That server identifies the box by its certificate and gives it the address 10.111.1.254. It also pushes a route to 10.111.2.0/24 via gateway 10.111.2.254, and pushes a route to 10.111.1.0/24 via gateway 10.111.1.254 to our second box, which is given the address 10.111.2.254. On each device, in addition to the vpn personality, there is a special brew of firewall rules that handles the packet rewriting, so that any device attached to either of our two vpn boxes is accessible from the other side even though internally both sides can share the same 192.168.2.0/24 network. So each client has its own lan address (say 192.168.34.1) and its vpn address (10.111.1.1). This has worked well. It took a lot of time to design initially, but we’ve now rolled out two installs of it. (Not bad considering it’s all done with ~$60 router hardware.) In the future I may provide more details on the setup here, because as I researched this I found NO ONE explaining step by step how to design this kind of setup. At this point the only negative is that two devices behind the same box will see each other via their lan addresses (192.168.34.1/192.168.34.2), not their vpn addresses (10.111.1.1/10.111.1.2). This plan also allows for mobile vpn clients that aren’t “behind the box”: they register in the 10.111.0.0/24 subnet, and the server routes among the subnets so that anything in 10.111.0.0/16 is pingable from each vpn subnet.
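    The author hasn’t published the rule details, but one common way to implement this kind of 1:1 subnet rewriting on Linux is the iptables NETMAP target. A rough sketch, using illustrative addresses from the description above (the real “special brew” of rules may well differ):

```shell
# Sketch only: 1:1 NAT so the shared LAN subnet behind this box
# appears as its unique 10.111.1.0/24 subnet to the rest of the VPN.
# tap0 is assumed to be the VPN interface; addresses are illustrative.

# Outbound over the VPN: rewrite LAN source addresses into the unique subnet
iptables -t nat -A POSTROUTING -o tap0 -s 192.168.1.0/24 \
    -j NETMAP --to 10.111.1.0/24

# Inbound from the VPN: rewrite the unique subnet back to real LAN addresses
iptables -t nat -A PREROUTING -i tap0 -d 10.111.1.0/24 \
    -j NETMAP --to 192.168.1.0/24
```

    NETMAP maps host bits one-for-one between the two subnets, so 192.168.1.7 behind one box is reachable as 10.111.1.7 from everywhere else on the vpn, no matter how many sites share the same internal addressing.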

    As I said, it’s been a big project and I may be detailing it here, but want to wait until all the dust settles on our setup.
