Tuesday, December 07, 2010

tcpdump

I much prefer wireshark because, well, I forget command line options. But today I had to run tcpdump because the server didn't have X and I would have been beaten up by our good sysadmins if I'd installed wireshark and all its dependencies (and maybe vnc) for a very short session :-).

So I read the fine manual and ended up with:

# -n: no name resolution; -p: no promiscuous mode; -w: write raw capture to
# tcp.log; -c 100: stop after 100 packets; -s 1024: capture 1024 bytes/packet
tcpdump -i eth3 -n -p -w tcp.log -c 100 -s 1024 'dst port 12345'

After which, I just scp'ed the tcp.log file and started it up in wireshark with "wireshark tcp.log".

No doubt I'll need to learn more tcpdump filter syntax when I need to do that again :-). Fortunately, that was sufficient for the immediate need.
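
For next time, a few filter combinations I'm likely to want (hosts, nets and ports here are just examples):

tcpdump -i eth3 -n 'host 10.0.0.5 and tcp port 80'
tcpdump -i eth3 -n 'src net 10.0.0.0/24 and not port 22'
tcpdump -i eth3 -n 'icmp or arp'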

Tuesday, November 30, 2010

Building a deb from installed files

I needed to create a deb of a currently installed package. I didn't have the original deb anymore but needed it so that I could use it for rolling back an upgrade in case of problems.

a google search for "create deb from installed" immediately pointed me at dpkg-repack


sudo apt-get install dpkg-repack
sudo dpkg-repack packagename


creates the deb file in the current directory. I don't know how complete that is, but it's certainly better than the nothing I had before.
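
For the rollback itself, the repacked deb installs like any other (name and version here are placeholders):

sudo dpkg -i packagename_1.0-1_i386.deb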

Monday, November 22, 2010

Finally suspend

My Toshiba A75 laptop has long had problems with suspend and switch-user. Before I clean-installed Maverick, those problems were likely due to configuration confusion accumulated over dist-upgrades.

The system was a lot cleaner and more stable after the Maverick fresh install, but there was one last niggling bug. Most of the time switch-user would work, and often suspend and restore would work, but when they failed, dmesg would show "atiixp: codec reset timeout" or another atiixp error and the laptop would then hang. I'd have to turn it off and on (I didn't try the magic SysRq keys since I'd never used them and just plain didn't know how to use them :-).

I finally got around to trying noacpi (edit /etc/default/grub and set GRUB_CMDLINE_LINUX="noacpi") and from testing tonight it looks very stable. I've restored from suspend several times and switched users many more times.
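
For completeness, the change only takes effect after regenerating the grub config, so the whole dance is:

sudo vi /etc/default/grub    # set GRUB_CMDLINE_LINUX="noacpi"
sudo update-grub             # regenerate /boot/grub/grub.cfg
sudo reboot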

Previously I thought it was failing because I was doing something with video or audio, so I viewed a video, switched to another user and suspended while that other user. Restore worked fine.

Over a week or two of use we'll know whether noacpi is a good fix or whether its working now is just a fluke :-).

Monday, November 08, 2010

Installing XPI from downloaded file

I was at a client site the other week and I was stumped. They have a locked down corporate environment and I couldn't get a newly installed firefox to download the Selenium IDE from the download site.

It was possible to get the files on another machine (that had network access), but not from firefox itself.

The weekend passed and on Monday I realized how simple the solution was: just download the xpi files on the other machine, copy them to a USB drive or over the network to the locked down machine, and then load them via a file:// URL.

Tested on Linux and got that working. Didn't get to test on the actual Windows target since, in the meantime, the sysadmins gave us enough information to allow firefox to browse outward. But file:// is an easy way to install xpi files, if the browser can't get out to the world to download them directly.
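
Concretely, something like this (URL and paths illustrative only):

# on the machine with network access:
wget -O selenium-ide.xpi 'http://example.com/path/to/selenium-ide.xpi'
# copy it over, then on the locked down machine open
#   file:///home/me/selenium-ide.xpi
# in firefox, which offers to install the extension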

Saturday, November 06, 2010

Why You should be using virtualisation

Why You should be using virtualisation resonates with me: even though I run Ubuntu (a debian-based linux) and the staging and production servers are Debian, there are still compatibility issues (if only because debian packages move slowly, so the Ubuntu packages are much newer than those on the deployment servers).

Fortunately, since I only work on Linux servers, I can run vservers, even different versions of Debian [but using the same vserver kernel as the host]. That's a lot less memory intensive than running full virtualization environments (my preference there is VirtualBox, but that's just because I haven't gotten around to testing Xen). I may test LXC on my home computers, but I'll stick with Vservers on my work dev machine since I already have a procedure for building the classes of vservers that we use at work.
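
Building one of those guests is roughly (names, context id and addresses invented; debootstrap is the method I'd reach for):

sudo vserver devguest build -m debootstrap \
    --context 42 --hostname devguest \
    --interface eth0:192.168.1.42/24 \
    -- -d lenny
sudo vserver devguest start
sudo vserver devguest enter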

Saturday, October 30, 2010

libavcodec-unstripped

I have a shell script that takes jpg and videos from various cameras I've used and converts them (via ffmpeg) to standard sizes/compression ratios/formats.

Must remember though (I was reminded when it stopped working after a from-scratch install of Ubuntu Maverick) that I need to use libavcodec-unstripped*. Otherwise I get errors about Unknown encoder 'libxvid'.
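
The exact package name carries a version suffix that changes between releases, so I search first (the -52 below is whatever apt-cache turns up, not a constant):

apt-cache search libavcodec-unstripped
sudo apt-get install libavcodec-unstripped-52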

Monday, October 25, 2010

Remastering Ubuntu Live flashdrive image

I use live USB flashdrive images for installing Ubuntu because I've got old laptops and their DVD drives are sometimes flaky.

One of our laptops is also sufficiently old that the fans don't do a great job anymore. When it runs at top speed (3.3 GHz), the laptop halts within two minutes or so because the fans can't keep the hardware cool enough and it's not smart enough to slow down the CPU. Linux also doesn't have built-in drivers for the hardware, so it can't detect that the CPU is running too hot and dial down the CPU frequency.

I needed to install Ubuntu Maverick from scratch because there were enough bogosities in the old configuration (and I have neither the time nor the talent to understand it all well enough to untangle it).

This page shows how to remaster an Ubuntu 7.10 image. Fortunately, the process still works for Ubuntu 10.10.
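
From memory, the core of the process looks like this (filenames assumed from a stock 10.10 image; the linked page has the real details):

mkdir extract
sudo mount -o loop ubuntu-10.10-desktop-i386.iso /mnt
sudo rsync -a /mnt/ extract/
sudo unsquashfs -d squash extract/casper/filesystem.squashfs
sudo chroot squash /bin/bash    # make customizations inside the chroot
sudo mksquashfs squash extract/casper/filesystem.squashfs -noappend
# regenerate extract/md5sum.txt, then copy extract/ onto the flashdrive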

There were more customizations I could have done (e.g., installing the omnibook module so that I'd have CPU temp monitoring available [not used by the kernel to slow down the CPU though, just informational]), and certainly I could have at least removed nano :-). But that's more easily done once the OS is running on the target. And I only have one laptop like that. If I had 3 or 5 then putting the customizations in the live flashdrive would have been worthwhile.

Friday, September 24, 2010

yahoo.com -- sucking

Yahoo.com is seriously sucking for me right now. It's not a yahoo-wide problem. It affects me and maybe some small percentage of other yahoo users.

I can log in to mail.yahoo.com. Signing in to www.yahoo.com doesn't work: after the login, I get redirected back to the yahoo site but I'm still not logged in.

Pidgin can't auth to yahoo when I'm connecting from New Zealand. However, if I use a socks proxy in the Philippines, I *can* login to pidgin.

I thought maybe the email reading issue was an NZ issue. It isn't though. Via the socks proxy in the Philippines I connected to us.yahoo.com and modified my setup so that I wouldn't be auto-redirected to the NZ yahoo pages. That works when I'm connected via the proxy (I see the US yahoo page). However, when I browse to yahoo from NZ, I still get redirected to Xtra. That's a stupidity (although I think not on Xtra's side; it's a yahoo bogosity).

When I read my email via the socks proxy, the URL indicates that I'm going to a U.S. server, but I receive the same (well, very similar) error message:

"Sorry for the inconvenience.
You've stumbled upon an unexpected, temporary problem. Performing your action again in a few moments will likely resolve the problem completely. If not, we suggest you try re-launching Yahoo! Mail.
If the problem persists, feel free to contact Customer Care about Error Code 1.

Thanks,
The Yahoo! Mail Team".

The only difference I can see between that and the error message when I surf from New Zealand (no socks proxy) is that the NZ page says "the Yahoo!Xtra Mail team".

So it's a yahoo issue, not an NZ issue.

I realized, after seeing that error often enough, that gmail was pulling my yahoo email and that I had changed my yahoo password recently. I changed the yahoo password stored in gmail but yahoo is still whining. I'm guessing that yahoo got confused because there were so many bad password logins from gmail (and from a continent different from what I had told yahoo was my location). I do wonder though how that is supposed to get resolved. Will the error clear itself out eventually? Will it clear itself out before I completely give up on yahoo (abandoning friends who only know my yahoo address and whom I can't ask to change their addresses for me since, well, I can't get into my contacts list either)?

*dumbasses*

Monday, September 13, 2010

Selenium RC with generated PHP tests


  1. If you don't have it already, sudo apt-get install phpunit
  2. start the selenium server (at minimum: java -jar selenium-server.jar)
  3. Generate the PHP testcase from the IDE.
  4. rename the class from Example to whatever you're testing.
  5. rename the file to be the same as the classname plus .php
  6. In the class, add a __construct() which calls $this->setUp() and $this->start()
  7. After the class is defined, instantiate the class and call its testcase method.


There may be syntax errors in the generated code; the code generation is not perfect. Fix those.
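
With all that done, a run is just (class and file names are whatever you renamed them to):

java -jar selenium-server.jar &    # leave the server running
php MyPageTest.php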

Tuesday, August 10, 2010

disabling the firefox new addon notification

I use firefox for running selenium-RC, and when it starts I want to suppress the notification that new addons have been installed.

mzfuser says to: "Go to about:config, create a new boolean value "extensions.newAddons", and set it to false."

That works for me (firefox 3.6 Ubuntu).
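
If the selenium profile is a template directory on disk, the same preference can be baked in instead of set by hand (profile path assumed):

echo 'user_pref("extensions.newAddons", false);' >> /path/to/template-profile/user.js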

Monday, July 26, 2010

JMeter Drupal Proxy URLS to exclude

I often use jmeter to load test drupal websites. One of the first things I need to do is capture a sample browsing session over the site using the jmeter proxy.

When I'm capturing a sample browsing session I usually don't want to grab all the embedded files, since that makes for a very large set of http client requests in the thread group. At this point I want the thread group to contain just the top-level URLs I actually clicked on, but I want the individual entries to have "Retrieve All Embedded Resources" checked.

That will increase the CPU load on the jmeter instances at runtime (they need to parse the downloaded file to extract the resources). I'm happy to make that trade for now. If it becomes a problem I'll adjust to have the embedded resources pre-extracted at proxy capture time but for most jmeter jobs I've done I haven't had to worry about test time CPU load much.

I always forget what the URL exclude patterns should look like. This is posted so I'll find it later.

Drupal sometimes adds GET parameters to URLs even for "static" resources such as css or png files. I haven't gone through to figure out which resources can have GET parameters added to them; instead, when excluding embedded/static resources I just treat them all the same:

.*\.gif(\?.*|)
.*\.jpg(\?.*|)
.*\.png(\?.*|)
.*\.css(\?.*|)
.*\.js(\?.*|)

etc.

Thursday, July 22, 2010

CTEs for clarity (no efficiency gain here)

Some messages are sent to two kannels. I've got the essential data in a postgresql table, but I wanted to find the messages which were sent to both kannels within 5 seconds of each other (most such duplicated messages are sent within the same second, or within 1 second of each other).

The query could have been done without CTEs (using subqueries) but I prefer the CTEs since they move the subqueries "out" of the select statement, making the select much easier to read.

/* set up the CTEs. They're not really "common" except in the sense that they're the same statement; I'm just using them as *table*expressions* :-) */
WITH lhs AS
(
select id,kannel,tstamp,dest,msg_text from decmtmo WHERE mt_mo='mt'
), rhs as
(
select id,kannel,tstamp,dest,msg_text from decmtmo WHERE mt_mo='mt'
)
SELECT lhs.id lid,rhs.id rid,abs(extract('epoch' from lhs.tstamp-rhs.tstamp)),
lhs.kannel lk, rhs.kannel rk, rhs.dest,trim(rhs.msg_text )
FROM lhs,rhs /* this is what improved, otherwise we'd have the subselects here */
WHERE lhs.id<>rhs.id /* make sure we don't look at the same row on both sides */
AND lhs.dest=rhs.dest AND lhs.msg_text=rhs.msg_text /* MT identity */
AND lhs.kannel<>rhs.kannel /* but different kannels */
AND lhs.id>rhs.id /* avoid showing two copies of the same row, with lhs and
rhs swapped */
AND 5 > abs(extract('epoch' from lhs.tstamp-rhs.tstamp))
/* within 5 seconds of each other */
ORDER by lhs.id,rhs.id

Friday, June 25, 2010

Tonido kernel with NAT (and no su to non-root user)

My "could not su to non-root user" problem with building a kernel with NAT support on the tonidoplug is solved.

On the tonido support forums (requires login, but I'm posting the link here anyway) aleinss helpfully pointed at Logging into tonido as a non root user.

Apparently, with 2.6.31 kernels and later, /proc/sys/vm/mmap_min_addr needs to be 32768 (instead of the previous 65536).

I tested it with sudo echo "32768" > /proc/sys/vm/mmap_min_addr but that didn't work; the redirection is done by my unprivileged shell, not under sudo, so nothing was ever written. The lasting fix was to edit /etc/sysctl.d/10-process-security.conf and change the vm.mmap_min_addr line to say

vm.mmap_min_addr = 32768
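
For an immediate, no-reboot change, either of these works:

echo 32768 | sudo tee /proc/sys/vm/mmap_min_addr
sudo sysctl -w vm.mmap_min_addr=32768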


Many thanks to aleinss for pointing that out.

cssh feature wishlist -- clicking on one cssh window brings all related cssh windows to the front

It might be possible to do this already (I've customized .csshrc a *little* bit, mainly just to set the default window sizes and locations), but, as in the title, what I'd really like is a toggle so that when I click on one of a set of related cssh windows, all of them (including the window into which I type commands to be executed on all the related servers) come to the front.

Either that or another two monitors.

Or three monitors and a computer that can support four monitors altogether :-).

Tuesday, June 22, 2010

php file handle GC and flock

I was confused for a bit because I had code similar to this (details elided):


function myFunc() {
    $h = fopen(MYLOCKFILE, "r");   // MYLOCKFILE: a constant defined elsewhere
    return flock($h, LOCK_EX);     // note: $h is never returned or stored
}


And when I would call it and sleep (e.g., myFunc(); sleep(300);) and then run the same program in another shell, the second shell wasn't blocking at the flock call.

strace showed an flock(4, LOCK_UN) being called in the first running instance. Apparently, since I don't return the handle or assign it to anything that outlives the function, php decides that $h can be GCed immediately upon function return. That closes the file and releases the lock, so the second instance wouldn't block since there was no lock left to block on.

Wednesday, June 16, 2010

Toshiba Satellite A75 temperature control -- Maybe

I've had a problem forever with the Toshiba Satellite A75. It's got a 3.3GHz CPU in there, but I could only ever run it at one of the two lowest speeds (1.8GHz, 2.1GHz), because at anything faster (even with the ondemand governor keeping it mostly at 1.8GHz), if the CPU ran too long at high speed the kernel wouldn't notice the heat and couldn't speed the fans up.

The whole machine is old too, so I wouldn't be surprised if the fans just aren't working too well anymore.

I just found the omnibook kernel module project though. And after

git clone [the omnibook repo]
make
sudo make install
sudo modprobe omnibook ectype=12


It loads correctly and cat /proc/omnibook/temperature says 56C. And once or twice I heard the fans spin up faster (they're on all the time these days). So I'm testing (with my maximum CPU speed set to 3.3GHz, but still ondemand).

If the machine is stable this way, I'll scale the maximum down to 2.4GHz or a bit higher. It'll be good to be able to do useful things at a reasonable speed again on this machine; 1.8GHz was getting so old :-).
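
Capping the speed is the usual cpufreq sysfs knob (per-cpu paths; the 2.4GHz ceiling shown):

echo 2400000 | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq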

Sunday, June 13, 2010

Transmission blocklists

I thought I'd setup some blocklists for Transmission. After some googling and looking at this and that blocklist, I decided to go full paranoid and used a whole bunch of blocklists from IBlockList.

The blocklists are gleaned from other tools (Bluetack, PeerGuardian, etc).

I don't care too much about performance (there's a warning on IBlocklist that using too many rules will affect broadband performance :-), so I decided to just install a whole bunch of (possibly redundant) lists.

To get the blocklists and install them in transmission-daemon's blocklists directory (on my machine, ~/transmission/blocklists) I use (not yet in cron, will be soon):


#!/bin/bash

cd ~/transmission/blocklists

URLS="http://list.iblocklist.com/?list=bt_level1 http://list.iblocklist.com/?list=bt_level2 http://list.iblocklist.com/?list=bt_level3 http://list.iblocklist.com/?list=bt_edu http://list.iblocklist.com/?list=bt_rangetest http://list.iblocklist.com/?list=bt_bogon http://list.iblocklist.com/?list=bt_ads http://list.iblocklist.com/?list=bt_spyware http://list.iblocklist.com/?list=bt_proxy http://list.iblocklist.com/?list=bt_templist http://list.iblocklist.com/?list=bt_microsoft http://list.iblocklist.com/?list=bt_spider http://list.iblocklist.com/?list=bt_hijacked http://list.iblocklist.com/?list=bt_dshield http://list.iblocklist.com/?list=bcoepfyewziejvcqyhqo http://list.iblocklist.com/?list=cslpybexmxyuacbyuvib http://list.iblocklist.com/?list=pwqnlynprfgtjbgqoizj http://list.iblocklist.com/?list=ijfqtofzixtwayqovmxn http://list.iblocklist.com/?list=ecqbsykllnadihkdirsh http://list.iblocklist.com/?list=jcjfaxgyyshvdbceroxf http://list.iblocklist.com/?list=lljggjrpmefcwqknpalp http://list.iblocklist.com/?list=nxs23_ipfilterx http://list.iblocklist.com/?list=soe http://list.iblocklist.com/?list=ccp"

# wget flags: -t 10 retries, -c resume partial downloads, -w 10 seconds between
# retrievals, -nd no directories, --ignore-length ignore bogus Content-Length,
# -N only fetch if newer than the local copy
for u in $URLS
do
wget -t 10 -c --limit-rate=128k -w 10 -nd --ignore-length -N "$u"
done
# decompress once, after the loop; inside the loop gzip errored on
# passes where no new *.gz files existed
gzip -d *.gz


Unfortunately transmission-daemon doesn't notice new blocklists added while it's running, so I also have a separate script to restart transmission-daemon (not in cron yet either, since I'm just playing around with this stuff for now :-). I haven't tested kill -HUP yet.
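
The restart is nothing fancy (stock Ubuntu init script name):

sudo /etc/init.d/transmission-daemon restart
# the gentler alternative, untested so far:
# killall -HUP transmission-daemon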

Tuesday, June 08, 2010

Tonidoplug kernel with NAT

I bought a Tonido plug computer and have been playing with it at home. I want it to be a dnsmasq, squid, openvpn and ssh server. It'll also do some other things, but those are the main things I'll run on it. I don't need the tonido software running there (although that may change if the people at home need to support themselves instead of me setting everything up via the command line).

I'm very happy with it since it's so much faster and easier to work with than my NSLU2 (which has 1/10th the CPU frequency and 1/16th the RAM). There was one problem though: I couldn't load the NAT modules. After some investigation it turned out that the kernel doesn't have routing configured and is missing a whole bunch of modules that Tonido (or sheeva, it isn't clear which) decided they didn't need to provide.

Fortunately, I'm booting from a USB drive, and it's very easy to make a bootable drive. If I make a mistake and make the USB drive unbootable, I can just extract the rootfs and modules tarballs back onto the drive (before or after mkfs, according to taste) and it'll be bootable again. I would never try to modify the kernel (or even install modules) on the NAND since I don't want to risk bricking the plugcomputer. Although I did do a bunch of sudo apt-get [packages] on the NAND before I realized what I was doing and stopped :-).

Mikestaszel suggested building the module and copying it over, to get ppp working. Taking that hint, I downloaded the source for the kernel I was using and after some misadventures due to forgetting techniques from long ago, I finally got the modules I needed built and installed.

The tonido runs the 2.6.30-rc6 kernel, so I downloaded 2.6.30.6 from kernel.org and used the config file for this kernel from sheeva.with-linux.com. My first try at building didn't work because of "bad magic" errors. After some googling I realized/remembered that I needed to modify the kernel Makefile so that EXTRAVERSION would match the running kernel's, so EXTRAVERSION=-rc6.
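
That is, one line at the top of the Makefile, e.g.:

sed -i 's/^EXTRAVERSION.*/EXTRAVERSION = -rc6/' Makefile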

A second try at building got me closer, but it still didn't work. The bad magic error was gone, but some symbols were missing.

I didn't particularly want to build the kernel itself since I'd hoped that just building and installing relevant modules would be sufficient. Unfortunately, NAT requires CONFIG_IP_ADVANCED_ROUTER, and that can't be built as a module. So there was no way around it, I'd have to build a kernel.

After the kernel was configured and built along with the modules I needed (make menuconfig;make;make modules), I needed to make a uImage (google pointed me at this generate uImage for sheevaplug page). That required:
sudo apt-get install uboot-mkimage
make uImage
sudo cp arch/arm/boot/uImage /boot
sudo make modules_install
sudo reboot


modprobe iptable_nat finally succeeded and some testing proved that the plugcomputer was working correctly as a NAT router.
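
The test was the usual masquerading setup (eth0 standing in for whatever the plug's upstream interface is):

sudo modprobe iptable_nat
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE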

-- UPDATE --
When I installed and rebooted with the new kernel, I found myself unable to run processes as a regular user. The processes would be killed immediately. I can't see how it would have been a problem with how I built the kernel, since all I did was allow advanced router features and NAT/MASQUERADE. But there it is. I don't mind running as root on the tonidoplug since everything I do there needs sudo anyway, but I've switched back to using the NSLU2 for now so I can play with the tonidoplug, building kernels, rebooting at will and possibly eventually getting this latest problem fixed :-).

-- UPDATE 2010-06-22 --
I'm wrong. I *do* mind running everything as root on the tonidoplug. I don't mind running openvpn or sshd as root, but I don't want to run squid or transmission-daemon as root since any successful remote attack instantly gets root privileges.

Friday, May 14, 2010

Orca on Ubuntu Lucid (10.04)

My brother-in-law is blind, so I've been interested in linux accessibility for a long time. Not interested (or talented) enough to actually improve accessibility, but interested enough to keep an eye on the matter.

Long ago, I couldn't get Festival or Orca to work at all on my laptops, mainly because of hardware support issues. One particular problem had to do with the software requiring the audio card to allow sampling at a rate that was twice what my audio card could do.

I just tested Orca on Lucid though and it's looking very good. Just enabling Orca took all of 5 seconds. I was a little confused since some things worked (firefox and the Orca preferences) and others didn't (gedit, gnome-terminal running man). Logging out and logging back in fixed that. I suppose just enabling Orca but not restarting didn't allow Orca to get its hooks deep enough into Gnome so it could intercept X11 display and keyboard/mouse events.

It took a few retries and an hour and a half to get a reasonable set of Orca flat-view keybindings that didn't conflict with the regular gnome keybindings. I like using the Windows key (Super or Super_L) as a command key for Orca since it isn't otherwise used in Linux, exists on all new keyboards and is convenient. I don't much like Orca using the Caps-Lock key for that. Using the Windows key would be a problem if Orca ran on Windows, but as far as I can tell (from the Orca website) it doesn't run on Windows.

I was a little confused that Orca had firefox-specific keybindings, but they probably had to implement that to have similar behavior as JAWS (the dominant windows screen reader, and therefore the dominant screen reader in the world).

So Orca has some generic keybindings for general flat-view and other functionality. It can have app-specific keybindings. And it's scriptable (says the web page, although I haven't looked at what scripts might look like or how powerful they are).

It's also been pretty stable (tested on three laptops, all of which are pretty old). The only instability I saw happened when trying to close the Orca program via the GUI. Gnome and X hung so completely I had to go to a terminal and kill/restart gdm.

That's no big deal though since blind people would normally *always* have Orca on. And when I killed Orca from the command line (orca -q), it died gracefully and didn't take Gnome or X with it.

But all I've done so far is play with it a bit. I haven't used it extensively at all. Instability might become a lot more noticeable after hours or days of use. Maybe I'll try to get my brother-in-law to test-drive it on one of these laptops (instead of his Windows+JAWS laptop) for a day or two.

Friday, April 09, 2010

getting the vodafone usb modem working on ubuntu

http://ip-62-105-171-197.dsl.twang.net/bvportal/forums/index.html?threadId=ff80808122654e6f01227632fff8503c&postId=ff80808122654e6f01228e6f22484bb4

Thursday, April 08, 2010

tomcat thread dump at work

At work, if tomcat isn't responding, send it a kill -3 to get it to produce a thread dump.
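
Roughly (the pgrep pattern assumes a stock tomcat; the dump lands in catalina.out):

sudo kill -3 $(pgrep -f org.apache.catalina.startup.Bootstrap)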

Friday, April 02, 2010

xhost

I run three or four different browser profiles for security. There's a general browsing profile for reddit.com and links I follow from there, there's a more secure profile for gmail and facebook, and there's a most secure profile for internet banking.

Not only do I run these separate profiles, I also run them as separate users under sudo -H -u [user] [browser] [other-params].

But in order to do that I need an xhost setting that allows these browser profiles (running as users other than me) to display on my X display. To enable that, I have this line in ~/.xinitrc:

xhost local:
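
With that in place, launching one of the locked-down profiles looks like this (user and profile names invented):

sudo -H -u banking firefox -no-remote -P banking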

Wednesday, March 17, 2010

bash for loop

Oooh, I just saw Bash for loop examples.

I definitely like:

for i in {1..100}
do
...
done


I don't usually need to step forward in increments greater than 1, but for that there's


for i in {1..100..2}
do
...
done


Of course there's also


for (( c=1; c<=100; c++ )); do ... done


which is what I've used in the past, but I always forget about the double parens.

Saturday, March 13, 2010

grandr on toshiba satellite karmic dual monitor setup

When I first installed a second monitor on Ubuntu Karmic, the dual monitor setup was trivial. The built-in method (System | Preferences | Display) worked very well.

Lately though (possibly due to a package upgrade) that method stopped working perfectly. It couldn't identify the external monitor model (showing it as Unknown), and after I'd select the correct resolution for it (1440x900), on gnome restart or laptop reboot some icons on the left of the desktop would be all scrunched up together, and dragging a window from the external monitor (left) to the laptop monitor (right) would leave the window partly on the left and partly on the right; it wouldn't go all the way to the right edge of the laptop monitor. As if the virtual screen width had changed to something a *lot* narrower.

I just installed grandr and ran that. It sees the external monitor as more than Unknown, and the virtual screen width is back to normal. I don't know yet if this fix will survive reboots, but it probably will. And if it doesn't, well, it'll be a reasonable workaround until I upgrade to Lucid.
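
grandr is a thin GUI over RandR, so the equivalent manual setup would be something like (output names vary; xrandr -q lists them):

xrandr --output VGA-0 --mode 1440x900 --left-of LVDS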

Friday, March 05, 2010

gnucash OFX

I started playing with gnucash a month or so ago. I ran into a bunch of problems, and it turns out they're mostly due to the export file format I chose.

My bank supports OFX-MS-Money, OFX-Quicken and Quicken. I saw a post that said to avoid quicken because there were issues with identifying transactions as having already been loaded (when loading the same transactions twice, either because the same export file was loaded twice, or because two export files intersect). So I avoided Quicken and OFX-Quicken.

Unfortunately OFX-MS-Money has a worse problem. For some reason, the export files produced by my bank (might be the bank's problem, might be a gnucash bug, or it might just be a bogosity in the file format, or an obscure interaction among these and other features), would load into gnucash, but for the checking account, some transactions would be lost. I doubt if the transactions were really missing, but gnucash was somehow not seeing them.

I tried the Quicken file format the other day. All transactions loaded correctly, so importing a month's worth of data was very little effort. Gnucash also asks for particular expense sources (this grocery, that pharmacy, that other restaurant, etc) to be identified as to which kind of transaction they are. That's nice since in future months those expenses will be allocated to the correct account automatically.

Unfortunately, the Quicken format has a weakness: the transaction entries are missing a lot of information. For withdrawals, for instance, OFX-MS-Money would indicate which ATM card (Sol's or mine) was used, and at which ATM branch. The Quicken format would just have a description of WITHDRAWAL and a memo field of ATM. It was similarly silent for a lot of other transactions.

So, while the Quicken format imports very nicely and has some great usability shortcuts, I can't use it, since I forget what particular transactions are about IN THE SAME WEEK, let alone a month or two later.

Fortunately the OFX-Quicken format (which gnucash calls QFX) has *most* (not all, but enough) of the information from OFX-MS-Money, and loads as accurately as the Quicken format. We don't have a *huge* number of transactions per month; it only takes 30 minutes or so to load a month's worth of transactions and assign expenses to the correct accounts. And I don't have to walk through the checking transactions doing a binary search for missing ones.

At some point we'll have bank accounts at other banks. When that happens I'll be able to compare accuracy of other bank OFX-MS-Money files and determine if the bug is in gnucash or in my current bank's export file :-). I'm betting on a gnucash bug, myself. But now that I've got OFX-Quicken working, I don't care enough to replicate the bug. Maybe I'll do that on the easter weekend, if we don't go anywhere.

Monday, February 15, 2010

common git branch tasks

I like Zorch's workflow examples on starting a new branch on a remote git repository.
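
The gist, from memory (branch name invented):

git checkout -b mybranch                            # create and switch locally
git push origin mybranch                            # publish to the remote
git checkout --track -b mybranch origin/mybranch    # pick it up on another clone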

[Here as a reminder so I can search on site:monotrematica.blogspot.com git branch]

Thursday, February 11, 2010

NZ School goes completely open source

There's a great story at CIO about how a New Zealand high school switched to open source servers, going from 48 servers to 4. It's pretty good to cut 11/12ths of your hardware, electricity and server maintenance/sysadmin budget.

Wednesday, February 10, 2010

fireEvent when keyPress, keyDown, keyUp don't work

Evil Tester writes about fireEvent, so I don't need to

I found this (and Nick Bartlett's summary) when doing a google search for selenium IDE where type, keyPress, keyDown, keyUp, etc weren't working as expected.

I'd actually found and used fireEvent a few months ago when I was working with some selenium tests for the Mahara e-portfolio open source system. But I'd since forgotten.

In the problem at hand, there was an input textbox with an onkeydown handler which detected what key was pressed and, if it was ASCII 13 (Enter), would call this.blur(). The solution was just to "fireEvent | locator | onblur".

[Posted here so that I'll be able to find it when I do a google search on "site:monotrematica.blogspot.com selenium IDE keyPress keyDown onblur" :-]

Thursday, February 04, 2010

Parameterized jmeter threadgroup and loop count settings

as pointed out in the mailing list post,

run jmeter with user specified jmeter parameters, e.g.,

jmeter -Jthreads=10 -Jloopcount=5

and then, in the threadgroup, set the relevant fields to, e.g.,

${__P(threads)} and ${__P(loopcount)}
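
(__P also takes a default as a second argument, e.g. ${__P(threads,10)}, so the plan still runs when the -J flags are omitted.)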

In the beanshell sampler, parameters can also be accessed via:
JMeterUtils.getProperty("threads");

Wednesday, January 20, 2010

Gnome panels on external monitor

The new monitor works very well, but some things aren't great. The fact that the combined desktop is a rectangle (so there's invisible space above the laptop's top panel) is one. But I'll adjust to that.

I did need panels on the external monitor though. Having the task list for windows on both monitors over on the laptop panel wasn't convenient, and I couldn't get a panel onto the external monitor.

Then I found several solutions at answers.launchpad.net.

I used the gconf-editor solution. But after reading downward, I learned about the Alt-drag trick. That works too, and is much simpler. To put a panel on the second monitor, just create it first (it'll go on the first monitor). Then press Alt and click and drag the panel to the other monitor.
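
For reference, the gconf-editor route boils down to setting the panel's monitor key (the panel id varies; browse /apps/panel/toplevels to find yours):

gconftool-2 --set /apps/panel/toplevels/panel_1/monitor --type int 1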

Monday, January 18, 2010

Services not coming up

After a recent package update on my work computer (AMD64), services were not being started on boot (no apache, ssh, etc). Strangely enough, gdm *does* start, so I get to log in to gnome.

runlevel says "unknown" though.

A bit of googling points to this: https://bugs.launchpad.net/ubuntu/+source/upstart/+bug/497299

and also https://bugs.launchpad.net/ubuntu/+source/upstart/+bug/461725

I don't understand the whole thing. There seems to be a race condition when init tasks run in parallel, and also bugs in updating /etc/network/interfaces.

It's fixed for me by forcing upstart back to the previous version (0.6.3-10). I then pinned that. If a 0.6.3-12 version comes out I may download it and test. Or maybe I won't, since 0.6.3-10 works and I doubt newer versions in karmic will give me much reason to upgrade.
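
The downgrade and pin are standard apt machinery:

sudo apt-get install upstart=0.6.3-10
# then in /etc/apt/preferences:
#   Package: upstart
#   Pin: version 0.6.3-10
#   Pin-Priority: 1001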

[Update]
Sol's laptop (upgraded to karmic the other day) has the same problem. I'll fix it the same way tonight (can't ssh into it since the services didn't start :-).

[Update]
Boy, upstart-0.6.3-11 officially sucks. I updated the toshiba laptop and had the same problem. The Durabook is fine though. That's three computers out of four. Might become four out of five after I upgrade Sol's desktop at work to karmic.