Tuesday, July 22, 2014

Limit CPU, memory, other resources available to an LXC instance

I saw this article on controlling container resources. That's very helpful; I'll be using it where previously I'd used VirtualBox (much too fat) for fine control over the RAM and CPU allocated to a VM.

Show CPUs currently available to an instance:
  lxc-cgroup -n ol6ctr1 cpuset.cpus
  0-7

(without a value, the command prints the current setting; with a value, as in the examples below, it sets it)

Limit CPU time and block I/O

lxc-cgroup -n ol6ctr2 cpu.shares 256
lxc-cgroup -n ol6ctr2 blkio.weight 500

Limit RAM

lxc-cgroup -n ol6ctr2 memory.soft_limit_in_bytes 268435456
lxc-cgroup -n ol6ctr2 memory.limit_in_bytes 536870912
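(For reference: 268435456 bytes is a 256 MiB soft limit and 536870912 bytes is a 512 MiB hard limit.)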

The settings can be made permanent by setting them in the lxc instance's config file, e.g.,

lxc.cgroup.cpuset.cpus=0,1
lxc.cgroup.memory.soft_limit_in_bytes=268435456
lxc.cgroup.memory.limit_in_bytes=536870912

The Oracle article has a typo in the memory.limit_in_bytes example; it was missing the last digit (2). Fixed here.



Friday, July 04, 2014

install mint display manager on ubuntu 14.04

http://www.noobslab.com/2014/04/mdm-display-manager-and-mdm-themes-for.html

sudo add-apt-repository ppa:noobslab/mint
sudo apt-get update
sudo apt-get install mdm mdm-themes

Monday, November 04, 2013

Local only postfix

I often need to configure postfix to accept email but never forward it anywhere, e.g., when configuring a backup or test instance of alfresco where I want to see email notifications being sent but don't want that email going to real email addresses (a common issue when restoring a production alfresco backup onto a test or dev machine).

A simple way to do all this is:

# if it's already installed, remove it. Make sure you mean to do this, or back up your configuration first.

sudo apt-get purge postfix
sudo apt-get install postfix

(and tell it it's an internet server)

 edit /etc/postfix/main.cf and set

virtual_alias_maps = hash:/etc/postfix/virtual
default_transport = error:outside mail is not deliverable
unknown_local_recipient_reject_code = 450


and edit /etc/postfix/virtual, entering any necessary aliases there, e.g.,

@gmail.com tiger
bopolissimus@gmail.com bopolissimus
gerald@catalyst.net.nz gerald

generate the hash:

   postmap /etc/postfix/virtual

and restart postfix:
   sudo service postfix restart



and postfix will now accept email from localhost and deliver it to the locally aliased users, or to root if no alias matches.
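A quick way to check that it's working (a sketch; assumes the mail command from mailutils or bsd-mailx is installed, and that postfix local delivery lands in /var/mail):

   echo "test body" | mail -s "test outside delivery" someone@gmail.com
   sudo tail /var/mail/tiger     # the message should show up here, delivered to the aliased local user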

Because I'm paranoid, I also do the following:


   iptables -A OUTPUT -p tcp --dport 25 -d 127.0.0.1 -j ACCEPT
   iptables -A OUTPUT -p tcp --dport 25 -j DROP
   iptables -A OUTPUT -p tcp --dport 587 -j DROP
   iptables -A OUTPUT -p tcp --dport 465 -j DROP


Wednesday, October 23, 2013

buntangle

I often work with kannel logs and always get a severe headache when working with bearerbox logs.

I'll need this project in future, when I need to work with bearerbox logs again.

https://github.com/bopolissimus/buntangle

It reads a bearerbox log file and organizes the entries by the third field in the log file (I should read the kannel source so I'll know what that field is called; I just call it the key :-).  It then finds the PDUs in the per_key array (already untangled) and writes the PDUs out in order.

PDUs are written out with newlines before and after so they're much easier to read.
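The grouping step itself is nothing fancy; here's a rough awk sketch of the idea (bearerbox.log is a placeholder filename, and buntangle itself does more than this, pulling the PDUs out of each group and reflowing them):

awk '{ per_key[$3] = per_key[$3] $0 "\n" }
     END { for (k in per_key) printf "=== %s\n%s\n", k, per_key[k] }' bearerbox.log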

I'll also add a filter program later that will remove PDU types we don't need (generally enquire_link/enquire_link_resp, but possibly also bind_receiver* and bind_transmitter* PDUs, maybe some others).

Thursday, August 01, 2013

Find and restore a single document from alfresco backup

I haven't tested this, but it looks good. Must test soon, preferably before I need to do it against someone's production setup :-). Find and restore a single document from alfresco backup. Highlights, given the filename ${fname}:
  1. load the database dump into a scratch db
  2. sql selects to identify which file on disk corresponds to the file you want (including which version)
  3. grab the file from the backup
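For step 1, assuming the dump is from PostgreSQL (the database and dump filenames below are placeholders), it's just something like:

  createdb alfresco_scratch
  psql alfresco_scratch < alfresco-db-dump.sql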

Saturday, June 08, 2013

bandwidth limiting apt-get

I'm doing dist-upgrades to Mint Olivia from Mint-Nadia at home on a weekend. That's not such a great time since on the weekend Timmy likes to view youtube videos. And I think my sister-in-law streams tagalog movies. I could do it at night (and I did, for my first guinea pig laptop last night) but I'd rather get some of that done in the daytime too. Fortunately it's google and commandlinefu.com to the rescue. The magic command line arguments to apt-get are:
-o Acquire::http::Dl-Limit=20 -o Acquire::https::Dl-Limit=20
I could set the limit to 100 or 150 and there wouldn't be much complaint, but I'm setting it to 75 since I don't mind if it takes a while. I do have squid in front, so some of the packages will be in the cache. I'm only going to save about half the bandwidth though since two laptops are i386 and the other two are x64. Ah, but for now, I think even better is rsyncing /var/cache/apt from the other i386 machine to this one.
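So the throttled upgrade ends up looking something like this (the Dl-Limit value is in kilobytes per second, as far as I recall):

sudo apt-get -o Acquire::http::Dl-Limit=75 -o Acquire::https::Dl-Limit=75 dist-upgrade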

Thursday, June 06, 2013

Ubuntu self-signed SSL cert

cd /etc/ssl/private   # or wherever the certs go
sudo make-ssl-cert /usr/share/ssl-cert/ssleay.cnf [hostname].crt

Monday, May 13, 2013

Creating a quantal lxc instance

For a while I tried to create a quantal lxc instance by first installing precise and then doing a distribution upgrade. After some documentation reading and reading of the lxc-ubuntu template, it turns out that's not necessary :-).

Instead, I should just:

lxc-create -t ubuntu -n quantal -- --release quantal


Next, to create a wheezy instance :-)

Thursday, May 09, 2013

alfresco 4.2.x solr OverlappingFileLockException



In Alfresco 4.2.c (likely also in 4.2.b and 4.2.a), solr comes with

  alfresco.enableMultiThreadedTracking=true

 In the default setup I get:

    14:02:31,502 ERROR [org.alfresco.solr.tracker.CoreTracker] Tracking failed
    java.nio.channels.OverlappingFileLockException
        at sun.nio.ch.SharedFileLockTable.checkList(FileLockTable.java:255)
        at sun.nio.ch.SharedFileLockTable.add(FileLockTable.java:152)
        at sun.nio.ch.FileChannelImpl.tryLock(FileChannelImpl.java:1017)
        at java.nio.channels.FileChannel.tryLock(FileChannel.java:1154)
        ...

Stopping solr and setting that to false for both archive and workspace ((archive|workspace)-SpacesStore/conf/solrcore.properties) isn't sufficient; there will be other problems with the solr indexes.

So I just blow away the indexes and have them rebuild.

1. stop solr (and/or alfresco)
2. rm -rf archive/SpacesStore/* workspace/SpacesStore/*
3. rm -rf archive-SpacesStore/alfrescoModels workspace-SpacesStore/alfrescoModels
4. start solr (and/or alfresco).

Blowing away the models like that can lead to solr trying to index items for which the models haven't been loaded yet (leading to more solr-related log entries). After everything is indexed, though, those errors should no longer appear and solr will be stable going forward.

Thursday, April 11, 2013

Alfresco 4.2.c manager/html fix


I've had problems with Alfresco 4.2.c (bundle) because I couldn't get to the tomcat manager.

After editing tomcat-users.xml to add the manager-gui role and a user with that role, I still couldn't get to the manager. The browser would jump straight to the Unauthorized page instead of first showing a dialog box for basic auth.

Thanks to sujaypillai's fix: it turns out that the default tomcat configuration in the bundle is broken. Sujay Pillai's solution is, in tomcat/conf/context.xml, to comment out the whole securePagesWithPragma valve block.

I thought setting securePagesWithPragma="true" would do the same thing, but it doesn't; just comment the block out. Then restart, of course.

Tuesday, April 09, 2013

Quick fix for debian locales error

Thanks for the fix. Below is as given in the link, except with the country changed, for easy copy-paste when I need this again :-).
export LANGUAGE=en_NZ.UTF-8
export LANG=en_NZ.UTF-8
export LC_ALL=en_NZ.UTF-8

locale-gen en_NZ.UTF-8
dpkg-reconfigure locales

Tuesday, March 12, 2013

Alfresco 4.2.c, OpenLDAP modifyTimestamp format and timezone

I had a problem with alfresco 4.2.c correctly syncing LDAP users if modifyTimestamp was of the form 20130220195623Z, but not syncing them if the format was 20130221233702.687769. It turns out that a large part of the issue was just that the new format (from openLDAP in Lenny; previously we had openLDAP in etch) didn't specify Zulu, so it was being interpreted as the local timezone. That's always a problem when you're 12 or 13 hours ahead of GMT (as we are in NZ). And it seems there was a separate bug, so that if the timezones didn't match, the modifyTimestamp comparison code alfresco was using consistently failed; even if the modifyTimestamp was weeks before, it didn't think it needed to do an LDAP sync.

The solution was to specify the timezone offset (although numeric offsets suck since I'll still need to switch them around and restart alfresco twice a year):

ldap.synchronization.personDifferentialQuery=(&(objectclass\=inetOrgPerson)(!(modifyTimestamp<\={0}+1300)))

http://forums.alfresco.com/comment/131475#comment-131475

Sunday, December 30, 2012

Multiuser pulseaudio in Mint Nadia (and maybe Ubuntu Quantal too)

I upgraded my several home computers to Mint Nadia recently and the audio stopped working the way I wanted.

I run chrome as several different users (a reasonably secure user, an insecure user and myself) and if (in the insecure or reasonably secure browser) a page had audio, I'd have to copy the URL, and paste it into firefox running as myself to hear the audio.

It turns out, Ubuntu Quantal (I'm guessing it's their fault) had changed how to do system mode in pulseaudio. To begin with, we don't even configure pulseaudio in /etc/default/pulseaudio anymore, now it's /etc/init/pulseaudio.conf.

The new way to enable system mode is:
  1. sudo vi /etc/init/pulseaudio.conf
  2. find the "#start on runlevel [2345]" line
  3. uncomment it (remove the #).
  4. restart pulseaudio.

Thursday, December 06, 2012

tcpdump between apache and tomcat

I needed to monitor http traffic between apache and tomcat (apache was reverse proxying).

I usually use wireshark so I don't actually know tcpdump parameters or filter format :-).

For this very common use case though:

tcpdump -vv  -A -s 1500 -i lo 'port 8080'

Friday, November 30, 2012

sudoers for secure and insecure

I run browsers as different users.  Previously I didn't need to customize /etc/sudoers since my user had NOPASSWD:ALL.  I'm stepping away from that insecure (but convenient) practice now, so just doing

  sudo -H -u insecure chromium-browser

won't work anymore since it'll require a password, I won't type one in, and it'll fail.

Now I need to add some entries to /etc/sudoers (sudo visudo)

  tiger ALL=(secure,insecure) NOPASSWD: /usr/bin/chromium-browser


and all works again.

Thursday, November 15, 2012

RALink wifi drivers

Our new laptop (an HP Pavilion G6-2119TU) has been prone to crashing once or twice a day (four times today) because the wifi driver for it is unstable.  I'm hoping that the driver from RALink is more stable. 

http://michael-peeters.blogspot.co.nz/2011/06/fixing-rt2860-wifi-chipset-under-ubuntu.html

Had to recompile from source, so posting the link here so I can find it again for when I need to rebuild the module on kernel upgrades or whenever I upgrade Mint.

Sunday, November 11, 2012

multi user pulse audio

Because I run browsers as different users I need all three different users to be able to play sound and by default pulse will only allow the currently logged in user to play sound.

To let all three users play sound (usually when I click on a flash video or stream music from Pandora) it's sufficient to edit /etc/default/pulseaudio, set PULSEAUDIO_SYSTEM_START=1, and then restart pulse (with a logout/login or an X restart).

On Mint that port isn't accessible to the network by default.  If it is, though, then add appropriate iptables rules so that network access is disabled.

Any local users will now be able to access pulse, so only do this if you trust your local users.

UPDATE:
In 2016, the way to do this is now via paprefs.  Install the paprefs package and run paprefs as the main user (i.e., I'd run it as tiger and allow localhost only access to sound).

Do this only if you trust all users on the machine.

Saturday, November 10, 2012

xhost for specific local user only

On Linux (Mint, lately, but also Ubuntu and others) I run my browsers as three different users.

  • When browsing sites that need high security (online banking, etc) I use the browser when logged in as myself, 
  • For regular browsing of important but not financial sites (gmail, facebook, etc) I run a browser as a user named "secure" (could be any other name).  
  • For likely insecure browsing (reddit, etc) I run a browser as a user named "insecure"
Previously I would have a shell script that did:

  ssh -X secure@localhost

with public key auth.  Lately I'd switched to a script that did:


  xhost +


instead so I could just


  sudo -H -u secure chromium-browser


but that's not too secure.  I finally broke down and read the xhost manual and am now doing it the right way with:

  xhost +si:localuser:secure
  xhost +si:localuser:insecure

which at least limits the xhost permission to just those two users.
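So the launcher script for, e.g., the insecure browser ends up being something like this (a sketch; the browser and usernames are whatever you actually use):

  #!/bin/bash
  # let only the "insecure" local user onto my X display, then start its browser
  xhost +si:localuser:insecure
  sudo -H -u insecure chromium-browser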

Sunday, October 14, 2012

Linux Mint/Mate Panel on second monitor

Not obvious. I flailed around for a bit until I found this forum post on how to move a mate panel to the second monitor.

Copying the text from that link:

  1. Right-click the panel you wish to move and select “Properties”.
  2. Uncheck the “Expand” option under the “General” tab.
  3. Grab one of the edges of the panel by clicking on the left or right end (top or bottom end for vertical panels).
  4. Drag the bar to the desired screen and position.
  5. Check the “Expand” option in the “Panel Properties” window and click “Close”.

Wednesday, June 13, 2012

mDesktop is a winner!

As I mentioned in the preceding post on disabling windows screen flipping, I'm working on a Windows 7 machine at a client site for a few months. I've half heartedly looked for multi-workspace software for Windows. Must be free as in beer (since this isn't *my* computer and it's not important enough to get the company to pay for the software). mDesktop does a great job. I'd love to have some customization features, but it works great as-is.

Disable Windows screen flip

I'm working at a client site for a few months and they use Windows computers there. For a while I'd get confused because when I'd type Ctrl-Alt-Left or Ctrl-Alt-Right, the display would flip to portrait (with the "top" at the left or the right depending on which key combination I'd typed). It turns out this is easy to fix (well, on *this* machine; exactly where it needs to be fixed may depend on the video card/driver, and the path varies with the version of Windows). For Windows 7 and this Intel Q45/Q43 Express chipset:

Control Panel > All Control Panel Items > Display > Change display settings > Advanced Settings > Intel(R) Graphics Media Accelerator Driver > Graphics Properties > Display Settings > uncheck "Enable Rotation" > Apply and accept the change.

Saturday, April 28, 2012

Auto stop torrents at 100%

#!/bin/bash

P=100

# list torrents, strip the markers and padding, keep the IDs of torrents at $P percent done
id=`transmission-remote -l | cut -c 1-11 | sed "s/\*//"  | cut -c 3- | sed "s/^ //" | grep "$P\%" | cut -f 1 -d " " `

# quote $id so the test doesn't break if more than one torrent is at 100%
if [ -z "$id" ]
then
  echo "no match : $id"
else
  # wait a bit in case transmission is still wrapping up, then stop the torrent(s)
  sleep 60

  echo "stopping $id"
  transmission-remote -t $id -S
fi

Tuesday, March 27, 2012

No supported regular expression matcher error under ant

Solution is at:

http://tatwright.wordpress.com/2008/03/23/ant-troubleshooting/

sudo apt-get install ant-optional

Tuesday, February 21, 2012

selenium under jenkins after a firefox update

A client ran into a problem recently. After an upgrade of firefox, they suddenly had many selenium test failures. I was confused and couldn't figure it out for a *long* time.

When we'd connect to the running test (vnc), there were two tabs in the controlling firefox window. It turns out that the second tab was a "Thank you for upgrading firefox". That would have been innocuous if it just sat there in the background. Unfortunately, selenium took it over and then started to run the test in *both* tabs.

Since some tests changed session state (e.g., login, do some logged in work, logout) tests would fail when one tab would log the user in and the other tab would log the user out while the first tab wasn't finished yet.

Running firefox manually while forcing it to use the base profile used by selenium fixed the issue. We could also just have edited prefs.js and modified the lastAppVersion entry, but the first fix is easier (it updates lastAppVersion itself) and anyway I only learned about modifying lastAppVersion directly after I'd already done the first fix :-).
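For the record, "running firefox manually with the selenium base profile" is just something along these lines (the profile path is a placeholder for wherever the selenium base profile lives):

firefox -no-remote -profile /path/to/selenium-base-profile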

Thursday, February 02, 2012

Binding host directories to container directories in lxc with lxc.mount.entry

I'm very happy about lxc.mount.entry in lxc containers (working in Ubuntu 11.10 Oneiric host, not tested on older hosts since I don't have any around).

e.g.,

lxc.mount.entry=/home/[user]/[some_directory] /var/lib/lxc/[container_name]/rootfs/home/[user]/[some_directory] none defaults,bind 0 0


That lets me bind-mount a directory from the host (/home/[user]/[some_directory]) somewhere inside the container; in this example it shows up in the container as /home/[user]/[some_directory]. The target path in the config is written from the host's side, under the container's rootfs (/var/lib/lxc/[container_name]/rootfs/...).

I'd tried something like that with samba but had instability problems so gave up. Most of the time it would work fine, but sometimes (I never had the time or inclination to figure out why) something in samba would get out of sync and I'd not be able to write new files from the LXC container, delete files, etc.

Samba does have an advantage in that the UIDs don't have to be the same since you specify which user the files will be mounted by locally. But it's easy enough to synchronize UIDs between host and container.

Wednesday, November 09, 2011

Low bandwidth vnc

I've been travelling and the bandwidth here isn't the best. When I need to vnc to work over the vpn I can still get pretty good performance even on slow mobile broadband:

vncviewer -depth 8 -encodings tight -compresslevel 9 [host]:[port]

Monday, September 05, 2011

debian iceweasel and jenkins seleniumhq plugin

I was having trouble getting selenium (under jenkins) to run iceweasel (on debian squeeze). selenium was whining that /usr/bin/firefox was a script. Symlinking the xulrunner stub to /usr/bin/firefox-bin didn't work since it would whine about not finding application.ini.

Thanks to Tero Tilus on running selenium under debian lenny, I find that the solution is to add /usr/lib/iceweasel to jenkins' path. That way it finds firefox-bin plus all the other files that firefox-bin needs to start.

Thursday, August 18, 2011

xpath css class matching

xpath class matching when the element actually has more than one class in the class="foo bar" statement.

"The solution is arcane but I promise it works:"


//div[contains(concat(' ',normalize-space(@class),' '),' foo ')]


"Note that there must be spaces on either side of the class name 'foo'."

That's pretty long and is more general than I need. If multi-class entries are normalized already (e.g., no non-space whitespace in there, just spaces), then it's simpler to use just:


//div[contains(@class, "foo")]


That's so much shorter that I generally start with that and only fallback to the longer form if I find any entries in the html that have non-space whitespace.

Wednesday, August 10, 2011

whatprovides for deb

It took me long enough.

I've often wanted to learn which package installed a given file (e.g., /bin/nc or /bin/vim). It was never important enough to actually read the manual for, however :-). And my google incantations weren't quite right.

Finally I see that it's


dpkg -S [filename]


that's a lot less typing than for rpm :-).

Sunday, July 24, 2011

guruplug and slow USB devices

I've got my guruplug connected to a seagate 1TB external expansion drive. At boot, the guruplug doesn't detect the seagate drive on first usb start (in the uboot process, before it even boots the linux kernel).

It's a timing issue, the drive just takes a while to start up and with the default bootcmd setup (just one usb start), the drive just isn't available yet when linux comes up.

Solution:

edit the bootcmd so that where it does "usb start", it follows that with "usb stop;sleep 10;usb start". I thought it might be sufficient to do just "sleep 10;usb start", but it isn't. The first usb start wakes up the disk; if we skip it by doing only a sleep 10, the disk still isn't detected by the succeeding usb start.

So we startup usb (usb start), stop it (usb stop) so we can start it again later, sleep a bit, and then start it. The second time we start it, it's already spun up and is available to uboot (and later, to debian).
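At the uboot prompt that works out to something like this (a sketch: printenv/setenv/saveenv are the standard uboot commands, but the tail of the bootcmd, shown here as a placeholder, must be kept exactly as it already is on your plug):

   printenv bootcmd
   setenv bootcmd 'usb start; usb stop; sleep 10; usb start; [rest of the original bootcmd]'
   saveenv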

I still had to set the passcount in /etc/fstab to zero so it wouldn't be auto-fsck'ed. This is because processing of /etc/fstab and auto-fsck happens before the USB devices are detected by the kernel. So if passcount is not zero then linux tries to check the drives to see if they should be fsck'ed but the USB drives aren't available yet and I need to Ctrl-D and type the root password for maintenance.

I might put the root filesystem on the USB drive (still thinking about it). I like the idea of having the rootfs on USB (SDCard is not an option for me since this is not the GuruPlug Server plus, no SDCard slot) since I had to install uboot, kernel and rootfs from scratch because some random power cycling corrupted the NAND and when I did an fsck on it, it corrupted it enough so that I had to unbrick it with the JTAG :-).

I've done that twice now (once here and once on the Tonido). If this (or similar) plug is going to be deployed in places where power is unstable, then having the rootfs on an external device that can be easily fscked on some other linux machine is essential.

Wednesday, July 20, 2011

ecryptfs and changing password

Apparently we need ecryptfs-rewrap-passphrase ~/.ecryptfs/wrapped-passphrase

or (if it's not mounted), then ecryptfs-rewrap-passphrase /home/.ecryptfs/[username]/.ecryptfs/wrapped-passphrase.

Thursday, July 14, 2011

dos2unix in vim

http://www.xappsoftware.com/wordpress/2009/03/31/dos2unix-using-vi-or-vim/

the gist of which is:

:%s/^M//g

and to compose the ^M character you need to press 
CTRL-V -> CTRL-M

Tuesday, May 24, 2011

Globe Telecom Service -- A Quality Vacuum

I was in the Philippines for two weeks and had first hand experience of various bogosities in Globe Telecom's mobile broadband service.  I decided to test supersurf50 (avoiding supersurf220 initially in case the service was so bad that the other 4 days would be wasted if I abandoned the service).  supersurf50  provides 1 day of unlimited mobile broadband.

  1. supersurf** is supposed to text you when your 24 hour subscription expires.  This is because, when your subscription expires, you switch to their default/casual mobile broadband rate of PHP 5 for 15 minutes (therefore, PHP 20/hour).  I didn't get a text the first day and the casual rate ate all but PHP 1.00 of my credit (because it only takes PHP 5.00 at a time).  I tested supersurf50 again the next day, to confirm that I still didn't receive the text.  I didn't.  This time it ate less (since I was ready for it and explicitly testing).
  2. I called Globe support about this problem.  They asked for my number.  When I asked if they had caller ID and couldn't they just look it up, the support person said they didn't have caller ID.  I called Globe support several more times until I gave up on their phone support.  I got a different support person each time.  I asked every single support person if they had caller ID.  All of them said they didn't.  It must be true.  It's also amazing in a ridiculous and incompetent sort of way.
  3. After they ask for your number, they ask for your name.  The second time this happened (and I think one more later support call) I asked if they had a field in their trouble ticket system for the name, so that they wouldn't have to ask it every time.  Every support person said they don't.  Must be true, no one would invent such a stupid detail, after all.  So to Globe, prepaid subscribers are just numbers that they can't even identify unless you tell them.  I could, therefore, start filing bogus problem reports using the globe mobile numbers of anyone whose number I knew (or just random numbers).
  4. I was keeping track of when my supersurf50 would expire (so I could test, at expiration, whether it was eating my prepaid credit).  I had an alarm that would fire at expiry time.  When the alarm fired, I'd check my balance, ensure that it had been reduced by PHP 5.00 (and that, therefore, I was on the casual rate).  I'd send "supersurf status" and it would say that I was not subscribed.  This is good since clearly I wasn't anymore.  I'd then send "supersurf50" to resubscribe, and it would say I was still subscribed.  Even an hour or two after expiry, I still couldn't resubscribe.
  5. When I called support about this, they couldn't help.  They couldn't LOOK at my status to see if I was still subscribed or not.  It seems, they can't actually look at anything about your account or phone number.  Their only purpose is to find something in their script that's close to your problem and then read that to you.  And apologize for their inability to do anything.

    Twice I was referred to technical support (apparently different from "account" support).  They couldn't help me either.  They knew more technical sounding words and APN settings and such, but they couldn't look at my subscriptions either, and they couldn't even see what account support had written about my problems.  They had two completely separate (but similarly useless, because they couldn't look at my true status) trouble ticket systems that didn't interoperate, so every time I was referred to technical support (only twice, to be honest), they had to ask me for my number again, my name again, and what the problem was.  Again.

    To be fair, I only talked to technical support twice or so and I only asked one of them if she could see my "account support" tickets.  She said she couldn't and said the systems were separate.  So that's a sample of one.  It's likely true though, given the thoroughgoing incompetence of this whole system.
  6. I had still not been told how to resubscribe to supersurf when status said I wasn't subscribed but resubscribing kept telling me I couldn't do that since I was still subscribed.  Finally, I tried "supersurf stop", and after it said I needed to send "supersurf yes" and I did that, I was finally able to resubscribe.

    Apparently, however, that's not in the scripts.  I talked to two or three support people about this problem and none of them suggested "supersurf stop".  Either they didn't know about the option at all, or no one had considered it as a solution, or someone had but the solution hadn't moved up to the support people yet (perhaps because Globe's response to bug reports is to apologize and then do nothing since they can't do anything anyway; they can't even look up status, let alone change anything).

    Heh, Similar bug report here from LAST YEAR.  And he couldn't resubscribe for 5 DAYS.  If the 5 days is consistent (haven't looked for more cases), it might be that there's a bug confusing "50" with "220" (perhaps "220" is assumed even if the actual subscription is "50").
  7. I was a bit forceful in my last two account support calls.  I asked to be pushed up to the next support tier.  Apparently there is no next support tier (two support people actually said that they couldn't because there was no next level that could help me).  The best they could do was refer the issue to an issue resolution team.  I never got a call back from any such team, even after I'd made clear in my report that Globe had STOLEN several hundred pesos of prepaid credit because their text notification of subscriptions did not work as advertised.  To be fair, this was my fault too, I'd stayed up all night twice in a row for my stepfather's wake, so slept through the expiry the next morning :-(.  And I'd trusted Globe when they said they'd text me before expiry.  That's the real fault, right there.

    I asked to speak to a manager.  They couldn't let me do that.  I asked to have my complaint (on two occasions) referred to management with the explicit request that they listen to the recording so they could hear EXACTLY what I said (rather than just read the filtered account that the support persons were typing into their ticket system).  I never got a call or any other acknowledgement from anyone about that either (or, really, about any of my complaints about their service).  Either management listened to the recording and did nothing, or they didn't listen.  I'll attempt charity and guess the  second.
  8. I had had enough of losing money every day to the casual rate, so I switched to supersurf220.  That would give me 5 days of unlimited broadband, reducing my casual rate losses to, perhaps 1/5th of the daily rate :-).  Unfortunately, the confirmation SMS said that the unlimited rate would expire 8 hours early (I subscribed at 23:45 or so, and it said it would expire at 15:45 or so 5 days forward).  Support couldn't help.  They could only assure me that it would actually expire at the right time.  They didn't explicitly say but left implied (presumably it's painful or against Globe policy to actually acknowledge a bug) that the notification had gotten the time wrong.

    As it happens, in fact the subscription did expire at the right time (23:45 that Saturday).  So either the notification was just wrong (likely, just a consistency bug similar to the "supersurf status" bug) or the complaint was pushed up to someone who could actually look at the status and they fixed the expiry time (unlikely, IMO).
  9. At around 8:30PM one night I continuously tried to send "supersurf50".  It kept replying that it couldn't perform the subscription yet and to try again later.  There are capacity problems there, it seems.  That should be fixed.

    Heh, that bug has also been around for at least a year.
I had more problems with Globe service (apart from slowness and it seems they have QOS or similar rules that discriminate against UDP) but eventually I gave up calling support.  I used the broadband for 5 days since I'd already paid for it.  Quality was very unpredictable.  Most of the time it was very slow (about as slow as a dial-up modem), sometimes I'd get 128kbps, and for a whole afternoon I could stream youtube videos very fast (1.5Mbps or so).

For light surfing, facebook and chat it's sometimes worthwhile.  Next trip to the Philippines though I'm avoiding Globe.  To be fair, the support persons do seem to be very willing to help.  They just can't since they don't have the tools to help.

I tried Smart and Sun mobile broadband too.  Neither of them would let openvpn connect (I tried various combinations of reducing MTU, using TCP instead of UDP, removing TLS auth).  I'm going to have to work remotely from wired DSL connections (openvpn may be slow, but it will at least connect so I can get to the wiki and run jmeter tasks remotely via commandline).

Saturday, May 14, 2011

Aaahh, tethering finally working on Globe telecom

Name: MyGlobe Inet
APN: http.globe.com.ph
MCC: 515
MNC: 02
AUTH: PAP or CHAP

Leave all other settings alone.

Globe firewalls DNS.  Only DNS requests to Globe's own servers will work.  If tethering, the default DNS server (the android handset) doesn't work.  Need to edit /etc/resolv.conf and set nameservers there.
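So /etc/resolv.conf on the tethering laptop ends up with just the Globe servers; the addresses below are placeholders, the real ones come from the DHCP lease (see the SetDNS note below):

   nameserver 203.0.113.1
   nameserver 203.0.113.2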

The SetDNS app helps with that (it tells you what your DHCP DNS settings are).  Google gives some other Globe DNS servers too, but they don't work.  In case the DNS server IP addresses change (they're not meant to be hardcoded anyway, being provided by DHCP instead), setDNS is the first place to look.  (There are also a bunch of other IP settings viewers on the app market; I chose SetDNS since I was testing and wanted to be able to set the DNS servers *in* android, not just on the tethering laptop.)

I think 2degrees must also have the same DNS firewalling behavior (I had the same issue when testing at the airport in Wellington, tethering worked, I could ping remote hosts by IP address, but I couldn't resolve DNS).

Update:

Weird situation in the morning.   I'm down to PHP 1.00 of load.  Packets go out, and even ping (tethered) works, with replies arriving.  However, for other traffic the reply packets don't come back.  So DNS works and ping [8.8.8.8] works (including replies), but http and google talk don't work.  I *think* yahoo messenger works though.  Possibly it's TCP that's blocked but UDP is allowed through?

Ah, I just reloaded and now have unlimited broadband for the day.  I was surprised when I lost all my load leaving me with PHP 1.00.  Apparently, when the broadband plan expires, I start to get charged at PHP 5 for every 15 minutes (no auto-reload).  The lack of auto-reload is fine, but I got no notification of the expiry and so the trickle deduction of load is very surprising (and pauperizing).   Also, there's some indication that internet only works while there is at least PHP 5.00 of load.  That's just bogus.  If there's no more load, just stop providing service.  Don't set an artificial level of required load below which, for no good reason, you suddenly stop providing service.

Tuesday, May 03, 2011

ubuntu lxc howto

http://blog.bodhizazen.net/linux/lxc-configure-ubuntu-lucid-containers/

[Edit: 2013-05-29] Using LXC in Ubuntu is now *much* easier. Don't follow the instructions in the link above anymore, they're out of date.

Saturday, April 23, 2011

Setting the guruplug to be a wifi WPA2 client

I'm going to need this link when I finally get around to setting up the guruplug :-).

Setting GuruPlug to be a Wifi client

Saturday, April 09, 2011

nload finally working on tonido (sheevaplug) with debian squeeze

I've tried to get nload working on the tonido plug for a while. My tonido runs debian (I didn't want to stick with the old version of Ubuntu that comes standard; to begin with, security updates for it stopped being provided years ago).

One thing I missed was nload. Whenever I'd try to run it I'd get a segfault.

Finally though (possibly due to a recent apt-get update;apt-get upgrade;) nload is now working without crashing. This is debian squeeze.

Monday, March 28, 2011

apache derby constraints

Apache derby's log messages on constraint violation aren't very useful. E.g.,

The check constraint 'SQL123456789012' was violated while performing an INSERT or UPDATE on table ...

To find which constraint that is:

SELECT
s1.checkdefinition
FROM
sys.syschecks s1,
sys.sysconstraints s2
WHERE
s1.constraintid=s2.constraintid AND
s2.constraintname='SQL123456789012';

Saturday, February 12, 2011

Xpath for Drupal form_build_id in jmeter

I often load test Drupal webapps with jmeter. One thing that I need to do very often is grab the form_build_id (unique for every form invocation) and use it for the form submit.

For a very simple form (only one form_build_id on the page), this works:

//input[@name="form_build_id"]/@id

In the XPath Extractor, set the reference name to whatever you want and then use ${reference_name} wherever the form submit needs the value. I often set a default value of, e.g., form_build_id_no so I can easily tell (in a Debug Sampler) that no value was found and therefore the xpath isn't quite right.

The @id extracts just the id attribute from the input element.

Monday, January 24, 2011

Bloom facebook photo uploader problem -- screenshot



java web start console output is at:

http://pastebin.com/nvckgDJd

Tuesday, December 07, 2010

tcpdump

I much prefer wireshark because, well, I forget command line options. But today I had to run tcpdump because the server didn't have X and I would have been beaten up by our good sysadmins if I'd installed wireshark and all its dependencies (and maybe vnc) for a very short session :-).

So I read the fine manual and ended up with:

tcpdump -i eth3 -n -p -w tcp.log -c 100 -s 1024 'dst port 12345'

After which, I just scp'ed the tcp.log file and started it up in wireshark with "wireshark tcp.log".

No doubt I'll need to learn more tcpdump filter syntax when I need to do that again :-). Fortunately, that was sufficient for the immediate need.

Tuesday, November 30, 2010

Building a deb from installed files

I needed to create a deb of a currently installed package. I didn't have the original deb anymore but needed it so that I could use it for rolling back an upgrade in case of problems.

a google search for "create deb from installed" immediately pointed me at dpkg-repack


sudo apt-get install dpkg-repack
sudo dpkg-repack [package-name]


creates the deb file in the current directory. I don't know how complete that is, but it's certainly better than the nothing I had before.

Monday, November 22, 2010

Finally suspend

My Toshiba A75 laptop previously had a problem with suspend and switch-user. Before I clean-installed Maverick, the problem was likely due to confusion in configuration due to dist-upgrades.

The system was a lot cleaner and more stable after the Maverick fresh install, but there was one last niggling bug. Most of the time switch-user would work, and often suspend and restore would work, but when they'd fail they'd show "atiixp: codec reset timeout" and other dmesg errors. When switch-user or restore from suspend wouldn't work, they'd show that or another atiixp error and the laptop would then hang. I'd have to turn it off and on (I didn't try the magic sysreq keys since I'd never used them and just plain didn't know how to use them :-).

I finally got around to trying noacpi (edit /etc/default/grub, set GRUB_CMDLINE_LINUX=" noacpi ", and run update-grub) and from testing tonight it looks like it's very stable. I've restored from suspend several times and switched users a lot more times.

Previously I thought it was failing because I was doing something with video or audio, so I viewed a video, switched to another user and suspended while that other user. Restore worked fine.

Over a week or two of use we'll know if noacpi is a good fix or if it working now is just a fluke :-).

Monday, November 08, 2010

Installing XPI from downloaded file

I was at a client site the other week and I was stumped. They have a locked down corporate environment and I couldn't get a newly installed firefox to download the Selenium IDE from the download site.

It was possible to get the files on another machine (that had network access), but not from firefox itself.

The weekend passed and on Monday I realized how simple the solution was. Just download the xpi files on the other machine, copy them to a USB drive or over the network to the locked down machine, and then load them via a file:// URL.

Tested on Linux and got that working. Didn't get to test on the actual Windows target since, in the meantime, the sysadmins gave us enough information to allow firefox to browse outward. But file:// is an easy way to install xpi files, if the browser can't get out to the world to download them directly.

Saturday, November 06, 2010

Why You should be using virtualisation

Why You should be using virtualisation resonates with me since, even though I run Ubuntu (a debian based linux) and the staging and production servers are Debian, there are still compatibility issues (if only that debian packages move slowly so the Ubuntu packages are much newer than on the deployment servers).

Fortunately, since I only work on Linux servers, I can run vservers, even different versions of Debian [but using the same vserver kernel as the host]. That's a lot less memory intensive than running full virtualization environments (my preference there is VirtualBox, but that's just because I haven't gotten around to testing Xen). I may test LXC on my home computers, but I'll stick with Vservers on my work dev machine since I already have a procedure for building the classes of vservers that we use at work.

Saturday, October 30, 2010

libavcodec-unstripped

I have a shell script that takes jpg and videos from various cameras I've used and converts them (via ffmpeg) to standard sizes/compression ratios/formats.

Must remember though (reminded when it stopped working after a from-scratch install of Ubuntu Maverick) that I need to use libavcodec-unstripped*. Otherwise I get errors about Unknown encoder 'libxvid'

Monday, October 25, 2010

Remastering Ubuntu Live flashdrive image

I use live USB flashdrive images for installing Ubuntu because I've got old laptops and their DVD drives are sometimes flaky.

One of our laptops is also sufficiently old that the fans don't do a great job anymore. When it runs at top speed (3.3 Ghz), the laptop halts within two minutes or so because the fans can't keep the hardware cool enough and it's not smart enough to slow down the CPU. Linux also doesn't have built-in drivers for the hardware, so it can't detect that the CPU is running too hot and therefore can't dial down the cpu freq.

I needed to install Ubuntu Maverick from scratch because there were enough bogosities in the configuration (and I don't have either the time or the talent to understand it all enough to figure it all out).

This page shows how to remaster an Ubuntu 7.10 image. Fortunately, the process still works for Ubuntu 10.10.

There were more customizations I could have done (e.g., installing the omnibook module so that I'd have CPU temp monitoring available [not used by the kernel to slow down the CPU though, just informational]), and certainly I could have at least removed nano :-). But that's more easily done once the OS is running on the target. And I only have one laptop like that. If I had 3 or 5 then putting the customizations in the live flashdrive would have been worthwhile.

Friday, September 24, 2010

yahoo.com -- sucking

Yahoo.com is seriously sucking for me right now. It's not a yahoo-wide problem. It affects me and maybe some small percentage of other yahoo users.

I can log in to mail.yahoo.com. Signing in to www.yahoo.com doesn't work. After the login, I get redirected back to the yahoo site but I'm still not logged in.

Pidgin can't auth to yahoo when I'm connecting from New Zealand. However, if I use a socks proxy in the Philippines, I *can* login to pidgin.

I thought maybe the email reading issue was an NZ issue. It isn't though. Via the socks proxy in the Philippines I connected to us.yahoo.com and modified my setup so that I wouldn't be auto-redirected to the NZ yahoo pages. That works when I'm connected via proxy (I see the US yahoo page). However, when I browse to yahoo from NZ, I still get redirected to Xtra. That's a stupidity (although I think not on Xtra's side, it's a yahoo bogosity).

When I read my email via the socks proxy, the URL indicates that I'm going to a U.S. server, but I receive the same (well, very similar) error message:

"Sorry for the inconvenience.
You've stumbled upon an unexpected, temporary problem. Performing your action again in a few moments will likely resolve the problem completely. If not, we suggest you try re-launching Yahoo! Mail.
If the problem persists, feel free to contact Customer Care about Error Code 1.

Thanks,
The Yahoo! Mail Team".

The only difference I can see between that and the error message when I surf from new zealand (no socks proxy) is that the NZ page says "the Yahoo!Xtra Mail team".

So it's a yahoo issue, not an NZ issue.

I realized, after seeing that error often enough, that gmail was pulling my yahoo email and that I had changed my yahoo password recently. I changed gmail's stored yahoo password but yahoo is still whining. I'm guessing that yahoo got confused because there were so many bad password logins from gmail (and from a continent different from what I had told yahoo was my location). I do wonder though how that is supposed to get resolved. Will the error clear itself out eventually? Will it clear itself out before I completely give up on yahoo (abandoning friends who only know my yahoo address and whom I can't ask to change their addresses for me since, well, I can't get into my contacts list either)?

*dumbasses*

Monday, September 13, 2010

Selenium RC with generated PHP tests


  1. If you don't have it already, sudo apt-get install phpunit
  2. start the selenium server (at minimum: java -jar selenium-server)
  3. Generate the PHP testcase from the IDE.
  4. rename the class from Example to whatever you're testing.
  5. rename the file to be the same as the classname plus .php
  6. In the class, add a __construct() which calls $this->setUp() and $this->start()
  7. After the class is defined, instantiate the class and call its testcase method.


There may be syntax errors. The code generation is not perfect. Fix those.

Tuesday, August 10, 2010

disabling the firefox new addon notification

I use firefox for running selenium-RC and when it starts I want to disable the notification of new addons having been installed.

mzfuser says to: "Go to about:config, create a new boolean value "extensions.newAddons", and set it to false."

That works for me (firefox 3.6 Ubuntu).

Monday, July 26, 2010

JMeter Drupal Proxy URLS to exclude

I often use jmeter to load test drupal websites. One of the first things I need to do is capture a sample browsing session over the site using the jmeter proxy.

When I'm capturing a sample browsing session I usually don't want to grab all the embedded files since that makes for a very large set of http client requests in the thread group. At this point I want the thread group to contain just the top level URLs I actually clicked on, but I want the individual entries to have "Retrieve All Embedded Resources" checked.

That will increase the CPU load on the jmeter instances at runtime (they need to parse the downloaded file to extract the resources). I'm happy to make that trade for now. If it becomes a problem I'll adjust to have the embedded resources pre-extracted at proxy capture time but for most jmeter jobs I've done I haven't had to worry about test time CPU load much.

I always forget what the URL exclude patterns should look like. This is posted so I'll find it later.

Drupal sometimes adds GET parameters to URLs even for "static" resources such as css or png files. I haven't gone through to figure out which resources can have GET parameters added to them, instead, when excluding embedded/static resources I just treat them all similarly:

.*\.gif(\?.*|)
.*\.jpg(\?.*|)
.*\.png(\?.*|)
.*\.css(\?.*|)
.*\.js(\?.*|)

etc.

Thursday, July 22, 2010

CTEs for clarity (no efficiency gain here)

Some messages are sent to two kannels. I've got the essential data in a postgresql table but I wanted to find the messages which were sent to both kannels (within 5 seconds of each other, most such duplicated messages are sent within the same second, or within 1 second of each other).

The query could have been done without CTEs (using subqueries) but I prefer the CTEs since they move the subqueries "out" of the select statement, making the select much easier to read.

/* set up the CTEs although they're not really common except in the sense that they're the same statement, I'm just using them as *table*expressions* :-) */
WITH lhs AS
(
select id,kannel,tstamp,dest,msg_text from decmtmo WHERE mt_mo='mt'
), rhs as
(
select id,kannel,tstamp,dest,msg_text from decmtmo WHERE mt_mo='mt'
)
SELECT lhs.id lid,rhs.id rid,abs(extract('epoch' from lhs.tstamp-rhs.tstamp)),
lhs.kannel lk, rhs.kannel rk, rhs.dest,trim(rhs.msg_text )
FROM lhs,rhs /* this is what improved, otherwise we'd have the subselects here */
WHERE lhs.id<>rhs.id /* make sure we don't look at the same row on both sides */
AND lhs.dest=rhs.dest AND lhs.msg_text=rhs.msg_text /* MT identity */
AND lhs.kannel<>rhs.kannel /* but different kannels */
AND lhs.id>rhs.id /* avoid showing two copies of the same row, with lhs and
rhs swapped */
AND 5 > abs(extract('epoch' from lhs.tstamp-rhs.tstamp))
/* within 5 seconds of each other */
ORDER by lhs.id,rhs.id

Friday, June 25, 2010

Tonido kernel with NAT (and no su to non-root user)

My "could not su to non-root user" problem with building a kernel with NAT support on the tonidoplug is solved.

On the tonido support forums (requires login, but I'm posting the link here anyway) aleinss helpfully pointed at Logging into tonido as a non root user.

Apparently, with 2.6.31 kernels and later, /proc/sys/vm/mmap_min_addr needs to be 32768 (instead of the previous 65536).

I tested it with sudo echo "32768" > /proc/sys/vm/mmap_min_addr but that didn't work (with sudo, the redirection still runs as the regular user, so nothing actually gets written). The solution was to edit /etc/sysctl.d/10-process-security.conf and edit the vm.mmap_min_addr line to say

vm.mmap_min_addr = 32768
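For reference, either of these should apply it at runtime without rebooting (untested on the plug at the time; sysctl -w and the tee trick are just the standard ways to write a sysctl as root):

sudo sysctl -w vm.mmap_min_addr=32768
echo 32768 | sudo tee /proc/sys/vm/mmap_min_addr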


Many thanks to aleinss for pointing that out.

cssh feature wishlist -- clicking on one cssh window brings all related cssh windows to the front

It might be possible to do this already (I've customized .csshrc a *little* bit, mainly just to set the default window sizes and locations), but as in the title, what I'd really like is a toggle so that when I click on one of a set of related cssh windows, all of them (including the window into which I type commands to execute on all related servers) should come to the front.

Either that or another two monitors.

Or three monitors and a computer that can support four monitors altogether :-).

Tuesday, June 22, 2010

php file handle GC and flock

I was confused for a bit because I had code similar to this (details elided):


function myFunc() {
    // MYLOCKFILE is a constant holding the lock file's path
    $h = fopen(MYLOCKFILE, "r");
    // $h is neither returned nor stored anywhere, so this is the only reference to the handle
    return flock($h, LOCK_EX);
}


And when I would call it and sleep (e.g., myFunc(); sleep(300);) and then run the same program in another shell, the second shell wasn't blocking at the flock call.

strace showed an flock(4, LOCK_UN) being called in the first running instance. Apparently, since I don't return the handle nor do I assign it to a variable that's passed by reference, php decides that $h can be GCed immediately upon function return. That closes the file and releases the lock, so the second instance wouldn't block since there was no lock there.

Wednesday, June 16, 2010

Toshiba Satellite A75 temperature control -- Maybe

I've had a problem forever with the Toshiba Satellite A75. It's got a 3.3Ghz CPU in there but I could only ever run it at one of the two lowest speeds (1.8GHz, 2.1Ghz), because any faster (even with ondemand having me run mostly at 1.8Ghz), if the CPU ever ran too long at high speed, the kernel wouldn't notice and couldn't speed the fans up.

The whole machine is old too, so I wouldn't be surprised if the fans just aren't working too well anymore.

I just found the omnibook kernel module project though. And after

git clone, make 
sudo make install
sudo modprobe omnibook ectype=12


It loads correctly and cat /proc/omnibook/temperature says 56C. And once or twice I heard the fans spin up faster (they're on all the time these days). So I'm testing (by setting my maximum CPU speed at 3.3Ghz, but still ondemand).

If the machine is stable this way, I'll scale down to 2.4Ghz or a bit higher maybe. It'll be good to be able to do useful things at a reasonable speed again on this machine. 1.8Ghz was getting so old :-).

Sunday, June 13, 2010

Transmission blocklists

I thought I'd setup some blocklists for Transmission. After some googling and looking at this and that blocklist, I decided to go full paranoid and used a whole bunch of blocklists from IBlockList.

The blocklists are gleaned from other tools (Bluetack, PeerGuardian, etc).

I don't care too much about performance (there's a warning on IBlocklist that using too many rules will affect broadband performance :-), so I decided to just install a whole bunch of (possibly redundant) lists.

To get the blocklists and install them in transmission-daemon's blocklists directory (on my machine, ~/transmission/blocklists) I use (not yet in cron, will be soon):


#!/bin/bash

cd ~/transmission/blocklists

URLS="http://list.iblocklist.com/?list=bt_level1 http://list.iblocklist.com/?list=bt_level2 http://list.iblocklist.com/?list=bt_level3 http://list.iblocklist.com/?list=bt_edu http://list.iblocklist.com/?list=bt_rangetest http://list.iblocklist.com/?list=bt_bogon http://list.iblocklist.com/?list=bt_ads http://list.iblocklist.com/?list=bt_spyware http://list.iblocklist.com/?list=bt_proxy http://list.iblocklist.com/?list=bt_templist http://list.iblocklist.com/?list=bt_microsoft http://list.iblocklist.com/?list=bt_spider http://list.iblocklist.com/?list=bt_hijacked http://list.iblocklist.com/?list=bt_dshield http://list.iblocklist.com/?list=bcoepfyewziejvcqyhqo http://list.iblocklist.com/?list=cslpybexmxyuacbyuvib http://list.iblocklist.com/?list=pwqnlynprfgtjbgqoizj http://list.iblocklist.com/?list=ijfqtofzixtwayqovmxn http://list.iblocklist.com/?list=ecqbsykllnadihkdirsh http://list.iblocklist.com/?list=jcjfaxgyyshvdbceroxf http://list.iblocklist.com/?list=lljggjrpmefcwqknpalp http://list.iblocklist.com/?list=nxs23_ipfilterx http://list.iblocklist.com/?list=soe http://list.iblocklist.com/?list=ccp"

# fetch each list; -N only re-downloads lists that have changed upstream
for u in $URLS
do
wget -t 10 -c --limit-rate=128k -w 10 -nd --ignore-length -N "$u"
done

# decompress whatever arrived gzipped (doing this once, after the loop, avoids
# gzip complaining about a missing *.gz on later iterations)
gzip -d *.gz


Unfortunately transmission-daemon doesn't notice new blocklists added while it's running, so I also have a separate script to restart transmission-daemon (not in cron yet either since I'm just playing around with this stuff for now :-). I haven't tested kill -HUP yet.

Tuesday, June 08, 2010

Tonidoplug kernel with NAT

I bought a Tonido plug computer and have been playing with it at home. I want it to be a dnsmasq, squid, openvpn and ssh server. It'll also do some other things, but those are the main things I'll run on it. I don't need the tonido software running there (although that may change if the people at home need to support themselves instead of me setting everything up via the command line).

I'm very happy with it since it's so much faster and easier to work with than my NSLU2 (which is 1/10th the CPU freq and 1/16th the RAM). There was one problem though, I couldn't load the NAT modules. After some investigation it turns out that the kernel doesn't have routing configured and it's missing a whole bunch of modules that Tonido (or sheeva, not clear about which exactly) decided they didn't need to provide.

Fortunately, I'm booting from a USB drive, and it's very easy to make a bootable drive. If I make a mistake and make the USB drive unbootable, I can just extract the rootfs and modules tarballs back onto the drive (before or after mkfs, according to taste) and it'll be bootable again. I would never try to modify the kernel (or even install modules) on the NAND since I don't want to risk bricking the plugcomputer. Although I did do a bunch of sudo apt-get [packages] on the NAND before I realized what I was doing and stopped :-).

Mikestaszel suggested building the module and copying it over, to get ppp working. Taking that hint, I downloaded the source for the kernel I was using and after some misadventures due to forgetting techniques from long ago, I finally got the modules I needed built and installed.

The tonido runs the 2.6.30-rc6 kernel so I downloaded 2.6.30.6 from kernel.org. I used the config file for this kernel from sheeva.with-linux.com. My first try at building the kernel didn't work because of bad magic. After some googling I realized/remembered that I needed to modify the kernel makefile so that EXTRAVERSION would match the one from the running kernel, so EXTRAVERSION=-rc6.

A second try at building the kernels got me closer but it still didn't work. The bad magic error was gone, but some symbols were missing.

I didn't particularly want to build the kernel itself since I'd hoped that just building and installing relevant modules would be sufficient. Unfortunately, NAT requires CONFIG_IP_ADVANCED_ROUTER, and that can't be built as a module. So there was no way around it, I'd have to build a kernel.

After the kernel was configured and built along with the modules I needed (make menuconfig;make;make modules), I needed to make a uImage (google pointed me at this generate uImage for sheevaplug page). That required:
sudo apt-get install uboot-mkimage
make uImage
cp arch/arm/boot/uImage /boot
make modules_install
reboot


modprobe iptable_nat finally succeeded and some testing proved that the plugcomputer was working correctly as a NAT router.
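The testing itself would have been the usual minimal MASQUERADE setup, something like this (a sketch; eth0 as the upstream interface is an assumption, adjust to whatever faces the internet):

echo 1 > /proc/sys/net/ipv4/ip_forward                   # enable packet forwarding
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE     # masquerade traffic going out the upstream interface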

-- UPDATE --
When I installed and rebooted with the new kernel, I found myself unable to run processes as a regular user. The processes would be killed immediately. I can't see how it would have been a problem with how I built the kernel since all I did was allow advanced router features and NAT/MASQUERADE. But there it is. I don't mind running as root on the tonidoplug since everything I do there I'd need to run sudo anyway, but I've switched back to using the NSLU2 for now so I can play with the tonidoplug, building kernels, rebooting at will and possibly eventually getting this latest problem fixed :-).

-- UPDATE 2010-06-22 --
I'm wrong. I *do* mind running everything as root on the tonidoplug. I don't mind running openvpn or sshd as root, but I don't want to run squid or transmission-daemon as root since any successful remote attack instantly gets root privileges.

Friday, May 14, 2010

Orca on Ubuntu Lucid (10.04)

My brother-in-law is blind, so I've been interested in linux accessibility for a long time. Not interested (or talented) enough to actually improve accessibility, but interested enough to keep an eye on the matter.

Long ago, I couldn't get Festival or Orca to work at all on my laptops. Mainly hardware support issues. One particular problem had to do with the software requiring the audio card to allow sampling at a rate that was twice what my audio card could do.

I just tested Orca on Lucid though and it's looking very good. Just enabling Orca took all of 5 seconds. I was a little confused since some things worked (firefox and the Orca preferences) and others didn't (gedit, gnome-terminal running man). Logging out and logging back in fixed that. I suppose just enabling Orca but not restarting didn't allow Orca to get its hooks deep enough into Gnome so it could intercept X11 display and keyboard/mouse events.

It took a few retries and an hour and a half to get a reasonable set of Orca flat-view keybindings that didn't conflict with the regular gnome keybindings. I like using the Windows key (Super or Super-L) as a command key for Orca since it isn't used in Linux, exists on all new keyboards and is convenient. I don't much like Orca using the Caps-Lock key for that. Using the Windows key would be a problem if Orca ran in Windows, but as far as I can tell (from the Orca website) it doesn't run in Windows.

I was a little confused that Orca had firefox-specific keybindings, but they probably had to implement that to have similar behavior as JAWS (the dominant windows screen reader, and therefore the dominant screen reader in the world).

So Orca has some generic keybindings for general flat-view and other functionality. It can have app-specific keybindings. And it's scriptable (says the web page, although I haven't looked at what scripts might look like or how powerful they are).

It's also been pretty stable (tested on three laptops, all of which are pretty old). The only instability I saw happened when trying to close the Orca program via the GUI. Gnome and X hung so completely I had to go to a terminal and kill/restart gdm.

That's no big deal though since blind people would normally *always* have Orca on. And when I killed Orca from the command line (orca -q), it died gracefully and didn't take Gnome or X with it.

But all I've done so far is play with it a bit. I haven't used it extensively at all. Instability might become a lot more noticeable after hours or days of use. Maybe I'll try to get my brother-in-law to test-drive it on one of these laptops (instead of his Windows+JAWS laptop) for a day or two.

Friday, April 09, 2010

getting the vodafone usb modem working on ubuntu

http://ip-62-105-171-197.dsl.twang.net/bvportal/forums/index.html?threadId=ff80808122654e6f01227632fff8503c&postId=ff80808122654e6f01228e6f22484bb4

Thursday, April 08, 2010

tomcat thread dump at work

At work, if tomcat isn't responding, send it a kill -3 to get it to produce a thread dump.
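
Roughly (the pid here is a placeholder, use whatever ps shows for the tomcat JVM; the dump ends up in catalina.out):

ps aux | grep [j]ava
kill -3 <tomcat_jvm_pid>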

Friday, April 02, 2010

xhost

I run three or four different browser profiles for security. There's a general browsing profile for reddit.com and links I follow from there, there's a more secure profile for gmail and facebook, and there's a most secure profile for internet banking.

Not only do I run these separate profiles, I also run them as separate users under sudo -H -u [user] [browser] [other-params].

But in order to do that I need to have an xhost setting that allows these browser profiles (running as users other than me) to display on my root display. To enable that, I have this line in ~/.xinitrc.

xhost local:

Wednesday, March 17, 2010

bash for loop

Oooh, I just saw Bash for loop examples.

I definitely like:

for i in {1..100}
do
...
done


I don't usually need to step forward in increments greater than 1, but for that there's


for i in {1..100..2}
do
...
done


Of course there's also


for (( c=1; c<=100; c++ )); do ...; done


which is what i've used in the past, but I always forget about the double parens.
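
One caveat to remember: the {start..end..step} form needs bash 4. On older bash, seq does the same job:

for i in $(seq 1 2 100)
do
...
done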

Saturday, March 13, 2010

grandr on toshiba satellite karmic dual monitor setup

When I first installed a second monitor on Ubuntu Karmic, the dual monitor setup was trivial. The built-in method (System | Preferences | Display) worked very well.

Lately though (possibly due to a package upgrade) that method stopped working perfectly. It couldn't identify the external monitor model (showing it as Unknown), and when I'd select the correct resolution for it (1440x900), things broke on gnome restart or laptop reboot: some icons on the left of the desktop would be all scrunched up together, and dragging a window from the external monitor (left) to the laptop monitor (right) would leave the window partly on the left and partly on the right. It wouldn't go all the way to the right edge of the laptop monitor. As if the virtual screen width had changed to something a *lot* narrower.

I just installed grandr and ran that. It sees better than an Unknown monitor, and the virtual screen width is back to normal. I don't know yet if this fix will survive reboots. But it probably will. And if it doesn't, well, it'll be a reasonable workaround until I upgrade to Lucid.
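
grandr is just a front end for xrandr, so for the record the equivalent command line is roughly this (the output names below are assumptions; xrandr -q shows the real ones):

xrandr -q
xrandr --output VGA1 --mode 1440x900 --left-of LVDS1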

Friday, March 05, 2010

gnucash OFX

I started playing with gnucash a month or so ago. I ran into a bunch of problems and it turns out they're mostly due to export file format I chose.

My bank supports OFX-MS-Money, OFX-Quicken and Quicken. I saw a post that said to avoid quicken because there were issues with identifying transactions as having already been loaded (when loading the same transactions twice, either because the same export file was loaded twice, or because two export files intersect). So I avoided Quicken and OFX-Quicken.

Unfortunately OFX-MS-Money has a worse problem. For some reason, the export files produced by my bank (might be the bank's problem, might be a gnucash bug, or it might just be a bogosity in the file format, or an obscure interaction among these and other features), would load into gnucash, but for the checking account, some transactions would be lost. I doubt if the transactions were really missing, but gnucash was somehow not seeing them.

I tried the Quicken file format the other day. All transactions loaded correctly and so importing a month's worth of data was very little effort. Gnucash also asks for particular expense sources (this grocery, that pharmacy, that other restaurant, etc) to be identified as to which kind of transaction they were. That's nice since for future months, those expenses will be correctly allocated to the correct account.

Unfortunately, Quicken has a weakness in that the transaction entries were missing a lot of information. For withdrawals, for instance, OFX-MS-Money would indicate which ATM card (Sol's or mine), as well as what ATM branch the transaction was made at. The Quicken format would just have a description of WITHDRAWAL and a memo field of ATM. And it was similarly silent for a lot of other transactions.

So, while the Quicken imports very nicely and has some great usability shortcuts, I can't use it since I forget what particular transactions are about IN THE SAME WEEK, let alone a month or two later.

Fortunately the OFX-Quicken format (which gnucash calls QFX) has *most* (not all, but enough) of the information from OFX-MS-Money, and loads as accurately as the Quicken format. We don't have a *huge* number of transactions per month. It only takes 30 minutes or so to load a month's worth of transactions and correctly assign expenses to the correct account. And I don't have to walk through the checking transactions doing a binary search for missing transactions.

At some point we'll have bank accounts at other banks. When that happens I'll be able to compare accuracy of other bank OFX-MS-Money files and determine if the bug is in gnucash or in my current bank's export file :-). I'm betting on a gnucash bug, myself. But now that I've got OFX-Quicken working, I don't care enough to replicate the bug. Maybe I'll do that on the easter weekend, if we don't go anywhere.

Monday, February 15, 2010

common git branch tasks

I like

Zorch's workflow examples on starting a new branch on a remote git repository.

[Here as a reminder so I can search on site:monotrematica.blogspot.com git branch]
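
The usual shape of that workflow, from memory, so the reminder is useful even if the link dies (branch names are placeholders; see Zorch's post for the full version):

git checkout -b feature-x                              # new local branch
git push origin feature-x                              # publish it on the remote
git checkout --track -b feature-x origin/feature-x     # pick it up on another machine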

Thursday, February 11, 2010

NZ School goes completely open source

There's a great story at CIO about how a New Zealand high school switched to open source servers going from 48 servers to 4. It's pretty good to save 11/12ths of your hardware, electricity and server maintenance/sysadmin budget.

Wednesday, February 10, 2010

fireEvent when keyPress, keyDown, keyUp don't work

Evil Tester writes about fireEvent, so I don't need to

I found this (and Nick Bartlett's summary) when doing a google search for selenium IDE where type, keyPress, keyDown, keyUp, etc weren't working as expected.

I'd actually found and used fireEvent a few months ago when I was working with some selenium tests for the Mahara e-portfolio open source system. But I'd since forgotten.

In the problem at hand, there was an input textbox with an onkeydown which detected what key was pressed and if it was the ascii(13), would call this.blur(). The solution was just to "fireEvent | locator | onblur".
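
In Selenium IDE table terms, something like this (the locator is a placeholder):

type      | id=theTextBox | 12345
fireEvent | id=theTextBox | onblur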

[Posted here so that I'll be able to find it when I do a google search on "site:monotrematica.blogspot.com selenium IDE keyPress keyDown onblur" :-]

Thursday, February 04, 2010

Parameterized jmeter threadgroup and loop count settings

as pointed out in the mailing list post,

run jmeter with user specified jmeter parameters, e.g.,

jmeter -J threads=10 -J loopcount=5

and then, in the threadgroup, set the relevant fields to, e.g.,

${__P(threads)} and ${__P(loopcount)}

In the beanshell sampler, parameters can also be accessed via:
JMeterUtils.getProperty("threads");
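
Putting it together (plan.jmx is a placeholder; note that __P also takes an optional default as a second argument, handy when the property isn't passed on the command line):

jmeter -n -t plan.jmx -Jthreads=10 -Jloopcount=5

and in the threadgroup: ${__P(threads,1)} and ${__P(loopcount,1)}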

Wednesday, January 20, 2010

Gnome panels on external monitor

The new monitor works very well, but some things aren't great. The fact that it's a rectangle (and there is invisible space above the laptop top panel) is one. But I'll adjust to that.

I did need panels on the external monitor though. It's not convenient to have the window list for both monitors sitting over on the laptop panel, which is what happens when the external monitor has no panel of its own.

Then I found several solutions at answers.launchpad.net.

I used the gconf-editor solution. But after reading downward, I learned about the Alt-drag trick. That works too, and is much simpler. To put a panel on the second monitor, just create it first (it'll go on the first monitor). Then press Alt and click and drag the panel to the other monitor.
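
For reference, the gconf-editor route boils down to setting the panel's monitor key; from the command line it's roughly this (panel_1 is an assumption, the first command lists the real toplevel ids):

gconftool-2 --all-dirs /apps/panel/toplevels
gconftool-2 --set --type int /apps/panel/toplevels/panel_1/monitor 1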

Monday, January 18, 2010

Services not coming up

After a recent package update on my work computer (AMD64), services were not being started on boot (no apache, ssh, etc). Strangely enough, gdm *does* start, so I get to log in to gnome.

runlevel says "unknown" though.

A bit of googling points to this: https://bugs.launchpad.net/ubuntu/+source/upstart/+bug/497299

and also https://bugs.launchpad.net/ubuntu/+source/upstart/+bug/461725

I don't understand the whole thing. There seems to be a race condition when init tasks run in parallel. And also bugs in updating /etc/network/interfaces.

It's fixed for me by forcing upstart to the previous version (0.6.3-10). I then pinned that. If a 0.6.3-12 version comes up I may download it and test. Or maybe I won't, since 0.6.3-10 works and I doubt if newer versions in karmic will actually give me much reason to upgrade.
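
Roughly what the downgrade and pin looked like (a sketch; the pin stanza goes in /etc/apt/preferences):

sudo apt-get install upstart=0.6.3-10

Package: upstart
Pin: version 0.6.3-10
Pin-Priority: 1001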

[Update]
Sol's laptop (upgraded to karmic the other day) has the same problem. I'll fix it the same way tonight (can't ssh into it since the services didn't start :-).

[Update]
boy, upstart-0.6.3-11 officially sucks. I updated the toshiba laptop and had the same problem. The Durabook is fine though. That's three computers out of four. Might become four out of five after I upgrade sol's desktop at work to karmic.

Friday, November 13, 2009

Problems with 32bit java on AMD64 on karmic

I prefer to run 32 bit java on my desktop since I only have 2GB of RAM. 64bit buys me nothing, and it eats twice the RSS.

With Jaunty (and Gutsy before that) I'd followed derek's advice on building a 32bit .deb.

I ran a downloaded 32bit eclipse.

Karmic seems to have broken something (probably SWT) and 32bit eclipse with 32bit JDK isn't usable.

Posting this here so I can find Derek's article again and start with that to get 32bit eclipse and 32bit sun-jdk working together again.

[update] It looks like Miroslav Hruz has a solution to the 32bit SWT issue. I'll try that on the weekend (remotely).

[update] I got things working without really understanding (or logging) what I did. After a bunch of uninstall, reinstall, all without notes (and some of it was in synaptic, so not in .bash_history), 32-bit sun-jdk and 32-bit eclipse started working again without me needing to do anything as in the links above. I did export GDK_NATIVE_WINDOWS=1 though. Thus ends this unhelpful post :-).
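
For what it's worth, the only deliberate change was that environment variable; the launcher ends up looking something like this (paths are placeholders):

export GDK_NATIVE_WINDOWS=1
/path/to/eclipse/eclipse -vm /path/to/32bit-jdk/bin/java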

Saturday, October 31, 2009

No audio? Is skype running?

I was looking at newly uploaded (from camera) videos of Timmy and John and I was confused because the videos were weird:


- Totem said they were *playing*, but the progress bar wasn't moving and there was neither sound nor video.
- The time counter (which shows how many seconds/minutes into the video/song you are) wasn't moving.
- I thought it was something wrong with the newly mangled videos (made smaller via ffmpeg) so I tried some MP3s. Same symptoms as for the videos.


So I looked around at the modules, and at dmesg. Everything looked good. Until I moved my mouse to the bottom of the screen and the hidden status panel popped up. Skype was running. Apparently, on this machine, it takes over the sound card.
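
Next time, instead of hunting around, I should just check which process is holding the sound device (a sketch; device paths vary by setup):

lsof /dev/snd/* /dev/dsp 2>/dev/null
fuser -v /dev/snd/* 2>/dev/null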

Thursday, October 08, 2009

Slow vim startup -- solved

I've had some frustration due to slow vim startup.

time vim -c 'q'

real 0m6.138s
user 0m0.096s
sys 0m0.024s

Found the solution though. A blog post at samdorr.net says to use -X.

So now I have two new aliases in ~/.bashrc

alias vi="/usr/bin/vi -X"
alias vim="/usr/bin/vim -X"

I was a little confused because the slowness was there in screen, but when I opened a new terminal, there was no slowness (even without the aliases). I think it's because I've restarted X since I started screen. So $DISPLAY in the screen sessions is :0.0, but possibly there's some other X authentication cookies that refer to the old X session. Ok, I just looked, there's an XDG_SESSION_COOKIE, maybe that's it, or if not, something similar. So the X authentication still succeeds, but only after a timeout.

The poster at samdorr had a different problem. His server probably didn't have X at all, or maybe vim is trying to connect via ssh X forwarding, back to his graphical terminal :-). But the solution he gives is an axe that solves my problem too since I don't need vim to talk to X at all.

Hmmm, someday I'll just need to catch up to the modern world and use gvim, and probably syntax highlighting even :-).

Tuesday, October 06, 2009

Vodafone "Vodem" -- very easy

A friend of mine has a vodem (that's a USB HSDPA modem that works with the Vodafone NZ network). I borrowed it and tried it out on my Jaunty (Ubuntu 9.04) laptop at home.

I was confused initially since I had no manual or anything else. My friend said that on windows there's a CD, it installs a bunch of things and then just works. I didn't think to ask if authentication via login/password was required.

After some messing around, I found a hint on ubuntuforums that pointed me in the right direction. NetworkManager in Jaunty automatically detects the modem. It even automatically detects the network. It then presents a dialog asking which country (NZ is already default selected) and which Network to use. There are three networks ("Vodafone", "Vodafone (restricted)", and "Vodafone (unrestricted)"). My confusion was that I chose the first and the modem immediately disconnected.

I should have chosen the third. Upon choosing "Vodafone (unrestricted)", the vodem connects immediately to the Vodafone network and then just works. No further management needed. This is pretty cool. Too bad vodafone data charges are still so high. When the data charges drop by a factor of 10, this will be a real player. For now, it's a nice toy that I'm soon going to return to its rightful owner.

Thursday, October 01, 2009

On recruiting software developers

John Fuex has a great article, 19 Tips for Recruiting Great Developers

Now, not all companies are going to be needing the superstars this article focuses on, but the points made there should be relevant to, say, the top 85-90-95 percent of developers.

Perhaps the tips can be relaxed according to the quality of the developer needed by the company (although the company, HR division or recruiter who is conscious of the actual target percentiles [instead of being hypnotized by some mantra about hiring "only the best"], is likely very rare on the ground).

Friday, September 25, 2009

Switched back to gnome

I had switched to xfce4 in Ubuntu because it gave me some memory savings. I found, however, that on my work desktop, I got *far* more savings by installing and using a 32-bit JDK (and 32-bit eclipse to go with it).

I didn't really need to switch back to gnome, but gnome is a bit smoother than xfce in the total experience, and I found myself using gnome applets in xfce (mainly the user switcher, but some others too).

I didn't switch back to gnome immediately since I *much* preferred xfce's Alt-F2 behavior to gnome's. The application chooser is much smarter even than Gnome-do. But then I realized that I could use xfrun4 in gnome. And after testing that at work, I've switched to gnome+xfrun4 at home too.

I forgot how I was running firefox as another user for security :-). After some fumbling, I figured it out again (although, really, I should just have logged back into xfce and looked at the launcher :-).

sudo -u [other_user] -H /usr/bin/firefox-3.5 -a [profile] -P [profile]

The -H is necessary because if it's not given then it'll use your own home directory rather than the home directory of other_user.

Wednesday, September 09, 2009

Ubuntu 9.04 gphoto2/libgphoto2 borken for my Canon Digital Ixus 700

I use gthumb for downloading camera pictures to my computer. I have a script that takes the filenames produced by gthumb and renames and creates resized copies of the images and videos. gphoto2 talks to the camera in PTP mode.

For a while gthumb worked on my laptops. It's stopped working now though and I don't know why. There are bugs posted with Ubuntu regarding this. Adding yet another bug confirmation won't do any good.

At one point I had gphotofs working enough to mount the camera filesystem. But I didn't want to mess with the filesystem directly. And anyway, gphotofs isn't working anymore now (it runs, returns, but doesn't actually mount the filesystem, and gphotofs keeps running in the background [which is OK, that's what it needs to do as a fuse filesystem provider]).

So now I have a horrendous hack for grabbing the images :-). I installed Ubuntu Intrepid under VirtualBox, gave it access to the USB devices, and I run gthumb there. Then I just scp the files over to the host box and halt Intrepid.

Yech. It works, but is hoogly :-). Maybe this'll be fixed in Karmic.

Overall, I find Ubuntu a pretty good platform for doing everything I need to do, but there certainly are the little niggles like this that demonstrate it's not really ready for regular users. Or it is, but they'll come up against walls every once in a while, get frustrated, and go back to their windows viruses.

[Update]
Ah, pulled in gphoto2, libgphoto2 and libgphoto2-port0 from karmic (downloaded the debs manually and installed with dpkg -i) and gthumb is now downloading the pictures. I understand about lack of resources, but it does seem a bug that this fix wasn't backported to work with Jaunty.

[Update]
I'm now actually on Karmic. The dist-upgrade reverted a separate and necessary fix. Gnome has a gvfs module for gphoto2 and when it's loaded, gthumb can't read the pictures/videos from the camera since the PTP port is already in use (by the gvfs gphoto2 module). The solution is to disable that. There might be a neater way, but I just did:

chmod a-rwx /usr/lib/gvfs/gvfsd-gphoto2
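
A quick way to check from the command line whether libgphoto2 can actually see the camera again (before blaming gthumb):

gphoto2 --auto-detect
gphoto2 --summary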

Sunday, August 30, 2009

Screen and scrollbars!

I work with Nigel McNie and was whining (on the company IRC server) to a friend about screen and how I wish I could get it to work with scrollbars. He pointed me at Nigel's page on how he uses screen.

I don't use urxvt, but the invocation given there works with rxvt too (just change urxvt to rxvt in the .Xdefaults entry). So now I've got scrollbars working with screen.

Slightly more secure

On my home computer I've got a reasonably secure browsing setup (firefox 3.5, noscript, adblock, flashblock, made the flash cookies directory non-writeable, etc). But nothing is perfect. So I decided to raise the bar a bit. I moved my main browsing profile to a separate user account, so that even if it gets cracked, it won't have access to my ssh keys (ssh-agent is convenient, but it could be a hole), data in my home directory (svn working copies, git working copies, random other files), or my other privileges (sudo access on this and other computers).

My trusted profiles (online banking, power company, mobile phone company, phone/internet company, cable tv company, etc) will probably go into yet another account. I haven't done that yet. But I'll get it done tomorrow, probably.

For reference:

to allow the other user to run firefox on the main display:
xhost local:[other_user_name]

and to actually run firefox as the other user:
sudo -H -u [other_user_name] firefox-3.5 -a [profile] -P [profile]

I don't think the -a should be needed there, but it doesn't work right (loading the default profile instead of the profile I want) when it's removed. So I keep it in.

Update - I wondered why youtube and other videos had no sound in this new setup. Today I realized that it's because the browser is running as the other user, and that other user isn't in the audio group.

Fixed with vigr.
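
For the record, the non-vigr way to do the same thing (the other user needs to log out and back in afterwards for the new group to take effect):

sudo adduser [other_user_name] audio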

Saturday, July 18, 2009

FreeNX

Install FreeNX server on ubuntu

Google has also announced a free NX server called NeatX. It's very new and there are no ubuntu packages yet.

Sunday, June 21, 2009

Firefox 3.5 Memory usage looking good

This article benchmarks opera, chrome, firefox 3.5 and safari in terms of how much memory they took to perform the same task(s). The numbers are for Windows, but I expect that there'll be a similar improvement in memory use on Linux.

Firefox 3.5 is looking very good. I'm going to download the beta and test the heck out of it :-). Browser memory use has been a *huge* problem for me, particularly since I've been doing a *lot* of Selenium testing. Of course Selenium, Firebug and similar developer tools will increase the amount of memory used by browsers by a lot. But if the base browser can use a lot less memory, that'll be a huge help (particularly since Eclipse and tomcat 5.5 aren't memory-thin applications either, and running everything together makes my system slow as molasses as they force each other out to swap).

pidgin stopped working with yahoo

Yahoo changed their messenger authentication protocol and now pidgin 2.5.6 has stopped working with Yahoo Messenger. There's an announcement that 2.5.7 is available at launchpad, but it's not really there yet. I guess it takes a while for packages to become available, or maybe I'm hitting a mirror and the mirror hasn't synced yet.

I hope it'll be there tomorrow so I can upgrade my laptops and my work computer :-). If not, well, web.im works well enough for now. It sure would be convenient though if pidgin were to start working again soon :-).

I'm not yet ready to build pidgin from source. But I may be, by Tuesday :-).

Saturday, June 20, 2009

Broadband plan upgrade

I was sick for much of last week. That's why we're upgrading our broadband plan (to avoid 64kbps when we go over our cap). Now, it's only one week til the end of the current cycle, so we're going to have to use up 10G in one week :-). I don't think that's going to be a problem.

Our previous plan was the Explorer plan, with 10GB of bandwidth before we're slowed down to 64kbps.

I was sick last week, but I only took two sick days (Monday and Tuesday). I went to work on Wednesday, but that was a mistake since I got worse on Thursday and had to stay home Thursday and Friday. But I didn't want to not work at all the whole week, so I worked from home. Unfortunately, work involved a lot of vnc work against a vserver at work. So I blew around 2.5GB on vnc :-).

So we're upgrading to a 40GB cap plan. It's only NZ$10.00 more for double the bandwidth, so it's a great deal. There's a real danger that we won't downgrade from this plan :-).

Well, we plan to get a second broadband link at some point. Sol works from home 4 days a week, and I do quite a lot of work from home, so redundancy (even against an extremely unlikely outage) is going to be worthwhile. But that won't be for a while yet. And if we do that, then I'll certainly ratchet the telecom plan down.

I should have started the upgrade yesterday morning, so that it'd take effect by Tuesday (two working days). I didn't though, so we'll have to stay under the 800MB cap until around end of Tuesday or sometime Wednesday when the new plan takes effect. It had better not take til Friday to take effect though.

Update: I looked at the bandwidth monitor this morning and I noticed that we'd already been upgraded. No 2 day wait. That's cool since I *was* wondering what they were thinking with the 2 day wait. The delay was probably a leftover from some manual procedure that required review and approval, a leftover that got brought over to the web based procedure. And telecom finally figured out that the approval and delay weren't necessary since, after all, the customer logged in and authenticated themselves with their password.

Thursday, June 18, 2009

Caveats of Evaluating Databases

The title of this post is just the title of the article it links to, Caveats of Evaluating Databases. That title isn't very good. But the article is.

Wednesday, June 17, 2009

very interesting discussion of tomcat classloader leak that leads to running out of PermGen

Must read deeply and test (-client seems an easy test)

Update -
Ok, -client doesn't work for me. OTOH, this is an old article (2005). No doubt a lot has changed with Java garbage collectors (and maybe less, but still some changes in Sun java API/JVM implementations). -client is actually significantly worse than -server.

Time to look at org.springframework.web.util.IntrospectorCleanupListener
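
For when I get to it: that listener just gets registered in web.xml, something like this (a sketch):

<listener>
  <listener-class>org.springframework.web.util.IntrospectorCleanupListener</listener-class>
</listener>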

Monday, June 15, 2009

Morons? Utter Morons?

Sounds like Microsoft has outdone themselves with a bug that makes windows unbootable. And fixing it just sets you up for letting Microsoft making itself unbootable again.

Hearsay only, I wouldn't know if this is true since I don't run windows (it sits there eating up some disk space in case I run across some hardware that needs it, and I'd rather not have to waste money on a license since I've already got one good license [actually, I'd have three, except I've blown away windows on two of our three laptops]).

Thursday, June 04, 2009

myvodafone fail

I get my mobile phone service from Vodafone NZ because when my family and I arrived in New Zealand, we brought our GSM phones with us, and Vodafone is currently the only GSM provider in NZ. It's a prepaid service since I don't need to make many calls.

Since it's prepaid, I need to top-up my prepaid credit every once in a while. Now vodafone has a service called Hotlink. With Hotlink, it's possible to register a phone number and pin with my bank (highly recommended) and then get prepaid credit top-ups via a vodafone app that works through SMS messages. Hotlink worked very well for us for a year. Lately, however, my sister-in-law came to visit us in NZ and we asked her to buy us new phones since our old phones (well, mine) were approaching unusable due to a cracked screen, shorter battery life, etc.

We love our new phones. However, apparently vodafone's Hotlink app doesn't work with all handsets. Presumably it only works with handsets that vodafone sells or has sold in the past. So no hotlink for us.

Fortunately, there's a website where I can top-up my own phone via credit card payment. I didn't realize that I could top up my wife's phone too, using my account. So I tried to log in to *her* account. I'd forgotten the password, so I clicked on the forgotten password link and it sent a new password to her mobile. Except the password didn't work. I generated passwords three times and none of them worked. FAIL.

And phone support doesn't work since vodafone phone support isn't 24x7. FAIL.

So I logged in to my account (I use the Revelation password manager in Ubuntu to store my passwords) and I noticed that I could pay for prepaid credit to (via credit card) go to any mobile phone. So I used that to send credit to my wife's phone.

But vodafone FAIL isn't done. Vodafone accepts the credit card number on their site instead of having the credit card transaction be processed through a dedicated credit card gateway. In the name of usability they allow myvodafone users to store their credit card information *in*their*profile*. So they're not dropping the credit card information as soon as the credit card transaction is done, they're really storing the credit card information in their database.

Well, they'd better be really security paranoid over there.

Saturday, May 16, 2009

Switching to xfce4 on Ubuntu 9.04 (Jaunty)

I've been using Ubuntu (Gnome) since Dapper Drake. I've liked it and didn't see the need to switch to anything else. I did take a look at KDE (didn't like it) and xfce (didn't like it then either). It's been a few years though, and xfce is now sufficiently like Gnome (except thinner), that I have now switched over to xfce completely.

I would probably still use Gnome except my work is in java lately, and with tomcat, eclipse, firefox, firebug and selenium, I'm finding that 2GB of RAM isn't enough. I can't upgrade my laptops (they all max out at 2GB, I'd need to buy new laptops to use 4GB or more). So I'm doing everything I can to retrieve memory from fat apps.

No doubt there are more ways to save memory. Maybe opera and selenium-server. For now though, xfce is definitely usable. It's growing on me and I expect that I'll like it more than I like Gnome in just a week or two :-).

Friday, May 01, 2009

xfce4 on vnc

I'm liking xfce4 on Ubuntu. I'd looked at xfce before and not been too impressed. It was pretty good, but not good or easy enough to use. So I'd stayed with gnome.

For slow machines, I've used either fvwm or icewm. There are other lightweight window managers, but I liked those two.

At work, I use gnome on my primary desktop (but I may change that to xfce4, actually), and icewm on another desktop on which I have a vnc server. I run eclipse and tomcat on my primary desktop, and browsers, IRC and mail clients on the remote desktop.

I think I'm about ready to switch to xfce, actually. I tested that out on my laptop running gnome, with vnc running xfce. I had some minor problems getting xfce working under VNC until I saw a post on "xfce4 on vnc" on ubuntuforums.

Tried it out. It works on the laptop (both primary and vncserver running on the same box). That should be perfect for work.

#!/bin/sh
# this goes in the vncserver startup script (typically ~/.vnc/xstartup)
unset SESSION_MANAGER    # stop xfce4-session trying to reuse the host session manager
startxfce4 &
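
Then start the server as usual, e.g. (display number and geometry are just what I'd pick):

vncserver :1 -geometry 1280x800 -depth 24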

Not jaunty yet at work

I had planned to upgrade my work desktop to Jaunty today. But that's now pushed back to Monday. I forgot to bring a laptop to work, so I wouldn't be able to work if something went wrong with the upgrade (and I prefer to work on a laptop anyway while the upgrade is running to avoid any instability the upgrade might cause).

Was in such a rush to get Timmy ready for school, and take care of John while Sol brought Timmy to school, that I didn't have time to pack up the laptop.

Naturally, the bus was then late and I would have been able to pack *two* laptops if I'd wanted to :-).