I was going to test out git-svn, but I couldn't make heads or tails of it. So I'm working with svnmerge, and I like it very much. It looks like git would be very nice to work with, but since I can't understand it yet, I'm going to watch Linus's YouTube presentation on git.
Linus is pretty irritating in the presentation, and VLC won't fast-forward through the FLV file I've got, so I can't skip the pointless parts. I'll just have to sit through the whole thing, I guess. Maybe I'll mostly listen to it and only look when he discusses something where the slide seems important.
Saturday, July 28, 2007
Thursday, July 26, 2007
backup and restore
I'm backing up my laptop's /home filesystem to my USB hard drive because I think it's time to change filesystems. /home is xfs. A few weeks ago, though, I found what seems to be a bad sector on that partition (an ISO image on /home had sections that were unreadable, although the CD the ISO was built from was perfect). Unfortunately, neither XFS nor JFS can accept badblocks output so that the blocks found bad can be marked unusable (giving up a little usable disk space, but keeping data safe).
So I'm backing up my data (damn, it's taking a long time to back up 37GB over a USB cable) so I can reformat /home as reiserfs (likely) or ext3 (less likely). And then it'll be another night of running badblocks so I can feed its output to the new filesystem.
This isn't a big deal. I'm reading widely and playing chess against crafty (and losing every game), so it's not time lost. I'll go to bed soon and hope (unconsciously) that the backup will be done by tomorrow morning, so I can format, run badblocks, and slowly copy the data back.
This laptop doesn't have any S.M.A.R.T. options in the BIOS, so I can't take advantage of those options (apparently JFS and XFS rely on S.M.A.R.T. to handle bad blocks for them). Heck, since Ubuntu makes the IDE drive look like a SCSI drive, I can't even hdparm -d1 /dev/sda. I get:
/dev/sda:
setting using_dma to 1 (on)
HDIO_SET_DMA failed: Inappropriate ioctl for device
Yech. There's probably a way to force Ubuntu to treat the drive as an IDE drive instead of emulating SCSI for it, but I haven't gotten around to searching for that solution. It's not important enough just yet.
The next morning: Wow, badblocks has gone through just a fourth of the disk and the badblocks file is already 135MB. I think I'll have to go with ext3 instead of reiserfs so that I can take advantage of the -c parameter to both mkfs.ext3 and fsck.ext3. Yech, I've always had more problems with corruption after power loss on ext2/ext3 than on reiserfs/xfs, but I'm going to have to bite the bullet here. Anyway, it's a laptop: the only time the OS wouldn't be shut down correctly would be a kernel panic (I haven't had one of those in a long time, since I stopped compiling and using win4lin kernels) or my turning the laptop off. The battery works as a UPS, so I won't get an instant off when the wall power goes out.
I should really buy a new hard drive, of course. I can't just yet, so I'll limp along with this one and make weekly rdiff-backups to the USB drive, plus another rdiff-backup every time I transfer pictures and videos of Timmy from the digital camera. :-)
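The format-then-badblocks plan above, as commands. This is a sketch using a small file-backed image as a stand-in for the real /home partition; on real hardware the argument would be the partition device, and the -w write test destroys whatever is on it, so it only comes after the backup:

```shell
# A 16MB file-backed image stands in here for the real /home partition.
dd if=/dev/zero of=home.img bs=1M count=16 2>/dev/null

# Scan for bad blocks and save the list. -w is the thorough, destructive
# write-mode test; leave it off to scan read-only.
badblocks -w -o badlist.txt home.img

# mkfs.ext3 -l marks every block in the list as unusable from the start.
mkfs.ext3 -F -q -l badlist.txt home.img
```

Later scans can be folded into an existing filesystem with fsck.ext3 -l on the same device; that's the -c/-l machinery that makes ext3 the safer pick here.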
Thinice
I thought I'd look for an Ubuntu (gnome) desktop theme I liked. So I looked around and tested the themes in System | Preferences | Theme. I didn't much like what I saw, but then I clicked on "Customize" and, testing the different themes there, found that I liked the *darkness* of the Thinice theme.
I tend to lower the brightness of the screen on any computer I use. Even on a laptop LCD, I prefer the screen to be less bright. That isn't consistent with the dark text on light background of my terminal windows, but there it is. Very well then, I contradict myself ... I contain multitudes.
In any case, I liked thinice and I'm going to move all my computers (work and home) to that theme :-). The scrollbar sliders are cool too.
Monday, July 23, 2007
DRY
So this code I'm working through (it was supposed to be a trivial exercise in fixing brace style) has turned into a nightmare. I can understand when people copy and paste some code. Maybe copying and pasting the same code twice or three times is OK, although three times is stretching it. BUT GODDAMN IT, MUST THE SAME DAMN CODE BE COPIED 23 TIMES????
And it's not just one function; it's more like 8 functions copied and slightly modified (or not modified at all) all over the place. If this had to be done at the beginning because there was no time to refactor, __NOW__ is the time to refactor, because any bug that comes up will need to be fixed 23 times. And I guarantee that at least one of those fixes will be missed.
Parameterize, abstract, and remove redundancy. Dammit.
Oh yeah, that's Don't Repeat Yourself.
the root of all evil
Alright, it's not all that evil, but I'm looking at this PHP code:
$message = 'Hi, ' . "\r\n";
$message .= "\r\n";
$message .= 'Your account at ' . $this->domain . ' is awaiting activation.' . "\r\n";
$message .= "\r\n";
$message .= 'To activate your account, please visit ' . $this->domain . '/members/activate/' . $activation_key . "\r\n";
$message .= "\r\n";
mail($to, $subject, $message, $headers);
And I'm thinking,
- mixing single quoted strings and double quoted strings.
- mixing single quoted strings, double quoted strings, and concatenating variables instead of putting them inline
I don't care too much about the strings being hardcoded; the app doesn't need to be multilingual yet. But I'm struck by the fact that this is an attempt to optimize: in PHP, single-quoted strings are faster than double-quoted strings, since the runtime doesn't need to attempt string interpolation (parsing the string for variables and replacing them with their values).
On the other hand, this code is going to send mail. This is classic premature optimization. The network will be the bottleneck, or writing a file will be (if we were to write the emails into a file for batch sending later, which isn't relevant in this concrete case; the email is sent immediately, see the mail(...) function call). And it's harder to read: mixing two kinds of strings always is, and in fact this code mixes three kinds, the third being the concatenated variables. Code for maintainability first; code for performance only after profiling and finding the hotspots.
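For contrast, a sketch of the same message as a single double-quoted heredoc (untested against the real app; $this->domain, $activation_key, $to, $subject, and $headers are assumed to be the same values as in the snippet above). Escape sequences and variables both interpolate in a heredoc, so the \r\n pairs and the dynamic parts can live inline:

```php
<?php
// Same message, one string, interpolation instead of concatenation.
// Each line ends with an explicit \r; the heredoc's own newline supplies
// the \n, giving \r\n. The last line spells out \r\n because the newline
// before the closing identifier is not part of the string.
$message = <<<MSG
Hi,\r
\r
Your account at {$this->domain} is awaiting activation.\r
\r
To activate your account, please visit {$this->domain}/members/activate/{$activation_key}\r
\r\n
MSG;

mail($to, $subject, $message, $headers);
```

One string literal, one quoting style, and the variables sit where they'll appear in the output.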
Sunday, July 22, 2007
ionice!!!
Hot damn! ionice is incredible.
I was going to say "nice", but that would have been obvious and shallow.
I know, that's just a link to a short tutorial on how to use ionice. I like short tutorial links, though, so I can try things out quickly.
I'm installing something very IO-heavy in vmware, on a single-core 1.5GHz laptop with 1.25GB RAM (and a slow IDE hard drive). Before ionice, the browser, and sometimes all of X, would stall on me for 30-40 seconds before getting responsive again. I don't mind if the vmware process takes a long time to run; if it finishes tomorrow, well, I'll just sleep through it. So it's very nice to be able to lower its IO priority so that it doesn't make the laptop completely unusable while it's shoveling gigabytes around on the disk.
Highly recommended. Requires at least kernel 2.6.13 though. I've already asked the system administrators for one of the Debian servers at work to be upgraded to 2.6.13 or higher. That box runs a very long-running doxygen process that makes everything else crawl for a while. It'll be great to get some of that performance back :-).
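For reference, the invocation is that simple. A sketch, with `du` standing in for the real vmware workload and a made-up PID in the second form:

```shell
# Start an I/O-heavy job in the "idle" class (-c 3): it only gets disk
# time when no other process wants it. `du -s /etc` stands in here for
# the real vmware process.
ionice -c 3 du -s /etc > /dev/null

# An already-running process can be demoted by PID (1234 is made up):
# ionice -c 3 -p 1234
```

Class 2 ("best effort") with -n 0..7 gives finer-grained priorities if idle is too drastic.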
Saturday, July 21, 2007
trying out git-svn
I've got a pretty good solution to the merging problem in svn. I've also previously used svk for disconnected work. I couldn't stand the ugly commit messages svk created though. I'm now testing out git-svn. If the commit messages from git-svn are reasonable, and there are no problems with branches and merges and commits to svn, and it runs very fast, then I may personally switch to it.
If git-svn works out as well as promised then I'll switch to using it and the others on my team can decide whether they want to stick with pure svn or try out git-svn.
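For the record, the basic round-trip I plan to test looks like this. The URL is a made-up placeholder, and -s assumes a standard trunk/branches/tags layout:

```shell
# One-time: clone the svn repository into a local git repository (slow).
git svn clone -s http://svn.example.com/repo myproject
cd myproject

# Pull new svn revisions into git (roughly 'svn update'):
git svn rebase

# Push local git commits back to svn, one svn revision per git commit:
git svn dcommit
```

The dcommit messages are what I'll be judging; svk's were the dealbreaker.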
svnmerge heaven
There are three major problems with subversion.
- It's slow
- You have to be online to the repository to do many operations (not distributed)
- Merging is hard
Git apparently solves all those problems. So says Linus on YouTube.
I haven't used git and I'm not likely to at $DAYJOB. It took a long time to convince the developers and system administrators at $DAYJOB to actually use svn, and I'm not done with the conversion yet; technical support isn't on the version-control bandwagon. So there's just no way I'm going to spend another year or two convincing people to switch from subversion to git.
I don't worry too much about being online to do most operations. At the office I'm online to the svn server, and at home or anywhere else with an internet connection I can get through to the office on the VPN. If I'm not online, well, I just won't work. Or I'll work with svk if necessary.
The current full working copy (including branches) at work is pretty large, so an svn status on that *is* pretty slow. Even svn status on just the trunk is slow. But we work around that by avoiding working at the top of a branch unless it's really necessary. Working in individual subdirectories is much faster and the only time we ever need to work at the top of a branch is when we need to see what all the uncommitted changes have been to that branch, or when we need to do an svn log to see what all the committed changes have been to the branch.
For a long time though, I would make branched tags but would not actually do experimental or development branches because, while branching is easy, merging was painful. It could be done, but it was so inconvenient I never tried to do it.
Finally, svnmerge has landed. Svnmerge is a Python program (on Ubuntu Feisty, part of the subversion-tools package) that takes the tedium and complexity out of merging in svn. The link above is a good introduction to svnmerge. I learned the basics, though, from Ken Kinder's little howto for svnmerge.
I've tested it out once or twice with real branches at work, it works pretty well and hasn't failed me yet. I wouldn't be surprised if there were some weird or extreme corner cases where svnmerge could get confused and do the wrong thing, but I'll worry about that when I get there. I haven't seen any reports of that yet and I intend to stay away from the more complex features anyway unless I really need them (e.g., no bidirectional merge until at the end, when the experimental branch is stable and is ready to be fully merged back into the trunk). In any case, I expect that development on svnmerge will fix any such showstopper bugs before I ever see them since I intend to be as considerate and conservative as I can be when using svnmerge ;-).
I'm very happy that merging in svn is finally convenient. It's been a long time, I've done my share of hand-merging branches. I'm glad that tedium is going away.
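The basic workflow, as I understand it from the howto (paths are hypothetical, and this is from reading plus light use, not from the complex cases):

```shell
# One-time: initialize merge tracking in the branch working copy.
cd my-branch-wc
svnmerge.py init
svn commit -F svnmerge-commit-message.txt

# See which trunk revisions haven't been merged into this branch yet:
svnmerge.py avail

# Merge everything available (or -r for specific revisions), then commit
# with the message svnmerge generates:
svnmerge.py merge
svn commit -F svnmerge-commit-message.txt
```

The generated svnmerge-commit-message.txt is what keeps the merge history readable, which is most of the point.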
Wednesday, July 18, 2007
firefox multi-tab home
Long ago I knew that firefox supported multiple-tabs for the "Home Page". And then I forgot.
Just the other day I rediscovered this. It's very convenient. My "Home Page(s)" button now opens:
- a Cacti tab to monitor the database server,
- our internal time-tracking web page,
- a Gmail tab,
- a tab with our internal doxygen documentation, and,
- because I'm studying svnmerge (I've been waiting for decent merge support in svn since the early betas):
Ken Kinder on "Subversion merge tracking with svnmerge"
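The setting itself is just the home-page field with the URLs separated by pipe characters (in Firefox 2, under Edit | Preferences | Main). The internal hostnames below are made-up placeholders for the real ones:

```
http://cacti.internal/ | http://timetracker.internal/ | https://mail.google.com/ | http://doxygen.internal/
```

Pressing Home (or starting the browser) then opens one tab per URL.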
Tuesday, July 17, 2007
SQL subtotals, grandtotals
I really should google more. I sat down and thought for several hours, trying to find a way to get running totals out of an SQL query. I should have given up earlier; then I could have googled and found:
SQL Server: Calculating Running Totals, Subtotals and Grand Total Without a Cursor
It's got some SQL-Server-isms in there, but it's generic enough that I learned the basic technique quickly and implemented it in PostgreSQL. I got my query to use running totals, and used that as a springboard to a working running balance. I haven't yet gotten subtotals and grand totals working, but I'm sure I'll get to that soon; SQL use where I work is getting more sophisticated. Years ago the code was monkey-see-monkey-do, MySQL-style, treating the database as a dumb datastore. We're on the way to something between MySQL-style dumb queries with joins done in PHP, and Oracle-style "do everything in the database". Both styles have their appropriate niches, but our systems are in neither of them; we need to be more moderate, using SQL where it's appropriate and business logic in the application where *that's* appropriate.
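The core trick from that article (no window functions needed, which matters on 2007-era PostgreSQL) is a correlated subquery that re-sums everything up to the current row. A minimal sketch, with sqlite3 standing in for PostgreSQL and a made-up `ledger` table:

```shell
sqlite3 :memory: <<'SQL'
CREATE TABLE ledger (id INTEGER PRIMARY KEY, amount INTEGER);
INSERT INTO ledger (amount) VALUES (100), (250), (-50);
-- Running total: for each row, sum every amount up to and including it.
SELECT a.id, a.amount,
       (SELECT SUM(b.amount) FROM ledger b WHERE b.id <= a.id) AS running_total
FROM ledger a
ORDER BY a.id;
SQL
```

The same correlated subquery works unchanged in PostgreSQL; subtotals and grand totals fall out of grouping the outer query. It's O(n²), which is the price of not having window functions.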
Monday, July 16, 2007
minimal centos vmware image
I had to build a minimal centos 4.4 vmware image recently. Two projects I'm working on use centos because our team systems administrator is an RHCE, so he uses a familiar distribution. I don't much care which distribution I use. CentOS is fine as long as I can get reasonably new versions of the software I require (php, postgresql).
In fact CentOS 4.4 isn't that distribution, with php stuck at somewhere around 4.3 and postgresql at 7.4. But it's certainly stable. And I can build any necessary software from source, so I don't feel any great need to upgrade to 5 (although we should probably have a plan for that upgrade path for sometime in the next 6 months).
For this distribution, I had to install Oracle 10g Express Edition (for learning Oracle, and for prototyping). That makes the installed VM bloat up by a lot. I went through the installed RPMs and removed everything I didn't think I'd need (and a few things that I found I needed later on). Later, I had to reinstall some development packages since I needed to build php from source (because I couldn't figure out how to get CentOS' built-in php to talk to oracle). I eventually wound up installing php 5.3.2 since I couldn't get php 4.3.9 to do what I wanted (connect to oracle). That's probably pilot error, but I don't mind the upgrade, and if there are incompatibilities with the installed php 4.3.9 on the deployed live servers, well, I'll ask the team sysad to deploy 5.3.2 there (after some paranoid testing on the vmware image).
After building php, I removed the development packages again :-). For when I need the list again:
gcc apr-devel gcc-c++ libtool bison flex make
autoconf automake cpp db4-devel apr-devel
glibc-devel glibc-headers apr-util-devel
httpd-devel libstdc++-devel openldap-devel
openssl-devel pcre-devel postgresql-devel
zlib-devel libpng-devel libxml2-devel
The vmware image we use doesn't have any X packages in it except what's needed for vmware-toolbox (useful for resizing). I thought I absolutely needed X actually running in the VM. That may have been true with older versions of vmware, but with the free vmware-server I can do X11 forwarding over ssh and run vmware-toolbox against the X server on the host. No need to pull in gnome and all the packages it drags along just for resizing the vmware virtual drives.