Sunday, December 25, 2005

SUSE (even openSUSE) on VMware

I use Mandriva as my preferred distribution. The *drak* stuff isn't that big a deal for me; I usually prefer the command line, and it still isn't easy to add an ethernet interface alias via drakconf (actually, I don't think I see how to do it at all, although you can do it with webmin). I'm *really* in love with the urpmi stuff though (mainly the command line urpmi, although rpmdrake is very good too).

I thought I'd try SUSE though, either the evaluation version or openSUSE. But I can't afford to overwrite my laptop since I don't want to have to reconfigure everything (OpenVPN to two different networks, etc.), so I thought I'd try it with VMware. When I downloaded the DVD ISO, it wouldn't work in VMware; it kept giving some sort of boot error. I thought maybe the DVD download was corrupted, so I downloaded it again. Same problem. Next I downloaded the CDs. Same problem. It looks like openSUSE (or the SUSE evaluation version) just won't install in VMware. That's a shame, since I'm certainly not going to switch to it if I can't test drive it, and I just don't have a spare computer to test drive it on. Ah well.

Saturday, December 17, 2005

Bloglines and Google Reader

I looked at Bloglines last night, partly because of a post here or on a mailing list about how it's so much easier to use than Google Reader. Too lazy to put links there.

I think I *will* be switching to Bloglines. It does seem much faster than Reader (although that might just be because I've got digg in Reader and there are a heck of a lot of entries in there for Reader to load). It's also got subfolders, with feeds as subfolders on the left (useful for ignoring digg until I've got enough time to waste; in Reader everything is in the same place and digg overshadows everything), and it's got "Mark all read" (which I sometimes need to resort to if I've been away for a few days and there's no hope of reading everything that digg has found).

There's probably some way to filter things in digg (which is still too useful to drop, despite the inconvenience of having so much there), but I don't see how to do it in Reader yet. I thought maybe digg would have per-user RSS feeds, so that my preferences in digg (what I want to see, which is not everything) would be what I'd see in my RSS reader. But even if there were a per-user RSS feed, there doesn't seem to be any way to set preferences in digg similar to preferences in Slashdot, where I'd only see articles in categories I'm interested in.

So it's bloglines I'll be using for now. When reader catches up and gets faster, I'll look at it again.

Tuesday, December 13, 2005

Dictionary of Algorithms and Data Structures

I saw the Dictionary of Algorithms and Data Structures long ago, posted on a mailing list, and saw it again on digg.com just today.

I didn't graduate in CS, so a lot of this is new to me. I wish there were a more structured way to browse it though. Alphabetical is doable, but doesn't link concepts together very well. Alphabetical plus following related links (see also, etc.) works a bit better, but that has its own problems. Heh, I'm being characteristically lazy and I want a tutorial that covers everything in there :-). I doubt even Knuth covers everything in there, and if he were to try, he'd need the immortality pill from The Myth.

Sunday, December 11, 2005

visayan on mailing lists (KLUG)

I see that sacha is looking into learning bisaya. I don't know why that would be, it's not like she's not got enough on her plate already. It's certainly a good thing, learning languages is good.

I hope, though, that it's not so that she can decipher things like this. And that she doesn't try to learn things from that and similar posts on some mailing lists. Almost all Visayan posts on that list are like textingese in Tagalog or English, essentially illiterate. That's not to insult the poster; that's just students communicating lazily in whatever language they use. It's not their fault, it's cultural. There are wonderful, literate Visayan posts somewhere on the Net, I'm sure, but they're not on mailing lists populated by students who don't care enough about their posts to try to spell correctly.

Not that spelling is an easy thing in Visayan. There's no given orthography, and the examples in Sacha's post feel sort of archaic, although that's better than the illiteracy that passes for communication these days. Sounding archaic isn't a big deal though. Visayan varies from place to place; people will just assume she learned Visayan with people who spoke classical Visayan.

National Geographic Pics

Huh, in the preceding link to Google Video I wasn't quite sure whether it was a *good* way to waste bandwidth. So there's no qualifier there. On the other hand:

Pics from National Geographic and ScubaDiving.com cannot possibly be considered a waste.

Lots of other good photo sites out there. For instance, APOD, the Astronomy Picture of the Day.

All those sites are better than webshots or screensaver software that installs adware and spyware on the computers of the gullible. I think so anyway (I wouldn't really know, I don't use Windows anymore; maybe there are webshots or screensaver freeware that aren't evil).

Google Video

Huh, I see that Google Video is up. What a way to waste bandwidth:

Asian Backstreet Boys
How men screw up romance

Lots of useless stuff there too. I wouldn't be surprised if there's some pr0n there though, somewhat like Smart Telecom's post-your-phonecam-pictures-here site (too lazy to find the link).

tiger

Wednesday, November 30, 2005

Try ruby not working

I saw a link to Try Ruby! (in your browser) and thought, why not.

It's not working for me though. After the first few things to try, I get to the "Jimmy".reverse example and I get:


>> "Jimmy".reverse

<?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>500 - Internal Server Error</title>
</head>
<body>
<h1>500 - Internal Server Error</h1>
</body>
</html>



PHP already has most of what I need and I'm interested in trying WinBinder, a native Windows binding for PHP. I don't know what that's like yet, but if it's reasonable, then I may use it to write some quick Windows programs (anything to get away from the horror that is Visual Basic).

Sigh, so much to learn, so little time. I need to delegate.

Tuesday, November 22, 2005

swsusp very useful

I was on a long bus trip recently and I decided to see if the new laptop supported swsusp. I had given up on swsusp on a previous laptop (also a Winbook, but a slightly older one, since sold) because it didn't work there. It would try to save and then fail on trying to halt MySQL. Or if I stopped MySQL myself, it would try to save, and then on restore it would try to save again (instead of restoring), and then give up and reboot (or something; it was confusing, which is why this account is a mess).

I'm very happy that it now works on the new laptop though. I've got three batteries for the laptop. They only last around 45 minutes at the full 1.7GHz AMD speed (something like a 2300-2400+ Intel equivalent, I think), but in combination with cpufreq, with the speed set to the slowest CPU frequency available (around 600MHz AMD, so maybe around an 800+ Intel equivalent?), the individual batteries last around 1.5-1.7 hours.

But I didn't want to shut down and swap batteries, since starting up takes some time and all of that time is at 1.7GHz, eating battery life. Swsusp is a lifesaver: when the battery gets low and the laptop starts beeping, I just run a script that tells the laptop to suspend. It writes its state to swap and the laptop turns off. I swap batteries, and when I turn the laptop on it restores from swap, switches to 600+MHz automatically, and gets back to X in far less time than it takes to boot.
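The script itself is tiny. A sketch along these lines (the sysfs paths are the 2.6-kernel cpufreq and swsusp interfaces; the exact paths, governors, and frequencies vary by machine, so this is from memory, not gospel):

```shell
#!/bin/sh
# throttle to the lowest available CPU frequency before suspending,
# so the post-resume battery drain stays small
CPUFREQ=/sys/devices/system/cpu/cpu0/cpufreq
echo userspace > $CPUFREQ/scaling_governor
cat $CPUFREQ/scaling_min_freq > $CPUFREQ/scaling_setspeed

# flush buffers, then suspend-to-disk; the machine powers off
# once the image has been written to swap
sync
echo disk > /sys/power/state
```

Needs root, and it obviously only makes sense on the laptop itself, so treat it as a starting point rather than something to run as-is.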

I think I'm going to swsusp all the time now, even when not trying to save battery, just because it starts up so much faster when restoring.

Wednesday, November 16, 2005

A constant state of insecurity

Another link from Schneier has this:

As a security professional, my friend often attends security conferences and teaches security classes. She noted that the number of passwords she collected in these venues was higher on average than in non-security locations. The very people who are supposed to know more about security than anyone appeared to have a higher-than-normal level of remote access back to their companies, but weren’t using any type of password protection.


A Constant State of Insecurity

Good God.

On the other hand, I admit that I do use several levels of insecure passwords for free but insecure web-based services. Any service that requires registration for free access gets one of my fake identities (I lied about everything, including what country I'm supposedly from [I don't know where Anguilla is, but it sounds pretty]) with my lowest-level password (I don't mind giving it away to anyone, although I don't actually post it on a blog or anything either :-).

Maybe some of those people were doing something like that. On the other hand, if companies are exposing FTP or POP3 on the internet (or maybe even ssh, with their users having the same ssh password as their POP3 password), well, this is depressing my donkey. I wonder if anyone learns anything at those security seminars. Maybe they're Windows security seminars and the lessons have to do with which buttons to click to turn the firewall on. A lesson that is instantly forgotten because the listeners are Windows security professionals.

Schneier this week

Schneier in the current installment of Crypto-Gram has some interesting links. This month's installment seems very good, more interesting than previous months', although they're all worth reading.

He gets something wrong, though, about the Manila Times and profiling of terrorists (male, between certain ages, cellphone, uneasy). Maybe that's just because he needed a short tagline. The article attributes the profile to the NCR Police Office (from which stupidities like this are expected).

The article certainly doesn't sound as stupid as Schneier's comment makes it sound. That there is some controversy within the police organizations about the usefulness of the stereotype would normally be a good thing, except that it might just be normal intra-governmental infighting with no good reason behind it.

In the same issue of Crypto-Gram, though, are a whole bunch of links related to the Sony DRM rootkit (heh, it can be used to avoid WoW spyware (what's that doing, maybe just checking for piracy and such?), a trojan uses the Sony rootkit/trojan to hide, and Sony has given up on it and Microsoft will detect it and remove it). Heh.

And there's an interesting weblog on malware from F-Secure; I found it linked from Schneier. I may look at it every day now. Or maybe add it to an RSS aggregator, if I can figure out how to do that :-) Huh, turns out it's easy after all, using reader.google.com. Too lazy to try other RSS readers.

Hmmm, now to figure out how to get a feed from TechScene (kind of bass ackwards, since I could just get feeds direct from the people, but I'm lazy or too busy at work to remember too many feeds or URLs, or both).

There's also a link to an article discussing the prank article on aluminum foil hats previously discussed here. It looks like some heavy-handed humor back and forth, but I'm not so sure about that zapatopi link. Some people are nuts about this stuff, and he might be one of them. He *does* have a book on practical mind control protection with aluminum foil beanies, but I haven't read it, so I don't know if it's just a relatively long-lived prank feeding on the paranoids and taking their money :-).

Lots more articles, all of it interesting. Every 15th of the month (well, 16th in the Philippines) is a huge time sink (but only about 33% a waste of time) because of this monthly email :-)

Saturday, November 12, 2005

Aluminum helmets *help* the government spy on You!

although on average all helmets attenuate invasive radio frequencies in either direction (either emanating from an outside source, or emanating from the cranium of the subject), certain frequencies are in fact greatly amplified. These amplified frequencies coincide with radio bands reserved for government use according to the Federal Communication Commission (FCC). Statistical evidence suggests the use of helmets may in fact enhance the government's invasive abilities. We theorize that the government may in fact have started the helmet craze for this reason.


On the Effectiveness of Aluminium Foil Helmets:
An Empirical Study

IVRS must die

I see that Edong (he also has a "Ka", but I'm not on that side of the political spectrum) has a post on IVRS (interactive voice response systems).

IVRS was all the rage in the US when I was there. It saved labor cost because the operator could be partly replaced by a machine voice. Instead of hiring 2 or 3 operators, a company could keep one receptionist/operator and fire the rest.

I can't stand IVRS though. Whenever I hear automated voice prompts I immediately hang up.

It's not the impersonality of it. I work with computers; I sort of like impersonal. What I can't stand about IVRS is the inefficiency of it, and the fact that, if it's to be used efficiently (memorize the call tree and type the digits in directly instead of waiting to hear the prompts), *I*, the customer, have to expend extra effort to remember. I have enough trouble memorizing my name, some days. I don't need to buy from a company that forces me to waste my time listening to a ridiculously long call tree. And even if I wanted to memorize the direct route to where I want to go, no company I'm buying from is important enough to spend that much memory real estate on, even if I had that real estate to spare. Any company that tries to communicate with me through IVRS is (1) going to lose a sale, and (2) going to earn a rant, either in person, when I get to talk to someone in management, and/or in a blog entry, for being more concerned with cost than customer experience. If they want to save cost, let them relocate their call center to the Visayas or Mindanao; salaries there are 1/2 to 1/3 what they are in Metro Manila, and it'll help the economy and decongest the capital.

Which reminds me, is it still Metro Manila or did that go out with the Marcoses? No one mentions the Metro much anymore. Although there's still that extra M in MMDA.

Thursday, October 27, 2005

Large copies to USB 1.0

I've got a laptop that only has USB 1.0. This is great for my USB keyboard and mouse (there are no PS/2 ports at all, so external input devices *have* to be USB; I've never heard of PCMCIA keyboards or mice :-) and also for my USB bluetooth and memory sticks, but it's terrible for a USB hard drive.

Recently I was copying multi-gigabyte files to the USB drive and I couldn't stand that it was taking so long and that the laptop would pause and block for tens of seconds. I thought what was happening was that the data was being buffered and then, when it was actually written out to the hard drive, the kernel was so busy handling interrupts (or polling, or whatever) that the computer blocked.

I came up with two workarounds. The first workaround was to set the buffer flushing period very short, so that the laptop wouldn't block completely for tens of seconds but instead would just get slow every few seconds:

update -3 32 -f 1 -s 1

The second workaround was to limit the bandwidth going down the USB pipe. Rsync is a great tool. This time I couldn't use it for incremental syncing, since in this configuration checksumming would be far slower than just copying the file over directly. I didn't want to fill the pipe though, since I thought I'd noticed the laptop choking a bit even with the update trick. So what I did was use rsync as a slow cp. Rsync has a --bwlimit option for application-level bandwidth limiting, which works even within the same computer. So:

rsync -a --bwlimit=384 [srcfile] [dstfile]

limits the amount of data transferred to 384 kilobytes per second. The link can handle up to around 1 megabyte per second, but limiting the copy bandwidth keeps the laptop usable, and I just hid the rsync window so that I could continue using the laptop.

rsync --bwlimit alone wouldn't be enough here. Without the update trick (making buffer flushes shorter and smaller), the buffers would still get filled with hundreds of megabytes of data, which would then take around one second per megabyte to go down the wire. With update and no --bwlimit, I was still finding slowness and inconvenience. Together, though, they made the laptop usable while it was copying large files in the background.

Eventually though, I'll just have to use a laptop that's got USB 2.0 :-).

tiger

Monday, October 24, 2005

600+ kBps!

I downloaded OpenOffice.org 2.0 just now and I was shocked at the download speed. I downloaded through BitTorrent and was getting 600+ kilobytes per second. That's one advantage of going with an ISP that doesn't do bandwidth capping. I use Destiny Cable Internet, and while I've never before seen 600+ kBps, I regularly see 100-200 kBps.

I guess someone else in the destiny network was seeding the torrent. Or maybe there's just a whole lot of bandwidth available on monday morning and I got all the benefit of it.

I downloaded the SUSE 10 eval DVD ISO a week ago and that came down at around 60 kBps on average. There's something to be said for downloading an ISO via the official torrent too. When SUSE 10 was very new, I tried to download with the official torrent and it was so slow (2-3 kilo*bits* per second) that I decided to download it over the eDonkey network instead. That download completed, but the ISO was bad, so it was quite a waste of time. When I downloaded from the official torrent again, it slowly went up to around 60 kilobytes per second. So official torrent, plus waiting a few days, is the right thing to do, I guess.

Possibly, too, downloading other (more popular) distributions would be better via the official torrents. I've done my bit at misinforming people about SUSE licensing (old information from when it was still SuSE and not Novell). Probably people stay away from SUSE because they still have the impression (as I did, until I was gently corrected by a SUSE/Novell representative) that SUSE is not yet freely distributable. Well, the eval edition *is* now freely distributable.

Anyway, I'm waiting for Mandriva 2006 to become available to see what the download rate on that will be :-). I'm the second worst type of ISP client, really, downloading things sometimes only to see what the ISP's performance is like. The worst type, of course, are those guys with huge hard disks who download movies, games and mp3s as if there were no tomorrow.

tiger

Friday, October 14, 2005

rsync algorithm talk by tridge

Andrew Tridgell has a great talk on The Rsync Algorithm.

rsync and --partial

rsync --partial doesn't do what I thought it would. In fact, for large files (my current project involves rsyncing half-gig files over relatively slow, and often relatively unstable, networks) it can actually make the destination file lose so much information, if the rsync dies early, that it becomes necessary to re-download practically the whole file.

I was under the impression that --partial did something smart, e.g., that it would be truly incremental. That is, if a large file was being downloaded (from source to dest) but the download died somewhere in the middle, all the *good* data that was correctly downloaded would be saved, AND THEN the rest of the old destination file would be *APPENDED* to the partly downloaded half. That way, if the download were continued, the already-good half would hash correctly and the download would restart at the first wrong block.

Or maybe --partial would keep all of the good downloaded file in those temporary .[real_filename].[random_letters_and_digits] files and the temporary would not be removed so that the next download would use the same temporary file.

Unfortunately, --partial does neither of those. It's pretty good if you're downloading a completely new file (the destination didn't exist before, or it's *much* smaller than the original). In that case the file is partially downloaded, and when the rsync dies, the partially downloaded file is kept. But if you tried this with a large file already partly downloaded, and the new download died before it got to the size of the already-downloaded file, you'd *lose* data, since the short newly downloaded file would *replace* the old, larger, partially downloaded file.

So --partial is very useful when downloading files from scratch, but it's not useful (in fact, may be very harmful) when syncing files which you've already partly downloaded before. For that, just use the regular rsync parameters and don't use --partial.

As good as rsync is, I think there's quite a bit of improvement yet to be done for the case where the bandwidth is slow/unstable and the files to be transferred are very large. And either of the suggested mechanisms above might be a good thing to implement.

Monday, October 03, 2005

Resumes with no email address

I received a resume a few weeks ago and the candidate is very interesting. It's in the U.S. style: two pages with only relevant information in there. The Philippine style, with parents' names, religion, and picture, is useful for discriminating against people based on surface characteristics. I discriminate against people based on stupidity, and while that can sometimes be seen in resumes, more often I have to wait until the interview for that.

Many of his skills are just what we need, and he's interested in learning other skills that we need that he doesn't quite have yet. That he studied at Mindanao State University and Ateneo de Davao and graduated from the University of the Philippines at Diliman doesn't hurt either.

But his resume doesn't have an email address. Or a cellphone number either. Maybe HR got it from JobsDB or similar and they need to pay extra to get his contact information. Whatever the case, I dropped the resume, and the positions have since been filled.

Resumes with no convenient contact information are just not going to be seen. Well, by me, anyway. He's got a snailmail address there too, but it's in Los Angeles, CA, for some reason. Now that's another filter. If he's in the U.S., he's either immigrating (legally or otherwise) or just visiting. If the first, why submit a resume to a Philippine-based company? If the second, he should put down his Philippine snailmail address, since a U.S. address is instantly off-putting. Why waste time considering someone who is probably already making more money in the U.S. as a busboy than he would make in the Philippines?

Of course there are exceptions. There are people who go to the U.S. for a few months, work there as computer consultants or software developers on very small (<6 month) projects, and come home. But there should still always be an email address, or at least a chikka-able number, so that the employer can check whether any such exceptions apply.

Wednesday, September 28, 2005

Finally, bluetooth on Linux

I've finally got Bluetooth working in Linux. For a while I was just looking at BlueZ and I couldn't figure out how I was actually supposed to transfer files. I could connect, the PIN was right, but I didn't know how to send or receive files.

I installed gnome-bluetooth-manager and I still couldn't figure it out. All that did was see the phone. I couldn't do anything to the phone connection or find a way to transfer files.

Finally, the kdebluetooth RPMs made it to my urpmi server, and after I did urpmi kdebluetooth I'm now able to transfer files. It took a while to figure out which programs to use (I didn't realize they'd be in the IceWM menus too; I'm a moron). But I finally found kbtobexclient and was able to transfer a demo Java game (J2ME with the Wireless Toolkit makes development easier, not easy [yet, but I haven't worked on it much except to look at the demos], but easier) and actually run it.
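For the command-line inclined, the same push can apparently be done without the KDE tools, using hcitool and obexftp (assuming those are installed; the address and filename below are made up):

```shell
# discover nearby devices and note the phone's bluetooth address
hcitool scan

# push the jar to the phone over OBEX (address is hypothetical)
obexftp -b 00:11:22:33:44:55 -p demo-game.jar
```

I haven't compared the two approaches; kbtobexclient is what I actually used.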

Moving Tech blog entries here

My main blog is at Bounding Overwatch. I've created this new blog as my tech blog though (struggling very hard to avoid using some amusing yet faux latin for the blog name) and I've copied most of the tech related blog entries from the old blog to here.

The entries are still there too though. Didn't want links to break if people are linking to me there or search through google.

I wonder what the result of all this moving will be for Jijo though. His Filipino tech blog aggregator picks up posts via RSS. Since I posted all of these entries this morning (except that I used the original datetimes), will his aggregator suddenly be inundated with all my entries? Or will only this first entry make it?

It's probably the second, but it'll be interesting if it's the first. Hahaha.

Thursday, September 08, 2005

Dynamic DNS in Linux

That title is a bit over the top, of course. I only looked at one solution, and when that worked for me, I stopped looking.

I've got a sort-of-static IP. My internet provider, Destiny Cable Internet, as of this writing, uses the ISC DHCP server. That server tries to allocate the same IP to the same requesting MAC if the IP is still available, so the IP is pretty stable. However, there is no promise that the IP will never change, so it's only semi-stable.

I thought I'd be OK with just assuming that it was stable, but I changed my mind. It *might* change and when it does, then I'll have to go to everywhere that assumes it's stable and fix it. And then it might change again.

Instead I decided to use a dynamic DNS server. For the client software, I first looked at EZ IP-Update and, since there's a list of dynamic dns servers at that site, I chose DynDNS. Both choices were pretty random, but I'm happy with both of them.

ez-ipupdate took around 20 minutes to set up. There was a minor compile error (something wrong in how the ./configure script decides how to use errno), so I had to figure that out and fix it; it's a quick and dirty hack, not good enough to propagate upstream, but I may email the ez-ipupdate maintainer so that he can look at his configure setup. DynDNS took 2 minutes (plus the less-than-a-minute wait for the confirmation email to arrive).
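Once it compiled, the actual update boils down to a one-liner. Something along these lines (the user, password, and interface are placeholders; check the ez-ipupdate man page for the exact flags your build supports):

```shell
# tell DynDNS the current address of eth0; -S picks the service type
ez-ipupdate -S dyndns -u myuser:mypassword \
    -h bopolissimus.homelinux.net -i eth0
```

There's also a daemon mode and a config file, which is the tidier way to run it long term.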

So I've got dynamic DNS working now: bopolissimus.homelinux.net

Port forwarding with ssh

I find forwarding ports (and reverse forwarding ports) over ssh very useful. I'll probably find it less useful now that I've found OpenVPN so easy to set up and use, but it's still a very useful technique, particularly where I don't have root access.

I always use -v when doing this, because -v will tell me if the link is slow or down (while I'm still connecting, so less wasted time). It also shows a message every time a request is forwarded down the ssh tunnel, useful for debugging to make sure I'm doing the right thing.

With the setups below, surfing to http://localhost:8080 forwards requests to the internal or external server specified. Note: by default, port forwarding binds only to 127.0.0.1, so you can connect to the forwarded service from the box itself, but not from another box. Use GatewayPorts to expose the service (warning: think about that first, it might open security holes).
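On the client side, ssh's -g flag is the quick way to do that exposure: it makes the locally forwarded port listen on all interfaces instead of just loopback. For example (same caveat about security holes):

```shell
# other machines on my LAN can now surf to my_laptop:8080
ssh -v -g -L 8080:localhost:80 remote.com.ph
```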

1. I'm on my laptop and I want to surf the website on remote.com.ph as if I were physically there. So what I want to do is forward my port 8080 to remote.com.ph:80 (it's an internal website and is inside a firewall, so I can't surf to that site directly from the open internet).

ssh -v -L 8080:localhost:80 remote.com.ph

What that does is: connections to my local port 8080 are forwarded over the ssh tunnel to remote.com.ph, and from there to remote.com.ph's own localhost:80.

2. Same as #1, I can ssh to remote.com.ph but the web server isn't on remote.com.ph, it's on another internal box inside the firewall, 192.168.80.80:80.

ssh -v -L 8080:192.168.80.80:80 remote.com.ph

What that does is: connections to my local port 8080 are forwarded to remote.com.ph over the ssh tunnel, and remote.com.ph then forwards them onward to 192.168.80.80:80.

3. Same as #2, but I'm trying to surf to some external web site that doesn't let me in if I surf from the Philippines, or where something bad will happen if I do. For instance, PayPal will block PayPal accounts if they're used from IPs it identifies as being in the Philippines; this has to do with very high fraud rates. But maybe I'm not a fraudster, I just want to use my PayPal account, and I can't because I'm physically in the Philippines. If I have ssh access to a host in the USA which PayPal won't be suspicious about, I can do port forwarding through that server:

ssh -v -L 8080:www.paypal.com:80 my_us_server.com

There is also reverse port forwarding. I use that when the server I need to connect to (usually not for http, but some internal service) is inside a firewall, I can't get to it directly, and the firewall won't port forward to the internal server. In that case, I ssh to the gateway, and from there ssh to the internal server. Then I do reverse port forwarding: the internal box sshes back to *me* and opens a local port on *my* machine which is forwarded back down that second ssh link. This, of course, only works if the internal server has a route out to the internet; if it doesn't, some other solution will have to be found.

# first ssh to the gateway
ssh gateway.remote.com.ph

# at the gateway, ssh to the internal box
ssh my-internal

# at my-internal, open the reverse tunnel back to me (I am client.com.ph)

ssh -v -R 8022:localhost:22 client.com.ph

What that does is get my-internal to ssh to client.com.ph (my box, outside the firewall). Once it gets there, it sets up port forwarding at client.com.ph so that when I (at client.com.ph) connect to port 8022, the request is forwarded down the tunnel to port 22 on my-internal. I use that much less often than -L, but when -L doesn't work, -R is often a lifesaver.

Wednesday, September 07, 2005

Comparisons of different SQL implementations

Online there's a pretty long (not sure how comprehensive, but informative and useful) Comparison of different SQL Implementations

Haven't read it all yet (busy), but I'll need to get to that within 1-2 days.

Saturday, September 03, 2005

Destiny Cable Internet SMS number

To get Destiny Cable Internet to call me (in case of technical problems), SMS to 09175771111: MD [account name] [account phone number]

Very convenient, since cell phone charges are exorbitant in the Philippines and I'd rather not have to call them. Besides, I don't know what the tech support number is and I'm too lazy to look :-).

It's weird that one has to put one's phone number in there, but I guess that's for people who have landlines; they can put their landline number there. I don't have a landline, so it seems a waste to type the number in, since they should just call me on my cell, and they've already got that number when they receive my text.

It's nice to be able to add (after the phone number) some text about the particular problem. So they can look at things on their side (e.g., signal level, problems in my area, etc) before even calling me. Saves me time explaining things, and saves them some telco costs.

Friday, September 02, 2005

Windows Solutions

Long ago I saw a list of "Windows Solutions". I had them in my sig and now I can't find them. Fortunately, I keep my old sig file around even though I don't use it anymore (I haven't seen a tool yet that will modify the Gmail sig via cron).

Anyway, here they are, so I don't forget them; hopefully google will cache them so they can be found with "Windows Solution bopolissimus":

1. Don't use Windows.
2. Stop the program and start it again.
3. Ctrl-Alt-Del and kill the program in Task Manager.
4. Reboot.
5. Uninstall the program and reinstall it.
6. Reinstall Windows and everything else.
7. Backup, format the disk, install Windows, reinstall everything else, restore your data.

To be fair, you sometimes need to do all of these (or their equivalents) on other operating systems, and #2, #3, and #4 aren't so common anymore in XP. But they're still much more common there than in Linux. All that spaghetti inside is turning into mush, and Windows is turning more and more into a big ball of mud (if I may be allowed to mix metaphors).

Tuesday, August 30, 2005

Charles Murray on The Inequality Taboo

http://www.commentarymagazine.com/production/files/murray0905.html

Friday, August 26, 2005

GoogleTalk

I was going to whine about the GoogleTalk invites I got. Three people invited me, and when I read the invite it said that the client is Windows-only.

But on the GoogleTalk site there are indications that the protocol used is Jabber. And in the Help Center link (they really should put this information on the main site) there's a link to How to install GoogleTalk on Gaim. So I'm happy. I don't have the voice chat features of GoogleTalk. I wish I did. I'll probably figure that out at some point. Particularly since I've now got lots of bandwidth, enough for voice chat or video. But I'll do that some months in the future. I don't have a headset and mic setup and I just don't have the time to deal with that right now.

Saturday, August 20, 2005

Zotob and why the victims deserved it

I didn't know anything about Zotob until days after it had been out. Well, two days after it came out. Or maybe three.

I have the same reaction as every other time something hits the Microsoft side of the world. Everyone who was cracked deserved it. If anyone was injured because a Microsoft controlled warship misbehaved, or maybe some sort of system in space stations, orbiters, space shuttles or hospitals injured or killed someone, well, the people injured weren't to blame. However, the armed forces, space or hospital agencies (and all those financial and banking agencies and power generating and distribution agencies) that chose to deploy and continue to use Microsoft software for anything mission critical are certainly to blame.

They should have patched.

They should have bitten the bullet and spent the millions necessary to upgrade to the most current version (XP), and then they should have patched, once, twice, and three times. There would still be holes in there, but they wouldn't have fallen to Zotob (if they were still on W2K) or to any of the various XP remote root attacks.

They could also have switched to Linux. I think that there's a lot of value in Linux for common office workers. They won't get Windows viruses, and they probably won't get any Linux remote compromises: they're desktop users, and most remote compromises target commonly run daemons or web based applications, which regular desktop users just don't run. Even if they were compromised, most compromises of web programs can't escalate to root, so they're limited to web page defacements, which are embarrassing (to professional linux administrators) but probably unnoticed among regular users, if they even run web servers.

There's a lot of value in Linux for server applications too, but those need to be locked down more seriously than do desktop systems. Although, generally, they're far less insecure than comparable windows systems. But this isn't an anti-windows rant. It's more a rant against unthinking windows administrators who aren't paranoid about their systems. Any administrator should always be paranoid. But windows administrators should be certified insanely paranoid. They'll get cracked anyway, but at least they'll do their best. And they'll get cracked less often. The amateurs who let Zotob into their systems should be fired and then shot as a favor to the gene pool.

Thursday, August 18, 2005

Chikka on gaim needs gaim-devel

There's been a Chikka plugin for Gaim for a while now. I got it to work some time ago, then reinstalled linux and never found the energy or need to get it to work again. It's pretty easy, but I got sidetracked by a desire to make it work with source built gaim rather than using the gaim that already came with my distribution.

I decided to look at it again from the standpoint of just getting it working instead of trying to make it smarter (so that it could detect where gaim was installed if it wasn't in the standard places, etc). When I last did that, I lost interest again since it didn't work.

Today I tried again, and I realized that I probably needed to install gaim-devel to get Chikka working. After a:

urpmi gaim-devel

and a simple ./configure; make; make install in both the chix and chix_gaim directories, I've got Chikka functionality in Gaim. It's not fully working yet, though. I can go online in Chikka and I see when other people are online (they just received or sent a Chikka message), but messages I send don't seem to get through. I'll need to look at this some more. I'll probably get it working tonight.

Anyway, installing Chikka does need gaim-devel. Without it, I couldn't even get Gaim to notice that it was possible to connect to the Chikka network, or that Chikka even existed.

Saturday, August 06, 2005

Incredible Speed with Destiny Cable Internet

I was testing downloads overnight and I was getting speeds in the range of 200+ kBytes per second, which is around 1.6Mbps. I didn't think that was possible, since I thought the modems themselves were capped at some level (even if Destiny doesn't have a network appliance doing per-user bandwidth capping). But I guess the modems aren't capped. I did hear mention of 1.5Mbps being the physical limit, though; maybe the modems aren't capable of any more than that. But I'm going to test some more to see what the real limits are :-).
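The conversion from download-meter units to line speed is just a factor of 8 (and a kilo-to-mega shift):

```shell
# 200 kBytes/sec * 8 bits/byte = 1600 kbits/sec, i.e. about 1.6 Mbps
awk 'BEGIN { printf "%.1f\n", 200 * 8 / 1000 }'
```

which is right at (or a hair over) the rumored 1.5Mbps modem limit.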

Boy, this is great.

Thursday, August 04, 2005

Destiny Cable Internet

I need Internet access at home since, as a consultant, I often work offsite and while the QSR offices are close enough to walk, sometimes I just need to work past midnight or on weekends. I can still get to QSR on weekends, but it's just nice to be able to work from home.

I considered DSL (mainly PLDT MyDSL), but I decided on Destiny Cable Internet due to price and the fact that PLDT MyDSL is capped at 384kbps while Destiny has no caps. Of course it's cable, so the bandwidth is shared; Destiny just doesn't do any bandwidth capping at all. The management thinks that the thing to do is just to provide the bandwidth (they buy more bandwidth than they actually distribute). I like that attitude, even though it might be better for me (or for the generic customer) to have a cap so that bandwidth hogs don't slow down everyone connected to the same node.

The service has been pretty good. I'm getting consistent downloads in the 400-600 kbps range. I haven't really been monitoring very much yet, or consistently enough. But I'm waiting to get to 1Mbps. QSR has destiny bandwidth too and once or twice I've seen downloads in the megabit range.

Of course, to people from the U.S. (all my brothers and my sister have been through here lately to be with my stepfather before his heart surgery) that's not much; 1.5Mbps at home is pretty common there, if you're close enough to a telco switch. But 400-600kbps is great here, and I'm confident that Destiny will continue working on their line quality and will eventually keep me in the megabit range continuously.

Not that I can do all that much with that bandwidth. But it'd be great to have it when necessary.

Monday, August 01, 2005

at-spi really slows down evolution

I couldn't stand how slow evolution was anymore. I knew I'd fixed that problem long ago (with some sort of gconf-editor tweak), and I remembered that it was some sort of accessibility thing.

Well, I reinstalled linux a while ago and I got evolution slowness again. I've finally found out what I need to do to remove the slowness.

urpme at-spi
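For the record, I think the same slowdown can be toggled off without uninstalling anything, via the GNOME accessibility flag (the gconf key here is from memory and may vary by GNOME version):

```shell
# Disable GNOME accessibility support so gtk apps like evolution
# stop talking to at-spi; takes effect for newly started apps.
gconftool-2 --set --type bool /desktop/gnome/interface/accessibility false
```

urpme is just the more thorough option, since the package can always be reinstalled later.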

Back when I was still installing mandriva, I thought I'd try out the accessibility stuff to see if mandriva was good enough to use for blind people. I still don't know since, well, I can't get gnopernicus working on this laptop. Maybe I can try it out on a desktop. If it's usable then, I may try to get my brother in law (great programmer, and blind) to try out linux.

For any blind programmer, I think the thing to do is to work in linux since the command line development environment and demand for command line developers is high in linux. In windows, well, he programs in VB, and he's pretty good at it, but he's always going to be hobbled by the graphical emphasis of VB and the command line weakness of Windows. It's probably a great environment for blind computer *users*, but it's a terrible development environment for a blind programmer.

Tuesday, June 21, 2005

vim settings i always use

set ruler        " show cursor line/column in the status line
set noincsearch  " don't jump to matches while still typing the search
set nohlsearch   " don't keep search matches highlighted
syntax off       " no syntax coloring
set ts=4         " tab stops every 4 columns
set ai           " autoindent: new lines copy the previous line's indent

i should probably also use
set ic           " ignore case when searching

ok, that's it, i will :-)

Saturday, June 11, 2005

svn keyword "Id"

svn propset svn:keywords "Id" [files]
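For my own future reference, the whole workflow goes roughly like this (filenames and the expanded revision/author values are just illustrative):

```shell
# Tell svn to expand the Id keyword in these files
svn propset svn:keywords "Id" *.c *.h

# Put the placeholder in the source; after the next commit and
# update it expands to something like:
#   $Id: foo.c 148 2005-06-11 07:40:12Z someuser $
echo '/* $Id$ */' >> foo.c

# The property change itself is versioned, so commit it
svn commit -m "enable Id keyword expansion"
```

Without the propset, $Id$ just stays as literal text, which is the part that's easy to forget.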

Saturday, June 04, 2005

Mandriva kernel 2.6.11 and cpufreq

I installed Mandriva 2005 and my scripts for slowing down the CPU stopped working. The cpufreq stuff has been modularized, and now, with the stock 2.6.11 kernel that comes with Mandriva, I have to modprobe p4-clockmod before the entries in /sys/devices/system/cpu/cpu0/cpufreq become available.

I'm glad that it's all still there though; I was thinking I'd have to config and rebuild the kernel, something I try to avoid when running at 2.4GHz in an un-airconditioned room :-).
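The updated slow-down script looks roughly like this (paths and the exact frequency values depend on the kernel and CPU, so treat this as a sketch):

```shell
# Load the P4 clock modulation driver; on the stock Mandriva
# 2.6.11 kernel it's a module now, not built in.
modprobe p4-clockmod

# With the userspace governor loaded, a specific speed can be
# requested directly through sysfs.
cd /sys/devices/system/cpu/cpu0/cpufreq
echo userspace > scaling_governor
cat scaling_available_frequencies   # see what the CPU supports
echo 300000 > scaling_setspeed      # 300MHz, value is in kHz
```

All of this has to run as root, which is why it lives in a script instead of my fingers.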

Friday, May 27, 2005

Webmail and internet cafes

I'm in Dumaguete, at the SurfShop (an internet cafe right by Silliman University). They have kindly allowed me to use my own laptop for going online. I asked at the other internet cafe just a house down (MC) and they didn't let me do the same.

It's nice to be able to work with an internet cafe that knows what they're doing, or at least is helpful and flexible. It's likely that the other cafe is run on fascist leadership lines, something far too common in the Philippines: the staff won't allow anything they don't explicitly know is allowed, and they won't call the owner to check when they come across something that's not clearly allowed. Here at the SurfShop, they called the owner.

Now, I'm perfectly willing to just surf around on insecure internet cafe PCs (they're invariably Windows and, I'm sure, chock full of viruses and spyware no matter how fascist the security posture, because it's just basically impossible to secure Windows computers that are open to public use), but I'm not going to trust any important passwords to internet cafes. As it happens, I had to send an email, so I created a throwaway email address (bopolissimus_temp@hotmail.com) and used that to send it, and then all I did was surf around.

But now I brought my laptop over and I'm online, so it's downloading my gmail and I'll read and reply offline (that's around 2 weeks worth of email and there's going to be a lot of it).

Saturday, May 14, 2005

evolution spamassassin and gmail on slow links

I'm on Camiguin and the internet access here is slow. I was trying to download my gmail to evolution and it wasn't working; I was getting frequent timeouts (gmail would stop responding). Finally I thought to turn off the automatic spam checking in evolution. Now the CPU isn't pegged at 100 percent and I'm getting my mail. The mail still takes a while to arrive, but at least I'm not getting disconnected from gmail so often.

I'm using the 2.6 kernel's CPU frequency scaling functionality to run my laptop at only 300MHz. That's not why I was timing out, though, since even at 2.4GHz and with remote checking disabled, it was still timing out. SpamAssassin is just fat and slow and should be disabled when checking gmail on slow links.

I'm running at 300MHz because this laptop runs so hot (it's not a low power CPU) that it would burn my thigh. I work around that by putting the laptop on top of a hardbound book, but even then the CPU runs so hot when I'm doing CPU intensive things (e.g., compiling postgresql, or the kernel, or even just some kernel modules) that the laptop will beep three times and turn off without giving me the opportunity to shut down properly. Running at 300MHz lets me do anything I want (albeit slowly) without the risk of a heat related outage. I've tried 0.6, 0.9 and 1.2GHz and those all work too, but I'm being conservative since I don't really need all that power anyway.

Thursday, April 21, 2005

Cagayan de Oro online directory

I had to get phone numbers for some Cagayan de Oro hotels so that friends who were going there could ask prices and book (the hotels still don't have web sites, or their web sites don't come up early enough in google), and I remembered that there's a handy searchable site for CDO phone numbers: CDO-ID.

Now, the front page hasn't been modified since around 2001, so possibly the phone numbers aren't the latest either, but most of the numbers will still work.

I remember that when I used to work for an internet provider in CDO I wanted to do something like CDO-ID too. Years before that I'd gotten hold of the Misortel phone data and had a searchable site (long gone now, hosted by Weblink, I think). I tried to get the data out of the telcos, but they were incredibly unimaginative and I never could get it on a regular basis, so I couldn't keep a reasonably up-to-date directory. Eventually I gave up trying, particularly when I found out that CDO-ID already had a site.

This resistance to putting up phone numbers on the internet is a major stupidity for any telco. After all, if the phone numbers are online, people can search for them and they're then likely to make long distance landline calls. The cellular telcos should probably also do something similar, at least for their postpaid accounts, and maybe some sort of interactive prepaid directory too (aliases allowed, but maybe filtered for obscenity and illegal names). That would probably drive up voice traffic.

But I don't see the celcos doing that either. They don't like change, despite their business being founded on technology, where change is so fast that it's essential to embrace it.

Wednesday, April 20, 2005

one letter passwords

I was listening in on a conversation about passwords the other day. One of the people in the conversation mentioned that they knew someone whose password was "A". Just the one letter. And there was a conjecture that maybe that was a really hard password to guess, since most people would think 3 characters was short and no one would think that anyone would use a one letter password. This was for Windows login passwords, so it's not something that would normally be attacked via brute force, although in an organization with lax password controls, the same password might also be used for email, file shares, etc. In which case the situation suddenly becomes worse.

Anyway, I thought about that a bit, and yesterday the refutation came to me. A one letter password is a dead giveaway because it's so easy to shoulder surf. It's not even necessary for the shoulder surfer to see what was typed; it's sufficient to see that only one key was pressed. After that, it would be trivial (in the lax organization; more security conscious organizations would have failed-login limits and lockouts) to try every printable character on the keyboard and log in.
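The search space is tiny anyway, shoulder surfing or not:

```shell
# Printable ASCII runs from 0x20 (space) through 0x7e (~), so a
# one-character password has only this many possible values:
echo $(( 0x7e - 0x20 + 1 ))
```

That prints 95: fewer guesses than many systems allow before locking an account, and trivially few for any offline attack.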

Tuesday, April 12, 2005

usb thumbkey svn repository speed

I store (most of) my source code on a USB flash drive. Originally it was ext2, because my previous thumbdrive was ext2 (at 32MB the old drive was too small and reiser wouldn't mkfs on it). Yesterday I got frustrated because commits were taking too long (small commits, but they were taking 2 minutes or more). So I did an svnadmin dump, also tar-gzipped the repository, and rebuilt the filesystem as reiser.

Reiser said how much of the disk it would eat, and then proceeded to make the filesystem. After the load, checkouts, commits and updates are very much faster. I think I'll stay with this. I'm not sure what the problem was, though. It might just be that svn was trying to update too much data (maybe the strings file, which is 6MB) and was simply slow yesterday. Or it might be that reiser is doing something good and is a better match for Subversion. I'll keep track of when it gets slow again. If it never does, I'll stick with reiser, but if it does get slow, I'll try xfs and jfs under svn, just for fun :).
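The migration itself was roughly this (mount point and device name here are hypothetical; double-check the device before any mkfs):

```shell
# Save the repository; svnadmin dump output is
# filesystem-independent, so it survives the reformat.
svnadmin dump /mnt/usbkey/repo > /tmp/repo.dump

# Rebuild the thumbdrive as reiserfs. /dev/sda1 is a placeholder;
# mkfs on the wrong device destroys it.
umount /mnt/usbkey
mkfs.reiserfs /dev/sda1
mount /mnt/usbkey

# Recreate an empty repository and reload the history.
svnadmin create /mnt/usbkey/repo
svnadmin load /mnt/usbkey/repo < /tmp/repo.dump
```

The tar-gzip copy was just belt-and-suspenders in case the dump or load went wrong.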

boost regex!

I posted on the PLUG mailing list about regex libraries in C/C++. Matt Baluyos pointed me at PCRE and Paolo Falcone pointed me at Boost. I decided to go with Boost because I couldn't stand the PCRE C API.

That's not to denigrate the quality of the library; it works very well. I use PCRE implicitly in PHP, where it's a great help and incredibly easy to use. But C is now just too low level for me, and while I can work there, I don't like it much. If I can, I work in PHP (perl is far too ugly for me, although that could change if I were to work in it instead of just reading it). I'll be working in java soon, but I tend to have a bias against it for small projects. I'm sure it's great for large projects, but even there the libraries and frameworks seem over-engineered. That's probably just a function of the fact that they're large enough that I can't get my brain around them in a week.

For anything lower level that I can't do in java or php, I like C++. And that's where I use the Boost regex libraries. I've written utility functions that hide some of the details of the Boost implementation (just pass a string pattern, the string data, and a vector<string> for the matches, and internally it does everything else). There's also a similar set of utility functions that takes the pattern as a const regex reference instead (so that patterns used all the time aren't recompiled).

It's very nice to be able to say:

vector<string> matches;

if ( re_search(pat, line, matches) )
{
    for (size_t ctr = 0; ctr < matches.size(); ctr++)
        do_something_with_match(matches[ctr]);
}

Thursday, April 07, 2005

php, popen, stream_set_blocking(...,false)

So I thought that I could make a "multi-threaded" (but not really) program in php by using stream_set_blocking. Well, the things I need to do simultaneously involve calling popen on external programs and then reading the data and doing something with them.

Well, popen doesn't work with stream_set_blocking(..., false), so I can't do things that way. Unfortunately, I've spent a week or two setting up the framework for all this (so it would be easy and convenient) but now I find it can't be done.

Oh well, time to give up on trying to do that in PHP then, and just buckle down to doing it in C++ (the original version is in C++, but frankly, I just don't want to maintain that code anymore, no choice now though).

Tuesday, March 15, 2005

So you want to be a consultant

I saw So you want to be a consultant? long ago. I'm looking at it again since someone on a mailing list pointed at it.

I'm learning again. Some of those lessons I haven't internalized yet. It'll take a while, a lot of things do. I'll get there yet :).

Friday, March 11, 2005

rsync and compressed files

I did some testing, and I find that it is generally better to rsync uncompressed files than the corresponding compressed files or archives. At any rate, tar.gz archives are bad for rsync; plain tar files are OK.


  1. I took a directory of source code and test data, around 9MB.

  2. Copied it to a remote box.

  3. tar cvzf on both sides to one file, and also tar cvf to another file.

  4. On the source box, edit one source file, inserting only one line.

  5. tar cvzf and tar cvf again on the source box. The source box should now have sources, a .tar and a .tgz which vary in only one line in only one internal file.

  6. rsync of the sources gives a speedup of 450 (14K sent, 94 received); rsync of the tar file gives a speedup of 85000+ (78 bytes received, 20 bytes sent); rsync of the .tgz gives a speedup of 1.48 (2.4MB sent, 12K or so received).

So rsync of a tar file is best (because only one file needs to be analyzed to see where the differences are). rsync of a compressed file (at any rate of a .tgz, but probably of any compressor's output) is bad. I'm not sure why, but I wouldn't be surprised if the compressed representation of data depends on everything that has come before it, so that one small change early in the stream alters the entire rest of the compressed output and confounds the difference finder, since too much is found to be different.
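The experiment above boils down to something like this; --stats is what reports the speedup figures (the remote host and paths are placeholders):

```shell
# Build the three variants of the same 9MB tree
tar cvf src.tar src/
tar czvf src.tgz src/

# Sync each and compare the "speedup is ..." line that
# rsync prints with --stats
rsync -av --stats src/    remote:/backup/src/
rsync -av --stats src.tar remote:/backup/
rsync -av --stats src.tgz remote:/backup/

# Aside: some gzip builds (Debian's, for one) have a --rsyncable
# flag that resets the compressor periodically, so a small edit
# doesn't change the whole rest of the compressed stream:
tar cv src/ | gzip --rsyncable > src.tgz
```

The --rsyncable trick trades a little compression ratio for rsync-friendliness, which would probably rescue the .tgz case.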

Friday, March 04, 2005

Corporate peer-to-peer

is almost always a mistake. At any rate, it is anywhere bandwidth is expensive.

I was just talking to someone at a company I do some consulting with. I was working remotely, and the link was ridiculously slow; ping times were at around 1 second, and sometimes 1.5 seconds. I could still work (I've got some techniques involving rsync for very bandwidth starved links, and I just type ahead), but I could work better if the link weren't so slow.


So I talked about the serious need for corporations to take steps to block p2p and then, since it's probably impossible to block it completely, to do as much as possible to monitor p2p, and to have a policy about p2p use (probably that it's not allowed at all, that it will be blocked and monitored, and that violations will affect performance reviews).

That may sound draconian, but it's necessary.


  1. Bandwidth costs money. Even if it were cheap, if peer to peer didn't soak up bandwidth the company wouldn't need as much and could contract for less, thus paying less every month. That's money that goes straight to the bottom line.

  2. The company I'm using for my example runs its own publicly accessible mail and web servers, and therefore their bandwidth is all fixed IP. That's a bit of a bug on the part of IT management: they could go with 80% dynamic IP bandwidth and 20% fixed IP for mail and web. They would save quite a bit of money right there, since fixed IP bandwidth carries a very high premium in the Philippines. They would save more money by buying dynamic bandwidth for staff time-wasting surfing and less fixed IP bandwidth for the services that actually require it.

  3. In a litigious world, it's for the company's good that peer to peer is blocked and violations monitored and punished. The same company has received a warning letter from an RIAA/MPAA related agency; apparently someone had left their bittorrent client on and had been downloading and serving enough files to attract someone's notice.


Naturally, this sort of thinking won't sit well with employees. But frankly, I don't think it matters. The staff aren't being monitored for wasteful surfing (perhaps half of all surfing at the office is not work related, or only very peripherally so), so surfing for entertainment is a free benefit of employment. It's only fair that those online activities which might be damaging to the company be disabled so that other online activities of neutral or only mildly negative value can be allowed.

Wednesday, January 26, 2005

Normal distribution

I had to write a program that needed to simulate random data that was normally distributed (bell curve distribution).

Fortunately, I was able to grab a copy of "Numerical Recipes in C" and found some code on p. 217. The code produces normally distributed deviates with mean 0 and standard deviation 1.

My program was in PHP, so I translated it from C (K&R style! boy, that's old :).

The PHP source is here

Unfortunately, I'm not a mathematician, and I'll need to find ways to adapt that code or find some other code where I can adjust kurtosis, standard deviation, etc. (Mean and standard deviation, at least, should just be a matter of scaling: mu + sigma * z for a standard normal deviate z. Kurtosis is harder.)

I'd look in Knuth's Seminumerical algorithms, but it might not be there. And anyway, my copy is in Mindanao. I won't be able to refer to that until I go in May.

Friday, January 21, 2005

Email from idiots is spam

I use gmail, and the spam marking feature (and the fact that it's so easy to use) is very nice.

Every once in a while I see vacation messages posted to mailing lists. Every single one of those I mark as spam in gmail. Partly I do that because people who don't know enough to set selective filters on their vacation messages are too dumb to listen to.

The gmail filter will learn from vacation messages which words score high as spam, and perhaps future vacation messages will be marked spam and I'll see fewer of them. Also, the authors may start to score higher as spammers. That's a good thing too, for me: I'll see less of their mail since it will automatically go to the spam mailbox, and when I go in there to confirm which emails are spam, I get a chance to despam the emails which are important.

I don't think I've seen gmail do that yet, though (filter mainly on the sender's email address). I've seen Bob Reyes' spams about his hosting service end up in the spam mailbox, but that's just because the emails were spam, not because they were from him.