Diary of a geek


Andrew Pollock


Wednesday, 13 May 2015

LWN Chrome extension published

I finally got around to finishing off and publishing the LWN Chrome extension that I wrote a couple of months ago.

I received one piece of feedback from someone who read my blog via Planet Debian, but who didn't appear to email me from a usable email address, so I'll respond to the criticisms here.

I wrote a Chrome extension because I use Google Chrome. To the best of my knowledge, it will work with Chromium as well, but as I've never used it, I can't really say for sure. I've chosen to license the source under the Apache License, and make it freely available. So the extension is available to anyone who cares to download the source and "side load" it, if they don't want to use the Chrome Web Store.

As for whether a userscript would have done the job, maybe, but I have no experience with them.

Basically, I had an itch, and I scratched it, for the browser I choose to use, and I also chose to share it freely.

[22:03] [tech] [permalink]

Saturday, 07 March 2015

Honey, I wrote my first Chrome extension!

I love reading Linux Weekly News. It's a great source of high quality Linux and FOSS journalism, and I've been a subscriber for years.

One mild annoyance I have with the site is the way articles are cross-linked. All the article URIs are in the format /Article/531114/, which isn't particularly descriptive of the article's content.

When faced with an article that links to another article, with perhaps a word of anchor text, it's hard to tell if the new article is worth opening in a tab, is indeed already open in a tab, or has been previously read. (Yes, the "visited link" colour can help to a small degree, but even then, it doesn't tell you which previously read article it is).

This is what God (well, the W3C) invented the title attribute for.

Back in April 2011, I emailed Jonathan Corbet and asked if his content management system could just do this, but it was apparently a bit tricky, and it got filed in the "feature request" bucket.

I was sufficiently irritated by this deficiency last Monday, when doing some heavy reading on a topic, and so I decided to take matters into my own hands, and also learn how to write a Chrome Extension into the bargain.

I was delighted to have scratched the itch less than 24 hours later, having developed something that solved my particular problem. I'm calling it lwn4chrome.

I'm just finalising an icon for it, and then I'll have a stab at putting it in the Chrome Web Store as a freebie.

I might even have a crack at writing a Firefox extension as well for completeness, but I suspect the bulk of LWN's readership is using Chrome or Chromium.

[00:06] [tech] [permalink]

Monday, 08 December 2014

A geek Dad goes to Kindergarten with a box full of Open Source and some vegetables

Zoe's Kindergarten encourages parents to come in and spend some time with the kids. I've heard reports of other parents coming in and doing baking with the kids or other activities at various times throughout the year.

Zoe and I had both wanted me to come in for something, but it had taken me until the last few weeks of the year to get my act together and do something.

I'd thought about coming in and doing some baking, but that seemed rather done to death already, and it's not like baking is really my thing, so I thought I'd do something technological instead. I just racked my brains for something low-effort and Kindergarten-age friendly.

The Kindergarten has a couple of eduss touch screens. They're just some sort of large screen with a bunch of inputs and outputs. I think the Kindergarten mostly uses them for showing DVDs, hooking up a laptop, and possibly doing something interactive.

As they had HDMI input, and my Raspberry Pi had HDMI output, it seemed like a no-brainer to do something using the Raspberry Pi. I also thought hooking up the MaKey MaKey to it would make for a more fun experience. I just needed to actually have it all do something, and that's where I hit a bit of a creative brick wall.

I thought I'd just hack something together where, based on different inputs from the MaKey MaKey, a picture would get displayed and a sound played. Nothing fancy at all. I really struggled to get a picture displayed full screen in a time-efficient manner. My Pi was running Raspbian, so it was relatively simple to configure LightDM to auto-login and auto-start something, and I used triggerhappy to invoke a shell script, which took care of playing a sound and displaying an image.
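
For the record, triggerhappy's trigger definitions are just one-liners mapping an input event to a command. What I had was along these lines (the handler script name is made up for illustration; the MaKey MaKey just presents as a USB keyboard):

# /etc/triggerhappy/triggers.d/kindy.conf
# <event name> <value (1 = key press)> <command>
KEY_LEFT	1	/home/pi/bin/show-and-play.sh left
KEY_RIGHT	1	/home/pi/bin/show-and-play.sh right
KEY_SPACE	1	/home/pi/bin/show-and-play.sh fire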

Playing a sound was easy. Displaying an image was less so, especially if I wanted the image loaded fast. I really wanted to avoid having to execute an image viewer every time an input fired, because that would be just way too slow. I thought I'd found a suitable application in Geeqie, because it supported being managed out of band, but its problem was that it also responded to the inputs from the MaKey MaKey, so it became impossible to predictably display the right image with the right input.

So the night before I was supposed to go to Kindergarten, I was up beating my head against it, and decided to scrap it and go back to the drawing board. I was looking around for a Kindergarten-friendly game that used just the arrow keys, and I remembered the trusty old Frozen Bubble.

This ended up being absolutely perfect. It had enough flags to control automatic startup, so I could kick it straight into a dumbed-down, full-screen, one-player game (--fullscreen --solo --no-time-limit).
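
The startup plumbing amounted to something like this (a sketch only; the LightDM section name varies between versions, and the autostart file is the stock XDG mechanism that LXDE on Raspbian honours):

# /etc/lightdm/lightdm.conf
[SeatDefaults]
autologin-user=pi

# ~/.config/autostart/frozen-bubble.desktop
[Desktop Entry]
Type=Application
Name=Frozen Bubble
Exec=frozen-bubble --fullscreen --solo --no-time-limit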

The kids absolutely loved it. They were cycled through in groups of four and all took turns having a little play. I brought a couple of heads of broccoli, a zucchini and a potato with me. I started out using the two heads of broccoli as left and right and the zucchini to fire, but as it turns out, not all the kids were as good with "left" and "right" as Zoe, so I swapped one of the broccoli for the potato, and that made things a bit less ambiguous.

The responses from the kids were varied. Quite a few clearly had their minds blown and wanted to know how the broccoli was controlling something on the screen. Not all of them got the hang of the game play, but a lot did. Some picked it up after having a play and then watching other kids play and then came back for a more successful second attempt. Some weren't even sure what a zucchini was.

Overall, it was a very successful activity, and I'm glad I switched to Frozen Bubble, because what I'd originally had wouldn't have held up to the way the kids were using it. There was a lot of long holding/touching of the vegetables, which would have fired hundreds of repeat events and totally overwhelmed triggerhappy. Quite a few kids wanted to pick up and hold the vegetables instead of just touching them to send an event. As it was, the Pi struggled to keep up with Frozen Bubble.

The other lesson I learned pretty quickly was that an aluminium BBQ tray worked a lot better as the grounding point for the MaKey MaKey than having to tether an anti-static strap around each kid's ankle as they sat down in front of the screen. Once I switched to the tray, I could rotate kids through the activity much faster.

I just wish I was a bit more creative, or there were more Kindergarten-friendly arrow-key driven Linux applications out there, but I was happy with what I managed to hack together with a fairly minimal amount of effort.

[17:04] [tech] [permalink]

Sunday, 17 August 2014

Solar follow up

Now that I've had my solar generation system for a little while, I thought I'd write a follow up post on how it's all going.

Energex came out a week ago last Saturday and swapped my electricity meter over for a new digital one that measures grid consumption and excess energy exported. Prior to that point, it was quite fun to watch the old analog meter going backwards. I took a few readings after the system was installed, through to when the analog meter was disconnected, and the meter finished up 26 kWh lower than when I started.

I've really liked how the excess energy generated during the day has effectively masked any relatively small overnight power consumption.

Now that I have the new digital meter, things are less exciting. It has one register measuring how much power I'm buying from the grid, and another measuring how much excess power I'm exporting back to the grid. So far, I've bought 32 kWh and exported 53 kWh of excess energy. Ideally I want to minimise the excess, because what I get paid for it is about a third of what I have to pay to buy it from the grid. The trick is to shift as much of my consumption as possible into the daylight hours, so that I'm using the energy rather than exporting it.

On a good day, it seems I'm generating about 10 kWh of energy.

I'm still impatiently waiting for Power-One to release their WiFi data logger card. Then I'm hoping I can set up something automated to submit my daily production to PVOutput for added geekery.

[17:31] [tech] [permalink]

Wednesday, 23 July 2014

Going solar

With electricity prices in Australia seemingly only going up, and solar being surprisingly cheap, I decided it was a no-brainer to invest in a solar installation to reduce my ongoing electricity bills. It also paves the way for getting an electric car in the future. I'm also a greenie, so having some renewable energy happening gives me the warm and fuzzies.

So today I got solar installed. I've gone for a 2 kW system, consisting of eight 250-watt Seraphim panels (I'm not entirely sure which model) and an Aurora UNO-2.0-I-OUTD inverter.

It was totally a case of decision fatigue when it came to shopping around. Everyone claims the particular panels they want to sell are the best, and it's pretty much impossible to make a decent assessment of their claims. In the end, I went with the Seraphim panels because they scored well on the PHOTON tests, and because they're apparently one of the few panels that pass the Thresher test, which tests for durability. That said, I've had other solar companies tell me the PHOTON tests aren't indicative of Australian conditions; it's hard to know who to believe.

The harder choice was the inverter. I'm told that yield varies wildly by inverter, and narrowed it down to Aurora or SunnyBoy. Jason's got a SunnyBoy, and the appeal with it was that it supported Bluetooth for data gathering, although I don't much care for the aesthetics of it. Then I learned that there was a WiFi card coming out soon for the Aurora inverter, and that struck me as better than Bluetooth, so I went with the Aurora inverter. I discovered at the eleventh hour that the model of Aurora inverter that was going to be supplied wasn't supported by the WiFi card, but was able to switch models to the one that was. I'm glad I did, because the newer model looks really nice on the wall.

The whole system was up and running just in time to catch the setting sun, so I'm looking forward to seeing it in action tomorrow.

Apparently the next step is for Energex to come out and replace my analog power meter with a digital one.

I'm grateful that I was able to get Body Corporate approval to use some of the roof. Being on the top floor helped make the installation more feasible too, I think.

[05:36] [tech] [permalink]

Sunday, 23 February 2014

My thoughts on the Ozobot Kickstarter campaign

I'm not an avid Kickstarter follower (if I were, I'd have gone broke long ago). I tend to wind up backing campaigns that come to my attention via other means (in other words, they're in the process of going viral). That said, I've backed plenty of campaigns over the years, and so I'd like to think I have a little bit of familiarity with how they usually operate.

When Ozobot came to my attention, I found it unusual, because they were pre-promoting their Kickstarter campaign before it opened. To me, this looked like a case of them trying to build hype prior to the campaign opening, which was a new one to me. The whole thing seemed incredibly slick, and I was surprised they were "only" seeking $100K.

The product looked like it'd be something cool for Zoe to play with, so I decided to back it anyway. Then all the updates started flowing in about how well it was being received at various trade shows and whatnot. Yet the amount of money flowing into the Kickstarter campaign didn't seem to reflect the external hype. I was watching the campaign's dashboard with great interest, because as time marched on, it was looking more and more likely that it wasn't going to make its funding target. This seemed highly unusual to me, given the slickness of the product and the purported external interest in it.

And then they pulled the plug on the campaign, purportedly because they were pursuing equity funding instead. They admitted they'd also read the writing on the wall, and that it was unlikely they were going to make their funding target. I haven't followed other campaigns closely enough to know how much of a last-minute funding "pop" they get, but the ridiculously popular ones I've seen have closed at many multiples of their original target, and hit their target well in advance of their deadline. My interpretation of Ozobot's campaign, from a funding perspective, is that Kickstarter backers gave it a big fat "MEH", which surprised me somewhat.

Then the question comes up: was the Kickstarter campaign a ruse all along? Was it just a new way of pitching for venture capital? The videos seemed pretty slick. The product seemed already complete, and $100K didn't seem like enough to take it to manufacturing.

It'll be interesting to see what becomes of Ozobot now.

[15:56] [tech] [permalink]

Sunday, 16 February 2014

Crypto is hard, let's go shopping

(Or "Today's yak shaving exercise")

I'm trying to get back into some Debian work today, which involved trying to send an email from my laptop, relayed through daedalus, which is in a colo facility.

Then I started shaving my yak.

daedalus rejected the email, because the relay attempt wasn't authenticated. It wasn't authenticated because Exim on my laptop wasn't happy about the Diffie-Hellman parameters Sendmail on daedalus was advertising, and had failed STARTTLS, and like a sensible person, I only allow encrypted authentication. Along the way, I figured I'd need to address the fact that the host certificate had expired long ago (truth is, I probably just needed to fix the Diffie-Hellman parameters file), but my whole CA certificate had expired over a year ago too.

I couldn't find the encrypted USB key with my CA on it, so I decided to cut my losses and start from scratch. It's good to see OpenSSL hasn't got any more user friendly with time. I futzed around and generated a new CA key and certificate, a new CSR for Sendmail, signed it, did the appropriate sacrifices, and confirmed that STARTTLS now worked using OpenSSL s_client.
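
For anyone playing along at home, the verification was something like:

openssl s_client -connect daedalus.andrew.net.au:25 -starttls smtp

which negotiates STARTTLS and dumps the certificate chain and cipher details, so you can eyeball that the new certificate is actually being served.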

Still couldn't send an email. Still had some bleating in the Exim logs:

2014-02-17 11:29:26 1WFD1M-00086p-KI TLS error on connection to daedalus.andrew.net.au [203.33.60.161] (gnutls_handshake): The Diffie-Hellman prime sent by the server is not acceptable (not long enough).

Back on daedalus, /etc/mail/tls/sendmail-common.prm has been an empty file since time immemorial. I managed to find something in Google's cache suggesting I should populate it with:

openssl dhparam -out /etc/mail/tls/sendmail-common.prm -2 1024

I initially went for 4096 bits, because more must be better, right? I was still waiting for the key generation to come back while I wrote this. At least it warned me in advance that it was going to take a long time. How thoughtful. What would have been more thoughtful was verifying write permission to the output file before the long operation, so it could fail fast. Sigh. I ended up giving up on 4096 bits and sticking with 1024, because I was getting old waiting.

Yak shaved, I can send mail from my laptop now.

[19:37] [tech] [permalink]

Sunday, 24 February 2013

On owning a Nissan Leaf

I'll soon be disposing of the Nissan Leaf that we leased a few months ago, so I thought it a useful exercise to write about my experiences with it.

I am not a car man. I am a gadget man. For me, driving is a means to an end, and I'm much more interested in what I call the "cabin" experience than the "driving" experience, so this is going to be slanted much more that way.

That said, I found the Nissan Leaf a fun car to drive, from my limited experiences of having fun driving cars. I liked how responsive it was when you put your foot down. It has two modes of operation, "D" and "Eco". I've actually taken to driving it in "Eco" mode most of the time, as it squeezes more range out of the batteries, but occasionally I'll pop it back into "D" to have a bit of fun.

The main difference between the two modes, from a driving perspective, is that "Eco" seems to limit the responsiveness of the accelerator. In "Eco" mode it feels more like you're driving in molasses, whereas in "D" mode, when you put your foot down, it responds instantly. "D" is great for dragging people off at the lights. It's a very zippy little car in "D" mode. It feels lighter.

I've noticed that it has a bit of a penchant for oversteer. Or maybe that's just my driving. If I've floored it a bit to take a right turn into oncoming traffic, I've noticed slight oversteer.

That's about all the driving type "real car stuff" I'll talk about.

Now to the driver's "cabin experience".

It's absolutely fabulous. I love sitting in the driver's seat of this car.

Firstly, the seat itself is heated (in fact all of them are). As is the steering wheel. Nissan has gone to great lengths to allow you to avoid needing to run the car's heating system to heat the car, as doing so immediately drops at least 10 miles off the range. Unfortunately I found the windscreen had a tendency to fog up in the winter rainy periods, so I'd have to intermittently fire up the air conditioning to defog the windscreen. Of course, in the summer months, you're going to want to run the AC to cool down, so the range hit in that situation is unavoidable. I've only had this car from late Autumn until late Winter so far, so that hasn't been an issue I've had to contend with.

The dashboard is all digital, and looks relatively high tech, which appeals to my inner geek. It's a dashboard. It tells you stuff. The stuff you'd expect it to tell you. Enough said.

The audio system is nice. It supports Bluetooth audio, so one can be streaming Pandora from one's phone, through the sound system, for example. Or listening to audio from the phone. There's also a USB port, and it will play various audio files from USB storage. I found the way it chose to sort the files on a USB stick to be slightly surprising though. I haven't invested the time to figure out how I should be naming files so that they play in the order I expect them to play. The ability to play from USB storage compensates nicely for the fact that it only has a single CD player. (We have a 6 disc stacker in our 2006 Prius).

The car also came with a 3 month free trial of Sirius XM satellite radio. This was fun. The only dance music FM station in the Bay Area has a weak signal in Mountain View, and I hate dance music with static, whereas there was an excellent electronic music station that I could listen to in glorious high definition. As long as I wasn't driving under a bridge. There's no way I'd pay money for satellite radio, but it was a fun gimmick to try out.

The navigation system is really, really good. I haven't bothered using Google Maps on my phone at all. It gives such good spoken directions, that you don't even need to have the map on the screen. It names all the streets. I couldn't figure out a way to get distances in metric.

The telematics service, Carwings, is probably my favourite feature. This is what really makes it feel like a car of the future. Through the companion Android application, I can view the charging status (if it's plugged in) or just check the available range (if it's not plugged in). From a web browser, I can plan a route, and push the route to the vehicle's navigation system. If the car is plugged in, I can also remotely turn on the vehicle's climate control system, pre-warming or cooling the car.

It's a little thing, but the door unlocking annoyed me a little bit. I'm used to the Prius, where if you unlock the boot (that's trunk, Americans) or the front passenger door, all the doors unlock. This was a convenient way of unlocking the car for multiple people as you approached it. With the Leaf, unlocking the boot only unlocks the boot, and unlocking the front passenger door only unlocks that door; it requires a second unlock action to unlock all the doors. I've found this slightly cumbersome when trying to unlock the car for everyone at once, quickly (like when it's raining).

Another minor annoyance is the headlights. I've gotten into the habit of driving with the headlights on all the time, because I believe it's safer. In the Prius, one could just leave the headlights switched to the "on" position, and they'd turn off whenever the driver's door was opened after the car was switched off. If you try that in the Leaf, the car beeps at you to warn you you've left the headlights on. It has an "auto" mode, where the car will choose when to turn the headlights on, based on ambient light conditions. In that case, when you turn the car off, it'll leave the headlights on for a configurable period of time and then turn them off. This is actually slightly unsettling, because it makes you think you've left your headlights on. The default timeout was quite long as well, something like 60 seconds.

The way multiple Bluetooth phones are handled is just as annoying in the Leaf as it is in the Prius, which disappoints me, given 6 years have passed. The way I'd like to see multiple phones handled is the car chooses to pair with whichever phone is in range, or if multiple phones are in range, it asks or uses the last one it paired with. In reality, it tries to pair with whatever it paired with last time, and one has to press far too many buttons to switch it to one of the other phones it knows about.

Range anxiety is definitely something of a concern. It can be managed by using the GPS navigation for long or otherwise anxiety-inducing trips; you can then compare the "miles remaining" on the GPS with the "miles remaining" on the battery range, and reassure yourself that you will indeed make it to where you're trying to go. The worst case I had was getting to within 5 miles of empty. The car started complaining at me.

The charging infrastructure in the Bay Area is pretty good. There are plenty of charging stations in San Francisco and San Jose. I'm spoiled in that I have free charging available at work (including a building at the end of my street, so I never bothered with getting a home charger installed). I've almost never had to pay for charging, so it's been great while gas prices have been on the rise.

The car's navigation system knows about some charging stations, so you can plan a route with the charging stations it knows about in mind. The only problem is it doesn't know if the charging stations are in use. If you use the ChargePoint Android app, you can see if the charging stations are in use, but then you have to do this cumbersome dance to find an available charging station and plug the address into the vehicle's navigation system. Of course, what can then happen is in the time you're driving to the charging station, someone else starts using it. I actually got bitten by this yesterday.

Would I buy a Leaf again? Not as my sole car. It makes a perfect second/commuter car, but as a primary vehicle, it's too limited by its range. They're also ridiculously expensive in Australia, and Brisbane has absolutely no charging infrastructure.

[20:34] [tech] [permalink]

Sunday, 18 November 2012

Better late than never

Last year (I think for my birthday) Sarah backed the Twine project on Kickstarter.

Well, to say the project experienced some delays would be a bit of an understatement, but in yesterday's mail, it finally arrived.

[Image: Twine box]

I'm planning on using it to replace the cat water bowl sensor that never got reinstated when we moved house.

[20:21] [tech] [permalink]

Wednesday, 11 July 2012

Polar epically anti-developer? A product review of the Polar WearLink®+ transmitter with Bluetooth®

Continuing with the better-running-through-technology phase I'm going through at the moment, I got all excited when I discovered that Polar makes a Bluetooth-enabled heart rate monitor chest strap.

Prior to this, I'd been lamenting the lack of choice in ANT+-enabled Android phones (to my knowledge, there's just something Sony makes).

So I got all excited and ordered a strap, and it arrived today. This is my initial review (I haven't gone running with it yet; I've just had a tinker).

I'll start with the bad.

The battery cover is a bit dicky.
I took the battery cover off to confirm it came with a battery, and then had a whale of a time getting it back on. I haven't managed to get it as flush with the back as it was before I took it off. This appears to be a common complaint in the Amazon product reviews as well.
It's hard to tell what its auto-off timeout is, whether it's on, etc.
Basically, there's a big lack of feedback as to what's going on. Is it on or off? Is it wet enough to turn itself on? Is it reading a heart rate, or is the Bluetooth pairing just not working? The most accurate way I've found to determine whether it's "working" is to unpair it and then re-pair it; if you get a PIN prompt, it's talking. By accident, I've found indications that it will turn itself off after not reading a heart rate for 10 minutes, and that to reset it, you detach it, wait 30 seconds, and reattach it. This would have been good information to include in the manual.

Now for the good

reasonable Android app support
Heart Rate Monitor for Polar claims to support it, but I haven't managed to get it to do anything useful yet. I was pleasantly surprised to discover that My Tracks has support for it. Noom CardioTrainer is one that I was already trying out, alongside Strava Run (which doesn't have support for it). There's also Sports Tracker and Endomondo, and the UI for both of these also shows the battery level. I've found that trying to have multiple apps read the heart rate monitor simultaneously seems to be an exercise in fail, and that the Sports Tracker app seems to start a background service which subsequently causes all sorts of problems for any of the other apps started after it.

And the ugly

Looking back at all the Amazon product reviews, they're pretty much split equally between 5 stars and 1 star. The product seems to either work flawlessly or absolutely dreadfully. I was beginning to think I was in the latter group and that I'd bought a white elephant, but now that I've gotten to the bottom of the idiosyncrasies of talking to it from Android, it seems to be behaving fairly reliably.

Talking to it from Linux, and the reason behind the title of this post

One of the first things I tried, after the Android apps were initially flaky for me, was talking to it from a Linux laptop. This proved fairly straightforward using rfcomm, although I didn't get anything human-readable out of it. Being curious as to how the Android apps were able to decode the data, I went looking for some API documentation, and a Google search led me to this sad forum discussion.
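
For reference, the dabbling was along these lines (the Bluetooth address is made up, and the channel number is from memory):

# bind and connect an RFCOMM device to the strap, then dump the raw frames
sudo rfcomm connect 0 00:22:D0:AA:BB:CC 1 &
hexdump -C /dev/rfcomm0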

So the API information is not freely available, and their CEO personally signs off on who gets access to it. How ludicrous. Do they want to sell product or not? That said, I did also find this blog post which, courtesy of the aforementioned open source My Tracks Android app, lays it all out for you, so them being all anti-developer with their information is kind of pointless. Polar's website doesn't mention anything about their management, so I have no idea who their CEO even is. Wikipedia is also none the wiser.

I'm looking forward to going for a run tomorrow and seeing how this thing pans out.

[22:01] [tech] [permalink]

Saturday, 25 June 2011

On trying to find the resource limits of a running process on an old kernel

I had cause to try and get a core dump from a segfaulting process at work the other day, and I wanted to figure out if fiddling with /etc/security/limits.conf was going to do the trick (it didn't) or if I had to modify the initscript to include a ulimit -c unlimited call.

Of course, on a modern system (>= 2.6.24), one would just take a look at /proc/PID/limits and get on with life, but unfortunately the system in question was running 2.6.18, so more fiddling around was required.
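
On a modern kernel, for comparison, it's as simple as this (PID made up):

grep -i 'core file size' /proc/12345/limits

which prints the soft and hard limits without any GDB gymnastics.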

I'd found something once before that told me how to do it with GDB, but all I could find this time around was a rather over-complicated Knol article, which made a bunch of assumptions (well mainly that the binary in question wasn't stripped). So with some help from the Knol article, I muddled through it.

Disclaimer: I don't profess to be an expert on system internals like this, so if this eats your first-born, don't come crying to me.

Firstly, you need to know that it's the getrlimit(2) system call that you want to be using, and then you need to figure out the number for the resource limit you want to retrieve. The man page for getrlimit() tells you it's defined in /usr/include/sys/resource.h, but I've found that the actual useful bits end up being in /usr/include/bits/resource.h.

I wanted the resource limit for the maximum core dump size, which is RLIMIT_CORE and has a value of 4.

Next, you need to know that the getrlimit() system call takes an integer and a pointer to an rlimit structure as arguments. We've just figured out the value for the integer, but we're also going to need to pass a pointer as the second parameter. A pointer to enough memory to hold an rlimit structure. Fortunately, the rlimit structure is pretty simple:

struct rlimit {
    rlim_t rlim_cur;  /* Soft limit */
    rlim_t rlim_max;  /* Hard limit (ceiling for rlim_cur) */
};

After a bit of grepping around in /usr/include, I determined that an rlim_t is essentially an unsigned long int, so we need to allocate enough memory to hold two of them.

Note that if we had an unstripped binary, we could have saved a lot of faffing around by just going

print sizeof(struct rlimit)

in GDB (assuming that the binary has the getrlimit symbol in it)

The sure-fire way of figuring out how much memory we need for this pointer is to go

print sizeof(unsigned long int)

in GDB, and then double that (since we want two of them). On my system an unsigned long int is four bytes, so I'm going to want to allocate enough memory for 8 bytes.

Now it's time to attach GDB to the offending process and see what the resource limit currently is. (gdb -p PID)

(gdb) print malloc(8)
$1 = 152186904
(gdb) print getrlimit(4, $1)
$2 = 0
(gdb) x/2xw $1
0x9123018:	0x00000000	0x7fffffff
(gdb) quit

So in this particular case, we've retrieved the soft and hard limits of the RLIMIT_CORE resource limit, and you can see that the soft limit is zero, and the hard limit is unlimited. Note that the getrlimit() function returns an integer as its return code, which is what the $2 = 0 is above.

Now it's just a case of altering the resource limits via the preferred mechanism, restarting the process, and then repeating this GDB examination of the process to check they were changed successfully.

Circumstances where this can all fall down would appear to be ones where the getrlimit symbol isn't present in the binary, or the binary is compiled as a position-independent executable. I'd think that in the latter case, the system is probably modern enough to be running a kernel that supports directly examining the resource limits via the /proc filesystem.

[11:19] [tech] [permalink]

Wednesday, 08 June 2011

I'd pay for an ISP service that excluded China

I say this after what I estimate to be a sustained 60 kB/s of brute-force SIP traffic from 61.146.178.173 has been sent my way for probably 48 hours.

I'd very cheerfully sign up for a service that had all Chinese IP addresses null-routed, and provided a proxy server for any HTTP access. I have no business needing direct IP connectivity to China, and I certainly don't want any from China to me.

I was pleasantly surprised to be able to call up Comcast's Business Class technical support tonight and ask them if they could null route the above IP address, and they seemed to imply they could (they opened up a ticket for a level 2 person to do something at least).

Now if only I didn't have to pick up the phone to interact with Comcast's Business Class technical support, it'd be just lovely.

I'm just glad I'm not in Australia where this would be blowing my monthly quota and/or causing me to receive excess usage charges. Bloody UDP. You can firewall it off, but it keeps coming. My initial stopgap Netfilter rule counted 1.5G of traffic before I replaced it with an adaptation of my SSH brute force mitigation rules, and the new rule has seen nearly 5G of traffic now. It's one thing to try to brute force something, but to keep trying after you stop getting responses is just plain stupid.
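
For the record, the adapted rules were something along these lines (a reconstruction using the recent match, not my exact rules; tune the list name and thresholds to taste):

# drop sources that keep hammering UDP/5060, let the rest through
iptables -A INPUT -p udp --dport 5060 -m recent --name SIP --update --seconds 60 --hitcount 20 -j DROP
iptables -A INPUT -p udp --dport 5060 -m recent --name SIP --set -j ACCEPT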

[22:08] [tech/security] [permalink]

Tuesday, 01 February 2011

Getting my IPv6 on

Somewhat motivated by Geoff Huston's keynote at linux.conf.au 2011, I've decided to pull my finger out and make a foray into IPv6. My colo provider, where daedalus.andrew.net.au lives, is IPv6-enabled, so one quick email to their support department later and, boom, thanks to the black magic of SLAAC, daedalus has an IPv6 /64. I still don't quite understand how it works.

Comcast Business, on the other hand, doesn't seem to be quite so on the ball, so I've organised for a Hurricane Electric IPv6 tunnel to play around with at home.

Next, I need to throw around a few AAAA DNS records, and then probably watch everything grind to a halt.
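
The records themselves are trivial. In BIND zone file syntax, using the documentation prefix rather than my real allocation:

; 2001:db8::/32 is reserved for documentation
daedalus	IN	AAAA	2001:db8::1
www		IN	AAAA	2001:db8::1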

[23:57] [tech] [permalink]

Wednesday, 22 December 2010

Find of the day: pianobar

Cube-mate and friend, Kynan, put me on to pianobar today. This thing is 100% sheer awesome. A console Pandora client!

No longer do I have to find the browser tab that Pandora is running in to pause it, or to see what the hell it is I'm listening to. Nor do I need to worry about Flash crashing.

It's also extensible in that it can call out to an external program for various events, such as starting a new song. This makes desktop notifications of what the hell you're listening to trivial.

It'll also read commands from a named pipe, so at work, where I have this crazy Microsoft Natural Keyboard with all sorts of multimedia keys, I've set up evrouter (which isn't in Debian for some strange reason) so that I can pause, skip and like/dislike using my keyboard (regardless of what application I'm in). So I can just kick off pianobar in a terminal somewhere and forget about it. This is so cool.
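
The named pipe interface is delightfully simple. With pianobar's default keybindings, it's along the lines of:

mkfifo ~/.config/pianobar/ctl	# pianobar picks the FIFO up at startup
echo -n 'p' > ~/.config/pianobar/ctl	# toggle pause
echo -n 'n' > ~/.config/pianobar/ctl	# skip to the next song
echo -n '+' > ~/.config/pianobar/ctl	# love the current song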

Interestingly, it also doesn't play any advertising, so I've finally pulled my finger out and paid for a Pandora One account.

Overall, this makes for a fantastic desktop Pandora client. Now I just need to figure out why I get weird HTTP errors from it at home...

[20:08] [tech] [permalink]

Friday, 17 December 2010

Probable solution for Pandora 417 errors on Google TV

This has been bugging me ever since we got our Sony Internet TV, and I think I've finally figured out what's going on.

Pandora hasn't been working on the Google TV ever since we got it; it just spits out Unexpected HTTP error occurred code: 417 while StartupAsyncTask, and that's the end of that. I have vague recollections of seeing this on my phone in the past as well, but that no longer appears to be the case. My phone has version 1.5.2 on it, and the TV has 1.0, so I think I see the problem here.

I took a full packet capture while starting up Pandora, and it was a lot more revealing. I could see that my Squid proxy (I transparently run all HTTP traffic through Squid) was barfing, which would explain the errors I was seeing in Squid's cache.log:

parseHttpRequest: Unsupported method ''
clientTryParseRequest: FD 153 (172.16.1.5:55001) Invalid Request

Looking at the headers of the HTTP request that the Pandora client is sending out, I see Expect: 100-Continue, and the text that accompanies an HTTP 417 error is "Expectation Failed", so I think this is the culprit.

Searching for Expect: 100-Continue squid turns up information for an ignore_expect_100 directive, but this appears to only be available if Squid is compiled with --enable-http-violations, which the Debian Squid package doesn't appear to be compiled with.

I may have to adjust my transparent proxying rules to not transparently proxy the TV, but this makes me a bit sad.

Update

I tried adding ignore_expect_100 on anyway, and Squid took it, and now Pandora isn't barfing. Yay!
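
That is, one line in squid.conf:

# despite the documentation, the stock Debian squid accepted this
ignore_expect_100 on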

[22:15] [tech] [permalink]

Sunday, 05 December 2010

And if only I'd read Planet Debian first...

Argh. So of course there's a difference between CONFIG_USB_SERIAL=y and CONFIG_USB_SERIAL=m, and that would explain the behaviour I'm seeing. Thanks, Ben Hutchings.

I guess I'm recompiling my kernel again. Kill me now.

[15:24] [tech] [permalink]

If you're trying to use a USB serial adapter as your Linux kernel console, part 2

Seriously, the amount of yak shaving I'm doing these days...

So I discover this CONFIG_USB_SERIAL_CONSOLE config option. I proceed to edit my .config, throw it in there and rebuild my kernel. (For the record, I hate building kernels.) I'm about to reboot into said new kernel, and I thought I'd just double-check the resulting config-2.6.36 file to confirm that CONFIG_USB_SERIAL_CONSOLE did indeed get enabled. It's not there.

I spend a few minutes WTFing, and assume I've been an idiot and should have run a make oldconfig or some such thing after hand-editing my .config file. So I try that. The additional line vanishes again afterwards. More WTFing ensues.

I fire up make menuconfig, and go looking for the menu entry for the option. It's not there. The plot thickens.

I hand-edit drivers/usb/serial/Kconfig and change depends on USB_SERIAL=y to just depends on USB_SERIAL, and rerun make menuconfig, and lo and behold, the option appears!

So now I'm off rebuilding the kernel again.

Sigh.

[15:17] [tech] [permalink]

If you're trying to use a USB serial adapter as your Linux kernel console...

The kernel needs to be compiled with CONFIG_USB_SERIAL_CONSOLE=y

Debian kernels do not appear to be. Somewhat strangely, the config option doesn't even wind up in the config file in the typical commented-out manner, so you don't even know the option exists until you discover it from random Web searching, and then looking at drivers/usb/serial/Kconfig in the kernel source.
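
For reference, the relevant pieces, assuming an adapter that shows up as ttyUSB0:

# kernel .config: both have to be built in, not modules
CONFIG_USB_SERIAL=y
CONFIG_USB_SERIAL_CONSOLE=y

# and a kernel command line along the lines of:
console=ttyUSB0,115200n8 console=tty0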

[12:05] [tech] [permalink]

A cleaner puppet manifest

Jon Dowland is tinkering with Puppet, and wrote a class with a package resource to install a bunch of packages, but didn't quite like the way it was written.

The way I'd achieve the same thing, slightly more cleanly, would be something like this:

class jon-desktop {

	$packages_to_install_by_default = [
		'icedove',
		'git-core',
		'vim-gnome',
		'vinagre',
		'build-essential',
		'devscripts',
		'subversion',
		'git-buildpackage',
		'mutt',
		'offlineimap',
		'ascii',
		'gitk',
		'chromium-browser',
	]

	package { $packages_to_install_by_default:
		ensure => installed,
	}
}

This way your actual package resource definition never needs to change. The variable you choose to use to define the list of packages to install can be abstracted away as much as desired.
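
Then using the class is just a matter of including it from a node definition, e.g. (node name made up):

node 'jon-laptop' {
	include jon-desktop
}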

[10:05] [tech] [permalink]

Thursday, 18 November 2010

Finally decided to order something

Okay, I've finally settled on something. I decided to pass on the ASRock box; the price was just too high for my liking, and I've ordered a Zotac ZBOX HD-ID11 instead. Yes, it's going to have a dirty proprietary NVIDIA graphics chipset. The interwebs tell me that it's the only one that will do H.264 decoding in hardware. I'm not sure I particularly care, for my application.

It comes with DVI and HDMI, and a DVI-to-VGA dongle, so I should still be able to plug it into whatever floats my boat. What I did just realise, about 15 minutes after I placed the order, while changing Zoe's diaper, was that I don't have a remote control solution now. The ASRock box included a remote control.

I figure I can use one of the 6 USB ports and get some sort of USB infra-red adapter, and probably keep using the Hauppauge remote that came with my PVR-350 card.

With the money I've saved buying the ZBOX, I've ordered a 40 GB Intel SSD, so that should make for a nice, no-moving-parts front end (assuming everything works).

Everything should arrive before Thanksgiving, so I'll have a few days off work to tinker.

Here's hoping I haven't bought a lemon.

[18:17] [tech] [permalink]

Avoiding Poulsbo like the plague

I got a fair bit of feedback from my latest post about the Boxee Box, none of it particularly favourable about the GMA500/Poulsbo chipset, so regretfully, I won't be buying a Boxee Box. Too bad, it looked cute, and was cheap.

A David Härdeman dropped me an email, and brought to my attention the ASRock Core 100HT-BD

I was nearly going to buy the wrong thing off Amazon (there's also an NVIDIA ION-based variant), but I found a non-BluRay variant on NewEgg for about $100 less than the BluRay version. It's certainly more expensive than something like a Boxee Box, but hey, if it's going to work...

I think if I had the choice, I'd get an SSD for it, but I can see how this goes with a rotating disk. The current MythTV box has a rotating disk in it (just not for storage of recorded content) and it seems sufficiently quiet.

The other nice thing about this box is it has VGA and HDMI, so I can totally try this thing out on the current TV before replacing it.

Now the question is, do I place an order today...?

[11:21] [tech] [permalink]

Tuesday, 16 November 2010

More research into the Boxee Box

I received a few emails from my blog post about my MythTV dilemma.

Suggestions ranged from using VGA to DVI to HDMI adapters, to getting different TVs that still have VGA connectors, to getting an older NVIDIA card off eBay that has lower power requirements.

Firstly, VGA is dead. It's time to make the move. I don't think VGA for a TV the size I'm looking at is ideal for HD anyway. I'm pretty much fixated on the Sony TV as well, because I want to get Google TV. So I need HDMI.

I suspect that the picture quality of something run through a VGA-to-HDMI conversion will be similarly shite, so I'm not too keen on that option, although it would tide me over initially, so I'll look into it further.

Another suggestion was the fit-PC2, which was something I was already aware of, but I'd heard terrible things about the general Linux support for the graphics adapter.

I did some more research into the Boxee Box (it really is new; it's only been out for 6 days), and it appears to have the same video chipset as the fit-PC2, which seems rather unfortunate. But then, if the Boxee Box is running Linux, they've obviously got a driver, so the question is how easy it would be to transplant that driver from the Boxee distro to, say, Debian.

I'm curious enough to place an order soon. Anyone got any interesting Poulsbo-on-Linux war stories they want to share? I know it makes Matthew Garrett sad.

[21:01] [tech] [permalink]

Monday, 15 November 2010

The knock-on effects of buying a new TV

I'm considering replacing our 5 year old 32" TV with a 40" Sony Internet TV, largely "just because" it has Google TV. That and I have spousal approval to get a new, bigger TV (it just has to be wall mounted).

The dilemma is MythTV. It's currently still running on the Dell Dimension 3100 that I bought years ago, and connected via VGA to the TV. The Sony Internet TV is HDMI or nothing, so at the very least, I need to get a DVI-capable video card in my MythTV box.

The more or less natural choice would be an NVIDIA card, despite them having a closed-source driver. At least it's well supported under Linux. The problem is, as far as I can determine, NVIDIA cards require at least a 300 watt power supply, and the Dimension 3100 has only a 230 watt power supply.

So now I'm looking at having to replace the MythTV box. I'm currently running the front end and the back end on the same box, so it'd be a fairly simple case of splitting the back end from the front end, and just getting something new to be the front end.

So now I'm in the market for something small, low-power, has a HDMI (or DisplayPort) interface, and enough grunt for HD playback. I'm wondering if the Boxee Box is the solution. Interface-wise, it has HDMI, so I'm good there, and I believe the marketing claims it can do HD playback. I just don't know how hackable it is. I think it's running Linux, so in theory this should be doable, so long as there isn't some weird hardware in it with a binary-only driver. For the price, it's probably worth just getting one to see.

The annoying thing is, I have nothing in the house that is DVI or HDMI capable, so I have no way of testing this out prior to buying the TV.

[21:54] [tech] [permalink]

Monday, 09 August 2010

Farewell, Sonic.net, hello Comcast Business

In all my born days, I never thought I'd be saying that.

I've been a happy customer of Sonic.net for the entire time I've been in the US, but I recently became aware of an offer through work with Comcast Business that I couldn't refuse.

So I've gone from

[Image: Speedtest.net results for Sonic.net showing 2.31 Mb/s down and 0.43 Mb/s up]

to this

[Image: Speedtest.net results for Comcast Business showing 72.35 Mb/s down and 10.34 Mb/s up]

You really can't beat it. I'll still recommend Sonic to all and sundry, but if you want some real speed (and with baby photos and videos galore to upload, I was really more interested in additional upload capacity), it's time to give DSL the flick.

[22:17] [tech] [permalink]

Monday, 05 July 2010

Detecting capabilities with strace

One for the note-to-self file...

Linux's maturing support for POSIX.1e capabilities is cool. Here's how to figure out what capabilities a binary needs, using strace.
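
The gist of it, as a sketch: run the binary unprivileged under strace, and each syscall failing with EPERM points at a capability the binary needs. The classic example is ping, which needs CAP_NET_RAW:

# copy the binary somewhere first if the installed one is setuid root
strace -f -o /tmp/ping.trace ./ping -c 1 localhost
grep EPERM /tmp/ping.trace
# expect something like:
# socket(PF_INET, SOCK_RAW, IPPROTO_ICMP) = -1 EPERM (Operation not permitted)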

[21:56] [tech/security] [permalink]

Monday, 14 June 2010

The ondemand CPU frequency governor should be better in Linux 2.6.35

I'm still wading through my backlog of LWN articles, but I found this one to be of interest.

From my interpretation of the article, I/O intensive workloads currently cause the ondemand governor to drop the CPU frequency, whereas it ideally shouldn't.

It looks like the fixes have been merged into what will become 2.6.35.

[22:56] [tech] [permalink]

Thursday, 29 April 2010

How to set up my Mustek 1200 UB Plus scanner under Linux

One for the note-to-self file...

Strangely, it's labeled as a "Mustek 1200 UB Plus" on the unit itself, but lsusb says it's a "Ultima Electronics Corp. Artec Ultima 2000 (GT6801 based)/Lifetec LT9385/ScanMagic 1200 UB Plus Scanner"

apt-get install xsane sane-utils

http://www.meier-geinitz.de/sane/gt68xx-backend/ says I should be downloading sbfw.usb, which needs to go into /usr/share/sane/gt68xx

Uncomment override "mustek-scanexpress-1200-ub-plus" in /etc/sane.d/gt68xx.conf

adduser apollock scanner

newgrp scanner

scanimage -L should now report something like "device `gt68xx:libusb:001:002' is a Mustek ScanExpress 1200 UB Plus flatbed scanner"

[19:22] [tech] [permalink]

Thursday, 15 April 2010

On the iPad

I tend not to be a huge Apple fanboy. They make nice stuff, but it's just too closed for my liking. I like to tinker. I had a PowerBook for a while, but I gave it to Sarah in favour of a Linux laptop.

Sarah's been a happy Mac user for a number of years, and had an iPhone (until I gave her a Nexus One). For a "normal" user like her, a Mac is fine, especially if you want to embrace Apple's entire ecosystem.

Anyway, the iPad. When it was announced, I sat up and took notice. Why? This seemed like something I might actually use as a casual computing device. I mean, I'm almost in the target market for Chrome OS these days. I spend most of my time in a web browser, and the rest of my time in a terminal window SSHing to another computer. I could leave this thing lying around on the coffee table in the living room, and instead of digging my phone out of my pocket to look something up, I could pick this up instead.

It is also appealing because I found Microsoft's Surface to be pretty cool. The iPad is like a more affordable, portable, version of that.

It also appeals to me as a computing platform for my parents. Their computing needs have simplified over the years, but they're still running Windows, largely because I've never had the time to try and foist Linux on them. Since I moved to the US, my visits back home have been too brief to do a proper migration. I think an iPad that supported user switching would be perfect. Mum and Dad could share it, and read their email and do their web browsing from anywhere in the house.

Since the first generation iPad doesn't do user profiles and lacks a camera, I'll wait impatiently for the second generation one. I heard a rumour today that it would have a camera, and do user profile switching based on the face of whoever was in front of it. That would be pretty cool.

I'd also be very interested in an Android tablet. I love Android's speech to text input support, and I could really see an Android tablet stuck to the wall in the kitchen, instead of a whiteboard on the fridge and a paper calendar.

The WePad also sounds intriguing.

So I'm not so bothered by the iPad's closed nature. I think for the set of users who have basic computing needs, and don't care about openness, it's very cool.

[22:09] [tech] [permalink]

Sunday, 11 April 2010

Using capabilities from Python

I've become passingly interested in Linux's capabilities functionality, as a way of reducing full-blown UID 0 requirements.

Unrelated to this, one of my few gripes about Python, coming from Perl, was the inability to do anything like Perl's $0 to alter how the running program appears in the process list. I used to use this functionality in Perl a lot, to provide cheap insight into what a long-running Perl script was up to.

Well the other day, I was rather excited to learn that Dennis Kaarsemaker has written a Python interface to capabilities, which also implements a set_proctitle() function.
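
A taste of the proctitle side of it (process name made up):

import prctl

# equivalent in spirit to assigning to Perl's $0: shows up in ps/top output
prctl.set_proctitle("myworker: waiting for jobs")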

The python-prctl module isn't currently available in Debian, but as Dennis has all of the packaging in the Git repository, I've offered to sponsor it for him if he wishes.

[22:08] [tech] [permalink]

Friday, 09 April 2010

How not to do it

From http://www.symantec.com/connect/articles/active-directory-and-linux...

An alternative to allowing anonymous searches on your Active Directory is to allow the nss_ldap routines to bind as an administrator DN to your directory and perform searches in privileged mode. To do this, insert the following lines in your /etc/ldap.conf file:

binddn cn=Administrator,cn=Users,dc=example,dc=com
bindpw <password>

You should be used to the "dc=example,dc=com" thing by now.

WARNING: The above example shows that the administrator user name and password have been coded in clear text in the /etc/ldap.conf file! Unfortunately, this file must always remain world-readable, because otherwise users logged on to the system will not be able to read data from the directory. You should not do this on a system where any user has shell access to your system, or can in any other way read this file.

If you've put the Administrator password in a world-readable file, you've already lost.

[17:54] [tech/security] [permalink]

Monday, 29 March 2010

Netflix instant streaming on the Nintendo Wii

On Saturday we received the disc from Netflix that enables instant streaming on the Wii.

This gave me a good excuse to plug the Wii back in, as I hadn't gotten around to it since we moved house.

The set up procedure is very nice. You put in the disc, start the "game", and it spits out a code. You go to http://www.netflix.com/Wii and enter the code. I imagine it looks for a request from the Wii with that code, and an authenticated submission with the same code from a web browser, within a certain period of time, and puts two and two together. It beats having to enter your username and password for your Netflix account on the Wii.

The UI is very nice. You get a horizontal list of the titles in your Watch Instantly queue, and you just pick what you want to watch, and away you go. The video quality seemed fine.

We'll definitely be watching more stuff from our Watch Instantly queue now.

Full disclosure: I am a Netflix shareholder

[08:25] [tech] [permalink]

Thursday, 18 March 2010

How to get GPG to sign with multiple keys

I spent way too much time trying to figure out how to get GnuPG to sign a file with multiple keys. It's not at all obvious from the man page, but you can use the -u option multiple times, with each key ID that you want to use.
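
In other words, something like this, with made-up key IDs:

# produces a single detached signature file signed by both keys
gpg --detach-sign -u 0xDEADBEEF -u 0xCAFEBABE important.tar.gz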

[23:23] [tech] [permalink]

Sunday, 07 March 2010

Backspacegate

I just updated to the latest beta of Chrome, and the backspace key stopped working as a keyboard shortcut for the Back button.

After a few times of stabbing the backspace key and not getting the result I wanted, I decided to go looking into what was going on here.

It looks like it all started with bug 30699, where someone didn't like the default behaviour. That led to bug 36533, where people (like me) noticed that functionality they were relying on had disappeared.

Now I fully understand that Backspace == Back is not the default behaviour of Firefox (on Linux), but it is a configurable option, and I'd had it enabled there for years. I think it all started when I migrated from Windows to Linux. Backspace == Back is the norm in IE, and I think in Firefox for Windows, and I've just developed the muscle memory for it; I've never had a problem like the one the submitter of bug 30699 was complaining about.

I look forward to it becoming a configurable option in Chrome.

[18:51] [tech] [permalink]

Wednesday, 20 January 2010

Finding the maximum message size

This was born from a need to see how big a ZIP file I could send my accountant in Australia, and scratching the itch to write some code.

The fact that most SMTP servers talk Extended SMTP makes this relatively easy, and Python has some great modules for DNS and SMTP.

One gripe I do have is how long it takes the Python modules to mature. It's taken until Python 2.6 for smtplib.SMTP() to gain a timeout parameter, for example.

Anyway, I was able to write something nice and generic (it works for any domain) in around 100 lines, thanks in no small part to the DNS module, which makes getting a list of MX records stupidly easy.

$ ./maxmessagesize.py andrew.net.au
daedalus.andrew.net.au: -1
$ ./maxmessagesize.py pollock.id.au
ASPMX.L.GOOGLE.COM: 35,651,584
ALT1.ASPMX.L.GOOGLE.COM: 35,651,584
ALT2.ASPMX.L.GOOGLE.COM: 35,651,584
ASPMX2.GOOGLEMAIL.COM: 35,651,584
ASPMX3.GOOGLEMAIL.COM: 35,651,584
ASPMX4.GOOGLEMAIL.COM: 35,651,584
ASPMX5.GOOGLEMAIL.COM: 35,651,584
$ ./maxmessagesize.py debian.org
master.debian.org: 104,857,600
$ ./maxmessagesize.py cameronp.com
mail1.mysmarthost.com: 30,000,000
mail2.mysmarthost.com: 30,000,000
$ ./maxmessagesize.py ubuntu.com
mx.canonical.com: 62,914,560
$ ./maxmessagesize.py clug.org.au
mx.clug.org.au: 50,000,000
$ ./maxmessagesize.py linux.org.au
morton.linux.org.au: 52,428,800

It's good to see that for most of the domains I tried, all of the MXes had the same maximum message size.

Source code is here
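
For the curious, a minimal sketch of the approach (not the original script; it assumes a modern dnspython for the MX lookup):

#!/usr/bin/env python
import smtplib
import sys

import dns.resolver


def max_message_sizes(domain):
    # Ask each of the domain's MXes what ESMTP SIZE limit it advertises.
    for mx in sorted(dns.resolver.resolve(domain, 'MX'),
                     key=lambda r: r.preference):
        host = str(mx.exchange).rstrip('.')
        try:
            server = smtplib.SMTP(host, 25, timeout=10)
            server.ehlo()
            # esmtp_features is populated from the EHLO response; a SIZE
            # keyword with no value means no limit is advertised.
            size = server.esmtp_features.get('size', '')
            server.quit()
            yield host, (int(size) if size else -1)
        except (smtplib.SMTPException, OSError):
            yield host, None


if __name__ == '__main__':
    for host, size in max_message_sizes(sys.argv[1]):
        print('%s: %s' % (host, 'error' if size is None else format(size, ',d')))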

[23:51] [tech] [permalink]

Sunday, 29 November 2009

Review: iGala digital picture frame

Now that I'm no longer at risk of letting the cat out of the bag (I bought these as gifts), I can write a review.

The use case was pretty simple: I thought it'd be a cool Christmas present for our parents to give them something Internet-enabled that would give them regular updates on their grandchild (when he/she arrives). I was thinking a picture a day type of thing.

So I hunted around for a WiFi-enabled digital picture frame, and found the iGala being sold pretty much exclusively by ThinkGeek.

The reviews seemed pretty good. I know WiFi-capable frames have been around for a few years, but they always seemed to be pretty lacking in terms of WiFi functionality. Like they wouldn't do any security, or they'd only do WEP. This particular product claimed to do the whole gamut, including WPA2. The fact that it was a touch screen and ran Linux also made it appealing.

So I ordered a couple of them a few weeks before we headed to Australia, with the intention of making sure that they'd work. Here are the highlights.

WPA2 didn't work (nor did WPA)

Despite the software on the frames claiming to be able to talk WPA2, the frame would not associate with my Linksys WRTU54G-TM. I had to drop it all the way back to WEP to get it to connect. For me, this was the most disappointing failure. I bought the product specifically on the strength of its claim that it supported WPA2, and it just didn't work. It was also pretty impossible to debug the failure.

I downloaded the latest firmware update, and that added additional settings for TKIP or AES when selecting WPA2, but neither option helped.

The automatic updates are brain-dead

Speaking of downloading firmware updates, the latest firmware I downloaded and installed on the frames added automatic over-the-air updates. Nice enough feature, except for the implementation: the frame tried to make an HTTP GET request for a non-existent file every 6 seconds.

So the frequency of checking alone is totally ridiculous, but couple that with the fact that it's making a GET request (this is what $DEITY invented the HEAD request for, people!) and that the website has a "friendly" 404 Not Found page weighing in at a little over 10 KB. That's 86,400 / 6 = 14,400 requests a day, at ~10 KB each: nearly 150 MB of failed update checking traffic a day. Taking these frames to a backward country like Australia, where ISP users still have monthly quotas, gives the frame a pretty horrendous running cost in terms of traffic. Not to mention the outbound bandwidth requirements for the server hosting the updates. Crikey, the mind boggles.

I'd have thought checking once a day and on power on would be perfectly sufficient.

Transitions are unavoidable

It may be just me, but I hate cheesy transitions. Digital picture frames tend to come with a myriad of them, but they all look cheap. It's impossible to tell the frame to just change the picture; it has to use at least one transition effect all the time. It defaults to choosing randomly from all the available ones. At least you can tone it down to just one.

Automatic on/off time

I liked that it was possible to configure operating times. No need to have the thing chewing power 24x7. It just seems to turn off the backlight outside of the programmed operating hours, so it's still doing the lame uber-frequent and bandwidth-intensive checking for updates even when it's "off".

Photo check frequency is configurable

Another nice feature was the ability to check for new photos at varying intervals. What I wanted for my parents was an update just once a day, so they'd get a new photo every day (assuming we put a new one in the Picasa web album that it's checking). This was very doable, and coupled with the automatic on/off times, means they should wake up to a new photo every morning (provided we've changed it).

Built in photos are a bit too sticky

There are three or four built-in photos as part of the firmware. If there's nothing accessible or available online, it'll cycle through these. Somewhat annoyingly, you need to have at least two photos in your online source for it to stop incorporating the stock photos into the mix. The workaround is to put the same photo in the online album twice, so you don't realise it's switching between two images. Lame, but it works...

Touch screen UI was adequate

Given the alternative user interface for digital picture frames is a little IR remote control and some dinky menus, the iGala was nice to configure. A full on-screen QWERTY keyboard pops up for entering WEP/WPA/WPA2 keys and configuring the Picasa/Flickr connections.

Fairly responsive support

The main near-showstopper for me was the lack of the advertised WPA2 support. I emailed the Aequitech support folks quite a bit during my "evaluation" period. They got back to me fairly quickly most of the time, and wanted to know the exact details of my setup so they could reproduce it in the lab. They'd be well served by an actual ticketing system, instead of hiding behind an email address, as it was hard to keep track of the multiple issues I was raising with them.

It's written in Lua?!

I have no familiarity with Lua, other than I know of its existence as a programming language. I'm curious as to what their motivation was for this language choice. All of the Lua code shipped in the complete firmware refresh ZIP files is bytecode. I have no idea if it's possible to decompile it. The CPU architecture would appear to be a Blackfin based on the few compiled binaries included in the full firmware.

Easy to update

Prior to the new update "functionality" I've already railed against, it was pretty easy to update: download a ZIP file and a shell script, put them in the root directory of a USB key, plug it into the frame, and stand back. The updates don't seem to be cryptographically verified (even the over-the-air ones), so I wonder if it's possible to break into the frame by way of a cleverly crafted "update". I have no idea what breaking into the frame would buy you, though, as I don't know what sort of computing power it has.

Conclusion

I still think the iGala is a reasonable, if somewhat immature, product. If the software is going to be actively worked on, and the support people continue to be responsive, then I think it's got good potential. For the price, though, I expected a more polished product.

I received an email from their support people shortly after returning from Australia saying that they'd fixed the WPA2 problem. Unfortunately I had no intention of trying to remotely talk my parents through how to reconfigure their access point or the frame (interestingly WPA2 didn't work with their Linksys WAG54G2 either, so I'd love to know what WPA2 devices it was tested with), so it's 128-bit WEP until I next go to Australia.

[22:57] [tech] [permalink]

Friday, 27 November 2009

mirror.linux.org.au upgraded

I took advantage of the four day weekend for Thanksgiving, and finally got around to upgrading mirror.linux.org.au from Debian Etch to Lenny.

The upgrade went fairly well. Notably, Drupal completely blew up, but it looks like we were still running the package from Sarge, as Drupal wasn't in Etch at all. I cut my losses, installed Drupal 6 and put something basic together from scratch.

MoinMoin upgraded fairly painlessly, and I figured out how to fix Cacti for my installation at home at the same time, so that was a general win all round.

[18:21] [tech] [permalink]

Saturday, 21 November 2009

LVM gaining the ability to merge snapshots

I love LWN. It's the best value-for-money way of keeping abreast of what's going on out there.

I also love LVM. I'm thrilled to learn, from a comment on this article about Btrfs, that LVM is soon to gain snapshot merging support.

This is going to be absolutely fantastic for rolling back upgrades that go bad. I can't wait.
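For the curious, the feature eventually landed as lvconvert --merge. A hypothetical rollback workflow (the volume group and volume names here are made up) would look something like this:

$ lvcreate --size 5G --snapshot --name pre-upgrade /dev/vg0/root
... perform the upgrade, decide it's gone bad ...
$ lvconvert --merge vg0/pre-upgrade

If the origin volume is in use (say, it's the mounted root filesystem), the merge is deferred until the volume is next activated, such as after a reboot.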

[09:52] [tech] [permalink]

Friday, 20 November 2009

"#!/bin/sh -e" considered harmful

Russell Coker advocates putting -e on the shebang line of shell scripts.

I disagree. In my experience, this is extremely unhelpful to people who may be debugging your shell scripts in the future.

Consider this: you've added -e to the shebang of a script, and some poor schmuck down the track is trying to debug why it spontaneously exits. What's the most obvious way to do that? Run the script with sh -x or bash -x.

What happens when you do this? The shebang line is completely ignored, and the script is run directly by the shell interpreter you invoked. Unless the person doing the debugging happens to carry over all of the shell options from the shebang line to that manual invocation, they're going to get different behaviour.

So I advocate an explicit set -e on the second line of shell scripts instead.

As much as set -e in shell scripts is good practice, it drives me absolutely batty having to deal with scripts that spontaneously exit as soon as something they run exits non-zero, particularly when you've chained a bunch of shell scripts together, or have one sourcing a bunch of script fragments from a directory. For this reason, I prefer to write in Bash and use an exit handler, to make it very obvious when a shell script has abended due to set -e. A minimal sketch of that pattern follows.
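(The function name and the failing command here are just illustrative.)

#!/bin/bash
set -e

# Runs on every exit; $? is non-zero when set -e has aborted the script.
report_abend() {
    local status=$?
    if [ "$status" -ne 0 ]; then
        echo "ERROR: ${0} exited with status ${status}" >&2
    fi
}
trap report_abend EXIT

echo "doing some work"
false  # any command exiting non-zero now kills the script, loudly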

[10:04] [tech] [permalink]

Sunday, 30 August 2009

Finding out about conferences

Paul Fenwick says he finds out about conferences by word of mouth.

Not that I ever do more than skim the page, but the Announcements page of Linux Weekly News (a fine publication well worth more financial support) mentions upcoming conferences. I've just learned that this page is derived from the LWN.net Community Calendar.

[21:52] [tech] [permalink]

Monday, 17 August 2009

Having your cookie and eating it too

Russell Coker seemed to be of the impression that Firefox lacks support for manual cookie acceptance, whereas Konqueror has it.

Never fear, Russell! They just hid it very well:

Firefox 3.5 Privacy/History preferences

I must admit I had to do a bit of digging to find it, but I just couldn't believe that they'd take away a feature like that.

[21:07] [tech] [permalink]