
    Hallelujah! God [Allah/Buddha/FSM] be praised! Something has happened that I’d completely given up hope on: a working, released and effective H.264 hardware decoding solution for linux has been let loose, from the most unlikely of sources, nVidia. OK, so maybe they’re not the most unlikely of sources, that’d be someone like Motorola or something, but given their recent history, it was unexpected.

    To put this in perspective, I’ve had a fairly serious linux HTPC setup for about 3 years and it has been our sole source of TV/music/video (we don’t own a stereo or DVD player) for the past 18 months or so. Whilst CPU power has increased to the point where hardware H.264 decoding is less important, when I started, it (or more generally MPEG4 decoding) was the holy grail. High def H.264 video was virtually unheard of because, until hardware decode support arrived for Windows, there was no way to play it. The two major players, ATI and nVidia, have each had their own implementation of this (UVD and PureVideo HD respectively) for quite some time, but neither released anything for linux. Recently nVidia even went as far as removing XvMC support entirely. Intel have had patchy implementations, even for Windows, and it is only their newest G45 chipset that has made high def decoding possible (even the G43 and G41 chips don’t do it). Now sure, you might be saying “Who cares? We have fast CPUs!” but the fact of the matter is that to play 1080p content properly you need at least an Intel E8400 CPU, and if anyone tells you differently then they haven’t played full resolution/bitrate files. A little while ago my brother-in-law gave me a 720p camcorder recording of my nephew and, most embarrassingly, as the whole family sat around we got to watch a slideshow of frames as a 2GHz Core 2 Duo struggled to play it. Clearly, this is an area where linux has severely lagged behind Windows.

    There have been a few false starts along the way for linux (don’t even get me started on the Via implementation!) but now nVidia have released a working, albeit beta, driver along with patches for mplayer, ffmpeg etc. Better yet, it actually seems to work! It would be remiss of me not to mention the ATI beta driver that was also released recently with support for something they’re calling XVideo Bitstream Acceleration (XvBA), but as no one actually seems to be able to get it to do anything, let alone produce any meaningful benchmarks, it is a curiosity at best. So, it seems, nVidia take the prize in this pony race.
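
    For the curious, the interface nVidia have released is called VDPAU (Video Decode and Presentation API for Unix). To give a flavour of it, here is a minimal sketch of how an application might ask the driver whether it can decode H.264 at all. This is just an illustration pieced together from the published headers (error handling trimmed, and I make no promises it matches the beta driver exactly):

        // Minimal VDPAU capability probe (sketch only; build with -lvdpau -lX11)
        #include <cstdio>
        #include <vdpau/vdpau.h>
        #include <vdpau/vdpau_x11.h>

        int main() {
            Display *dpy = XOpenDisplay(NULL);
            if (!dpy) return 1;

            VdpDevice device;
            VdpGetProcAddress *get_proc = NULL;
            if (vdp_device_create_x11(dpy, DefaultScreen(dpy), &device, &get_proc)
                    != VDP_STATUS_OK)
                return 1;

            // Every entry point beyond device creation is fetched at runtime
            VdpDecoderQueryCapabilities *query = NULL;
            get_proc(device, VDP_FUNC_ID_DECODER_QUERY_CAPABILITIES, (void **)&query);

            VdpBool supported;
            uint32_t max_level, max_macroblocks, max_width, max_height;
            query(device, VDP_DECODER_PROFILE_H264_HIGH, &supported,
                  &max_level, &max_macroblocks, &max_width, &max_height);

            printf("H.264 High profile: %s (up to %ux%u)\n",
                   supported ? "supported" : "not supported", max_width, max_height);
            return 0;
        }

    The nice part of the design is that everything past device creation is looked up through get_proc at runtime, which leaves room to extend the API without breaking existing players.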

    This puts me in a bit of a bind. A few months ago, when my HTPC hardware was giving up on life, I decided I’d stick with Intel as they’re fairly good open source players and, at the time, seemed the most likely to get a working H.264 implementation out. Now, as Murphy would have it, they’re the last of the major contenders to release something. I know they’re still working on it, Keith Packard has spoken about it numerous times and there’s an upcoming talk at linux.conf.au that claims it will provide a working demonstration, but the fact remains that it’s not here yet. The one thing preventing me from running out and buying a cheap nVidia card right now is the fact that the Intel implementation will be open source.

    This brings me around to another issue: standards. Whilst XvMC for MPEG2 was a bit of a mess and never worked the same way twice, at least it was a standard for everyone to converge on. Now it appears as though each of the vendors will have their own way of doing MPEG4, each requiring different patches and doing different things. This is a real shame. After waiting 2+ years, a period in which any one of the vendors could’ve created a de facto standard simply by getting something out there, we’re now in a position where 3 different, and likely incompatible, solutions get released all around the same time. Sure, people are working on CUDA and Gallium stuff, but given the way nVidia have implemented PureVideo it’s clear to see where things are heading.

    So, in summary, YAY! Finally a sigh of relief for something that has been missing for a LONG time. I hope that, if nothing else, it’s some motivation for Intel to get an open source solution out the door, but until then, a big THANK YOU to nVidia.

    [UPDATE] I just noticed that Andy Ritger from nVidia posted a note to the MythTV dev mailing list with some further details: http://www.gossamer-threads.com/lists/mythtv/dev/357332

    So it seems that there’s quite the election ‘fever’ (yay sportsfans!) going around at the moment. Obviously the US election has had the bulk of the limelight and I have to admit to having followed it far too closely since late last year (Along with my election buddy, Soporific Frog. Thanks for the memories). Somewhat less grand (yet still of debatable importance) are the Victorian local council elections that are going on at the moment.

    I have a colleague who has, for the past few years, worked for both the Victorian and Australian Electoral Commissions and this has led to some interesting discussions about the way things are run in the background. Here are a few of the more interesting (I think) tidbits:

    • There are a worrying number of minor ‘issues’ with the in-house developed election software they use. Certainly nothing that would alter the outcome of an election (I hope), but enough to make me wonder, again, why software like this isn’t open source. Surely the best way to debug software like this is to publish the source 6 months before it’s used and wait for the feedback from a raving pack of civil libertarians.
    • As an example of the above, I present the following. You may or may not know that candidates are allowed to provide a 150 word spiel about themselves which is sent out with election papers so that people get an idea of who all the candidates are. Now whilst there are some guidelines for this (i.e. no bad language, no personal attacks on other candidates etc), there is basically no editing of this allowed by the electoral commission, e.g. if there are typos or grammatical errors in the text provided, they will get printed verbatim. Now, given the number of candidates in these elections, the process is relatively automated in the software. It does a word count and, assuming all is OK, it goes through. Now a little bird tells me that the algorithm used in this software initially counted the number of spaces and used that to determine the word count (WC = N_Spaces + 1). So, combine this algorithm with the fact that grammatical errors will not, as a matter of policy, be corrected, and you might begin to see a problem (see the first sketch after this list).
      Example: “Here is my spiel.Do you think its,any good?”
      So the above 10 word sentence gets counted as being only 8 words long (7 spaces plus 1). Whilst this is a small thing, there are a lot of ‘small’ people who tend to stand for council elections and take this stuff very seriously. It is also something that should’ve been picked up LONG ago. Now, I’m not suggesting this has altered the outcome of anything, but a complaint from a candidate would cause great headaches for the electoral commission this late in the game.
    • The rules behind preferential voting systems are complex. Whilst the premise itself is simple (see the second sketch below), there are an awful lot of edge case scenarios that have to be dealt with and these make it next to impossible for a human to accurately and quickly determine a result. Computers to the rescue! Specifically, a spreadsheet to the rescue. Yes, all the rules of this system have been applied within an Excel spreadsheet, which has gone through the appropriate certifications and then been sold to the VEC, who plug in the vote counts and it spits out the winners. Whilst there’s nothing inherently wrong with using a spreadsheet for this function, to me it just all seemed a bit amateurish.
    • Despite all of the above, the VEC and AEC do a phenomenal job of running elections. They enforce processes to cover nearly EVERY scenario and I have nothing but confidence in the elections held within this country. That said, hearing about all these procedures only makes me more concerned about the way elections are run in the US. So many of the things you read about happening there simply leave me gobsmacked and I can’t get my mind around the fact that a country as developed as the United States can’t even follow rudimentary (and commonsense) guidelines to ensure a genuine result. Craziness.
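
    For the programmers in the audience, the word count bug is easy to demonstrate. A minimal sketch of the two counting approaches (obviously not the actual electoral software, just the space-counting idea as it was described to me):

        #include <cctype>
        #include <iostream>
        #include <string>

        // The reported algorithm: word count = number of spaces + 1
        int naiveWordCount(const std::string& s) {
            int spaces = 0;
            for (char c : s)
                if (c == ' ') ++spaces;
            return spaces + 1;
        }

        // Counting runs of letters/digits instead treats "spiel.Do" as two words
        int runWordCount(const std::string& s) {
            int words = 0;
            bool inWord = false;
            for (unsigned char c : s) {
                bool alnum = std::isalnum(c) != 0;
                if (alnum && !inWord) ++words;
                inWord = alnum;
            }
            return words;
        }

        int main() {
            std::string spiel = "Here is my spiel.Do you think its,any good?";
            std::cout << naiveWordCount(spiel) << "\n";  // prints 8
            std::cout << runWordCount(spiel) << "\n";    // prints 10
        }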
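
    And since I claimed the premise of preferential counting is simple, here is a bare-bones sketch of a single-seat instant-runoff count. To be clear, this is my own toy illustration, not how the certified spreadsheet works; it assumes every ballot ranks every candidate, and its arbitrary tie-breaking is exactly the sort of edge case the real rules spell out in painful detail:

        #include <iostream>
        #include <vector>

        // Bare-bones instant-runoff count. Each ballot lists candidate indices
        // in preference order. Ties, exhausted ballots and multi-member wards
        // are the edge cases that make certified counting software hard.
        int instantRunoff(const std::vector<std::vector<int>>& ballots, int nCandidates) {
            std::vector<bool> eliminated(nCandidates, false);
            while (true) {
                std::vector<int> tally(nCandidates, 0);
                int active = 0;
                for (const auto& ballot : ballots)
                    for (int c : ballot)                 // first surviving preference
                        if (!eliminated[c]) { ++tally[c]; ++active; break; }

                int best = -1, worst = -1;
                for (int c = 0; c < nCandidates; ++c) {
                    if (eliminated[c]) continue;
                    if (best < 0 || tally[c] > tally[best]) best = c;
                    if (worst < 0 || tally[c] < tally[worst]) worst = c;
                }
                if (tally[best] * 2 > active) return best;  // absolute majority wins
                eliminated[worst] = true;                   // NB: ties broken arbitrarily
            }
        }

        int main() {
            // Three candidates, five fully-ranked ballots
            std::vector<std::vector<int>> ballots = {
                {0, 1, 2}, {0, 2, 1}, {1, 2, 0}, {1, 0, 2}, {2, 1, 0}};
            std::cout << "Winner: candidate " << instantRunoff(ballots, 3) << std::endl;
        }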

    Finally, I feel it would be remiss of me not to mention Mr Troy Anthony Platt, a candidate for the Ballarat North ward. As mentioned above, candidates are allowed to provide a 150 word summary of themselves and, I can only assume in the tradition of the Official Monster Raving Loony Party, Mr Platt has provided a most interesting blurb:

    Vote 1 Troy Anthony Platt, the Starmaster Ranger Wizard of the North ward. North ward Internationale Airport:Romamesque gothic style 5 gates Iternational, Marine Tera Aero tunnel,3 domestic gates, Immigration Hotel 888 levels, 100 security Home Defence Guards, 12 courts rooms, 3 lounges, two customs,motorbike race track,car race track,car workshops,zoo,swampland undersea living trials,Aquanauts harmonic shaft,16 medical centres,swampland viewing displays,horse stables,equestrian centre, Auditorium Entertainment facilities,7 theatres, stock exchange, chicken sale yards,multicultural university, 144 thousands jobs bringing in 101 billion yearly. Internationale Wathaurong Eureka Musical Sounds of Poetry Tour:Canada,USA,Ireland, UK,Spain. Eureka Stockade:The Movie Kristina Bumble Bee. isbn-1-4120-3222-9, A knights Fire Volume 1&2 ISBN 1-4120-6423-6, Fire exits, light fittings (radiation covers)Traffic hazards, customer service behaviour, 10 o clock public House lockout, seven drinks man/four drinks women, legalised canibis, touch one mate, touch waltzing Matilda

    The above was indeed published and distributed, along with voting papers etc, to all members of the appropriate ward. Now, whilst such randomness may be common in the UK, apparently here it caused something of a stir. In fact, this went all the way to the VEC’s legal counsel to determine whether or not it was allowable. It was eventually decided that it had not broken any of the rules (at least, the rules as they currently stand, ahem) and therefore it had to be published. Good for you, Troy Anthony Platt.

    Ambition


    “Until a man is twenty-five, he still thinks, every so often, that under the right circumstances he could be the baddest motherfucker in the world. If I moved to a martial-arts monastery in China and studied real hard for ten years. If my family was wiped out by Colombian drug dealers and I swore myself to revenge. If I got a fatal disease, had one year to live, and devoted it to wiping out street crime. If I just dropped out and devoted my life to being bad.

    Hiro used to feel this way, too, but then he ran into Raven. In a way, this was liberating. He no longer has to worry about being the baddest motherfucker in the world. The position is taken.”

    Snow Crash – Neal Stephenson

    I had a moment like this today. I just sat back in awe of someone else’s work and realised that no matter what I did, how much money I spent, what mad crazy skills I learned (even though I would never have the time to learn them)… I could never best what he had done.

    Unlike Hiro though, I can’t really say I found it liberating. In a way it was inspiring to see what is actually possible (some of these things I’d never have thought could be done), but it was also slightly depressing. It made my own efforts seem rather feeble.

    Anyway, if ever I needed an excuse to quote some Neal Stephenson, this was it.

    As suggested by Stewart Smith:

    • Grab the nearest book.
    • Open it to page 56.
    • Find the fifth sentence.
    • Post the text of the sentence in your journal along with these instructions.
    • Don’t dig for your favorite book, the cool book, or the intellectual one: pick the CLOSEST.

    Result:
    “The first step is achieved through effective marketing”

    from Systems Analysis & Design (Hawryszkiewycz)

    So, yes, the mandatory clean feed has been generating a lot of noise lately and the Federal Member email/letter bomb has been dropped. In this spirit, I felt I might as well contribute an entry with the letter I sent off to my local member (Ballarat – Ms Catherine King) last week:

    Ms King,

    I am writing to express my concern at the proposed mandatory ‘clean feed’ internet concept being put forward by Mr Stephen Conroy. Such a concept poses serious risks to both free speech liberties and Australia’s ability to prosper in a digital world. As each of these is of such importance, I will address my fears about them separately:

    • Free speech and censorship: I understand that it is far from the government’s intent to create a system capable of wholesale censorship; however, intentional or not, this is what a mandatory clean feed will accomplish. Already, proposals have come from other political organisations (see http://www.theage.com.au/articles/2008/10/27/1224955916155.html) that the scope of the filtering should extend to online services such as gambling. This is the thin end of the wedge. Even if we are able to trust the current government not to bow to pressure from such groups, it is not a stretch to believe that future, less trustworthy, governing bodies could extend the blacklist to cover areas that are not in keeping with their ideals (e.g. political dissent). Whilst it may not be the Labor party making these changes, they will certainly have enabled the actions. Finally, given the serious opposition this proposal has faced from both individuals and the business sector, I cannot help but question where the motivation for the filters comes from in the first place. If this system does not (and cannot) serve the public interest, whose interests is it serving? Mr Conroy has so far failed to satisfactorily answer this question.
    • Australia in a digital world: Given that I work in the technical sector and have a reasonable understanding of the technology behind a typical Internet Service Provider (ISP), I find it staggering that a proposal such as the mandatory feed can make it past even a cursory investigation into its feasibility. The government’s own studies have shown how detrimental the implementation of a system such as this would be to the quality of Australia’s broadband networks. I do not wish to quote statistics from these studies (they are freely available elsewhere); however, I do want to note that the government loses much credibility (particularly in the technology industry) if it undertakes feasibility studies with no intention of taking any notice of the results. Mr Conroy has listed ‘similar’ systems in place overseas that have had success; however, this fails to take into account significant differences between the aims of those systems and the one proposed here. Australia is already lagging significantly behind much of the developed world when it comes to high speed broadband availability. On the one hand we have a government willing to hand out $4.1 billion to help improve this situation, yet on the other the same government wishes to cripple the system before it has even left committee.

    Given the ever-increasing importance of digital infrastructure to an economy, Australia is now at serious risk of becoming a technological backwater, and any gains we have made in this area over the past 5-10 years will quickly become irrelevant. I urge you to oppose this mandatory ‘clean feed’ from Mr Conroy and to pass on the concerns that I (and others in the Ballarat community) hold regarding it.

    I would be happy to discuss any of the points raised above with you or a member of your office should clarification or further information be required.

    Thank you,

    Josh Stewart

    To date, I have not received a reply, though to be honest I do not expect more than a form letter at best.

    After my post yesterday about the ‘rallyduino’, a few people emailed me asking for the source, so I quickly threw everything together into a Google Code hosted project. I wanted to get it into SVN anyway, as having only local copies was starting to make me nervous.

    The project page can be found at:
    http://code.google.com/p/rallyduino/
    Or to view the code directly, checkout:
    http://code.google.com/p/rallyduino/source/browse/trunk/rallyduino/rallyduino.pde 

    The power regulator I’d ordered arrived last night (turned out to be well and truly overkill, but it only cost $10), so I’m now just awaiting the Arduino prototyping shields and I can give this a real world test.

    Rallyduino?


    [Update: This system is now complete and working. See details at http://noisymime.org/blog/2009/05/rallyduino-lives/]

    So recently I’ve started playing slightly more seriously with my Arduino; I’ve even gone so far as to have a plan for what the hell to do with it. For some time I’ve wanted to try my hand at putting together a homemade rally computer and the Arduino seems to be almost the perfect platform for this (though not without one or two shortcomings). For those not familiar with them, a rally computer in its most basic form is essentially just a glorified odometer; in other words, it measures the distance your car has gone. Very briefly, the differences are:

    • Considerably more accurate
    • Can count up and down, as well as pause
    • Resettable, usually via a remote control
    • Usually also calculates average speed

    If you want to look up existing rally computers, I recommend you check out manufacturer websites such as Brantz and Terratrip.

    So why make your own when there are already such good products out there? Well, I can give you 3 reasons: price, openness and fun. All the units currently on the market are quite expensive ($500+) and are all very much proprietary, closed systems. My goal is to put something together that costs about $200 and is as open as possible.

    To demo what I’ve got going so far, check out the video below:
    [Embedded video: demo of the rallyduino]

    So as you can see, the system is up and running with all the basic functionality. I need a few extra pieces of hardware (case, power regulator and prototyping shield) before I give it a test in a real world scenario, but at the moment I’m confident things should work OK. The biggest problem I’m having at the moment is the responsiveness of the Wii nunchuck as a controller. It works OK in tests, but when you get the Arduino processing the input pulses (at around 150Hz) things start to slow down. Obviously priority is given in the following order:
    Pulse counting -> Speed/distance calculations -> Screen updates -> Nunchuck input
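
    To make that concrete, here’s a rough sketch of the pulse-counting core (with made-up pin assignments and calibration figures; this isn’t the rallyduino source, just the general shape of it). The wheel probe fires one interrupt per pulse, the ISR does nothing but increment a counter, and the slower jobs run from loop() in the priority order above:

        // Rough sketch of a rally computer's pulse-counting core.
        // Pin and calibration values are hypothetical; a real unit would be
        // calibrated by driving a known distance and counting pulses.
        const byte SENSOR_PIN = 2;            // wheel probe on external interrupt 0
        const float PULSES_PER_METRE = 4.0;   // assumed calibration figure

        volatile unsigned long pulseCount = 0;
        int direction = 1;                    // 1 = count up, -1 = count down
        bool paused = false;

        void countPulse() { pulseCount++; }   // ISR: keep it as short as possible

        void setup() {
          Serial.begin(9600);
          pinMode(SENSOR_PIN, INPUT);
          attachInterrupt(0, countPulse, RISING);  // interrupt 0 lives on pin 2
        }

        void loop() {
          static unsigned long lastPulses = 0;
          static unsigned long lastMillis = 0;
          static float distance = 0.0;             // metres

          unsigned long now = millis();
          if (now - lastMillis >= 500) {           // recalculate twice a second
            noInterrupts();                        // atomic snapshot of the counter
            unsigned long pulses = pulseCount;
            interrupts();

            float metres = (pulses - lastPulses) / PULSES_PER_METRE;
            if (!paused) distance += direction * metres;

            // average speed over the last window, in km/h
            float kph = (metres * 1000.0 / (now - lastMillis)) * 3.6;

            Serial.print(distance); Serial.print(" m @ ");
            Serial.print(kph); Serial.println(" km/h");

            lastPulses = pulses;
            lastMillis = now;
          }
          // Screen updates and nunchuck polling would go here, after the
          // time-critical work, matching the priority order above.
        }

    Keeping the ISR down to a single increment means the 150Hz pulse stream costs almost nothing; it’s the screen and nunchuck I/O that have to fit into whatever time is left over.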

    That’s about all for now. I’ll probably post about this again after I’ve done a real world test on a car, so it will have to wait until the extra bits and pieces arrive. The source code isn’t yet available (simply because I don’t think there’d be much interest), but if anyone would like to see it, just leave a comment and let me know.

    I know I’ve been promising to do another screencast, specifically showing the music player in Gloss, but I’m afraid it isn’t done. What is done, however, is Gloss 0.1, including a new screencast showing all its features!

    Whilst very much a developer/testing release, it is moderately stable and works ‘out of the box’ on the multiple different systems I’ve been testing on. It is available for download at http://gloss-mc.googlecode.com/files/gloss_0.1.tar.gz or, if you’re on a debian/ubuntu based system, an apt package is available in my launchpad PPA: https://launchpad.net/~gloss-development/+archive. If you want to check it out before downloading, a screencast is below (quality warning: where is the killer linux screencast app?):

    [Embedded video: Gloss 0.1 screencast]

    So what’s actually been done? A LOT of work getting what was already there to a stable, portable state. That’s not to say it won’t crash, some areas are still quite fragile, but everything that’s there will work given the right setup. The new Gloxygen theme has also been included. This initially started life based on the KDE Oxygen iconset, but has evolved a little since then. It has the advantage of being fully open licensed, unlike the Pear theme, which is of questionable origin (for this reason the Pear theme is not included in the apt package above).

    The plan going forward is to get as much testing done as possible ASAP, with the hope of getting a 0.1.1 and maybe a 0.1.2 release out the door quickly. When Clutter 0.8 is released, SVN will temporarily break as the API is updated accordingly, meaning that the ubuntu Hardy package will also break, given that there is no Clutter 0.8 available for it.

    That’s probably about all for now. I urge you to have a play with the release above if you have a chance and log any problems you find (And you will find them) in the bug tracker!

    For a long time, one thing that has always elicited gasps from my technically minded friends is that I run my home wifi connection without any encryption or MAC blocking. It is free for anyone within range to use without my permission and, in most instances, I won’t even be aware it’s happened.

    I do this for a number of reasons, partially because I believe in openness and community, but mostly because it’s something that I would like to see become commonplace. If I’m away from home then typically my net addiction will kick in and I’ll want to check email/blogs. In all honesty, if there’s an open wifi connection nearby, chances are I’ll use it.

    This morning I read an article in The Age (http://www.theage.com.au/news/security/wifi-wronguns-on-rise/2008/06/02/1212258735795.html) that had been syndicated from The Guardian in the UK. This dealt with the legalities of someone coming along and using an open wifi connection for nefarious purposes and there were a couple of things in it that got me a bit worked up. The main offending comment was from Susan Hall, a UK specialist in IT law:

    “The prosecution needs only to prove the communication came from a particular system. Once this is achieved the onus is on the individual to prove his or her innocence.”

    What I simply don’t understand is how this fits in any way with the trusty old ‘innocent until proven guilty’ ideal. Proving that data travelled through my router should definitely NOT mean that the onus then falls upon me to prove my innocence. That same data probably passed through 10+ different routers before reaching its destination, so why is it my router that gets targeted and me that gets hauled along to court facing 10 years for child pornography? It’s not even reasonable to argue that my router was the last in the chain, as for all anyone knows, someone could’ve bridged my wifi through another device and shared it with 10 people over wired ethernet. This comes down to the fact that I am simply the ‘easiest’ person to point the finger at, and 90% of people won’t understand the technical reasons why the logic is rubbish. Really, the *only* thing that proves my guilt in a case like this is the data being stored on my systems, and even that is questionable if I’m running my connection through some form of local caching proxy.

    I don’t want to point fingers at the authorities for why we are in this position, as my emotions would become far too unrestrained (words like laziness and ignorance might get used), but the fact remains that we clearly need some form of official legislation in this area. In my opinion (and this is a whole other rant), it should not be illegal to use an open wifi connection without permission; the onus should be on the owner to secure it if they do not want others to use it. Of course, enacting any laws that ensure the innocence of the home user running open wifi is going to increase the workload on the authorities when cases come up, so it will no doubt be met with serious opposition from those corners. It is a worrying trend, however, when the focus of law making becomes making life easier for authorities rather than protecting innocent citizens.

    So for now I will continue to run my open wifi. I will lament the utter lack of technological expertise within our legal system and simply hope that this changes sometime soon (though I won’t hold my breath). I’d love to hear others’ thoughts on how this should be handled from a legal point of view, if anyone wants to leave a comment.

    I’ve read a few blog posts recently regarding consumer NAS devices, specifically linux based ones (see entries from Peter Hardy and Matt Bottrell) and, for the most part, these have been fairly positive. So, as a warning to others, I thought I’d pop up my experience.

    A few weeks back I got a call from my Dad saying he’d bought a network HDD and wanted to know if I could come around and set it up for him. No problems, I thought, it should just be a matter of plugging it into his (fairly basic) network and setting up mappings on the various machines he runs.

    I head around and find a Western Digital ‘My Book World Edition’ 500GB drive. It came in some fairly slick packaging (very Apple) and generally looked pretty good. The hardware part of the install was a breeze and within minutes I was installing the ‘MioNet’ software on his main machine. My instincts told me that installing software for a device like this shouldn’t be necessary, but one of the things it offered was secure remote access via a browser, which could come in handy at times (my dad travels a bit and is forever asking me to pop around and email him files he forgot to put on his laptop). This is where the problems started. Upon installation and reboot of the machine, the network interface was completely gone. Nothing, nada, dead. I tried everything I could think of for the next hour or so but nothing I did would bring it back. Finally I uninstalled the software, rebooted, and everything was fine again. Not the most reassuring experience.

    Nevertheless, I continued; Dad can live without the crummy software as the device offers direct SMB sharing. I set up a few shares via the device’s web interface and mapped them on 2 machines. Everything seemed fine. So we started the initial backup (about 30GB of data) but almost immediately noticed 2 things:

    1. The speed. At its best, this thing got up to about 4MB/s, pretty poor for something on a gigabit network (the drive does have a gigabit interface).
    2. After about 1GB of transfer, the copying terminated saying that we no longer had permission to write to the share. I tried creating a text file. No go. In fact, it wasn’t until we rebooted the unit (takes about 90 seconds) that we again had write access… for about 2 minutes, at which point it again cancelled the transfer and refused to let us make any changes to the drive.

    Being somewhat stumped, I called Western Digital to see if we had a dud unit. I eventually got onto a rather stressed-sounding guy who advised me that this was a known issue, to upgrade my firmware and cross my fingers (i.e. there was no promise the upgrade would solve the problem). He also advised that the 4MB/s was about the theoretical maximum throughput of the device due to a hardware restriction (the speed of its CPU). Now, putting the firmware issue to one side for a second, how can you knowingly advertise a device as having a gigabit (1000Mb/s) network interface when the rest of the hardware can’t even saturate a 100Mb/s link (4MB/s is only around 32Mb/s)!?! I wanted to unload on this guy, but deep down I knew he’d heard it all before and that a return of the drive was already looking likely.

    Getting back to the firmware upgrade: this did eventually go through (it was about an 80MB download), however it downright refused to proceed until I had blanked the drive. It continually advised that there were current connections (despite me disconnecting all shares, rebooting the PC and the device, plugging a laptop directly into the unit etc) and refused to do the upgrade with these in place. Long story short, the firmware upgrade made absolutely no difference. The problem with write access still occurs for transfers over about 800MB, making proper backups all but impossible.

    I was almost disappointed then to discover that this device actually runs linux. Presumably the Windows sharing is provided by Samba, which makes me really wonder what is causing the access problems. The unit is obviously underpowered hardware-wise, based on its transfer rates, so I’d like to think it is dubious quality hardware causing the other problems as well. Whilst I could delve in and start tinkering to try and fix things, this is a consumer device that was definitely not purchased to be a tinker-toy, and I didn’t want to unknowingly change something that could potentially lead to data loss (now or down the track).

    The result of this ordeal is that the unit will be going back shortly, as it is simply unusable in its current state, and all I can really say to other people is to avoid this device at all costs. It really is a poorly designed package (HW and SW) that will cause you more grief than benefit.