Wrecks in the Gulf of Mexico

Since I moved to Houston I’ve been trying to figure out the dive scene here. Up in New York there was a vibrant wreck-diving community and plenty of reliable boats to take you twenty miles offshore to dive interesting shipwrecks. Sure, you needed a dry suit most of the year and the diving was mainly deco diving but that was all pretty easy to solve.

In Houston folks seem to do more recreational diving to the Flower Gardens or jump on a plane to Mexico to do reef diving. The more intrepid divers visit the oil rigs and there is a vibrant spearfishing/free-diving scene. You wouldn’t think there were any wrecks here.

So I did some research and uncovered the Freeport Liberty Ship reef: in the 1970s a tanker, the V.A. Fogg (herself a veteran of the Second World War), exploded and sank. In typical Texas can-do style they made her safe for navigation, then cleaned up four Liberty Ships and sank them around her to make an artificial reef. At a maximum depth of 100 ft (minimum 70 ft) it looks very diveable, and it’s only 36 miles from Freeport. There are also some oil rigs and ash blocks that must make up a good habitat for fish.

But that’s just a small part of the wrecks off Texas in the Gulf. Twelve Liberty Ships were sunk at five sites, and the Texas Clipper is a more recent addition off South Padre Island on the Mexican border.

Houston, Galveston and the Gulf Coast have been visited by ships for centuries and the wrecks have piled up in that time. Meanwhile the oil and gas industry has mapped every inch of the northern Gulf and located all kinds of objects (see Historic Shipwrecks and Magnetic Anomalies in the Northern Gulf of Mexico). So there are plenty of interesting wrecks out there.

Texas AWOIS numbers

Then there are the more recent, more mundane wrecks of fishing boats, cargo ships and general shipping that went down over the years. NOAA keeps track of this information in its AWOIS database (Automated Wreck and Obstruction Information System). The precise positions are not necessarily reliable, but when you put them on Google Maps for just a section of the Gulf you get an idea of how many wrecks are down there.

I took the coordinates from a helpful page (thiswaytothe.net) and used the Google Maps API to write a piece of JavaScript that puts a pin on the map for each supposed wreck, with a link to its information page. It gives you a good feel for where most wrecks occur and can be located (near port or shore). The result is here.
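
This is also easy to script. Here’s a rough Python sketch of the generator approach mentioned below – it writes an HTML page that drops a google.maps.Marker for each wreck. The wreck names and coordinates are placeholders for illustration, not real AWOIS entries:

# wreck_map.py - write a Google Maps page with one pin per wreck
wrecks = [
    ("Wreck A", 29.20, -94.60),   # (name, latitude, longitude) - placeholders
    ("Wreck B", 28.95, -95.10),
]

pins = ""
for name, lat, lng in wrecks:
    pins += ("new google.maps.Marker({position: new google.maps.LatLng(%f, %f), "
             "map: map, title: '%s'});\n" % (lat, lng, name))

html = """<html><body><div id="map" style="width:800px;height:600px"></div>
<script src="http://maps.googleapis.com/maps/api/js?sensor=false"></script>
<script>
var map = new google.maps.Map(document.getElementById("map"),
  {center: new google.maps.LatLng(29.0, -94.8), zoom: 8,
   mapTypeId: google.maps.MapTypeId.ROADMAP});
""" + pins + "</script></body></html>"

open("wrecks.html", "w").write(html)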

Next step is to download the entire AWOIS database (available in, er, pdf and MS Access? Really? How hi-tech) and generate Google maps scripts for them. But that will have to wait until after baby Max gets another diaper change…

Posted in Diving, Maritime, Programming

Python Meetup

I’ll be giving a talk on Python concurrency, summarising many of the themes from this blog post, at the Python Houston Meetup at the Stag’s Head on Portsmouth Road on April 17th.

The slides are here.

Posted in Programming

St Patrick’s Day

The weather’s getting warmer here in Houston. Max’s birth means we’re missing the rodeo but hopefully we’ll get to that next year.

St Patrick’s Day sees us doing his passport application at the Public Library downtown and finally finding ‘real’ bacon (known as Irish Bacon) at Central Market.

I’ve been following the rugby Six Nations tournament, made available on UStream by Premium Sports: $100 for internet pay-per-view of the whole tournament, ideal since I won’t be getting down to the Richmond Arms much to watch the games here. The quality became a bit spotty over the last two weeks but they sorted it out today, and although it isn’t hi-def it was worth the money. Ireland playing England at Twickenham, buoyed up by St Paddy’s Day, and Wales chasing a Grand Slam in Cardiff – two great games. I won’t spoil it by saying who won (but here’s the result from Twickenham).

Posted in Rugby

World, Max. Max, World.

Max Smiling

At the beginning of the month our little boy Maxwell Hamilton was born. We had him at home with the help of our midwife, Nanci, and it all went smoothly. He weighed in at 7 lb 12 oz.

Here are some photos of him; mum and baby are doing well.

We named him after the Scottish scientist James Clerk Maxwell, who went to Cambridge, and after Alexander Hamilton, the only immigrant founding father, whose old house would have been down the street from us in New York.

Chris and Max

Posted in Uncategorized

Concurrency in Python

Snake quilt
Snakes on a plain by Linda Frost

In an earlier post I described how the free lunch was over and how there was a renewed interest in concurrent and parallel programming.

So, what can we expect from Python? It’s a modern language, evolving rapidly under the watchful eye of Guido van Rossum, the Benevolent Dictator For Life, and it has a package for everything, including antigravity. The boast is that ‘batteries are included’.

So needless to say, Python has a module for threading. Called threading. There’s a lower-level thread module with raw access to platform-independent threads, but it lacks the full set of objects that make working with threads palatable. That module has been deprecated in Python 3 (renamed _thread) and you’re encouraged to use threading instead, since it has high-level constructs like Locks and Semaphores.

Something I rather like about Python is how the with statement does the cleaning up for you automatically, regardless of what exceptions interrupt the block. (Java added a multi-catch last year in version 7, but it only handles multiple exceptions; you still have to clean up yourself.) Locks can be used in the same way:

import threading
rlock = threading.RLock()

with rlock:
    print "code that can only be executed while we acquire rlock"
    #lock is released at end of code block, regardless of exceptions

The code above will execute the print statement (or whatever code is in its place) and release the lock regardless of any exceptions thrown. That’s helpful for avoiding deadlock and recovering from problems.

One thing to remember with concurrency is that the old Python staple, print, starts to get less useful with threads. You normally need to know which thread a statement came from, and if a program is complicated enough to need threads, it’s time for proper logging.
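
The logging module handles this out of the box. A minimal sketch – the %(threadName)s format field stamps every record with the emitting thread’s name:

import logging
import threading

logging.basicConfig(level=logging.DEBUG,
                    format="%(asctime)s %(threadName)s: %(message)s")

def worker():
    logging.debug("starting work")   # tagged with the thread's name

t = threading.Thread(target=worker, name="worker-1")
t.start()
t.join()
logging.debug("done")                # tagged MainThread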

A note on importing modules: remember that code can be executed when you import a module. Normally this is sensible and useful, but when threading is introduced it can cause some confusion. Spawning a thread during a module import is frowned upon, since you might cause a deadlock – let the importer pick the threading model. Another general rule for safety is that only non-daemon threads should import: get a regular thread to set everything up before spawning a daemon thread.

Overall, Python has the usual threading idioms that you find in most modern languages. So long as you’re careful and do your coding with a large cup of coffee and your fingers crossed, you’ll be fine. But that’s not the Pythonic Way – rather, there are higher level tools that you can use to get your underlying job done, which I’ll come to later.

First we have to go through a cycle of surprise, doubt, fear, anger and acceptance regarding the Global Interpreter Lock, or GIL for short.

Let’s take a typical computation that can be broken up into pieces, and see how multiple cores can help. In this simple single-threaded example, SimpleMaclaurin.py, we take a simple Maclaurin series and sum it to a very high degree of precision. The 1/(1−x) expansion converges very rapidly, but it’s just an example to keep the CPU busy. Let’s use CPython 2.7.2 for this example.
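
I won’t inline the whole file, but the heart of it presumably looks something like this sketch (the term count is arbitrary):

# Maclaurin series: 1/(1-x) = 1 + x + x^2 + x^3 + ...  for |x| < 1
# Summing lots of terms repeatedly is just a convenient CPU-bound workload.
def maclaurin(x, terms):
    total, power = 0.0, 1.0
    for _ in xrange(terms):   # Python 2, as used throughout this post
        total += power
        power *= x
    return total

print maclaurin(0.5, 200)   # ~2.0, since 1/(1-0.5) = 2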

How would we speed this up on my 8-core Intel i7? Obviously we’d break the problem into parallel pieces and let different cores hammer away. There’s no shared memory, just passing a simple float around for each parallel task’s results. Threads should speed it up linearly per CPU, right? A simple threading example is here, ThreadMaclaurin.py and you can use any number of threads. (For production code you would control access to the results array with a lock but I’m illustrating a different point here.) Using 1 thread matches the simple single-threaded example (4.522 secs). Then as you scale up the threads, the time changes:

1 thread     4.623 secs for 12800000 iterations
2 threads    6.195 secs for 12800000 iterations
4 threads    6.047 secs for 12800000 iterations
6 threads    6.357 secs for 12800000 iterations
8 threads    6.006 secs for 12800000 iterations
Rusty Padlock
by Ian Britton

What? You increase the number of threads and the time goes up? (In fact, the less loaded my machine, the more the time goes up with the thread count!) Looking at the task manager, multiple cores are in use, so what’s going on? The answer is that the GIL problem has struck.

Python is an interpreted dynamic language. In a nutshell, the executed code can be changed at any time, so only one thread can run in the interpreter at once. Worse, there’s constant signalling and lock acquisition between the threads to decide who gets the interpreter lock next. This chatter between threads is not necessarily fair (to use the concurrency term), so every thread apart from the one running gets starved – whether locks are fair depends on the operating system. Effectively, using multiple cores has re-serialised your problem, with a lot of lock-acquisition overhead added for good measure.
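
Incidentally, CPython 2 exposes the knob that governs this switching. A quick sketch (the interval value here is arbitrary):

import sys

# How many bytecode instructions a thread runs before the interpreter
# considers releasing the GIL and letting another thread in.
print sys.getcheckinterval()   # defaults to 100
sys.setcheckinterval(1000)     # switch less often: less chatter, worse latency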

So, what happens if I use Jython? Jython is an implementation of Python that runs on the Java Virtual Machine. Essentially it compiles down to Java bytecode – it’s not interpreted, so there’s no Global Interpreter Lock. Here are the results of running the same ThreadMaclaurin.py code on my machine with Jython 2.5.2:

1 thread     5.855 secs for 12800000 iterations
2 threads    2.836 secs for 12800000 iterations
4 threads    1.581 secs for 12800000 iterations
6 threads    1.323 secs for 12800000 iterations
8 threads    1.139 secs for 12800000 iterations

That’s more like the scaling that we expect. A little slower for one thread, perhaps due to JVM startup time, but blazing after that.

You may ask: why not use Jython all the time, to avoid this problem? Well, the truth is Jython’s support lags behind mainstream Python. Many modules aren’t available for it or are added later, and it has tended to be a bit behind in handling newer language features. But it’s perfect for interacting with Java programs, even if it doesn’t have modules like multiprocessing.

Bear in mind that these tests were run through my PyCharm IDE and you’ll no doubt get different timings on your machine, but you get the general idea. If you add threads to a computationally intensive task and the overall speed goes down, start to suspect an interaction with the GIL. It will not slow down a problem that is purely IO-intensive, so many of the normal benefits of concurrency – subjectively smoother response for a user, better management of network or disk IO – are not lost.

A great talk by David Beazley on the GIL is here, where he really manages to convey the issue nicely and put forward some solutions for the mainstream. He has slides here for that talk, but I’d recommend watching, as he’s a good speaker.

Before you start getting too disgusted with Python, let me assure you – this isn’t a big problem, just something to be aware of. If your program slows down when you add cores, you can simply jump to the multiprocessing module. Here we get separate processes instead of threads, and since every process gets its own GIL, no problem! Inter-process communication is dead easy, with arguments and results passed via pickle, and you can use Queues to pass messages between processes. Doug Hellmann has a useful tutorial on multiprocessing, and several that follow on from the basics. Obviously there is an overhead to multiple processes, but unless there are huge amounts of memory involved, it’s barely noticeable for a computationally intensive task.
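
A minimal sketch of the pattern – the worker function and numbers here are illustrative, not taken from the Maclaurin example:

import multiprocessing

def worker(x, results):
    # runs in a separate process, with its own interpreter and GIL
    results.put(x * x)

if __name__ == '__main__':
    results = multiprocessing.Queue()    # pickles values across processes for us
    procs = [multiprocessing.Process(target=worker, args=(i, results))
             for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print sorted(results.get() for _ in procs)   # [0, 1, 4, 9]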

Looking at our Maclaurin series problem, we get these numbers using the multiprocessing approach, MultiprocessMaclaurin.py:

1 thread     4.561 secs for 12800000 iterations
2 threads    2.339 secs for 12800000 iterations
4 threads    1.464 secs for 12800000 iterations
6 threads    1.201 secs for 12800000 iterations
8 threads    1.120 secs for 12800000 iterations

Now, that’s better – proper scaling with the number of cores, and we get to use all the modules of CPython. We can distribute these processes and easily pass messages between them with queues or named pipes. As we will see, there are even higher-level ways of dealing with our problem, building on this.

Do be careful when spawning processes from a script! The child process needs to be able to import the script or module containing the target function, which means that if the spawning code in your script isn’t protected, it will recursively spawn off processes until your machine locks up. For this reason I recommend saving all your work and shutting down your other applications for your first session exploring the multiprocessing package in Python. The ways to avoid the recursive behaviour are:

1. Have the target method in another module/script

2. Protect the executed code with a test for __main__:

import multiprocessing

if __name__ == '__main__':
    # only the parent process runs this block; child imports skip it
    p = multiprocessing.Process(target=worker, args=(i,))
    p.start()

3. Use a properly object-oriented structure in your code, which is what you would do for production code anyway.

Feel free to ignore this advice – after you’ve rebooted your machine, you’ll definitely remember the point!

So, multiprocessing gets past the GIL, which is great. But still, it’s not quite the Pythonic Way – far too many fiddly details when we want to be operating at a higher level, only dropping down the layers when we need to. What we really need is a module that simply parallelises tasks for us.

There are several packages that address the problem outright, including Parallel Python (pp). Here is our Maclaurin expansion implemented using pp: ParallelMaclaurin.py. It uses the logging API, so we get control of its output, which is nice. Not only will this package use the CPUs on the local machine, you can farm the tasks out to other servers very easily. That changes how we think about parallelising the problem: notice how the other examples deal in numbers of threads or processes, while in this implementation we break the problem up into tasks and let the module use all the CPUs by default. I chose 64 tasks, not for any particularly smart reason – normally you would tune that to the available cluster. We also lose the complications of passing results back through a queue: our function returns the result, so each job simply returns its result. The problem starts opening up seamlessly into symmetric versus asymmetric parallel processing – we start looking at it at a higher level.
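
I won’t inline ParallelMaclaurin.py either, but the pp pattern looks roughly like this sketch (the partial-sum function and task count are illustrative):

import pp

def partial_sum(x, start, terms):
    # sum one slice of the 1/(1-x) Maclaurin series
    return sum(x ** n for n in xrange(start, start + terms))

job_server = pp.Server()    # defaults to one worker per detected CPU
terms_per_task = 200000
jobs = [job_server.submit(partial_sum, (0.5, i * terms_per_task, terms_per_task))
        for i in xrange(64)]
print sum(job() for job in jobs)   # each job() blocks until its result is ready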

For completeness, here is the time taken using Parallel Python on just the local machine:

Time taken 1.050000 secs for 12800000 iterations
Corn snakes
Photo by Justin Guyer

Now, Parallel Python can feel like a job-management system rather than a concurrency solution, but that’s OK. It gives you stats back for jobs, lets you set up templates for tasks, and gives you control over the number of CPUs involved. You can start thinking in terms of Big Data interactions and functional programming – map/reduce or whatever the high-level problem space is. Given that the main motivation set out at the beginning of this rather long post was to get our free lunch back – to get processing power moving again with less risk and complexity – Parallel Python fits the bill nicely. And there’s no risk of lock-in: there are plenty of other high-level packages too, and a good list of them is here. I haven’t even got into Stackless Python, which aims to solve concurrency in yet another way. But that’s a topic for another post.

To sum up: Python offers something for concurrency and parallel processing at every layer of abstraction – thread, threading, multiprocessing with inter-process communication, clustering, or any number of higher-level approaches. Start at the top and stop when you’ve solved your problem!

Links:

http://pyinsci.blogspot.com/2007/09/parallel-processing-in-cpython.html – useful application of much of this post, with more numeric examples.

A talk on how threading is unleashing hell by David Beazley in Chicago – very amusing.

Bruce Eckel on PP and its creator and concurrency in Python using Twisted (an asynchronous module)

Posted in Programming

Who am I dealing with?

Figuring out which entity is owned by whom can get tricky (image by Cesar Hadara)

How many trading companies don’t have a golden (master) source for counterparty data? Answer: a surprisingly large number. Many firms have different counterparty codes for each of their systems, with every silo setting up clients at the level that makes sense for its immediate needs. The fraud and money-laundering regulations that require financial firms to ‘Know Your Client’, as well as the need for accurate credit risk, have pushed companies to tidy up their counterparty references. But even so, two different firms can’t easily match the same legal entities between their two centralised systems.

B3 scrabble letter

(c) Leo Reynolds

Enter the memorably-named ISO 17442, a standard for Legal Entity Identifiers (LEIs): twenty alphanumeric characters, of which the last two are numeric check digits. A bit tricky to read out on the phone, sure, but with around 3 × 10^25 possible entries it should keep us going for a while, even with potentially every single managed fund getting its own code. Making sense of that in a database could be a challenge, since there are going to be a whole lot of these codes – thankfully they won’t be reused the way ISINs are. When you imagine that a large investment bank has hundreds or thousands of legal entities, this could get interesting.
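
The check digits follow ISO 7064 MOD 97-10 – the same scheme IBANs use, as I understand the standard – so a validity check is only a few lines. A sketch: letters map to numbers (A=10 through Z=35) and the whole string, read as one big number, must leave remainder 1 mod 97.

def lei_looks_valid(lei):
    # ISO 7064 MOD 97-10 check, as used for IBAN check digits
    digits = "".join(str(int(c, 36)) for c in lei.upper())
    return len(lei) == 20 and int(digits) % 97 == 1

print lei_looks_valid("IGJSJL3JD5P30I6NJZ34")   # should be True: the Morgan Stanley code quoted below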

So why are we bothering? The Dodd-Frank Act makes it important that an independent regulator, in this case the CFTC, can keep track of reported positions, and they can’t do that with a patchwork quilt of homegrown identifier systems. How can we stop another AIG stealthily accumulating trillions of dollars of exposure in derivatives positions if no one can figure out what the various bits of AIG are?

So the CFTC (which has been empowered by the Dodd-Frank Act to keep an eye on positions in OTC derivatives) needs a unique global code so that trading positions can’t be quietly warehoused at different random subsidiaries. It needs to be able to figure out the final top-level position across all branches, funds and accounts of a firm. Apart from anything else, I guess that would make the appropriate bail-out easier to figure out.

Hence the introduction of global Legal Entity Identifiers. So, who gets one? The answer is ‘any entity party to a financial transaction’. Now, one of the important things to remember is these LEIs are hierarchical. MegaBank-London will have a parent LEI of MegaBank-Investment-Bank, which will have a parent of MegaBank-Holdings (the ultimate parent).

But you can’t figure out the hierarchy by looking at the code itself. You’ll have to go to something called the ‘Facilities Manager’ to find that out – they will record the network of control based on who owns what and the local regulations. Hopefully they will have an up-to-date view of each corporation’s structure. The definition of ownership is down to the regulator – the CFTC has decided on a 25% control threshold in its final rule 45.6, on page 97. So, can you get the ultimate parent from any particular code by looking it up? Well, it could be protected by that legal jurisdiction – tax havens are not likely to change their rules just because some ID codes are now published.

SWIFT will be the registration authority for legal entities and will distribute the ISO codes for companies, charging a fee for registration. So can you roll up, pay the fee, get a code that says you’re a subsidiary of MegaBank Ltd, pick up a huge line of credit and head to Vegas? Alas, no. SWIFT will rely on DTCC to check with the relevant legal authority to establish the hierarchy, although you can imagine some scam artists trying it on as the system beds in. DTCC will also do the actual work of managing the LEI database as ‘Facilities Manager’.

On a side note, wouldn’t it be great if you could get personalised LEIs? DTCC and their subsidiary AVOX (who will actually do the work in this whole set up) could charge more for codes with lots of 7s or 8s. Which bank do we think would get the code 666 666 666 666 666 666 XX?

Let’s be clear that ISINs and CUSIPs won’t change. Just because MegaBank has LEI code ‘XYZ123’, it won’t change the fact that the bonds it issues will have a variety of instrument identifiers.

LEIs will die when the legal entity ceases to exist for the usual reasons: merger, bankruptcy and so on. Keeping the hierarchy up to date through all that is going to be pretty interesting, but an entity won’t get a new LEI just for changing its address or name. Of course, I wouldn’t put it past the Bernie Madoffs of this world, or the drug dealers, to move address often and rename regularly. I also wonder how the government will structure itself and set up its hierarchy. What will the ultimate parent of a bailed-out municipality be? What is the parent of the World Bank or the IMF?

So when’s all this happening? Basically… now. SIFMA really threw their weight behind ISO and its standard, and it’s become the clear leader. The CFTC has published the Swap Data Reporting Rule and the Real-Time Reporting Rule, which require that swaps be reported within a certain time frame and with certain unique identifiers. They’ve dropped their original name (Unique Counterparty Identifier, UCI) from the rules and started using the acronym LEI instead, and on page 92 of final rule 45.6 the Commission formally plumped for ISO’s standard. They’ve also clarified the reporting: when you’ve traded a swap you just need to report the LEI code of the counterparty and the LEI code of its ultimate parent.

October 15th 2012 will be the end of the grace period for using LEIs in reporting, so it’s time to get that data squared away. Here’s an AVOX search on ‘Barclays’ – 171 non-branch results when I tried it. They have a free site, which is being re-branded as www.avoxdata.com, and interestingly you can suggest a new counterparty – let’s hope they can detect any amusing or fraudulent suggestions. The free web API uses JSON and various ISO codes for mapping search terms like countries.

Some firms are already on the move – the ultimate parent of Morgan Stanley has been registered with an LEI of IGJSJL3JD5P30I6NJZ34.

A good presentation is here. You can purchase a copy of ISO’s memorable 17442 standard here – if you have 66 CHF to hand (CHF being the ISO currency code for the Swiss franc…).

Posted in Finance, Programming

Blast from the past

Leaving the pitch after the Ponsonby game

While looking for old tax files today, I stumbled on a bunch of photos on a thumb drive. A Lexar drive holding all of 128 MB – it’s definitely vintage.

The photos are of a friendly game I played while down in New Zealand following the 2005 Lions rugby tour. Taken by an Irish buddy of mine, they show a game against a local New Zealand side. We were hosted by Ponsonby RFC for a rain-soaked game late on a Thursday afternoon. The Lions were in town to play a test against the All Blacks on the Saturday, and we played a veterans team before retiring to the bar.

Throwing into the line out near the Ponsonby line

Ponsonby is a suburb of Auckland and their rugby club is one of the oldest in New Zealand; they have produced 52 All Blacks over the years. Needless to say, if our scratch amateur team had played their first team it would have been a joke – and pretty dangerous!

The downpour and muddy conditions were captured somewhat blurrily on an ancient Sony digital camera, along with random shots of plastic pint glasses. I was struck by the date code in the bottom right: 2005 7  7.

Emergency Services at Russell Square station by Francis Tyers

7/7? Then I suddenly remembered: just a few hours later, as we arrived back in central Auckland from Ponsonby, word swept through the bar about the London bombings. It reminded me a bit of hearing about the 9/11 attacks – disbelief at first, and then the mood change sweeping through the bar we were in. A wave of surprise, horror and then anger – you could see it visibly from our perch on the balcony over the crowd. The mobile phone network began to collapse as tens of thousands of Lions fans rang home to England to check on friends and relatives in London. You have to remember there were about 30,000 Lions fans in downtown Auckland that night, with more New Zealanders coming in for the weekend’s game.

It certainly was a mood dampener. I’m always fond of socialising after a game of rugby, when a few drinks make the tales get taller and the bruises less noticeable. Within an hour the bar we were in was emptying, as most of the Lions fans headed off looking for an internet connection or somewhere the phone would work.

Locations of 7/7 London bombings

It all seemed very remote. London was on the other side of the world, and I had been travelling for months already. When the attempted bombings of 21st July hit London two weeks later (only the detonators went off, not the main explosives), I was on an island in Fiji. As it turned out I never really returned to London, and the crazy events of July 2005 (including the death of an unfortunate Brazilian man who was killed by police the next day) might as well have happened in another country from mine.

In many ways I’d left London for good by then. In September I arrived back at my flat in Mile End, simply packed up my things and moved to New York. And that game of rugby at Ponsonby turned out to be the last I’d ever play.

Posted in Rugby

Frozen Planet

Greenland glacier by Christine Zenino

We’ve just started watching the BBC documentary Frozen Planet on Blu-ray. It doesn’t seem appropriate to watch it with the air conditioning on, but it was 75F here in Texas today… what can I say?

It looks spectacular. There are time-lapse photography master classes: icicles growing and shrinking over a year, ice sheets advancing and retreating… it must have taken incredible dedication.

“What did you do in 2010, Frank?”

“Mainly I filmed an icicle. It came out sweet.”

Sir David Attenborough (photo by thirtyfootscrew)

Sir David Attenborough (or, for the US English dictionary loaded on this blog, Dave Addenboro) narrates the UK version, which takes me back a few years – he’s been the voice of the BBC’s nature documentaries for decades. Discovery Channel will show the series in the US later in 2012, but has used Alec Baldwin to narrate for the American audience. I don’t mind this, actually – he’s a fine actor, has a great voice, and anyone who can give us both spoof NPR adverts and the ‘steak knives’ monologue from Glengarry Glen Ross can do justice to BBC Bristol’s show. But one thing we will miss in the US is the on-camera pieces from the grand old man of the BBC, where he presents from each frozen pole. I can’t help hoping he’ll have a close encounter with a polar bear and emerge all shaken up but grinning, just like that scene with the gorillas that made his name all those years ago.

Discovery originally announced that they weren’t taking the seventh show in the series, which tackles climate change. But they reversed their decision and have said they will take that show and air it with the original narration by Sir David, so his unmistakeable voice will be heard at least once.

Just like Blue Planet and Planet Earth, the production values are sky-high, the photography is amazing and, yes, baby animals will be hunted by mean nasty wolves and orcas and bears. That’s nature. And I’m looking forward to hearing Alec Baldwin’s take on it too.

Posted in Uncategorized

Diving with unknown oxygen

Oxygen tank

(c) LucienTj

Those of you who are into technical diving will know that tweaking the amount of oxygen in your breathing gas changes your decompression profile. The later stages of decompression are often completed on pure oxygen to help flush out nitrogen and get us out of the water in an acceptable time.

Closed circuit rebreathers use two tanks: pure oxygen and something to dilute it with (called diluent).

Usually, diving oxygen comes from the same sources as medical oxygen. This means it’s essentially pure (99.99% is so close to pure it doesn’t matter). In the more remote parts of the world, where there are no modern hospitals, it’s quite hard to get oxygen. One way to come by it is to ‘generate’ it out of the atmosphere. Remember, dry air is basically 20.9% oxygen and 78% nitrogen, with the remainder argon, CO2 and trace gases. So if you can filter out the rest, you’re left with pure oxygen. Industrial oxygen concentrators (generators) use a molecular sieve to draw out the oxygen, unlike medical oxygen, which comes from huge plants that liquefy air until each gas condenses out. A concentrator is portable, but doesn’t reach the ultra-purity that a large industrial plant can give you.

On my trip to Bikini in August 2011, the boat MV Windward had an oxygen generator on board that made technical diving possible. It produced 91-96% oxygen. That meant we could go diving on our rebreathers.

So, what are the effects of diving a rebreather with almost-pure oxygen? Well, I dive a rebreather from AP Diving, the Evolution, as did most people on board; there were guys with Megs and Mk 15 rebreathers, but essentially the same principles apply. There are two effects when you’re not diving with pure oxygen:

  • you’re no longer truly closed-circuit: as you inject gas and metabolise the oxygen while diving, the inert remainder accumulates and the volume of gas in your counterlungs increases, potentially affecting buoyancy
  • you need to calibrate your unit carefully to make sure you’re breathing the right amount of oxygen underwater

The first issue is not a problem for the dives we’re doing. We’re not special forces operatives, for whom a bubble gives away your position. You’re moving up and down enough that the amount of flushing is not noticeable in your buoyancy. On deco you notice it in the loop PO2 at your last stop, but it’s not a problem: since you’re not paying for oxygen, you just flush a lot. Buoyancy control was identical; the amount of extra gas was small.

Section 6.4 of the Evolution rebreather manual describes how you calibrate the unit with its oxygen cells, and reminds you to use pure oxygen where possible. Section 6.8.4, Periodic Calibration Check, gives you a routine to follow when you’re unsure of your calibration, and Appendix 2 gives detailed instructions for using your own cells to determine the gas quality (oxygen percentage). One of the things you do for pure oxygen is tell the unit to calibrate at 98% (to allow for mixing in the lid and other factors), so with generated gas you would start by taking 2% off whatever percentage of oxygen you really have in your tank.

The thing is… this aspect of technical diving is not regularly visited. It’s not normal to dive in the middle of nowhere in the Pacific, and not normal to dive with less than 99.99% pure oxygen. So many of us, while we knew the general idea, had not practised the techniques. We had to do that when we arrived.

The other obvious thing is that we normally check the oxygen with an analyser that’s been calibrated against a known source. On this trip we didn’t have one, and among the things that can throw an analyser off are heat and humidity (we had plenty of both in our tropical paradise). So… the analyser says 95%. How accurate is it?

Of course, for future trips the organiser and the boat now know to have a small quantity of 100% pure oxygen to calibrate the analysers and maybe the rebreathers as well.

So let’s say you calibrate believing you have 96% oxygen but you’re actually diving with 92%. You think you’ve got 96% O2, but you’re actually injecting 92% all the time and have calibrated at that level. If you choose to dive with a set point of 1.1, you’re actually breathing a PO2 of 1.054 bar. (In fact the 0.05 bar difference is pretty stable for set points from 1.0 to 1.3.)
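
The arithmetic behind that, as a quick sketch: the displayed PO2 scales by the ratio of real to assumed oxygen fraction, because the cells were calibrated against the wrong reference.

# If the unit was calibrated assuming richer gas than it really saw,
# every displayed PO2 is optimistic by the factor real/assumed.
def actual_po2(setpoint, assumed_frac, real_frac):
    return setpoint * real_frac / assumed_frac

for sp in (1.0, 1.1, 1.2, 1.3):
    print sp, round(actual_po2(sp, 0.96, 0.92), 3)
# 1.0 -> 0.958, 1.1 -> 1.054, 1.2 -> 1.15, 1.3 -> 1.246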

So you actually need more decompression than you think. If it were the reverse (you think you’re diving 92% but you actually have 96%) then you would be running higher oxygen toxicity than you thought. This is the reason why the Evolution Instruction Manual points out in 6.8.4 (Periodic Calibration Check):

you should continue to use values of setpoint +/- 0.05 bar for calculating decompression and oxygen toxicity. e.g. If the setpoint is 1.3, use 1.25 for deco planning and 1.35 for oxygen toxicity planning. This takes into account other factors that affect accuracy, such as humidity.

But what does that mean? Just how much of a difference does that make? Well, let’s take a typical dive that I did in Bikini and see what the difference could be.

Profile of a dive on the USS Anderson in Bikini
Depth charges at the stern of the USS Anderson

The profile above is a two-hour dive on the USS Anderson about a week into the trip, using 14/45 trimix. For most of the dive the controllers did a good job of holding 1.1 bar, or the maximum they could when I was shallower. I essentially did 40 minutes at 50m and then ascended with the deco ceiling of the computer. If I really was breathing 1.05 bar of oxygen, what would be the difference in deco? If I was really breathing 1.15, what is the difference to my oxygen toxicity?

By this point in the trip I’d been diving several days so my tissue compartments had a variety of levels of nitrogen and helium in them. But we can get a feel for the difference by just plugging the numbers into a dive planner like V-Planner.

V-Planner tells us that using 1.05 we’d be finished in 105 minutes with a CNS oxygen toxicity of 38% of max. Using our nominal 1.10, we get 101 minutes and 42% CNS. Running again with 1.15, we get a run time of 97 minutes and a CNS oxtox of 42%. That’s a small but noticeable difference: in other words, if you trim the deco down to its minimum then you could leave the water with 4 minutes of uncompleted deco, and that could build up over several days until you got bent.

So clearly you should try to know exactly what’s in your oxygen tank, and you should calibrate carefully and accurately. But above all, try to pad your deco a bit so you don’t fall prey to these subtle factors. After all, does another 5 minutes in the water in a warm lagoon hurt you?

Posted in Diving

Google Code Search is shutting down

I came late to the party on Google Code Search. I only started using it recently, mainly as a learning tool for looking at real-world uses of obscure parts of Java and Python. Now I find that it will be shut down in January 2012. You can read the announcement here.

Google Code Search

Miguel de Icaza (the guy who founded Mono) sums up the usefulness of the service here and how the field is wide open since any similar offerings are way behind.

Google cut their ‘lab’ offerings like Code Search and explained the rationale in a recent posting, describing it as a ‘Fall Sweep’ (a reverse spring clean, perhaps?). Google wants to focus on its core strengths: Buzz and Wave are dead, and they’re focusing on Google+ now to compete directly with Facebook. Some lab applications did escape the axe by either being released as open source or promoted out of the lab. A list of them is here.

I’m as annoyed as the next guy at losing something that was free. We can only hope that this isn’t the beginning of a decay in new thinking at Google and that the free market can come up with a way to replicate the service and turn a buck.

Posted in Programming

Tech videos online

Josh Bloch gives a talk on Effective Java to a Java User Group in Mountain View

The days when YouTube ruled the roost online are over, particularly for specialised videos like technical presentations from conferences. That said, the majority of tech talks are available there and the quality is getting better and better. As well as higher resolution, you now see better editing, in-screen images of either the presenter or the slides, and audio and video that actually sync up. There are now dedicated channels on YouTube such as GoogleDevelopers and Java (including the amusing Java Rap music video – oddly polished production values).

While JavaOne’s 2011 keynotes are hosted at Oracle, many of the seminars are on Parleys.com, where you get a strip along the bottom that matches up with the slide transitions. On the other hand, you don’t often get a view of the presenter – a shame for the more interesting and skilled presenters.

You can get the presentations from the Houston Techfest 2011 in October from UserGroup.TV.

blip.tv seems to be a niche for Python presentations such as PyCon.

And even when a talk isn’t filmed, you can often find the slides at places like slideshare.net.

Which means that the real point of going to a conference is the corridor discussions, networking and drinking. You can read half of it on the plane home.

Posted in Programming

USS Texas and the San Jacinto Monument

San Jacinto Monument

Over the last week I’ve visited the San Jacinto Monument and the battleship USS Texas, which are near La Porte in the eastern suburbs of Houston. A gallery of photos is here.

The Battle of San Jacinto in 1836 is considered the key moment of the Texas Republic. Compared with the massacres in continental Europe during the Napoleonic wars, or the later Civil War, it was a fairly small battle, but it proved decisive: the Mexican president/general/dictator Santa Anna was captured and forced to grant independence to Texas. It’s marked by an impressive monument from which you can gaze out at the surrounding bayou and battlefield from 500 ft up. The city of Houston is in the distance and USS Texas is a short walk to the north.

USS Texas

Commissioned into the US Navy in 1914, USS Texas is the oldest remaining dreadnought battleship still above the waves. With five pairs of huge 14″ guns, she was the most powerful ship of her day. She fought in both World Wars, and in 1948 Texas raised enough money to have her installed in San Jacinto Park. She’s withstood a number of storms, including Hurricane Ike, with next to no damage.

We went on a hard-hat tour, which is held on a Saturday every two months in winter – it’s too hot in summer, and this huge ship has only modest air conditioning. The tour takes you to areas of the ship that are not open to the public, like the bridge, the boiler room and the code room, and even into one of the turrets, where you get to poke around under one of the enormous 14″ guns.

Engine Room

The public areas of the ship are worth a trip all by themselves, and remind you that Texas has a different attitude to the rest of the US. Fancy looking round the tight engine room? No problem. We trust you not to damage the inches of steel armour, or come running to us if you bump your head. Care to wander the ammo passage? Be our guest.

USS Texas and the tanker Bow Saga

Meanwhile, huge tankers are being escorted past on their way up to the refineries across the river. It gives you some idea how big they are when they make even the battleship seem slim and pocket-sized.

Deck plans for the battleship (BB-35) are available online here at the Historic Naval Ships Association, where you can find plans for many famous ships, including USS Saratoga (CV-3).

If you visit, I would recommend some bug spray if you’re there at dusk, as the mosquitoes from the bayou will show you no mercy.

USS Texas and San Jacinto Monument

Photos here.

Lacey in the drying room

Ed in the Distribution Board room

Posted in Maritime