Last Saturday I was saddened to hear of the passing of Martin Gardner. My own relationship with Martin’s work was through his republication of, and commentary on, Silvanus P. Thompson’s Calculus Made Easy. In this little book, over the course of a summer holiday (I can’t remember now if it was 2002 or 2003), I learned about calculus for the first time. Martin’s commentary helped to put the subject into perspective in light of the radical changes in style that had taken place since Thompson first published the work, emphasizing that the core ideas were essentially no different. I can’t emphasize enough just how pivotal this event was in my journey through mathematics.

When I was at school doing A-Levels the first time around I had little interest in studying the subject. In fact I was far more interested in making art using my computer. However, as I got further into the techniques of computer graphics and tried to code my own computer graphics tools, I very quickly realized I didn’t know enough math to progress any further. There was no getting around it: before I could do any of the cool things I wanted to do with my computer I’d have to learn calculus, and the book that was recommended to me (I think it was in some SIGGRAPH course notes) was Calculus Made Easy – what one fool can do, another can.

There are several obituaries and tributes you can read to learn more about his life and work (for instance here, here, here and here). However the best way to appreciate his work is simply to pick up one of his books and start reading.

News of Gardner’s death broke late on Saturday evening. The following day I headed over to the Jam Factory in Oxford for a day of coding at Oxford Geek Jam 6. Leading up to the day we had discussed various formats we could follow but hadn’t really agreed on any one of them, so I suggested that we code up one of Martin Gardner’s puzzles as a tribute to him.

Once a critical mass of coders had arrived at the event, and after a little searching, we decided that the Game of Hip would work really well. I think we were all initially more interested in coding up solutions to puzzles, since at the previous geek jam, which took the form of a coding dojo, we’d worked on a problem that we could tackle computationally. However, we could only find one example where someone had coded the game before, and that was in Pascal. We therefore also agreed we would implement a JavaScript UI for the game, rendered as SVG on an HTML5 canvas. This would make it possible for us to get a version of the game running on the iPhone (although it turned out that there were some HTML5 and/or SVG issues which meant it didn’t really work on Android).
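In case anyone wants to try the puzzle themselves: the heart of a Hip game engine is a check for whether a newly placed counter completes a square of that player’s counters (tilted squares count too, which is what makes the game hard). Here is a minimal JavaScript sketch of that check – the function and its names are my own illustration, not the code we committed on the day:

```javascript
// Does placing `move` complete a square with this player's stones?
// stones: array of [x, y] pairs already placed; move: [x, y].
// Hypothetical illustration, not the jam's actual engine code.
function formsSquare(stones, move) {
  const key = ([x, y]) => `${x},${y}`;
  const placed = new Set(stones.map(key));
  placed.add(key(move));
  const [mx, my] = move;
  for (const [sx, sy] of stones) {
    // Treat (move, stone) as one edge of a candidate square and
    // look for the two remaining corners on either side of it.
    const vx = sx - mx, vy = sy - my;
    for (const [px, py] of [[-vy, vx], [vy, -vx]]) {
      if (placed.has(key([mx + px, my + py])) &&
          placed.has(key([sx + px, sy + py]))) {
        return true;
      }
    }
  }
  return false;
}
```

The trick is that every square containing the new stone has that stone on two of its edges, so treating each (new stone, old stone) pair as a candidate edge and testing both perpendicular offsets catches axis-aligned and tilted squares alike.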

Oxford Geek Jam 6 in full flow

This was the hottest weekend of the year so far, and the Sunday felt even hotter than the Saturday, but the pitchers of Pimms got us through. (Incidentally, that reminds me that the Cambridge puntcon is coming up fairly soon. Unfortunately I missed it last year, and it might again fall on a weekend I can’t make this year. Perhaps we need an Oxford puntcon?) Even after getting kicked out of the Jam Factory earlier than normal for staff training, we managed to find a pub with good wifi and continued coding. You can see the end result of our day here.

The format on Sunday differed somewhat from previous Oxford Geek Jams (I’d previously only been to Oxford Geek Jam 5), where hackers mostly worked individually on projects, although there was some pair programming at the last one. As we launched into developing the Game of Hip we decided that we would tackle the problem on three fronts: core game engine, front-end UI and game-playing AI (sadly we had insufficient time to fully develop the latter). We then sub-divided the five-man team into smaller groups around these themes and programmed the various parts in pairs. As a result I feel we really gelled as a team, helped in part by our use of the Oxford Geek Jam svn code repository. We committed changes fairly frequently, although everything went into the project trunk so we often found we needed to resolve conflicts. In one respect I think this shows just how closely integrated we were working as a team, though it’s clearly not ideal to have to fix conflicts manually. However, it provides a good reason for trying out git next time round to compare the effect on our workflow.

I’m really proud of what we were able to achieve and the day has reaffirmed my belief in the principle that if you get enough bright and motivated people in a room collaborating on a great idea you can do amazing things.


At most points in my life I have had a good idea of what my goals are and a plan for how to make them happen. Perhaps the clearest goal I have ever had came when I sat down one afternoon in mid-2003 and decided there and then that I would secure myself a place at Cambridge on the Computer Science course. Although I didn’t realise it at the time, this would take me two years of extremely intensive work, but with this singular goal in mind throughout, I finally achieved it.

As goals go this one was pretty good: the result was very specific, and the route to achieving it, through studying for A-Levels, was also very clear and well understood. For various reasons, which aren’t important here, I had missed out on going to university directly from school, so at the time this felt like the most challenging goal I could set myself to progress my career. These are exactly the qualities that make a goal ultimately achievable.

The only problem was, all my goal said was that I would get into Cambridge. In particular it said absolutely nothing about what I was going to do once I had got in. I was so focussed on this one aim that I had left no room to consider anything beyond it. So when I actually found myself there I very quickly lost direction.

This might seem rather strange, because surely the goal was to study hard and do well in my finals? This is certainly a goal, and a jolly good one at that, but it was never explicitly mine. Neither, for that matter, was the opposite my goal. Simply put, I just didn’t have any particular goal in mind. I was, however, motivated by a vague desire to explore and understand as much as possible about Computer Science initially, and later Mathematics more generally, in the greatest detail possible, by following my interests wherever they took me within those fields. At best this could be described as a yearning. It was certainly not a goal. It was no basis for a plan; especially not one involving studying hard for exams.

It is perhaps glib to observe that without a goal I had nothing to aim for. However, the key driving force that gave me the confidence to keep striving for the highest possible grades in my A-Levels second time around was that, in my mind’s eye, I could visualise achieving that goal because it was so clearly defined, even though the grades I got doing A-Levels at school said I hadn’t a chance. In contrast, at no point during my degree did I buy into and truly believe (although plenty of people around me did believe) that I could achieve First Class grades at the end of each year in Cambridge. Sure enough, I didn’t.

None of this is in any way intended as an excuse, but there are important observations here worth making. Especially since now, almost two years since the end of my course, where I potentially missed out on the benefits of the PhD I might have pursued had I given myself the chance, I’m in a place where I’m ready to be serious again about what direction I’m going in and what I want to achieve along the way. I currently have lots of goals. In fact I have too many of them, and most of them I have yet to fully buy into. So in my next post on this subject I shall describe the most important of these, outlining for each a specific goal, a simple plan, and a reason why it’s challenging and worthwhile. In effect I will be pitching these goals to myself as a way of buying into them, so that I can believe I can eventually achieve them and hence start the process of making that happen.


The following post is a response to Peter Murray-Rust’s post “Time flies like an arrow; fruit flies like a banana. Or do they?”

Peter, I fully agree on the fundamental importance of NLP to AI (for me it’s the most important of the so-called AI-complete problems). Indeed, it’s interesting to note that Chomsky’s work on natural language linguistics gave rise to the subject of formal languages, which includes all the computer languages in which AI solutions must somehow be written. Clearly, for efficient human-computer interaction NLP would be extremely beneficial.

However, I strongly believe we should be continually striving for increasing formalism in the end product of our labours independently of the means of how we got to them (I’m mainly thinking of scientific end products here). My definition of formalism in this context includes some form of reduction in ambiguity by some degree of agreement on the meaning and prescribed usage of terms.

I base this last assertion on what I think is the key scientific example of the importance of clarity in meaning and of the logical consequences of that meaning. I speak of course of the revolution in thought about space and time brought about by Einstein’s Theory of Relativity. Post-Einstein, wherever you needed to talk about “time flying” (at least in scientific discourse, and more specifically physics), what was meant by that was necessarily fundamentally different from what it had been before. The previous sloppy usage was now simply unacceptable.

All of which affords me the opportunity to quote in extenso the following passage from Eddington’s Mathematical Theory of Relativity (p. 8):

Those who still insist on the existence of a unique “true time” generally rely on the possibility that the resources of experiment are not yet exhausted and that some day a discriminating test may be found. But the off-chance that a future generation may discover some significance in our utterances is scarcely an excuse for making meaningless noises.

Conversely, where I think NLP techniques have even greater potential, beyond simply working out what someone said, is in the following two ways:

  1. In a grammatically accurate passage such as “All men are mortal. Socrates is a man. Socrates is not mortal.” it should be possible for a machine to identify the obvious logical error. Much scientific discourse essentially comes down to formulae expressible in simple logics, in which it is possible for a machine to tease out seemingly subtle flaws. (Any formal structure captured in this process must form the basis of the end product if it is to be worthwhile.)
  2. Although ambiguity is problematic if we are trying to understand what a person has said in an automated way, there are classes of cases in which ambiguity arises with beneficial consequences. I can make an analogy here with an abstract algebra such as Group Theory, where the ambiguity in exactly what sorts of things the elements of a group are enables one to prove general theorems that apply to groups with arbitrary types of elements. Alternatively, we can take the example of Dirac’s bra-ket notation, where the individual components of a complete bra-ket have different interpretations, which means we can view the complete bra-ket ambiguously from different perspectives, although it turns out that they are in any case equivalent.

    So my hope would be that a machine that encounters such ambiguity is able either to abstract away from it to a more general concept, or to let the ambiguity pass whilst acknowledging that the interpretations it admits are equally valid and possibly all intended. Without this latter observation much of poetry would be impossible.
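As a toy illustration of the first point, suppose the three sentences have already been parsed into simple facts plus an all-X-are-Y rule; a naive forward-chainer can then spot the contradiction mechanically. Everything here (the string representation, the function names) is hypothetical, just to make the idea concrete:

```javascript
// Toy forward-chaining consistency check over parsed statements.
// A fact is a string like "man(socrates)"; a rule encodes
// "all X are Y" as a mapping between predicates; `negations`
// lists statements the text asserts to be false.
function isConsistent(facts, rules, negations) {
  const known = new Set(facts);
  // Apply "all X are Y" rules until no new facts appear.
  let changed = true;
  while (changed) {
    changed = false;
    for (const { from, to } of rules) {
      for (const fact of [...known]) {
        const m = fact.match(new RegExp(`^${from}\\((.+)\\)$`));
        if (m && !known.has(`${to}(${m[1]})`)) {
          known.add(`${to}(${m[1]})`);
          changed = true;
        }
      }
    }
  }
  // A denied statement that is also derivable is a contradiction.
  return !negations.some((n) => known.has(n));
}

// "All men are mortal. Socrates is a man. Socrates is not mortal."
const consistent = isConsistent(
  ["man(socrates)"],
  [{ from: "man", to: "mortal" }],
  ["mortal(socrates)"] // the denied claim: NOT mortal(socrates)
);
// `consistent` is false: the machine has caught the logical error.
```

Real scientific discourse would of course need far richer logics than this, but the shape of the check – derive consequences, then compare them against what the text denies – is the same.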


So once again I return to this blog, and I’m going to start another post with an empty promise to myself to update it more often. At some point since the last post I silently changed the name and subtitle of this blog – feel free to lampoon as you see fit. Here are five things I’ve been up to since last time:

  1. Handwriting tweets using the Wacom Bamboo Pen and Touch tablet.
  2. Creating a revised version of the chem visualiser gadget – it’s not ready for the samples gallery yet but I’m working on it.
  3. I started a FriendFeed conversation about minimal scientific artefacts which I then synthesised into a Wave – I think the minimality criterion is actually quite powerful, for reasons I go into in the Wave.
  4. We had a Wave hack day at RAL which was very successful.
  5. I bought a larger antenna for my wifi card – I know this sounds really minor but it’s made a massive difference to my workstation connectivity, which in turn has made me a happy bunny.

So prepare for the deluge – I’m back in the blogosphere and this time I have something to say. Hopefully…

Really quick post about this because I’m really squeezed for time (I shouldn’t have been working on this today but couldn’t resist it, especially when I worked out how it could be done).

I’ve now got a proof of concept \LaTeX gadget for Google Wave to try out:

(Image of \LaTeX Gadget to be inserted here)

The URL is here:

To install click the jigsaw icon when editing a Wave. Enjoy!

I have to say a massive thanks to Cameron Neylon for this because without his support in getting me a Wave account this wouldn’t have been possible. I’ll blog in more detail about my first impressions hopefully later this week.


Had a great day in Oxford on Friday at Social Media Convention 2009. I was hoping to blog during the conference, but given the number of issues and ideas being thrown about at the time, that now seems overly optimistic.

Below are summaries of what I think are the main themes of the day from my own perspective.

Conference-craft, backchannels and the battle for attention space

This was the first conference I had been to where I’d brought a laptop, and so it was my first real exposure to live-tweeting, although I had seen the feeds at other conferences. Perhaps because I was unfamiliar with the process (or maybe I was just a little tired after an early start that day), I found it incredibly difficult at first to keep up with both the discussion from panelists and from twitterers.

In a way this served to make the sessions more practical than they otherwise would have been, since quite a bit of the “official” discussion centered around how the backchannel can often be more important. Another issue that got discussed, I think started by Nigel Shadbolt, was how we as individual users find it increasingly difficult to service all the demands on our attention that social media generates. “The battle is for our attention space.” One important metric I picked up on was that 20 minutes of video get uploaded to YouTube every second. The discussion emphasised too that there is an incredible amount of noise in these (back)channels.

Since the same hashtag (#oxsmc09) was used for both of the parallel sessions there was an interesting multiplexing of comments from differing discussions. However, the discussions were not so dissimilar that they didn’t make sense in the context of the one I was listening to. During the first session one member of the audience, when asked what their twitter id was, said they had never used twitter, to which there was rapturous applause from the rest of the audience.

During the session on “The growth of the corporate blog – ‘Letting go’ of information control or maintaining the official line?” I took a different approach, with less twittering; instead I created a mind map which you can download from here (please note: to read this file you will need a copy of Freemind installed on your machine). A slightly unreadable PDF rendering is available from here. [Disclaimer: these documents are released under the same Creative Commons BY-NC-SA License as the rest of this site at time of publication. I lay no claim to, and take no responsibility for, the accuracy of the details of who said what and when during the discussion those documents purport to portray, but would be more than happy to correct any errors that might have crept in.]

Barriers to entry and flat namespaces

One take-home message from the points Bill Thompson (@billt) made was that breaking down barriers to entry is what facilitates innovation by people who could never have imagined doing the things they now do. A quote I liked from him was “There are no conferences about fax machines.” (i.e. it’s virtually obsolete technology).

Nigel Shadbolt, perhaps unsurprisingly, argued that unusual names help to unify the otherwise flat namespace used across all the various social media sites people use. I wondered about the merits of a more hierarchical namespace along the lines of the Domain Name System. However, this issue is probably less important than solving the problem of maintaining a genuine, authentic and verifiable online persona/identity. For all users of the web this is important, but for scientists in particular, who are attempting to build a reputation through work published online, the issue is highly relevant.

Making Science Public – some thoughts on the panel session

Cameron Neylon drove home the point that we all have a stake in science since it is funded from government coffers, and thus we should be encouraging scientists to engage with us, and with each other, in ways which make their methods and results more transparent and available.

I think Ben Goldacre accurately described the current state of science communication and public understanding when he said that the “Mainstream media cover science badly.” I totally agree. Most mainstream science programs on TV, for instance, are a total joke and have next to zero pure science content. They could more accurately be reclassified as technology programs, because that’s where their focus mostly lies.

Maxine Clarke, setting out her position at the start of the discussion, explained that Nature’s role is essentially to manage the peer-review process and heavily filter the stream of possible publications. Later in the discussion the other panelists gave counter-arguments along the lines that readers are actually quite good at this filtering by themselves anyway.

I was interested to hear Cameron plug a book which I think is Beautiful Data from O’Reilly. He cited this as giving examples of the advantages of online science. I’ll have to get me a copy.

I know there was someone in the audience from GalaxyZoo because Cameron gave them a shout-out, and they retweeted one of my comments about how citizen science could be key to engaging anyone who’s currently not a scientist. I was hoping to talk with them following the session but they left before I had a chance.

What the rest of the blogosphere is saying

The tweeter who captured most of the limelight and put the “disruptive” back into “disruptive technologies” was @caffeinebomb. You can read her blog post about the event here.

There’s a really good in-depth summary of some of the sessions on Sara Fletcher’s blog. Another comment piece and link nexus is Brian Kelly’s post about the event.


Please note: this post is likely to change a few times before it stabilizes as I try out different ideas.

Last night I fixed some configuration problems with the \LaTeX settings for this blog and so I thought it might be a good opportunity to get to know how to use xy-pic better.

This example is the associativity axiom of composition for categories and by definition it commutes:

\xymatrix{A \ar[r]^f \ar[dr]_{f \circ g}& B \ar[d]^g \ar[dr]^{g \circ h} \\& C \ar[r]_h & D }

It clearly shows how associativity is really just a higher-order form of commutativity. Commutativity normally denotes the exchangeability of operands; associativity denotes the commutativity of operator application.
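In equational form – writing composition in the diagrammatic (left-to-right) order used in the arrow labels of the diagram above – the fact that the square commutes says precisely that

(f \circ g) \circ h = f \circ (g \circ h)

i.e. the path through C agrees with the path through B.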

I was interested to see if I could draw 2-morphisms from n-category theory, mainly because I wanted to write some notes about this as I watched a video lecture (about which more later). With a bit of help from here I got it working, although it did require a couple of tweaks to the site’s config, but here it is:

 \xymatrix{ A\rtwocell^f_g & B }

For now all of this works well enough, but I have been struck by how xy-pic tends to focus heavily on the presentational aspects of drawing diagrams. I would prefer a package that allows me to think and express in terms of the diagram structure, i.e. what connects to what and how, and then renders that definition based on certain general rules. It could be argued that this lack of semantic emphasis stems from the fact that the package aims to be very general-purpose in the types of diagrams it allows (I have seen cobordisms and knots typeset using it), but my gut feeling is that there is enough in common between these structures to build a common semantic diagram description language. It almost certainly already exists.


I’m on my way to Oxford Social Media Convention 2009, which is being held at the Saïd Business School in the University. I have to say a big thanks to the organizers of the event for squeezing me in at the last minute; it’s good to hear that they have been over-subscribed.

The itinerary for the day is given here. I’m particularly interested to see Parallel Session II: Making science public: data-sharing, dissemination and public engagement with science. Hopefully they will allow live-blogging so I can update my thoughts during the sessions, but anyway it should be a fun day.


Just got around to seeing the Google Tech Talk that Nico Adams and Jim Downing did earlier this summer.

A great introduction to Semantic Web technologies as they apply to any of the sciences. However, the biggest take-home message is that the technology is all for naught if data providers are not willing to make their data open.

Check out from around 45 minutes onwards for a description of the Lenfield build system, which is something I prototyped last summer as an Ant build file and is now implemented in Clojure.

Clojure is a functional language which inter-operates very well with Java. There’s a question from the audience asking about Haskell, but Jim replies that inter-operability with Java is poor in Haskell, and that whereas Clojure is dynamically typed, Haskell is strongly-typed – strong typing being apparently “so last year”.

That reminds me: next month Simon Peyton-Jones will be giving this year’s Strachey Lecture at the Oxford Computer Lab. I think it’s a public lecture, in which case I’m definitely going. I was hoping to go to Oxford Social Media Convention 2009 this Friday but was too late in registering.


Just as I was writing this blog post there has been some development on this from Number 10. You can read Gordon Brown’s full statement here.

There’s a petition on the 10 Downing Street website running until 20 January 2010 to gain greater recognition for Alan Turing as well as a formal pardon for his treatment under the law of the time for homosexuality.

The purpose of the petition is as follows:

Alan Turing was the greatest computer scientist ever born in Britain. He laid the foundations of computing, helped break the Nazi Enigma code and told us how to tell whether a machine could think.

He was also gay. He was prosecuted for being gay, chemically castrated as a ‘cure’, and took his own life, aged 41.

The British Government should apologize to Alan Turing for his treatment and recognize that his work created much of the world we live in and saved us from Nazi Germany. And an apology would recognize the tragic consequences of prejudice that ended this man’s life and career.

Read more on the BBC News Website.

If you’re a British citizen or resident please consider adding your signature to this petition.



Creative Commons Attribution-NonCommercial-ShareAlike 2.0 UK: England & Wales
This work by Daniel Hagon is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 2.0 UK: England & Wales.