
Sunday, 4 December 2022


Saturday, 5 November 2022


Wired magazine 1.01 was published in 1993 by Nicholas Negroponte & Louis Rossetto.

Every issue contained a 'Mind Grenade' inside the front cover, and the one from the very first issue (1.01) is -- in hindsight -- creepy. Here it is:

The 'mind grenade' from Wired 1.01, Circa March 1993.

Damn Professor Negroponte, you ring truer every day.

Thursday, 20 October 2022


Recommender systems are huge outside of Australia and the USA, so much so that most marketing managers now consider their optimisation as important as Search Engine Marketing (SEM). I can't believe we have totally dropped the ball on this one, and nobody on the other side of the planet, from Dubai to London, has bothered to tell us!

Anyways, here's the original seminal paper on recommender systems, directed and promoted by Andreas Weigend (ex-Stanford, market genius, inventor of Prediction Markets and The BRIC Bank, and Chief Scientist Emeritus). It's based on proper West Coast Silicon Valley AI, with a quality discussion of a number of related technologies and market-related effects that impact recommender systems.


Sunday, 17 July 2022


I've been playing with building a Swarm Intelligence simulator based on a Fourier-domain discretisation to schedule the placement of drones in 3D space and cars in 2D space. Here's a little video demo of its basic structure in action; on top of this sit some differential equations that capture the displacement field, and then the drone position coords:

LinkedIn post with a video demo of the simulator in structural mode

If you want to have a play with this class of sine wave, you might notice a simpler simulation in the background of this blog. It has a few extra features not normally seen in these types of simulation: instead of a single point being able to move along one axis (usually the Y-axis), every point in my simulation can move anywhere along the X, Y or Z axis. Take a look yourself: left-click and drag the mouse on the background (where the 3D simulation is happening) to rotate the simulation in realtime. Look below the surface to see the mesh; from above it you get a flat view.

For best effect, try full-screen browser, remove all content and view just the background wave simulation.
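The idea above can be sketched in a few lines: superpose travelling sine waves so that every mesh point gets a full (x, y, z) displacement rather than just a vertical one. This is a hedged illustration in Python with made-up wave parameters, not the actual simulator code behind the blog:

```python
import math

# Hypothetical sketch: each wave contributes a scalar sine value which is
# projected onto an arbitrary direction, so a point can be displaced along
# X, Y and Z at once. All parameters here are illustrative.

def displacement(px, py, t, waves):
    """Sum a set of travelling sine waves into one 3D displacement vector."""
    dx = dy = dz = 0.0
    for amp, kx, ky, omega, phase, (ax, ay, az) in waves:
        s = amp * math.sin(kx * px + ky * py - omega * t + phase)
        dx += s * ax  # each wave may push along any axis,
        dy += s * ay  # not just the usual Y-axis
        dz += s * az
    return dx, dy, dz

# Two example waves: one mostly vertical, one sloshing sideways.
waves = [
    (1.0, 0.8, 0.3, 2.0, 0.0, (0.0, 1.0, 0.1)),
    (0.4, 0.2, 1.1, 3.5, 1.2, (0.7, 0.2, 0.0)),
]
print(displacement(1.0, 2.0, 0.5, waves))
```

Evaluating this over a grid of (px, py) points at each animation frame gives exactly the kind of free-moving mesh described above.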

Saturday, 30 March 2019


How Google used vizualisation to become one of the world's most valuable companies

At VizDynamics we have done a lot of 'viz'-ualisation, so I’ve seen more than several lifetimes' worth of dashboards, reports, KPIs, models, metrics, insights and all manner of presentation and interaction approaches thereof.

Yet one Viz has always stuck in my mind.

More than a decade ago when I was post start-up exit and sitting out a competitive-restraint clause, I entertained myself by travelling the world in search of every significant thought leader and publication about probabilistic reasoning that I could find. Some were very contemporary; others were most ancient. I tried to read them all.

A much younger me @ the first Googleplex (circa 2002)

Some of this travelling included regular visits to Googleplex 1.0, back before they floated and well before anyone knew just how much damn cash they were making. As part of these regular visits, I came across a viz at the original ‘Plex that blew me away. It sat in a darkened hall in a room full of engineers on a small table at the end of a row of cubicles. On this little IBM screen was an at-the-time closely guarded viz:

The "Live Queries" vizualisation @ Googleplex 1.0

Notice the green data points on the map? They are monetised searches. Notice the icons next to the search phrases? More “$” symbols meant better monetisation. This was pre-NPS, but the goal was the same – link $ to :) then lay it bare for all to see.

What makes this unassuming viz so good?

It's purpose.

Guided by Schmidt’s steady hand, Larry & Sergey (L&S) had amassed the brainpower of 300+ world leading engineers, then unleashed them by allowing them to work independently. They now needed a way for them to self-govern and -optimise their continual improvements to product & revenue whilst keeping everyone aligned to Google's users-first mantra.

The solution was straightforward: use vizualisation to bring the users into the building for everyone to see, provide a visceral checkpoint of their mission and progress, and do it in a humanly digestible manner.

Simple in form & embracing of Tufteism, the bottom third of the screen scrolled through user searches as they occurred, whilst the top area was dedicated to a simple map projection showing where the last N searches had originated from. An impressively unpretentious viz that let the Data talk to one’s inner mind. The pictograph in the top section was for visual and spatially aware thinkers, under that was tabular Data for the more quantitative types. And there wasn’t a single number or metric in sight (well not directly anyway). Three obviously intentional design principles executed well.

More than just a Viz, this was a software solution to a plurality of organizational problems.

To properly understand the impact, imagine yourself for a moment as a Googler, briskly walking through the Googleplex towards your next meeting or snack or whatever. You alter your route slightly so you can pass by a small screen on your way through. The viz on the screen:

  • instantly and unobtrusively brought you closer to your users,
  • persistently reminded you and the rest of the (easily distracted) engineers to stay focused on the core product,
  • provided constant feedback on financial performance of recent product refinements, and
  • inspired new ideas

before you continued down the hall.

The best vizualisations humanise difficult data in a visceral way

This was visual perfection because it was relevant to everyone, from L&S down to the most junior of interns. Every pixel served a purpose, coming together into an elegantly simple view of Google's current state. Data made so effortlessly digestible that it spoke to one’s subconscious mind with only a passing glance. A viz so powerful that it helped Google to become one of the world’s most valuable companies. This was a portal into people's innermost thoughts and desires as they were typing them into Google. All this... on one tiny little IBM screen, at the end of a row of cubicles.

Thursday, 1 June 2017


Acland Street is the result of two years of research. As well as extended archival and social media research, more than 150 people who had lived, worked, and played in Acland Street were interviewed to reveal its unique social, cultural, architectural, and economic history.

Of course we got a mention on page 133:

Note the special mention under 'The Technology', page 133. Circa 1995, published 2017.

Tuesday, 1 September 2015



Wednesday, 29 October 2014


Here's a quick one to make the files available online from today's AI lecture at RMIT University. Much thanks to Lawrence Cavedon for making it happen.


  1. Lecture Notes (PDF)
  2. Course/grade/intelligence plate model example (Netica file)
  3. Output from sneezing diagnosis class exercise (Netica file)
  4. Burglar hidden Markov model example (Netica file)
  5. Same burglar HMM in Excel (XLS file)
Have fun and feel free to email me once you get your bayes-nets up and running!
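In the spirit of the burglar HMM exercise above, here is a minimal forward-algorithm (filtering) sketch. The states, sensor model and all the numbers are made up for illustration; the Netica and Excel files have the real ones:

```python
# Filtering in a two-state HMM: predict through the transition model,
# then update on each observation and normalise.

def forward(prior, transition, emission, observations):
    """Return P(state_t | obs_1..t) after filtering the observation sequence."""
    belief = dict(prior)
    for obs in observations:
        # Predict: push the current belief through the transition model.
        predicted = {
            s2: sum(belief[s1] * transition[s1][s2] for s1 in belief)
            for s2 in prior
        }
        # Update: weight by the observation likelihood, then normalise.
        unnorm = {s: predicted[s] * emission[s][obs] for s in predicted}
        z = sum(unnorm.values())
        belief = {s: p / z for s, p in unnorm.items()}
    return belief

prior = {"burglar": 0.01, "no_burglar": 0.99}
transition = {
    "burglar":    {"burglar": 0.7,  "no_burglar": 0.3},
    "no_burglar": {"burglar": 0.01, "no_burglar": 0.99},
}
emission = {                       # P(sensor reading | state)
    "burglar":    {"noise": 0.8, "quiet": 0.2},
    "no_burglar": {"noise": 0.1, "quiet": 0.9},
}

print(forward(prior, transition, emission, ["noise", "noise"]))
```

Two noisy readings in a row drag the burglar probability well above its 1% prior, which is exactly the behaviour the class exercise demonstrates.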

Thursday, 16 October 2014


It's always good to give a little something back, so each year I do some guest lecturing on data warehousing to RMIT's CS Masters students.

We usually pull a data warehouse box out of our compute cloud for the session so I can walk through the whole end-to-end stack from the hardware through to the dashboards. The session is quite useful and always well received by students.

This year the delightful Jenny Zhang and I showed the students MDX Runner, an abstraction used at VizDynamics on a daily basis to access our data warehouses. As powerful as MDX is, it has a steep learning curve and the result sets it returns can be bewildering to access programmatically. MDX Runner eases this pain by abstracting out the task of building and consuming MDX queries.

Given that it has usefulness far beyond what we do at VizDynamics, I have made arrangements for MDX Runner to be open-sourced. If you are running Analysis Services or any other MDX-compatible data warehousing environment, take a look - you will certainly find it useful.

Do reach out with updates if you test it against any of the other BI platforms. Hopefully over time we can start building out a nice generalised interface into Oracle, Teradata, SAP HANA and SSAS.

Saturday, 13 September 2014


In this post I share a formal framework for reasoning about advertising traffic flows. It is how black-box optimisers work, and it needs to be covered before we get into any models. If you are a marketer, the advertising stuff will be old hat; if you are a data scientist, the axioms will seem almost obvious.

What is useful is combining this advertising + science view and the interesting conclusions about traffic valuation one can draw from it. The framework is generalised and can be applied to a single placement or to an entire channel.

Creative vs. Data – Who will win?

I should preface this by saying my view on creative is that it is more important than the quality of one's analysis and media-buying prowess. All the data crunching in the world is not worth a pinch if the proposition is wrong or the execution is poor.

On the other hand, an amazing ad for a great product delivered to just the right people at the perfect moment will set the world on fire.

Problem Description

The digital advertising optimisation problem is well known: analyse the performance data collected to date and find the advertising mix that allocates the budget in such a way that maximises the expected revenue.

This can be divided into three sub-problems: assigning conversion probabilities to each of the advertising opportunities; estimating the financial value of advertising opportunities; and finding the Optimal Media Plan.

The most difficult of these is the assessment of conversion probabilities. Considering only the performance of a single placement or search phrase tends to discard large volumes of otherwise useful data (for example, the performance of closely related keywords or placements). What is required is a technique that makes full use of all the data in calculating these probabilities without double-counting any information.

The Holy Triumvirate of Digital Advertising

In most digital advertising marketplaces, forces are such that traffic with high conversion probability will cost more than traffic with a lower conversion probability (see Figure 1). This is because advertisers are willing to pay a premium for better quality traffic flows while simultaneously avoiding traffic with low conversion probability.

Digital advertising also possesses the property that the incremental cost of traffic increases as an advertiser purchases more traffic from a publisher (see Figure 2). For example, an advertiser might increase the spend on a particular placement by 40%, but it is unlikely that any new deal would generate an additional 40% increase in traffic or sales.

Figure 1: Advertiser demand causes the cost of traffic to increase with conversion probability

Figure 2: Publishers adjust the cost of traffic upward exponentially as traffic volume increases

To counter this effect, sophisticated marketers grow their advertising portfolios by expanding into new sites and opportunities (by adding more placements), rather than by paying more for the advertising they already have. This horizontal expansion creates an optimisation problem: given a monthly budget of $x, what allocation of advertising will generate the most sales? This configuration then is the Optimal Media Plan.

Figure 3: The Holy Triumvirate of Digital Advertising: Cost, Volume and Propensity

NB: There are plenty of counterexamples to this response surface observed in the wild. For example, with Figure 2, some placements are more logarithmic than exponential, while others are a combination of the two. A good agency spends their days navigating and/or negotiating this so that one doesn't end up overpaying.

To solve the Optimal Media Plan problem, one needs to know three things for every advertising opportunity: the cost of each prospective placement; the expected volume of clicks; and the propensity of the placement to convert clicks into sales (see Figure 3). This Holy Triumvirate of Digital Advertising (cost, volume and propensity) is constrained along a response surface that ensures that low cost, high propensity and high volume placements occur infrequently and without longevity.
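The Optimal Media Plan problem just described can be made concrete with a toy sketch: allocate a fixed budget across placements whose returns diminish as spend grows (the effect in Figure 2). The placement names, response curves and the greedy approach are all illustrative assumptions, not a real optimiser:

```python
import math

# Expected sales as a concave function of spend: diminishing returns,
# mimicking the increasing incremental cost of traffic from Figure 2.
placements = {
    "search_brand":   lambda spend: 40 * math.log1p(spend / 500),
    "search_generic": lambda spend: 25 * math.log1p(spend / 200),
    "display_news":   lambda spend: 10 * math.log1p(spend / 100),
}

def optimal_media_plan(budget, placements, step=100):
    """Greedily give each $step to the placement with the best marginal sales."""
    plan = {name: 0 for name in placements}
    for _ in range(int(budget // step)):
        best = max(
            placements,
            key=lambda n: placements[n](plan[n] + step) - placements[n](plan[n]),
        )
        plan[best] += step
    return plan

plan = optimal_media_plan(10_000, placements)
print(plan, sum(plan.values()))
```

Because each response curve is concave (marginal returns only fall as spend rises), the greedy increment is actually optimal here; it also reproduces the horizontal-expansion behaviour described above, spreading budget across placements rather than piling it onto one.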

For the remainder of this post (and well into the future), propensity will be considered exclusively in terms of ConversionProbability. This post will provide a general framework for this media plan optimisation problem and explore how ConversionProbability relates to search and display advertising.

Friday, 21 March 2014


Oh dear, the game is up. Our big secret is out. We should have a parade.

The Future of Modernity

This year is looking like the one when computer scientists come out and confess that the world is undergoing a huge technology-driven revolution based on simple probabilities. Or perhaps it's just that people have started to notice the rather obvious impact it is making on their lives (the hype around the recent DARPA Robotics Challenge and Christine Lagarde's entertaining lecture last month are both marvelous examples of that).

This change is to computer science what quantum mechanics was to physics: a grand shift in thinking from an absolute and observable world to an uncertain and far less observable one. We are leaving the digital age and entering the probabilistic one. The main architects of this change are some very smart people and my favorite super heroes - Daphne Koller, Sebastian Thrun, Richard Neapolitan, Andrew Ng and Ron Howard (no not the Happy Days Ron – this one).

Behind this shift are a clique of innovators and ‘thought leaders’ with an amazing vision of the future. Their vision is grand and they are slowly creating the global cultural change they need to execute it. In their vision, freeways are close to 100% occupied, all cars travel at maximum speed and the population growth declines to a sustainable level.

This upcoming convergence of population to sustainable levels will not come from job-stealing or killer robots, but from increased efficiency and the better lives we will all live, id est, the kind of productivity increase that is inversely proportional to population growth.

And then the world is saved... by computer scientists.

What is it sort of exactly-ish?

Classical computer science is based on very precise, finite and discrete things, like counting pebbles, rocks and shells in an exact manner. This classical science consists of many useful pieces such as the von Neumann architecture, relational databases, sets, graph theory, combinatorics, determinism, Greek logic, sort + merge, and so many other well-defined and expressible-in-binary things.

What is now taking hold is a whole different class of computer-ey science, grounded in probabilistic reasoning and with some other thing called information theory thrown in on the sidelines. This kind of science allows us to deal in the greyness of the world. Thus we can, say, assign happiness values to whether we think those previously mentioned objects are in fact more pebbly, rocky or shelly given what we know about the time of day and its effect on the lighting of pebble-ish, rock-ish and shell-ish looking things. Those happiness values are expressed as probabilities.

The convenience of this probability-based framework is its compact representation of what we know, as well as its ability to quantify what we do not(ish).

Its subjective approach is very unlike the objectivism of classical stats. In classical stats, we are trying to uncover a pre-existing independent, unbiased assessment. In the Bayesian or probabilistic world bias is welcomed as it represents our existing knowledge, which we then update with real data. Whoa.
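That "bias is welcomed" idea can be shown as a two-line Bayesian update: start with a prior that encodes existing knowledge, then update it with observed data. The numbers are purely illustrative, using the standard conjugate Beta-Binomial update:

```python
# Prior belief about a conversion rate: Beta(a, b), roughly "we've seen
# 3 conversions in 10 clicks elsewhere", centring the prior near 30%.
# This is the 'bias' that classical stats would refuse to start with.
a, b = 3, 7

# New data: 2 conversions in 20 clicks.
conversions, clicks = 2, 20

# Conjugate update: posterior is Beta(a + conversions, b + misses).
post_a = a + conversions
post_b = b + (clicks - conversions)

prior_mean = a / (a + b)
posterior_mean = post_a / (post_a + post_b)
print(prior_mean, posterior_mean)  # the data drags the belief downward
```

The prior mean of 0.3 gets pulled toward the observed 10% rate, landing at 5/30; more data would pull it further. That is the whole subjective-update loop in miniature.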

This paradigm shift is far more general than just the building of robots - it's changing the world.

I shall now show you the evidence so you may update your probabilities

A testament to the power of this approach is that the market leaders in many tech verticals already have this math at their heart. Google Search is a perfect example - only half of their rankings are PageRank based. The rest is a big probability model that converts your search query into a machine-readable version of your innermost thoughts and desires (to the untrained eye it looks a lot like magic).

If you don’t believe me, consider for a moment, how does Facebook choose what to display in your own feed? How do laptops and phones interpret gestures? How do handwriting, speech and facial recognition systems work? Error Correction? Chatbots? Emotion recognition? Game AI? PhotoSynth? Data Compression?

It’s mostly all the same math. There are other ways, which are useful for some sub-problems, but they can all ultimately be decomposed or factored into some sort of Bayesian or Markovian graphical probability model.

Try it yourself: pick up your iPhone right now and ask the delightful Siri if she is probabilistic, then assign a happiness value in your mind as to whether she is. There, you are now a Bayesian.

APAC is missing out

Notwithstanding small pockets of knowledge, we don’t properly teach this material in Australia, partly because it is so difficult to learn.

We are not alone here. Japan was recently struck down by this same affliction when their robots could not help to resolve their Fukushima disaster. Their classically trained robots could not cope with the changes to their environment that probabilities so neatly quantify.

To give you an idea of how profound this thesis is, or how far and wide it will eventually travel, it is currently taught by the top American universities across many faculties. The only other mathematical discipline that has found its way into every aspect of science, business and humanities is Greek logic, and that is thousands of years old.

A neat mathematical magic trick

The Probabilistic Calculus subsumes Greek Logic, Predicate Logic, Markov Chains, Kalman Filters, Linear Models, possibly even Neural Networks; that is, because they can all be expressed as graphical probability models. Thus logic is no longer king. Probabilities, expected utility and value of information are the new general purpose ‘Bayesian’ way to reason about anything, and can be applied in a boardroom setting as effectively as in the lab.
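One way to see the "subsumes logic" claim: classical logic is just probability with everything pinned to 0 or 1. Here an AND gate is written as a degenerate conditional probability table and queried like any other probability model; this is purely illustrative:

```python
# P(C = true | A, B) for C = A AND B: a CPT whose entries are all 0 or 1.
cpt_and = {
    (True, True): 1.0,
    (True, False): 0.0,
    (False, True): 0.0,
    (False, False): 0.0,
}

def p_c_true(p_a, p_b):
    """P(C = true) when A and B are independent with the given probabilities."""
    return sum(
        cpt_and[(a, b)]
        * (p_a if a else 1 - p_a)
        * (p_b if b else 1 - p_b)
        for a in (True, False)
        for b in (True, False)
    )

print(p_c_true(1.0, 1.0))   # certain inputs recover Boolean AND: 1.0
print(p_c_true(0.9, 0.8))   # uncertain inputs give a graded answer: 0.72
```

With certain inputs the model behaves exactly like the logic gate; with uncertain inputs it degrades gracefully into a graded belief, which two-valued logic simply cannot express.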

One could build a probability model to reason about things like love; however, it's ill-advised. For example, a well-trained model would be quite adept at answering questions like “what is the probability of my enduring happiness given a prospective partner with particular traits and habits?”

The ethical dilemma here is that a robot built on the Bayesian Thesis is not thinking as we know it – it's just a systematic application of an ingenious mathematical trick to create the appearance of thought. Thus for some things, it simply is not appropriate to pretend to think deeply about a topic; one must actually do it.

We need bandwidth or we will devour your 4G network whole

These probabilistic apps of the future (some of which already exist) will be bandwidth-hogging monsters (quite possibly literally) that could make full use of low-latency fibre connections.

These apps construct real-time models of their world based on vast repositories of constantly updated knowledge stored ‘in the cloud’. The mechanics of this requires the ability to transmit and consume live video feeds, whilst simultaneously firing off thousands of queries against grand mid- and big-data repositories.

For example, an app might want to assign probabilities to what that shiny thing is over there, or if its just sensor noise, or if you should buy it, or if you should avoid crashing into it, or if popular sentiment towards it is negative; and, oh dear, we might want to do that thousands of times per second by querying Flickr, Facebook and Google and and and. All at once. Whilst dancing. And wearing a Gucci augmented reality headset, connected to my Hermes product aware wallet.

This repetitive probability calculation is exactly what robots do, but in a disconnected way. Imagine what is possible once they are all connected to the cloud. And then to each other. Then consider how much bandwidth it will require!

But, more seriously, the downside of this is that our currently sufficient 4G LTE network will very quickly be overwhelmed by these magical new apps in a similar way to how the iPhone crushed our 3G data networks.

Given that i-Devices and robots like to move around, I don't know whether FTTH would be worth the expense, but near-FTTH with a very high performance wireless local loop certainly would help here. At some point we will be buying Hugo Boss branded Oculus Rift VR headsets, and they need to plug into something a little more substantive than what we have today.

Ahh OK, what does this have to do with advertising?

In my previous post I said I would be covering advertising things. So here it is if you haven't already worked it out: this same probability guff also works with digital advertising, and astonishingly well.

There I said it, the secret is out. Time for a parade.

...some useful bits coming in the next post.

Thursday, 20 March 2014


Fab, I’m blogging.

A Chump’s Game

A good friend of mine, whilst working at a New York hedge fund, once said to me, “online advertising is a chump’s game”.

At the time he was exploring the idea of constructing financial instruments around the trade of user attention. His comment came from just how unpredictable, heterogeneous and generally intractable the quantitative side of advertising can be. Soon after, he recoiled from the world of digital advertising and re-ascended into the transparent market efficiency of haute finance; a world of looking for the next big “arb”.

What I Do

I am a data scientist and I work on this problem every day.

Over the last 15 or so years I have come to find that digital advertising is, in fact, completely the opposite of a chump's game – yes, media marketplaces are extraordinarily opaque and highly disconnected – but with that comes fantastically gross pricing inefficiencies exploitable in a form of advertising arbitrage.

The Wall Street guys saw this, but never quite cracked how to exploit it.

What you will find here

If you have spent more than a little time with me, then in between mountain biking and heli-boarding at my two favorite places in the world, you will have probably heard about or seen a probability model or two.

In the coming months I will be banging on about some of this, and in particular sharing a few easy tricks on how advertisers can use data to gain a bit of an advantage. With the right approach, it’s rather simple.
The concepts I will present here are already built into the black-box ad platforms we use daily, the foundations of which are two closely related assumptions:

  • Any flow of advertising traffic has people behind it who possess a fixed amount of buying power and a more elastic willingness to exercise it.
  • As individuals we are independent thinkers, but as a swarm, we behave in remarkably predictable ways.

My aim is that one will find the material useful with little more than a bit of Excel and one or two free-ish downloads. The approach is carefully principled, elegantly simple and astonishingly effective.

Achtung! This site makes use of in-browser 3D. If your computer is struggling, then you probably need a little upgrade, a GPU or a browser change. Modern data science needs compute power, and a lot of it.

The format is a mix of theory, worked examples and how-to, combined with a touch of spreadsheet engineering. A dear friend of mine - who has written more than a few articles for The Economist - will be helping me edit things to keep the technical guff to a minimum.

I am hoping along the way that a few interesting people might also compare their own experiences and provide further feedback and refinement. If it's well received then we might scale up the complexity.

So, if digital advertising is your game then stay tuned, this will be a bit of fun!

Tuesday, 17 August 1999


Back when Oracle 8i was a thing, Fibre Channel was all the rage, and Oracle Parallel Database was on the Oracle roadmap, I was called into Oracle HQ in Redwood City, Silicon Valley.

They'd pushed back the release of "Parallel" quarter after quarter, and a couple of senior engineers caught wind that I was the guy for custom-built memory managers and big, highly parallelised data structures on large SGI platforms. I had one data structure running on 16 racks of Silicon Graphics kit at a large data centre off Highway 101 for an investment bank (aka hedge fund), and I'd earned a reputation for being able to do parallel distributed read/write locking of vast data structures with all CPU threads running at full speed and without any mutex locking or blocking. So, I was summoned to Silicon Valley "for a chat".

Oracle had this grand idea for Oracle 8i to share fibre channel LUNs between hosts: your federated database would sit on one LUN, with multiple Oracle 8i instances on separate machines all accessing the same database in parallel (hence the name 'Parallel'). Oracle at the time was actively influencing the specs of fibre channel (FCAL), but they just couldn't get it to work -- so I was called in so they could pick my brains. The visit was fun, but I was no dummy, and I certainly wasn't going to give up the secrets of how to build the world's fastest computing systems.

I found the meeting quite entertaining, and it descended into an argument over Oracle's outrageous pricing. On a multi-CPU system CrayLinked together with other multi-CPU systems, why should I pay Oracle a licensing fee for every damn CPU when we had called in Mark Gurry (the guy who wrote the book on Oracle Performance Tuning) and tuned the crap out of Oracle so that it barely used a single CPU, maybe two under heavy load? I won the argument and secured special pricing from Oracle moving forward (possibly not what they had intended for our meeting - oh well, that's AP for you!)

A much more youthful looking me standing next to the Oracle lake after our meeting

Tuesday, 27 January 1998


Geekz on Demand (G.O.D) was a HR consultancy I started back in 1997 with Richard Taylor. At the time it was tech boom 1.0, and there was a dearth of talent that properly knew what the Internet was and how to get things onto it.

Enter The Geekbase, and it took off like a rocket, landing us in the news quite consistently:

Rowan and I on the cover of The Age, 27 Jan 1998.

Wednesday, 17 August 1994


I spent tech boom 1.0 putting as much as I could online.


Not unlike Steve Jobs, I started my tech-career with phone phreaking.

It started with the AFP banging on the front door of my Brighton home in 1993 for phone phreaking and hacking (of course, being underage, no conviction was recorded). We were quite prolific and formed a loose, nameless collective of the absolute best of the day. We were awash with so many zero-day c0dez and exploits that we had to invent "minus days". Feared by the rest of the hacking community, we had the ability to access any computer system or telephone network we desired. Even Julian Assange (aka 'prof') ph3ared us. Our threat-hunting was extraordinary: we learned how to come up with 5 new minus-day exploits in 24 hours, and we'd scan 1,000 telephone numbers on a bad day and 10,000 on a good one (basically an entire suburb). We even had a team of computer-illiterate guys with a van who spent their evenings going through corporate rubbish bins looking for manuals and other goodies.

Our innovations were countless, ranging from fax bombing telephone numbers (a DoS attack directed at a voice line/mobile), digital phreaking using diverters and pads, hacking network switches to create conference bridges (or 'party lines' as we called them), and drag-net hacking. We even developed a code of ethics which kept us firmly in the realm of 'explorers' and away from the less desirable labels of 'fraudsters' or 'terrorists'.

The Black Hat Code of Ethics
  • Don't take
  • Don't break
  • Don't change
  • Don't steal (eg, credit card fraud)

The general rule was that if you broke one of those rules, then you had to put it right before you were done with the system you had hacked into.

We weren't without oversight either. For example, a senior cryptographer from the DSD would dial into our party line and give me and my mates a fortnightly lecture on number theory and cryptography (which, at 13 years old, is something special that lives with you for the rest of your life). This went on for at least 9 months. We also had techs from Telstra watching over our activities, because we were basically pen-testing (penetration testing) digital telephony. Occasionally the techs would get in touch and give us a dressing down for some of our activities, so we had boundaries. The DSD guy also found out what was the latest and greatest thing in the black-hat world, so he could keep his finger on the pulse. A really chilled, very Australian and very hilarious symbiotic 'free flow of information' type of relationship was shared by all us insiders. Basically, we were a harmless bunch of Aussie bogans doing lots of hysterical phone pranks on a global scale for the most part. To outsiders, though, we were the pinnacle of the 'Trust no-one' ethos and were scary. Very scary. In the black-hat world, hax0rs would have wars, and we won every single one.

We became so internationally renowned, that the second ever DEFCON was held right here in Melbourne, and Captain Crunch (aka John Draper) flew out to meet us.


After that little event, I decided I was done with hax0ring and phreakx0ring and moved on to the next big thing: immersing myself in what would become known as 'The Tech Boom', being famous and doing haute-tech. It was so much more fun, and considerably less anti-social.

Me on the cover of the Computer Age, circa early 1995.

We were everywhere, putting anything and everything online, helping anyone and everyone understand what this new Internet thing was. I quickly became the poster boy for the Internet here in Melbourne, and by 1995 we had stood up Australia's first Netcafe on Acland Street. It was a hive of activity, and attracted everyone. I quickly caught the attention of the media, including Wired Magazine, but they didn't have any reporters in Australia. After a couple of calls, they did the next best thing: pro sponsorship. The Wired folk didn't usually do sponsorships, but someone at the office called Absolut Vodka, who sent us a crate of 6 Absolut Vodka bottles (including whichever one was on the back cover for that month), along with a fresh, air-freighted current-month edition of Wired Magazine, direct to The Netcafe. Back then, Wired was the bible, but it also arrived in Australia 3 months late thanks to being sea-freighted.

Me sitting at a PC in the upstairs room at The Netcafe, featuring Win95 and surrounded by Marcsta artworks
Business Review Weekly (BRW), June 1995

The Netcafe was a special place. With support from Michael Bethune from OzOnline, Adam from Standard Computers and of course Microsoft, we created a bunch of rooms above the Deluxe cafe on Acland Street where people could experience both The Internet and Windows 95. 95 was a good operating system; it felt like a Silicon Graphics or Sun workstation, and it was fast. We showed it to everyone that came in, taking Melbourne from zero knowledge about The 'Net to being as educated as anyone in Silicon Valley. It was an amazing time.

Even Jeff Kennett himself gave me a little tiny gold badge of Victoria as an acknowledgement of my contribution to the state.

During this period I crossed paths with Rob Furst, founder of the inky street-press music publication Beat Magazine. He quickly employed me as the e-editor, responsible for getting the street-press mag onto the Internet. I was still a kid at the time, and still studying at Brighton Grammar. How I fitted everything in I don't know, but I did. It was a great time and fun was had by all:

List of contributors to Beat Magazine, circa 1995. Note the e-editor :)

In 1996 a second luminary from Silicon Valley flew out to meet me -- Theodore ('Ted') Holm Nelson, inventor of hypertext. He had heard that something amazing was happening in Australia and came out here to take a look for himself. We chatted, solved a few problems, then he went on his way. I still have his business card today.