Here at Metamarkets, we help our customers quickly make sense of big data sets. To put our solution to the test, I thought it would be interesting to analyze events surrounding the Wikipedia blackout on January 18th in protest of SOPA.
Let’s look at edit activity before and after the blackout:
As expected, edits dropped off significantly on January 18th, but they did not fall all the way to zero. This is because the blackout only affected English-language (en) articles.
Edits in other languages were unaffected and stayed fairly consistent, as you can see below:
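Conceptually, splitting edit activity out by language is just bucketing a stream of edit events by (language, hour). Here is a minimal stdlib-only sketch of that idea; the event tuples are hypothetical stand-ins for the Wikipedia edit stream, not actual data from this analysis.

```python
from collections import Counter
from datetime import datetime

# Hypothetical edit events: (timestamp, language, page) tuples standing in
# for the Wikipedia edit stream discussed above.
edits = [
    ("2012-01-18T02:00:00", "en", "SOPA"),
    ("2012-01-18T02:30:00", "de", "Costa Concordia"),
    ("2012-01-18T03:00:00", "fr", "SOPA"),
    ("2012-01-19T01:00:00", "en", "Costa Concordia"),
]

# Count edits per (language, hour) bucket by truncating timestamps to the hour.
counts = Counter(
    (lang, datetime.fromisoformat(ts).strftime("%Y-%m-%d %H:00"))
    for ts, lang, _page in edits
)

print(counts[("en", "2012-01-18 02:00")])  # 1
```

With buckets like these, comparing the `en` series against every other language before and after the blackout is a simple lookup per hour.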
Next, I was curious how different geographies would come back online after the blackout, so I looked at edits by city:
Sure enough, London and Bangalore were the most active cities once English-language articles became available again, followed by cities like New York.
It makes sense that pages covering recent news, such as the Costa Concordia disaster and the events surrounding SOPA itself, would be the first to get updated.
So why is this interesting?
First, I did this entire analysis (setup, data loading, processing, exploration, etc.) in one afternoon using Metamarkets.
Second, I wanted very granular access to the metrics to truly understand what was going on. In this case, I viewed data at the hourly level, but some of our customers analyze data in minute increments given the dynamic nature of their businesses.
Third, I was able to quickly pivot and re-orient my analysis based on the specific questions at hand. With Metamarkets, I can slice, dice, and drill into data in any order, without being constrained by pre-defined navigation paths.
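The slice-and-dice idea boils down to keeping the raw events dimensional and re-grouping them on demand. A minimal sketch of that, using hypothetical edit records and a made-up `pivot` helper (not the Metamarkets API):

```python
from collections import defaultdict

# Hypothetical raw edit events, each carrying several dimensions.
# Slicing by a different dimension is just re-grouping the same events.
edits = [
    {"hour": "22:00", "city": "London", "page": "SOPA"},
    {"hour": "22:00", "city": "Bangalore", "page": "Costa Concordia"},
    {"hour": "23:00", "city": "London", "page": "SOPA"},
]

def pivot(events, dimension):
    """Count events along an arbitrary dimension -- no fixed navigation path."""
    counts = defaultdict(int)
    for event in events:
        counts[event[dimension]] += 1
    return dict(counts)

print(pivot(edits, "city"))  # {'London': 2, 'Bangalore': 1}
print(pivot(edits, "page"))  # {'SOPA': 2, 'Costa Concordia': 1}
```

Because no aggregation path is baked in, the same events answer "which cities?" and "which pages?" equally well, which is the kind of ad hoc drill-down described above.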
Finally, with live data feeds, I can immediately analyze new data without waiting for pre-processing or re-calculation (not apparent from the screenshots above).
If you would like to play around with this demo yourself, you can sign up here.
While Metamarkets is initially focused on analyzing online advertising events, we think that our solution is broadly applicable to other industries and problem areas. We would love to hear your thoughts.