Bitly Social Data APIs

We just released a bunch of social data analysis APIs over at bitly. I'm really excited about this, as it offers developers the power to use social data in a way that hasn't been available before. There are three types of endpoints, and each one is awesome for a different reason.

First, we share the analysis that we do at the link level. Every developer working with data from the web runs into the same set of problems: what are the topics of the URLs people are sharing? What are their keywords? Why should you rebuild this infrastructure when we've done it already? We've also added a few bits of bitly magic; for example, you can use the /v3/link/location endpoint to see where in the world people are consuming that information.
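A minimal sketch of calling it from Python (the base URL, the auth parameter, and the response shape shown here are illustrative assumptions; the API documentation has the authoritative details):

```python
import requests

API_BASE = "https://api-ssl.bitly.com"  # assumed base URL

def link_locations(link, access_token):
    """Fetch the geographic breakdown of clicks on a bitly link."""
    resp = requests.get(
        API_BASE + "/v3/link/location",
        params={"link": link, "access_token": access_token},
    )
    resp.raise_for_status()
    return resp.json()["data"]

# Example (hypothetical link and token):
# print(link_locations("http://bit.ly/xxxxxx", "MY_ACCESS_TOKEN"))
```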

Second, we’ve opened up access to a realtime search engine. That’s an actual search engine that returns results ranked by current attention and popularity. Links are only retained for 24 hours, so you know that anything you see is actively receiving attention. If you think of bitly as a stream of stories that people are paying attention to, this search API offers you the ability to filter the stream by criteria like domain, topic, or location (“food” links from Brooklyn is one of my favorites) and pull out the content, in realtime, that meets your criteria. You can test it out with a human-friendly interface at rt.ly.
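From code, a query might look something like the sketch below; the /v3/search path and the filter parameter names are placeholders for illustration rather than the documented interface.

```python
import requests

def realtime_search(query, access_token, **filters):
    """Search the last 24 hours of attention-ranked links."""
    params = {"query": query, "access_token": access_token}
    params.update(filters)  # e.g. domain="nytimes.com", cities="brooklyn"
    resp = requests.get(
        "https://api-ssl.bitly.com/v3/search",  # hypothetical path
        params=params,
    )
    resp.raise_for_status()
    return resp.json()["data"]["results"]

# "food" links that Brooklyn is reading right now (parameter names assumed):
# realtime_search("food", "MY_ACCESS_TOKEN", cities="brooklyn")
```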

Finally, we asked the question: what is the world paying attention to right now? We have a system that tracks the rate of clicks (a proxy for attention) on phrases contained within the URLs being clicked through bitly. We can then see which phrases are currently receiving a disproportionate amount of attention. We call these "bursting phrases", and you can access them with the /v3/realtime/bursting_phrases endpoint. It's analogous to Twitter's trending topics, but based on attention (what people do), not shares (what they say), and it spans the entire social web.
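Pulling them down is a single GET (the auth parameter and response shape here are assumptions; see the docs):

```python
import requests

def bursting_phrases(access_token):
    """Phrases currently getting a disproportionate share of clicks."""
    resp = requests.get(
        "https://api-ssl.bitly.com/v3/realtime/bursting_phrases",
        params={"access_token": access_token},
    )
    resp.raise_for_status()
    return resp.json()["data"]

# print(bursting_phrases("MY_ACCESS_TOKEN"))
```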

I’m extremely excited to see what people build with these tools.


Where’s the API that can tell me that this photo contains a puppy and a can of Coke?


Photo by Ahmad van der Breggen on Flickr.

We’ve gotten very good at extracting and disambiguating entities from text data. You can license a commodity system, and there are APIs and even open source tools that work fairly well.
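For example, NLTK ships a named entity chunker that works out of the box (a minimal sketch; it extracts entities but doesn't disambiguate them, and it assumes NLTK's bundled data models are already downloaded):

```python
import nltk

def extract_entities(text):
    """Return (entity, type) pairs found in a chunk of text."""
    tree = nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(text)))
    return [
        (" ".join(word for word, tag in subtree.leaves()), subtree.label())
        for subtree in tree.subtrees()
        if subtree.label() != "S"  # skip the root of the parse tree
    ]

# extract_entities("A puppy knocked over a can of Coke in Brooklyn.")
# returns something like [("Coke", "ORGANIZATION"), ("Brooklyn", "GPE")]
```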

However, a large percentage of content that people share is not primarily text (a back-of-the-envelope guess says around 18%), and we currently have very little automated insight into that content.

I know this is a very hard problem, but I’m continuously surprised by how few people seem to be working on it. Any ideas?


Yahoo OpenHackNYC: The Del.icio.us Cake

Last weekend Yahoo came to New York for an Open Hack Day, and it was great!

I was invited to speak on a panel on semantic metadata, moderated by Paul Ford (harpers.org) along with Marco Neumann (KONA) and Paul Tarjan (Yahoo/Search Monkey). The panel was a lively discussion, and we got some great questions from the audience.

After the panel, I stayed around to participate in the hack competition. Yahoo! provided a fantastic space, with free-flowing coffee, snacks, comfy chairs and plenty of Yahoo folks and other hackers around to give advice and play foosball with. I teamed up with Diana Eng, Alicia Gibb, and Bill Ward to create the Del.icio.us Cake!

The cake is attached to a laptop via USB. A program running on the laptop accepts a delicious tag and retrieves a list of recent popular sites for that tag from the delicious API. It then iterates through each URL, downloads the page, and computes the sentiment of that page relative to the tag: basically, is the content of the page positive, neutral, or negative?
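In rough Python, the pipeline looks something like this (the delicious feed URL is from memory, and classify_sentiment here is a toy stand-in for the real classifiers described below):

```python
import requests

def popular_urls(tag):
    # delicious published per-tag JSON feeds of popular pages;
    # this URL is from memory, so verify it before relying on it.
    resp = requests.get("http://feeds.delicious.com/v2/json/popular/" + tag)
    resp.raise_for_status()
    return [item["u"] for item in resp.json()]  # "u" held the page URL

def classify_sentiment(text):
    # Toy stand-in for the real classifiers (sketched further down):
    # count hits from a tiny seed lexicon so the pipeline runs end to end.
    pos = sum(text.lower().count(w) for w in ("great", "love", "delicious"))
    neg = sum(text.lower().count(w) for w in ("awful", "hate", "bland"))
    return "positive" if pos > neg else "negative" if neg > pos else "neutral"

def run(tag):
    for url in popular_urls(tag):
        page = requests.get(url).text
        print(url, classify_sentiment(page))

# run("cupcakes")
```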

The signal is output to an Arduino (hidden in the middle of the cake), which turns on the appropriate set of LEDs. There are four sets of LEDs on the cake, one in each quadrant of the delicious logo: one for positive sentiment, one for neutral or inconclusive sentiment, one for negative sentiment, and, of course, one to let us know that the cake is turned on.
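On the laptop side, driving the LEDs is just one byte written over the USB serial connection; the port name, baud rate, and one-byte protocol below are all assumptions for illustration.

```python
import serial  # pyserial

# One command byte per LED state; this protocol is made up for the sketch.
SIGNALS = {"positive": b"+", "neutral": b"n", "negative": b"-"}

def light_leds(sentiment, port="/dev/ttyUSB0", baud=9600):
    """Tell the Arduino which set of LEDs to switch on."""
    conn = serial.Serial(port, baud)
    try:
        conn.write(SIGNALS[sentiment])
    finally:
        conn.close()

# light_leds("positive")
```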

I wrote the sentiment classifiers between around 3am and 6am Saturday morning, so they really were a hack! I trained them on movie review data, working from the assumption that 5-star reviews contain positive terms and 1-star reviews contain negative terms. I wouldn’t recommend this approach for a serious attempt at sentiment analysis, but it worked well enough.
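For the curious, the same approach fits in a few lines of NLTK; the sketch below uses the bundled movie_reviews corpus (its pos/neg labels standing in for the 5-star/1-star split) with a Naive Bayes classifier.

```python
import nltk
from nltk.corpus import movie_reviews  # requires nltk.download("movie_reviews")

def bag_of_words(words):
    """Simple presence features: {word: True}."""
    return {w.lower(): True for w in words}

train = [
    (bag_of_words(movie_reviews.words(fid)), label)
    for label in movie_reviews.categories()  # "neg" and "pos"
    for fid in movie_reviews.fileids(label)
]

classifier = nltk.NaiveBayesClassifier.train(train)
print(classifier.classify(bag_of_words("what a delicious , wonderful cake".split())))
```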

We won the food/hardware hack prize, shared with the awesome MakerBot team!

We had a great time creating and presenting the hack. Thanks, Yahoo, and most of all, thanks to Alicia, Bill, and Diana for a really fantastic, silly weekend.

Further coverage: