About a year ago, I wrote a post when I got back into blogging and RSS. I’m still doing both of those things, but I’ve also been building a few more projects.
Some of the best things you can build solve problems for yourself, and I wanted to build something that would collate all the exhibitions on at major cultural institutions in London. Normalised, searchable, filterable, and readable.
Nothing too wild, I thought. I’d just need to combine the RSS feeds that the respective sites publish for their upcoming exhibitions. To my surprise, none of them had RSS feeds.
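Had those feeds existed, combining them would have been a small job. Here’s a minimal sketch of what I had in mind, using only Python’s standard library — the two feeds below are made up, since the real institutions don’t publish any:

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

# Two hypothetical RSS 2.0 feeds, as the institutions might publish them.
TATE = """<?xml version="1.0"?>
<rss version="2.0"><channel><title>Tate</title>
<item><title>Exhibition A</title>
<pubDate>Mon, 06 Jan 2025 09:00:00 GMT</pubDate></item>
</channel></rss>"""

NHM = """<?xml version="1.0"?>
<rss version="2.0"><channel><title>Natural History Museum</title>
<item><title>Exhibition B</title>
<pubDate>Tue, 14 Jan 2025 09:00:00 GMT</pubDate></item>
</channel></rss>"""

def items(feed_xml):
    """Yield (source, title, date) for every <item> in an RSS 2.0 feed."""
    channel = ET.fromstring(feed_xml).find("channel")
    source = channel.findtext("title")
    for item in channel.iter("item"):
        yield (source,
               item.findtext("title"),
               parsedate_to_datetime(item.findtext("pubDate")))

def merged(*feeds):
    """Combine items from all the feeds, newest first."""
    return sorted((i for feed in feeds for i in items(feed)),
                  key=lambda entry: entry[2], reverse=True)

for source, title, date in merged(TATE, NHM):
    print(f"{date:%Y-%m-%d}  {source}: {title}")
```

That’s the whole appeal of the format: once everyone publishes the same shape of data, normalising and merging it is a dozen lines of code.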
Interoperability on The Internet
I’ve made websites for half my life, and I’ve seen many things come and go. With the recent breakout of AI fever, one thing that came back to me was the idea of The Semantic Web.
On that page, there’s an interesting quote from Tim Berners-Lee, the inventor of the World Wide Web:
I have a dream for the Web in which computers become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A “Semantic Web”, which makes this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The “intelligent agents” people have touted for ages will finally materialize.
Intelligent Agents? Although that sounds a lot like the current wave of AI chatbots, I think it could also mean something else.
Blogging as a standard
I remember when most major sites had RSS feeds. Big organisations rallied around the idea. My final project at university was a graph dashboard with some very simple sentiment analysis of RSS feeds from major news sites.
That was 2011. I was able to build something that could pull in data from all these different sources and display it in a way that was useful to me.
Trying to pull together data for my exhibitions project left me disappointed. The Tate, Natural History Museum, Wellcome Collection, V&A, and many others were feedless. It’s been a long time since I saw the little orange badge, but I expected better from these cultural institutions.
Meaning in the madness
I think that blogs and RSS were a little taste of this semantic web.
Now, we’ve got many sophisticated ways of parsing and using this data, but it’s all happening at the OS level, controlled by Apple, Google, Microsoft, and Amazon.
Google extended the Atom and RSS specs with Google Data (GData) and its Kinds. There are some great extensions of content here — event start and end dates, locations, quality ratings for reviews, and people associated with the content.
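This is exactly the kind of structure my exhibitions project needed. A quick sketch of how it could work, parsing a GData-style Atom entry with the standard library — the entry itself is invented, and I’m assuming GData’s `gd:when` and `gd:where` elements here:

```python
import xml.etree.ElementTree as ET

NS = {"atom": "http://www.w3.org/2005/Atom",
      "gd": "http://schemas.google.com/g/2005"}

# A hypothetical exhibition entry using GData's event extensions.
ENTRY = """<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:gd="http://schemas.google.com/g/2005">
  <title>Hypothetical Exhibition</title>
  <gd:when startTime="2025-03-01" endTime="2025-06-30"/>
  <gd:where valueString="Exhibition Road, London"/>
</entry>"""

def exhibition(entry_xml):
    """Pull title, dates, and venue out of a GData-style Atom entry."""
    entry = ET.fromstring(entry_xml)
    when = entry.find("gd:when", NS)
    return {
        "title": entry.findtext("atom:title", namespaces=NS),
        "start": when.get("startTime"),
        "end": when.get("endTime"),
        "venue": entry.find("gd:where", NS).get("valueString"),
    }

print(exhibition(ENTRY))
```

Start and end dates, a venue, a title — everything needed to make a listing searchable and filterable, in a handful of well-known elements.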
Kinds are a great idea. They’re what I hoped we’d have by now.
It’s just a shame that using and remixing the web is something only big companies are capable of.