Recently, this ownedcore thread about Blizzard hiding identifiable data in World of Warcraft screenshots sparked a minor tweetstorm among the infosec folks I follow. While I’m not a WoW player, the thread got me thinking about hidden-in-plain-sight covert channels and where else they might be found.
While nowhere near as sophisticated as Blizzard’s steganography, I remembered seeing a few “easter eggs” where people had hidden ASCII messages in HTML comments for retentive types like myself to find. I usually spot them by just “viewing source” in Firefox when I’m bored (and I mean really bored), so I thought a fun short project would be a script that grabs comments from the command line, ready for use in more sophisticated tools or monitors.
At first I thought of putting something together with curl or wget and grep, but tiny projects like this are often perfect excuses to try something new, so instead of a shell script I decided to throw something together in Python. I’d never done any scraping with Python, but it seemed like a handy skill to have in my back pocket.
The result is the short script below. I won’t claim this is great code since Python isn’t my forte (yet), but it gets the job done. I used BeautifulSoup 4 and mechanize as mentioned in this stackoverflow post, plus a bit of comment-handling code straight from the BeautifulSoup 3 documentation.
While this doesn’t save much work if you’re just casually checking a site’s comments, it’s a great building block if you want to automatically watch for changes or hidden messages. (If you need some fun examples, check out The Oatmeal or Reddit.)
#!/usr/bin/env python
# coding: utf-8
import sys

from mechanize import Browser
from bs4 import BeautifulSoup, Comment

if len(sys.argv) != 2:
    sys.exit("usage: %s URL" % sys.argv[0])

# Fetch the page with mechanize's browser-like interface.
brwsr = Browser()
res = brwsr.open(sys.argv[1])
data = res.get_data()

# Pass an explicit parser so BeautifulSoup doesn't have to guess one.
soup = BeautifulSoup(data, "html.parser")

# Comment is a NavigableString subclass, so filtering the document's
# strings by type pulls out every HTML comment.
comments = soup.find_all(string=lambda text: isinstance(text, Comment))
for comment in comments:
    print(comment)
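As a sketch of the “monitor for changes” idea mentioned above, you can hash the extracted comments and compare fingerprints between runs. The version below uses only the standard library’s html.parser instead of BeautifulSoup (an alternative I’m substituting here to keep it dependency-free); the function names are mine, not from any library.

```python
import hashlib
from html.parser import HTMLParser


class CommentCollector(HTMLParser):
    """Collects every HTML comment encountered while parsing."""

    def __init__(self):
        super().__init__()
        self.comments = []

    def handle_comment(self, data):
        # data is the comment body without the <!-- --> delimiters.
        self.comments.append(data)


def extract_comments(html):
    """Return all HTML comments found in the given markup."""
    parser = CommentCollector()
    parser.feed(html)
    return parser.comments


def comments_fingerprint(html):
    """Hash the comments so two fetches can be compared cheaply."""
    digest = hashlib.sha256()
    for comment in extract_comments(html):
        digest.update(comment.encode("utf-8"))
    return digest.hexdigest()
```

Store the fingerprint after each fetch; if the next run produces a different one, a comment was added, removed, or edited since you last looked.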
If you haven’t played with BeautifulSoup or mechanize, check them out; both are handy for quick-and-dirty scraping of web data.
