Tag Archives: web

How you are being tracked online

Source: Online tracking: A 1-million-site measurement and analysis

A very solid paper on how you are being tracked online. I had known about font fingerprinting before (as the list of fonts you have installed is actually pretty unique), but using audio filter fingerprinting, or WebRTC to get a list of IP addresses you can reach, is pretty novel. And a bit scary.
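To see why a font list is so identifying, consider what happens when you hash it. A conceptual sketch in Python (real trackers enumerate fonts from JavaScript by measuring rendered text; the font list here is made up):

    import hashlib

    # Hypothetical list of installed fonts a tracking script might detect.
    fonts = ["Arial", "Cabin", "Comic Sans MS", "DejaVu Sans", "Papyrus"]

    # Hashing the sorted list yields a stable, high-entropy identifier;
    # two visitors rarely share exactly the same set of installed fonts.
    fingerprint = hashlib.sha256("|".join(sorted(fonts)).encode()).hexdigest()
    print(fingerprint[:16])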

Which is all another good reason to install ad-blocking software today. uBlock is my current favorite.

SSL everywhere

One of my New Year's resolutions was to put more crypto into the world. Be it because of state actors or rogue ISPs, I think the world would be a better place with a bit more cryptography in it.

As part of this, I just converted the two websites that I run, dague.net and mhvlug.org, to SSL only. I'd had an SSL cert on the admin portion of dague.net for a while, but decided there was no reason not to make all traffic SSL.

Getting Certs

You can get certificates from tons of places. I had bought a $12/yr cert for dague.net through Namecheap. For mhvlug.org I used StartSSL, which provides free one-year certs for individual hosts. They have a process for signing up and doing some automatic verification that you own the domain in question, and then you are off to the races. Their process is about as easy as SSL management tends to be, and there are good instructions for installing the cert into Apache.
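Whatever CA you pick, the first step looks about the same: generate a private key and a certificate signing request to hand them. A sketch (the filenames are mine, not anything StartSSL prescribes):

    openssl req -new -newkey rsa:2048 -nodes \
        -keyout mhvlug.org.key -out mhvlug.org.csr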

IPv4 setup

SSL comes from a time when the IPv4 address space looked small but manageable, before it became clear that the median number of IP addresses per human on earth would be 5–10. Oh how naive we were.

As such, the base protocol has no equivalent of vhosts, which means 1 hostname == 1 IP address. dague.net and mhvlug.org live on the same Linode, which means I need to carry a second IPv4 address for compatibility.
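Concretely, each SSL site needs its own address-based vhost in Apache. A sketch, with documentation addresses (192.0.2.x) standing in for my real ones:

    <VirtualHost 192.0.2.10:443>
        ServerName dague.net
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/dague.net.crt
        SSLCertificateKeyFile /etc/ssl/private/dague.net.key
    </VirtualHost>

    <VirtualHost 192.0.2.11:443>
        ServerName mhvlug.org
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/mhvlug.org.crt
        SSLCertificateKeyFile /etc/ssl/private/mhvlug.org.key
    </VirtualHost>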

In 2006 there was an approved extension to TLS called SNI (Server Name Indication), which would bring SSL to the world of vhosting. It's largely supported; however, there are some substantial holdouts, including:

  • Android < 3 – there are still enough Android 2 devices out there that I don't want to kill that off
  • Python < 3.3 – backporting SNI to 2.x was considered a “feature” and rejected, which means Python 2.x automation tools are directly an impediment to SSLing the web, as any Python web service client will fail unless it is on Python 3.3 or later (see the sketch after this list). (We seriously need a Python 2.8)
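For the curious, here is what an SNI-capable client looks like on the Python side. A minimal sketch, assuming Python 3.4+ for ssl.create_default_context(); on 2.x there is no server_hostname argument to pass, which is exactly the problem:

    import socket
    import ssl

    # server_hostname is what puts the hostname into the TLS handshake,
    # letting the server pick the right certificate for the vhost.
    ctx = ssl.create_default_context()
    with socket.create_connection(("dague.net", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="dague.net") as tls:
            print(tls.getpeercert()["subject"])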

IPv6 setup

Both of these domains are IPv6 enabled. In Apache this means you need to duplicate the SSL configuration for IPv6 as well. Oh, and you need a couple more IPs (I only had 1 on the box). Linode helpfully allocated me a /64 for my box, so now I can IPv6 to my heart's content.
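The duplication itself is simple: listen on the IPv6 address too and add it to the vhost. A sketch along the same lines as before (documentation addresses again, and the cert paths are mine):

    Listen 192.0.2.10:443
    Listen [2001:db8::10]:443

    <VirtualHost 192.0.2.10:443 [2001:db8::10]:443>
        ServerName dague.net
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/dague.net.crt
        SSLCertificateKeyFile /etc/ssl/private/dague.net.key
    </VirtualHost>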

What stands between us and an all-SSL internet?

SSL setup is a little harder than just throwing up a web server. That being said, it's not that bad. I realistically think the IPv4 shortage, and the failure of things like Python to fix the SNI issue in the versions people have deployed, are real problems. Basically, non-SNI bots won't be able to find these sites; they'll fall back to the default site.

At this point I’m not going to launch anything new that’s not SSL enabled. SSL should be our default as the internet community, and right now it only costs a small amount of time and an extra IP address.

Never known an open web

Recently, a lot of people that I admire and look up to have raised their voices, advocating for getting the Internet back to what it once was. An open web. A web we shared and owned together. The old web was awesome.

It sure sounds awesome. Currently, our networks and our personal data are controlled by major corporations with no respect for privacy. Silicon Valley, that so-called tech hotbed of “innovation” and “disruption,” is by most reports becoming a culture of inequality and vapidity. Getting back to the founding open standards of the web is, I'm told, a solution to all of this. The web should be a place where we can own our data, where our best developers focus on solving the problems we need to solve as a democratic society. An open web accepts all people and creates a culture of inclusion.

Again, sounds great. As a webmaker, I want an open web. But as someone who has never experienced that, I don’t know where to begin in making it. I’m not sure simply reverting back to what we had is the right path if we want to include people who have never experienced the open web or understand its principles.

via I’m 22 years old and what is this. — Medium.

It's interesting to realize that digital natives have basically only known a SaaS web, and it raises the question of how we can move forward when the expectation is that the platform is closed and controlled by a small number of interests.

Why you need an API

I've been playing around with ThinkUp, which you should too if you value any content you create for Twitter or Facebook. For the Twitter side they've got this nice graph of clients you've used. Here is mine:

This, as far as I'm concerned, explains why you need an API. Less than 10% of the content that I've created on Twitter comes from their official web interface. And even my most used interface only represents half of my content. This is only possible because Twitter has been API-strong since the beginning.

If you are building a platform to publish information, it has to have a web services API. Otherwise you are going to massively limit the ways that your users can and will publish information.
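The bar is not high. If a third-party client can publish with one authenticated HTTP call, you have an API worth the name. A minimal sketch in Python; the endpoint and token are made up for illustration:

    import requests

    # Hypothetical publishing endpoint: the point is that any client,
    # not just the official web interface, can create content this way.
    resp = requests.post(
        "https://api.example.com/v1/posts",
        headers={"Authorization": "Bearer YOUR_TOKEN"},
        json={"text": "posted from a third-party client"},
    )
    resp.raise_for_status()
    print(resp.json())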

Adblock fixes for Poughkeepsie Journal

I finally got around to figuring out how to fix Adblock (Chrome or Firefox) on the Poughkeepsie Journal site. Without running Adblock, I find their site completely unusable. But because their design team does truly crazy things with JavaScript (and not in a good way), basically all the controls on the site don't work (including the search box) with Adblock's default rules. It turns out that by adding the following exception to your Adblock rules (@@ marks an exception, and || anchors the match at the domain), you get most of that function back:

@@||www.poughkeepsiejournal.com/odygel/*

And now you can do things like go to page 2 of a story, or even use the search box, and not be assaulted by popups and half-page expanding ads that push all the content off the page.

Google Font API

One of my favorite new tools for building websites is the new Google Font API. Using various hooks that exist in modern browsers, as well as in really ancient versions of IE, Google is building a huge library of openly licensed fonts. With a single CSS include, you can use any of these on your site and be assured that the bulk of the internet will see what you see.

My current favorite of the new fonts is Cabin, which I've found works incredibly well in headers. There are now several dozen fonts in their API, and it is growing all the time.
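That single CSS include really is all it takes. Pulling in Cabin for headers looks roughly like this (the font name is the only part you change):

    <link rel="stylesheet" type="text/css"
          href="http://fonts.googleapis.com/css?family=Cabin">
    <style>
      h1, h2, h3 { font-family: 'Cabin', sans-serif; }
    </style>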

In addition to using them on the web, you can download these fonts for local use. There is also a donate button when you download, so you can give some money to the font creator, which helps ensure more fonts get released under open licenses. I did this for the author of Cabin, as I love his eye for typography and want to see more of that out there.

With technology like this, HTML5, and brand new releases of Firefox and Internet Explorer, the promise of a web as good as native, or even better than native, is really starting to take form. Very cool stuff.

Weekend hacking… much progress

This was one of the most productive weekends of hacking that I've had in a long time. I finally managed to get all my fixes for the Drupal epublish module upstream. And, as a bonus, I wrote a first pass at exporting the data via Views, because I needed that in order to make the Featured Veggie block on the PFP website work. I'm actually hoping this is going to turn into an epublish 1.6 release before the end of the month.

On the PFP site I completed the last few things I committed to as part of the winter feature additions. This now means that instead of featured programs on the front page, we've got features promoting columns from our upcoming (or just released) newsletter. I also managed to pull off a bit of a back flip and, via the aforementioned epublish Views support, figure out which veggie was most recently highlighted in a newsletter, thus creating the featured veggie box.

The Mid-Hudson Astro site saw a bit of work as well. My lending module is now pretty robust, and ready for review to become an official Drupal module. Hopefully it will get some testing over the next month and we can start transitioning to it come April.

And lastly, I managed to connect up with some of the NY State Senate developers working on open government initiatives. Hopefully we can get one of them to come down and give a lecture at MHVLUG, as I think a local talk on open government would be really spectacular.

On the quest for Doonesbury

Last night I spent nearly 2 hours in the Doonesbury archive looking for a strip which I remember reading when I had just started working at IBM. This morning I figured I'd keep going back through the archive on the off chance that it was earlier than I thought; it was. From August 4th, 1996:

This is one of the easiest ways to explain to people what spiders do on the internet.

Making the internet a better place

When it comes to the Internet, there is something we've all done that's bad for us. You know you've done it. You've probably done it when people weren't watching, when you were all alone. You know what I'm talking about: reading comments on websites.

You find yourself involved in an article that you may or may not agree with, but find quite interesting. Something the author spent some time on, weighed different wordings, and tried to shape into a coherent statement. They might even have provided helpful links and footnotes to let you learn more, or to back up their thoughts. You are so enthralled that your interest carries you right through to the end of the article. You want more, so you keep reading.

Wait, what was that? When did the article reference Nazis? What's this? A conspiracy of big pharma? But I thought this article was about nice places to go picnicking? The author is a Freemason? When the hell did Apple's new product become relevant to this article? Oh … no … I'm lost in a sea of comments written by crazy people. Help me, I'm getting dumber by the minute!

Well, there is a solution. There is this very nice add-on for Firefox called CommentBlocker. Install it, and the internet immediately becomes a better place. The average IQ of your reading experience goes up by at least 20 points, and your overall satisfaction interacting with the web goes way up. As a bonus, your faith in humanity goes up a couple of points, as the loud and angry trolls, the ones who aren't willing to put their names on their statements, no longer get to dominate the conversation. If there is a site where the comments are of high value, or you just need a little crazy in the afternoon to keep you awake, one click will bring the comments back, but for that site only.

Take back the web, don’t feed the trolls, use CommentBlocker.