I was amusing myself with the androidify app on my evo on the way back from JFK on my day of failed flying. Enjoy!
I got myself some new temperature sensors for my hacked-together home thermal monitor. The software that runs this is still awful; that's on my project list. But at least now I'm able to figure out the thermal variance inside the house, which is broader than I'd have expected.
A couple of other changes were made while bringing the new sensors online as well.
The outdoor temperature sensor is now on the north side of the house. This should minimize how much of the day sunlight can hit the thing. While Oregon Scientific believes it can live in direct sunlight, my experience over the last two weeks is that it definitely cannot. You get a 4 – 6 degree spike under direct sunlight. I could almost use the spikes to generate a map of the trees on the south side of the house based on when their shadows hit the sensor. The short of it is that my reporting to Wunderground should be more accurate now.
The cold frame sensor is still at the far edge of reception, especially with the amount of earth the signal needs to go through. I built a tin foil reflector, which seems to be helping a little, but it still drops out from time to time, generating the square waves in the curve. Not much I can do about that.
If you want to know more about this project, this post is probably the best starting point.
NOVA just aired their special on Watson, the computer that is going to compete in the IBM Jeopardy challenge, and it's really good. Even my wife, who often waves that kind of thing off as "boring," was sucked in and glued to her seat for the entire program. They do a really good job of explaining some of the basics of how Watson works, and why this is incredibly hard to do.
NOVA is currently streaming this episode online, so you can watch it on your computer if you missed it when it aired.
I’m really looking forward to the 3 nights of matches (Feb 14 – 16). Match 1 will span nights 1 & 2, presumably to leave time to explain some of what’s going on to the audience, and night 3 will be the second match in its entirety. I know of viewing events at SUNY New Paltz (where I’ll be headed) and Bard College, and I assume many other locations around here, given that Watson itself lives about 50 minutes south of here at the Yorktown Heights research facility.
At the web design meetup the other day there was a bit of chat about editors afterwards, which turned into a normal geek fest of poking fun at each other for our choices. One new truth that came out of this: a good editor doesn’t make someone more productive, but it does allow them to become more productive.
As an emacs guy who can navigate around in vi reasonably well, I was always somewhat surprised that people actually use vi to program. But there are plenty of folks that hold that position, and are equally surprised that I use emacs. Over the years I tried to look for some common thread: what makes someone choose one or the other, and hold to that choice so firmly? A few years ago I came around to the best predictor. With probably ~90% accuracy you can predict which editor a person will use based on what editor their mentor used when they first got serious about computers. People pick up the tools of their ancestors. What looks like free will is basically dictated by heredity.
So that’s how you figure out which one people use. Which one is better? The answer to that is simple: whichever one you know the best.
There is this naive assumption that using a “better” editor will make you more productive. The answer is, maybe, but don’t stop there. If you aren’t willing to put in the time to learn how your editor is configured, and to take time every month to learn a new trick with it, then give up on vim or emacs and go back to gedit. Learning an editor well is going to mean learning how to do very complex activities with complex keystrokes. There just aren’t enough keys on the keyboard to be able to do what you want to do without combining them together.
Every couple of months I spend a bit of time learning a few more emacs tricks. It’s hard for me to even enumerate what these are, because they are just part of my normal workflow, like water: buffer switching, remote editing, multi-language buffers, integrated source management. I am also aware of other things I could be doing, including interactive debugger control, which don’t yet warrant the time for me to set up. But I know they are there, and I even know how I would get started configuring them if I needed to.
Which brings up another point: if your tools themselves aren’t extensible, you will be limited in your growth. I have never met a system that I like 100%, though I’ve met many which did 98% or 99% of what I wanted. This is one of the reasons I’m a really big fan of open source. Couple 98% complete with an easy way to extend it, plus open source so you can really understand what it is doing behind the scenes, and you’ve got a winning combination.
tl;dr: if you want to be a better programmer, spend a lot of time getting to know your editor.
If you are not already deeply invested in an editor, I’d heartily recommend emacs. Fire it up, then hit “F1 t” or, from the menus, Help -> Tutorial. It will walk you through an interactive tutorial of basic key commands, including detecting whether your local installation has changed any of the keybindings and alerting you appropriately.
If you are already invested in an editor, commit to yourself that you’ll take half a day a month, for the rest of 2011, and spend that time reading and learning about other capabilities of the editor you are using now. You’ll be a far better programmer by the end of the year.
Ed de Bono has published a number of interesting books on using what we seem to understand about the brain to force ourselves to be more creative and get out of ruts. Some of his training was included in a leadership class I took back in 2006. Among the various models for thinking about thinking he created, one is this idea of the Six Thinking Hats.
One of the things we learn from Western Classical teaching is that the road to the truth is through 2 sides arguing until the truth eventually emerges. Our entire legal system is based on it. The problem is that this is actually a really poor way to get to consensus, because once people start arguing for a side they become more entrenched in it as the argument progresses.
The human mind is an interesting thing: we often decide something is good or bad from our gut, and then spin a complex set of justifications later, justifications that come out on the fly about why a gut reaction is provably true in some way. Everyone does this to some degree. As with everything, it’s easier to see this flaw in others, but watch yourself in a heated discussion next time. If you pay close enough attention, you’ll see yourself doing it. We all have these reactions; that’s part of being human.
But it’s not part of being productive or moving forward. de Bono created a methodology for working with ideas using different colored hats. You tell everyone that now is the time for the White Hat, which means only facts on the table. Black and Yellow are worst case and best case scenarios respectively. Green is for new idea generation, and Blue for wrapping things together. And then there is the Red Hat. If what you are discussing has any level of contentiousness, it’s really important to open up a time where everyone can express their feelings and gut reactions about it, not with a complex justification for those feelings, just the feelings themselves. It divorces the gut instincts from the logical arguments, but it still lets everyone express them, and that can be useful data in making a decision.
Even if you aren’t doing anything that formal, the Red Thinking Hat is an interesting model for realizing you are no longer in a conversation or debate that’s going anywhere except round and round in circles on previous prejudices.
While de Bono wrote lots of books, including one specific to this, the book Serious Creativity was the one most recommended by the instructor I had. I’ve used it as a class refresher over the years, and it includes an introduction to the Six Thinking Hats, as well as much more about idea generation.
There is a standard rookie mistake in website development when storing people’s information. A phone number looks like a number, so people think they can use an integer to store it. The problem is that integers, as implemented on most modern computer systems and environments, have a maximum value of 2147483647. The results of this:
Somewhere in Dallas, some poor bastard is wondering why his phone rings off the hook with calls for the Nevada Division of Mental Health & Developmental Services, the Jackson County Florida Chamber of Commerce, a yacht club in New York…..
It’s even funnier because I’ve got a friend who ran into the same issue when working on a project as an undergraduate; it happens more often than you think.
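To make the failure mode concrete, here’s a minimal sketch in Python, using ctypes to simulate a 32-bit signed integer column. The phone number here is just illustrative, but note that 2147483647 read as a phone number is 214-748-3647, and 214 really is a Dallas area code.

```python
import ctypes

# A 32-bit signed integer tops out at 2**31 - 1 = 2147483647,
# i.e. "phone number" 214-748-3647. Any Dallas number past that
# point doesn't fit, and wraps around to garbage.
phone = 2147483648  # the next number up: 214-748-3648
wrapped = ctypes.c_int32(phone).value
print(wrapped)  # -2147483648, which is what the database hands back
```

The fix, of course, is to store phone numbers as strings: they aren’t quantities, you never do arithmetic on them, and they can legitimately start with a leading zero.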
Over at Communication Nation:
It’s time to think about what companies really are, and to design with that in mind. Companies are not so much machines as complex, dynamic, growing systems. As they get larger, acquiring smaller companies, entering into joint ventures and partnerships, and expanding overseas, they become “systems of systems” that rival nation-states in scale and reach.
So what happens if we rethink the modern company, if we stop thinking of it as a machine and start thinking of it as a complex, growing system? What happens if we think of it less like a machine and more like an organism? Or even better, what if we compared the company with other large, complex human systems, like, for example, the city?
There are some very good specific points in this article about how you make a successful organism like this.
Spaces need owners. Again, think of the city street: every business or building has an owner. The sidewalks have owners – typically every business at street level “polices” their stretch of sidewalk. And even the street has owners – the street sweeper, the cop on the beat. In the same way, make sure that every online space you create has someone positioned to take care of it, to keep it safe and clean.
This is something people most often get wrong. Communities are gardens, and only flourish with tending. I’ve seen way too many efforts fail because there was an assumption that someone else was going to take care of the community. Things without owners slowly rot. Things only get done when someone makes a decision to do it.
Thanks to Sacha Chua for the link.
As soon as I went to work, the home network dropped off the internet, wouldn’t you know it. It turns out that was when the old lease ran out, and Verizon wanted my router to ask for it again. As I’d configured it to a static address to deal with their lease issue, no such luck.
Fixing it when I got home taught me another lesson: it’s actually really important to clone the MAC address of the old router as well. With the original MAC address in place I couldn’t get a DHCP lease. With a clone of the FIOS router’s, all was good.
I finally got around to installing my own wireless router on my FIOS network, a Linksys E2100L with dd-wrt installed on it. After the router is set up (that’s beyond the scope of this post) there are 2 tricks to make this work.
First, Verizon FIOS gives out really long DHCP leases, and doesn’t want to give them up. So you need to not only clone the MAC address of your old router, but actually set the new one to the IP address the old router was given. I’m told that after about 2 weeks you’ll be able to start using DHCP again, but you can’t for the switch over.
Secondly, you have to set the upstream MTU. Presumably Verizon is doing some VLAN tagging, which would explain why your IP addresses can jump all over the place after a major network change on their side. 1496 should be a safe value, and it looked like it worked, but I left mine down at 1450 for no good reason other than superstition. This was the trick I was missing before, and since I’ve been dealing with bizarre networking issues at work recently, the idea was still floating around in my brain.
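For reference, the same two tweaks sketched as plain Linux shell commands. In dd-wrt itself you’d do this through the web UI (MAC address clone and the WAN MTU setting); the interface name, MAC address, and MTU below are placeholders, not my actual values.

```shell
# Placeholder sketch: "eth0" stands in for your WAN interface,
# and the MAC address for your old FIOS router's address.
ip link set dev eth0 down
ip link set dev eth0 address 00:11:22:33:44:55  # clone old router's MAC
ip link set dev eth0 mtu 1496                   # conservative upstream MTU
ip link set dev eth0 up
```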
It’s now working, and my port forwarding is set up enough that I can do any fixes I need remotely via VPN.
I realized last night at our replacement MHVLUG dinner that I was the only one there who was still using Firefox on Linux. Everyone else was on the Chrome bandwagon. And that’s where I thought it would stay, until Chrome 9 came out today.