Today I witnessed an exchange between two very intelligent people on the subject of writing automated tests in software development.  There was more detail than I can put here, but the basic discussion was:

I think writing tests for my code is important. Management doesn’t think so.
Should I try to change their views?

The “For” argument was essentially:

Management is resistant to automated testing because they don’t understand the cost-to-value proposition. If you explain the benefit to them, they will come around.

The “Against” was, interestingly, similar:

Management is resistant to automated testing because they don’t understand the cost-to-value proposition. No amount of explanation will convince them. They must “come around” on their own.

I wanted to get involved in this, I really did, but I realized that arguing on the internet is, still, just arguing on the internet, and it wasn’t a can of worms I was willing to open. So instead, I’ll plant my banner in the ground right here. This is my third approach to this situation:

You are ultimately responsible for the company you work for.
If you disagree with their views without actively trying to change them, you actually agree with them.
The standard you walk past is the standard you accept.

If you claim to be a “testing zealot”, but work for a company that has forbidden you from writing tests (or seriously limited your ability to write them) and you choose to stay, you really don’t care as much about testing as you claim. Conversely, if you think tests are a waste of time but your job requires them and you write them — guess what, you are a supporter of the testing culture.

Anyone can have opinions, but actions speak louder than words. Ideas are cheap; execution is what counts.

So, although it sounds harsh, here is the ultimate truth as I see it:

If you want to write tests but your company won’t let you (and you’ve unsuccessfully tried to change their views), either leave and write tests somewhere else or stay with the knowledge that you don’t believe testing is as important as you thought.

When you say “First Amendment”, everybody automatically knows what you mean even if they are not from the United States. Even though we can probably take it for granted that people know what that means, I’m going to give a quick capsule synopsis here just to make sure.  Although the actual text of the U.S. Constitution is worded differently (“Congress shall make no law…”), here is my working definition of “freedom of speech”, sourced from a Google definition:

The right to express any opinions without censorship or restraint.

This is great. But, like the man said, “With great power comes great responsibility.” The counterpoint to this right is that we then bear the responsibility for how what we choose to say is received.

This is oftentimes forgotten. About 15 years ago I was part of the web community that coined the phrase “you do not have the right to not be offended,” and while I agree with that sentiment to this day, it does not mean that you are free to offend with impunity.

Oh sure, you do have the right to offend people. Make no mistake, you are free as a human being to be as offensive as you choose to anyone you choose. But if you choose this action, you’re also choosing to face the consequences: if somebody is offended by your actions, that offense is your responsibility.

Even if it was not your intention to offend anyone.

Too often we dismiss this by saying that, because we didn’t mean it, it is of no consequence and that intent is what matters. Sometimes, we’ll even say that it is the responsibility of the person feeling the bite of our words to get clarification, to see if that’s what we really meant. Seldom, but often enough to mention, we’ll even say that people should just toughen up and learn to not be so sensitive.

To those points, I say “Wrong”. And since it is now my responsibility to ensure my words are taken in the right light, here is the explanation:

 

Continue reading

Backstory

One job ago I worked on a web product that had pretty good coverage from Selenium tests. Those tests would spawn a Firefox instance and, as such, tended to be slow. On my quad-core Mac Pro they would clock in at a bit over 25 minutes.

A co-worker of mine saw that this was not ideal, time-wise, and set about fixing it. He pulled out Selenium and plugged in HtmlUnit (via Celerity/Culerity), updated the tests and — BEHOLD! — the test suite ran in four minutes.

Totally. F’ing. Epic.
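For context, the swap itself was tiny in Capybara terms. Here is a minimal sketch, assuming the Capybara of that era (which still shipped a built-in Culerity driver); the file name and the exact before/after are my assumptions, not something from the original setup:

    # spec/spec_helper.rb (hypothetical excerpt)
    require 'capybara/rspec'

    # Culerity wraps Celerity, which wraps the JVM-based HtmlUnit
    # "browser", so no real browser process gets spawned.
    Capybara.javascript_driver = :culerity   # was :selenium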

Fast-forward to now

I’m starting a new web product and, naturally, I have some full-stack testing in from the beginning. This time I use capybara-webkit and all is well. For a while.
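For the curious, a minimal sketch of that setup (the gem is real; the file layout is my assumption):

    # Gemfile
    gem 'capybara-webkit'

    # spec/spec_helper.rb
    require 'capybara/rspec'
    require 'capybara/webkit'   # registers the :webkit driver

    # Drive JS-dependent specs through headless WebKit (via Qt).
    Capybara.javascript_driver = :webkit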

Then some warts appear: a case comes up where capybara-webkit doesn’t quite render something the way Chrome, Safari, or IE do, so although the tests pass, it doesn’t actually work in a real browser.  I spend some serious time reworking the code to make it work properly in capybara-webkit as well as the commodity browsers.

All is good again for a time, but then it happens once more. Again, it’s fixed.

The third time this happens, I stop and think about it:

This stuff MUST work in real browsers, and I usually end up testing that by hand. So what’s the point of running tests through this headless browser? Why am I sinking time into making this work in a browser that, quite literally, NO ONE will actually use?

That was the revelation: the browser being used is part of the stack in full-stack testing.

And, as such, trying to avoid using those browsers directly is, then, avoiding testing part of your stack. A lot has been said about developing on the same system as production: same interpreter, same database, same OS. So why should your tests not follow the same logic? Why test using an abstracted browser that, functionally, is never really used by a human?

Full Stack

Unit testing is a fair counterpoint: when you are unit-testing JS, a headless browser is probably a good thing to use. It will be faster and (assuming ECMA compliance) just as good as a real browser. But unit testing is not full-stack testing.

This was a turning point for me. Before that, I saw using Selenium as leaning on a crutch. Now I see that using Selenium is the whole point. You are not just testing your code, you are also testing how your code interacts with the browser. To wit, the browser is in actuality part of your product.
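In Capybara terms, that conclusion translates into registering Selenium drivers for the real browsers you actually support. A sketch, assuming selenium-webdriver plus locally installed browsers; the driver names are labels I made up:

    require 'capybara/rspec'
    require 'selenium-webdriver'

    # One Capybara driver per browser the product actually supports.
    Capybara.register_driver :real_firefox do |app|
      Capybara::Selenium::Driver.new(app, browser: :firefox)
    end

    Capybara.register_driver :real_chrome do |app|
      Capybara::Selenium::Driver.new(app, browser: :chrome)
    end

    # Run the full-stack suite against a real browser by default.
    Capybara.javascript_driver = :real_firefox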

Ok, so, fact: doing Selenium tests with one browser is slow. Doing them with multiple browsers is even slower. But the solution to this problem is not to remove the browser from the equation, it is to make the browser a manageable part of the equation. Farming, parallelism, concurrency: those are all viable ways to make the speed issue more manageable. But removing the browsers from the testing process is not.
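One way to do that farming, for instance, is a Selenium Grid: the suite stays where it is, while browser sessions are dispatched to whatever machines the hub knows about. A sketch, assuming a hub at a hypothetical URL and the selenium-webdriver remote API of that era:

    require 'capybara/rspec'
    require 'selenium-webdriver'

    # Send browser sessions to a Selenium Grid hub instead of
    # spawning browsers locally. The hub URL is hypothetical.
    Capybara.register_driver :grid_firefox do |app|
      Capybara::Selenium::Driver.new(
        app,
        browser: :remote,
        url: 'http://grid.example.com:4444/wd/hub',
        desired_capabilities: :firefox
      )
    end

    Capybara.javascript_driver = :grid_firefox

Combine that with something like the parallel_tests gem to split the suite across processes, and the wall-clock cost of real browsers starts to look manageable.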

Typical Hipster Office

Have you ever been to one of those fancy parties those Dot-Com people throw? You know the ones: It’s at their office in some scary part of SoMa. The building used to be a wool cannery, but now that the company has 80 dicktillion bucks they’ve installed a skateboard ramp and drive-thru Kombucha stand.

Typical Hipsters

You’re surrounded by 30-something single men who are trying hard to look like 20-something androgynous women. They’re all wearing wool caps and their younger sisters’ jeans, sipping gold-plated champagne and complaining about how it’s just not as good as that PBR they had last night at that one bar in the Tenderloin.

They’re all talking about computer programming. You’re not a programmer.

But you can fake it with these three simple phrases.

 

Continue reading

Back in 2005 I had this idea for a web site. Here’s how it was supposed to work: you would use your mobile phone to send messages to a web site that would then re-broadcast that message to your “friends”. You could also use a small desktop app. You’d “friend” people, and then the site would send you anything they posted.

Fake Twitter?

Sound familiar? For the uninitiated, that’s basically Twitter. But about two years early.

I started work on it, but never finished it because I started to feel like there was no market for something like that. I got sidetracked and let it die.

Then, around 2007, I heard about this “tweet” thing and I was like:

Fuck, really? What a rip-off of my idea! Twitter STOLE my idea!

Continue reading