omgbloglol

November 2012

speakers.io will connect speakers and conferences

tl;dr: speakers.io is an app to connect speakers and conferences. It’s going to be awesome.

I’ve been a conference organizer for almost 6 years now (and a speaker for slightly longer), and year after year I’ve hit the same points of friction in my work. Seeing someone encounter the same thing with BritRuby made me realize that we could fix it! Partially, at least.

An aside about BritRuby

I had a long diatribe written about how everyone should stop forcing their priorities on event organizers and likewise organizers should find ways to include more people and people shouldn’t whine about people being passionate about diversity, but really it can just boil down to this: stop being a jerk.

If you think that statement could be about you, then it probably is. So, stop it. And let’s get back to organizing mindblowing Ruby events that include everyone.

So, how does speakers.io help that happen?

Connecting speakers…

So, watching this whole BritRuby thing, I kept thinking, “couldn’t we solve part of this with software?” When I sit down to consider who to invite to speak at my events (the keynote, typically), it would be really nice to not only get information about a presenter, but also to know whether they’re interested in speaking and available to come, and then to reach out to them easily. If we had this utility with a lot of speakers on it (including men, women, children, aliens, golden retrievers, and anyone else who can operate a mouse) that was easily navigable, searchable, and so on, it’d make it way easier for organizers to hunt down some great, diverse speakers.

Likewise, when I’m looking to apply to some conferences, it’s difficult to get a definitive list of what’s available, what I’d be interested in, and where my talks might fit. What if there was a list of conferences that made it super easy to submit talks to?

As a speaker and organizer, I look at services like Speaker Rate and Lanyrd and see some of the value. There’s a lot of information available on there, and it’s a pretty neat way for a speaker to build a profile. I think if I attended more conferences rather than speaking at or organizing them, I’d probably derive more value from them. But I’ve always felt that they weren’t serving the right audience (or that the audience I have in mind was never the one they intended to serve, but should be).

These apps represent some great tools for attendees to schedule things, get the scoop on a speaker, rate them, give feedback, and so on, but they’ve historically not been very helpful in creating a community where conference organizers and speakers can work together. I’d like to change that.

…with conferences

The first “arm” of speakers.io is the organizer-speaker connection. First, obviously, we’ll let speakers build profiles of talks with links to videos/slides/whatever so an organizer can review their past work. The profile will also contain information like the topics the person is interested in discussing and their schedule (it’s always frustrating to ping a speaker, wait a week, and then find out they’re not available!). Organizers will be able to search for speakers and get results that only include speakers available on their event date, boosted by diversity index (if a speaker has provided that information), topic expertise (judged by tags and number of presentations on a topic), and so on. Then organizers can issue “talk requests” for a speaker or even a specific talk. Did you see a great talk by Zach Holman at TrollConf? You can request that he give that talk at your event next month, or you can request he come speak and include some notes about what you’d like to see. You and Zach can then work together to put together an awesome talk description to work from.

We’ll also be exposing some simple CFP functionality for organizers. Speakers will then be able to search events by topic, location, dates, and so on, then submit a talk to multiple CFPs at once (or submit a previously presented talk in one click). Organizers can then filter and transform their submissions on axes like “only show first-run talks,” “sort by number of presentations,” and so on.

…and other speakers

The second arm is the speaker-speaker functionality. I really like the concept of SpeakerConf, in that it’s a number of speakers getting together to present, hone their ideas, and practice their craft. Why not re-create that in software?

Speakers will be able to create a talk on speakers.io that isn’t published to the public, and then invite other speakers to come and collaborate on it with them. The vision here is still a bit ethereal, but basically, I want speakers to be able to help other speakers create better presentations through sharing slides, video, code, etc. Like I said, I’m still hammering out the vision here, but it will end up in there!

AND!

OK, there is no and. That’s it. I want to keep the tool simple and focused. I’m fine with farming out slide sharing to Speaker Deck and ticket sales to one of the many fine ticket sellers. It doesn’t need to be a universal tool. The point is to connect organizers to a wider array of speakers than they may have encountered otherwise and to connect speakers to events they may not even be aware of.

“Is this a business or whatever?” No, I’m over the moon with working at GitHub. It’s just something I’m doing on the side to make things easier: it’s free and always will be. I don’t know about going open source or not (that’s because I don’t want to manage it, not because I necessarily want to keep it secret), but I’m going to push this out soon and see what happens beyond that.

So, head over to speakers.io and sign up. I had hoped to have an alpha version out to start getting speakers into it and such, but I got bogged down a bit this week, so no such luck. I should hopefully have a basic version done by next Monday so we can start playing with it! Hit me up on Twitter at @jm if you have any other cool ideas (or want to do a UI design for it…I can send wireframe ideas!).

Nov 26, 2012
#ruby #rails #events #speakers

April 2012

On a positive note: I'm starting a positive newsletter.

I love surfing Twitter, checking Facebook, and hanging out on Reddit/Hacker News/etc. as much as the next person, but I’ve got to be honest: you guys can kind of be douchebags. Pair that with the constant cycle of terrible news being pumped out of CNN, FOXNews, and friends, and it makes me feel pretty bad about the world when I go through my morning reading cycle.

Of course, the solution is easy: give it up. Right. So, I’ll unplug from the world, stop associating with most people, and simply live in a silent bubble of limited information for the rest of my life. That works great for some people (the Information Diet is a “thing,” remember?), but that’s not how I roll. I love absorbing information. Learning is exciting for me. But, the (seemingly) recent trend of Twitter arguments, crappy news, fear-driven reporting, and general crappery and loud-mouthiness associated with a lot of non-traditional news outlets (e.g., TechCrunch) has really started to affect my mood. I’m more on edge. I’m quicker to get grumpy.

So, I’m forcing myself to do some Happiness Therapy™ every day. I’ve started a newsletter called Good Morning, Interwebs, which will drop a little packet of positivity into your inbox every morning. By making myself seek out positive news, good things going on in the world, and other stuff that will generally make me smile, I’m thinking it’ll make me feel better about things in general. I’ve tried things like this before, but I quit quickly. “OK, I’ve had a few days of this, great, OK, done.” But with the added pressure of “people expect this in their inbox tomorrow morning,” I can’t skip out on it quite as easily.

You can subscribe with this form:

So, go forth and enjoy. I’m not sure how this is all going to take shape, but hopefully it’ll add a bit of positivity to your morning before you wander out into the vast wasteland of negative, attention-hungry (ZOMG DID YOU KNOW ANOTHER PERSON GOT SHOT IN YOUR CITY? YOU DO NOW. BE AFRAID), and frankly exhausting media. :)

Apr 11, 2012
On Railcar: an isolated Rails environment

Ever since launching Railcar, I’ve been getting a lot of questions about why I’m building it, how I’m approaching things, how people can help, and so on, so I thought I’d take a few minutes and share some things with you.

Why?

The main reason is that the Kickstarter pointed to a real need that I didn’t realize still existed. I’ve become so separated from what it means to be “new to Rails” that I didn’t realize it was still a problem to get a Rails setup going on your machine, but thinking back on it, it was and still is a bit of a nutty process. It’s not just the installation of stuff, but the whole environment around the application. How do you start it up? Why can’t I just stick it in my Apache root and let it go? Why do I have to configure a database file? Migrations? What is all this? I forgot how much cognitive friction really exists there that things like Locomotive removed when I was first starting out. I had some spare time, experience building desktop apps, and some ideas about how it should work, so why not hack on something?

Now, as I’ve learned more about Tokaido’s approach, I’m really glad that I started building something else. Static linking is good for the getting-started tools (for example, we’re going to make it so you don’t have to compile anything to get started with Rails and SQLite), but offering everything statically linked is just going to make things difficult. There’s a reason that Apple doesn’t like you to statically link things on OS X (go ahead, try it; there’s no crt0.o for a reason). You can find their reasoning through a quick Googling (unfortunately, the original page explaining it appears to be gone now), but essentially they force you to dynamically link to the system libraries and kernel (even in static mode). Their position is that purely static linking is a Bad Thing because things can change and break under your code (e.g., moving a piece of functionality from the Mach kernel to userland). Plus, if part of the reason you don’t want people to have to compile things is the file size of the GCC package, you’re not going to help that by statically linking everything. Locomotive was about 100MB, and I think that’s probably the absolute minimum file size you’ll be able to pull off if you go that route. Why not have them download ~150MB and be able to install anything they want?

Which leads to my second issue: discouraging people from using the tools they will be using in the “Real World” is a bad thing. Yes, it’s great to have a one-click thing that you can develop in and really learn with, but your team will not work that way. No matter how good you make it, I seriously doubt that experienced developers will use a GUI tool for something that can live and work better with a CLI. Rails developers use Homebrew. Rails developers use compilers. Rails developers encounter problems with installing gems sometimes. After a certain point (i.e., once they move beyond their first few learning apps), attempting to hide these details from people learning isn’t helping them learn. These problems are also well mitigated thanks to Homebrew and its extensive list of patches and OS X-specific fixes for many libraries. And these are not problems you will solve, unless you think you’re more talented than the thousands of developers from the past 20+ years who have attempted to solve them in every programming language ever used in a *nix environment.

Look, I don’t think what Yehuda’s doing is wrong or that Yehuda is wrong. That’s not what this is about. I see this as the exact same situation as bundler/isolate, Merb/Rails, Sinatra/Ramaze, whatever/whatever else. There are alternative ways to approach the same problem. My philosophy is that I want people to use Railcar until it doesn’t work for them anymore, at which point they can click the forthcoming “install to system” button and go about their merry way. His approach is probably different. We appear to be taking two valid approaches, and I’m sure different people will gravitate to one or the other. That’s fine.

What?

So, what am I doing exactly? Currently, I have a usable isolated app environment living in the repository. It’s written in MacRuby using Xcode and Interface Builder (I don’t have any interest in using HotCocoa because I like pretty interfaces, and it’s incredibly hard to build those in HotCocoa). On the first run, it will install Homebrew, rbenv (why rbenv? I could quickly figure out how it worked, and its flexibility will make it easier to drop binary installs in. I’m not opposed to using RVM at all, but rbenv was just easier to figure out and easier to hook into from my code. Patches accepted :)), Ruby, and some default packages.
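
If you’re curious, the first run boils down to something like this (a simplified sketch of the idea, not the code verbatim; the URLs, versions, and names here are illustrative):

class FirstRunBootstrap
  DEFAULT_FORMULAE = %w[git sqlite]   # illustrative package list
  RBENV_REPO       = "https://github.com/sstephenson/rbenv.git"

  def run
    install_homebrew unless system("which brew > /dev/null 2>&1")
    install_rbenv    unless File.directory?(File.expand_path("~/.rbenv"))
    DEFAULT_FORMULAE.each { |formula| sh("brew install #{formula}") }
    sh("~/.rbenv/bin/rbenv install 1.9.3-p194")   # assumes the ruby-build plugin is present
  end

  private

  # Homebrew ships its own installer script; we just drive it (URL shown for illustration).
  def install_homebrew
    sh(%{/usr/bin/ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go)"})
  end

  def install_rbenv
    sh("git clone #{RBENV_REPO} ~/.rbenv")
  end

  def sh(command)
    system(command) or raise "Bootstrap step failed: #{command}"
  end
end

FirstRunBootstrap.new.run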

Once it’s bootstrapped, you can install popular packages from Homebrew, install various Ruby versions, and generate (or drag in existing) applications and launch them with various options. It’s a little rough and some things aren’t quite wired up, but I think it’s definitely a good MVP build right now.

Currently everything builds from source, but I’m working on a setup for binary installations of Ruby and SQLite. I’m also setting up (later today, hopefully) a repository for a very, very small collection of statically compiled gems for SQLite, RMagick, and a few others. Basically, I want to put the most popular gems in there so that people won’t hit a ton of issues in their initial learning. I’m also going to invest some time (or invest some money in having someone else do it) in converting a few gems to rake-compiler to make a lot of the compilation story that much easier.
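
For context, “converting a gem to rake-compiler” mostly means giving it a Rakefile along these lines (the gem and extension names are placeholders, not any specific gem’s real build file):

require "rake/extensiontask"

spec = Gem::Specification.load("some_native_gem.gemspec")

Rake::ExtensionTask.new("some_native_gem", spec) do |ext|
  ext.lib_dir = "lib/some_native_gem"   # where the compiled bundle ends up
end

# `rake compile` builds the C extension locally, and the `native` task lets you
# package a precompiled, platform-specific binary gem alongside the source gem.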

Once the binary installs are in place, I want to polish up the whole UI, put some nice icons in (those are in the works!), and get it really released as a product unto its own. Then, I want to get to work on an educational piece that will either be part of the Railcar app itself or a separate application. Basically, I want to make the documentation accessible, provide a number of good onboarding tutorials, help with common errors, offer a “help search” that will search the mailing list, Stack Overflow, etc. to help find the best answer, and so on. I haven’t decided the best place for that; I’m gravitating towards a second app so that you don’t have to use Railcar, but we’ll see.

How can I get involved?

You can contribute code, for sure! I’m going to work on adding some tickets to GitHub today for some things that need to be done, but you’ll find a few TODO-type things sprinkled throughout the code. There are also a few rough areas that I’d like to smooth out that don’t really conform 100% to the Cocoa way of doing things (e.g., I should probably be binding the per-application settings using Core Data), but those issues are minor. Feel free to refactor/add/improve anything that you see. I know MacRuby pretty well, but I’m not the most expertest expert. I know some people were saying things like ARC were creating issues for them, so if I have a weird build setting in Xcode, please feel free to correct it and send me a pull request. Oh, and tests. We need some of those.

Secondly, if you’d like (and don’t feel compelled at all), you can contribute monetarily. While everyone likes money (and it would be amazing to get a little boost around tax season), I don’t really care about it: I’m not “seeking” it, and I don’t need it to keep working on the project. But I’ve gotten a number of requests for a PayPal button or whatever, so here you go:

Depending on the amount I gather, I plan on investing part of the money back into paying others to improve things like rake-compiler, Rails documentation, and so on. I’ll send you a sticker for contributing if you don’t mind providing your address on the page after you finish with PayPal. Please pardon the payment going to my business’s PayPal account. My wife uses my “main” PayPal account extensively for her business, and I don’t want to risk PayPal freaking out and locking it up. If you’d prefer not to use PayPal, I can figure out something else if you’ll e-mail me.

Where do we go from here?

Well, I’m going to keep hacking. Getting people to help finish up a few loose ends would be great so we can get it into the hands of people to use. Anything you can do to contribute to that (even filing a GitHub ticket with a great idea) would be really helpful. I’m available via e-mail or Twitter if you want to chat about any ideas/issues you can foresee.

Apr 9, 2012

March 2012

Recruiters: why I'm kind of rude to them, why they deserve it, and how to fix it (IMHO)

I’ve been dealing with recruiters for a long time. Not the good kind that offer you awesome jobs, but the crappy kind that mailbomb you with irrelevant positions. I’ve been receiving their e-mails and such for years and years now, and eventually I decided I’d afford them the same courtesy they afford me and simply write a form response for the ones that annoy me. I posted it on Twitter yesterday after using it to respond to a recruiter who offered me a junior PHP/C# position (tech I haven’t worked with for 5+ years) that required relocating to a crappy area for terrible pay. The reaction has been one of three things:

  • General amusement at the contents
  • Sharing of their own form responses or experiences with recruiters (some worse than mine!)
  • Anger at the contents/the fact that I’d be rude to a recruiter

The first two I expected, but I must admit, I didn’t expect the last one from anyone but recruiters (and trust me, three recruiters felt it their duty to let me know what they thought via e-mail; I can’t express how hard it was not to respond with my form response, but I was civil :)). So, I felt like maybe I should explain some things about recruiting, because I’m not sure many of the people who were upset by it have actually experienced what a lot of people experience with recruiters.

What “recruiters” do

Disclaimer: The following doesn’t describe every recruiter on the planet. It does describe at least 90% of the ones I’ve interacted with, but I talk about some of the good ones later on.

I put “recruiters” in quotes there because the modern day recruiter is little more than a spammer who has legal authority to spam you. I’m almost surprised some of them haven’t taken to appending male enhancement ads to their job e-mails to make a little side cash. Essentially, they have big databases of resumes that I’m guessing are usually purchased rather than built given how out of date some of the information is in there (we had an old cell phone number that we kept around for a while and we’d still get calls on it 2 years after I’d taken it off my resume). They’re probably tagged somehow or searchable by some means. Recruiters take a job listing, search for keywords in their resume database, and e-mail everyone who could possibly match those keywords. Everyone. Even people like DHH, who obviously isn’t looking for a job.

So, why am I so offended by this practice?

It’s invasive

Never has a single industry disrupted the general flow of my life so much as recruiting. That’s probably an exaggeration, but honestly, it’s a bit much sometimes. Here are some examples:

  • My wife now has my “old” cell phone number (I got an iPhone 4, she kept the 3GS and my number). Unfortunately for her, that number is still on my resume (and there’s no legitimate reason to change it; anyone who actually needs to reach me still can since, you know, she’s my wife). But she gets several daily calls from really pushy recruiters. Once, a recruiter called at 9:30 p.m. and was told not to call again because I wasn’t interested. The same guy called at 8:20 a.m. the next morning about the same position. Again, he was rebuffed, and he called back again at 4 p.m. that day. Who else would do that?
  • More recently, my wife told a recruiter that the number he was calling was hers, but even if it wasn’t, I wasn’t interested anyhow. This recruiter proceeded to try to strongarm my wife into putting me on the phone, giving him my new number, giving him a time he could call back, and any manner of other things to try to get in contact with me. He had already been told I’m simply not interested in anything he has right now, yet he felt it was imperative that he talk to me. I’ve never even had a debt collector be that adamant.
  • My inbox is pretty consistently invaded by just utter crap from recruiters. It’s pretty easy to identify junk (“MYSQL/PHP/SCALA/JAVA/C__ - $36k SALARY - NYC —” is the most recent subject line I’ve received), but sometimes I get e-mail like “Hey I’d like to chat with you about work.” OK, great. I’m totally fine with “networking” with recruiters who actually care, but inevitably, this turns out to be a “Oh, he actually reads his e-mail” situation where they start firebombing me with “leads” every day. I try to be charitable to most of those introductory e-mails because I always hope it’s a recruiter really trying to do it right, but every time I’ve “fallen for it” it’s turned out badly.
  • Typically I’ll send my response, mark as spam/block the sender, and move on. But on a few occasions, I’ve had someone else in the same agency start spamming me. So, I block one person, they simply toss my address to another e-mail within their company, and start sending from that.
  • In one case, I’d tried to be nice to a recruiter and told her I wasn’t interested, but she continued to send “leads” my way (of the “you’d have to relocate to the middle of Tennessee or South Carolina and be paid $22k for a position requiring 5 years of Java experience” variety). Finally, I bluntly but civilly told her to never contact me again (this was pre-The Response™) and blocked her. She proceeded to e-mail me from her personal Yahoo! account, yelling at me for blocking her other address, telling me I’m helping to put her out of business, and all this other crap that I can’t be hassled to remember right now.

You can point to those and say “oh, those are the bad ones!” but frankly that’s how most of my interactions with recruiters go, especially when it comes to phone pushiness. The point is that this boorish behavior isn’t really abnormal.

The worst part is that the recruiter responses yesterday went 3 for 3 in blaming me. “Well, if you wouldn’t post your resume on your website, we wouldn’t e-mail you.” “If you didn’t mark yourself as ‘for hire’ on LinkedIn or WorkingWithRails, we couldn’t call you.” Are you kidding me? PROTIP: I am for hire. I run a consulting business. I’ve actually gotten 2 contracts from people pinging me through those mechanisms. I’m not going to act as if I’m full up on work just so you won’t spam me. That’s just ridiculous, self-absorbed martyrdom. Even further, just because I post some information publicly doesn’t give you the right to spam me with tenuously related information. I mean, what if I spammed you with some recruiter-related product? Or what if I started a recruiter-recruiting firm and just totally bombed you with e-mails about positions in it? I guarantee you’d cry foul then.

It’s not very effective

Recruiting as it’s currently practiced can’t possibly work very well. Maybe it does. Maybe they have enough resumes built up that they’ll get a few hits that are actually viable. I doubt that happens most of the time given the amount of repeat job spam I get, but it’s always possible.

But even so, they’re not doing their job as it’s supposed to be done at all. Part of the job of a recruiter is (theoretically) screening candidates on some surface level. I did an experiment a couple of years ago by responding to 3 job spams I totally wasn’t qualified for: one was a position using R or SAS or something like that at a financial firm, one was a C++ position at a games company, and one was a low-level network engineer position at some MegaCorp™. Every time, the recruiter merrily passed my information on to the client, selling me up as a great candidate, and so on. My resume said nothing about any of this stuff. No experience, no education. Nothing. So, not only are these people not good at screening candidates to spam, they’re also terrible at even telling whether a candidate is legitimate. I felt bad telling the firms they’d been duped into accepting a lead on a crappy candidate. Two of the companies never responded at all, but the MegaCorp™ HR person told me they never expected high calibre candidates from recruiters anyhow.

It’s a useless industry

The way it’s currently done, recruiting is totally useless. What do recruiters offer beyond what a job board posting would offer? I’d even venture to guess that a job board posting would have a better return since the people looking at it and applying are actually looking for another gig. Spam recruiters are simply leeches, middle(wo)men who take a big slice for being a reverse job board.

What recruiting is supposed to be

Recruiting isn’t always like this, though. In Real Recruiting™, recruiters actually spend time evaluating candidates (not just e-mailing anyone who matches some keywords), searching out people who fit the position they’ve been tasked with filling using information gathered from their own experience and their network (not just doing a Google search for a resume and passing it on), getting to know the candidates (not just merrily passing them along after the first response), and then making an informed and pointed recommendation to their client.

That’s how it works in corporate America. Do you think when a big corporation decides they want to hire a new CEO that their recruiter mailbombs everyone who has CEO experience? No, they have an informed process to make educated recommendations to the board. For example, when Apple recruited John Sculley to be their new CEO, they spent a lot of time evaluating his effects on the company, what he would bring to the table, how it could shape Apple, help tame Steve Jobs, and so on. Now, granted most developer positions don’t carry that much gravity in a company, but a little consideration of the position and background of the candidate would be nice.

But…but…but!

When I’ve shared some of these thoughts with recruiters, I often get back, “But that’s not sustainable! I don’t get paid enough for that!” Then let me be clear: maybe you don’t have a real business. It’d be great if I could sit on the side of the road and sell small carvings I make from the rinds of watermelons, but hey, that’s not sustainable either. The hard truth is that you don’t make enough to do that because you don’t offer any value beyond a job board, and job boards are cheap. I posted a job on one job board and got 20 credible leads (and about 10-15 not-so-credible ones). Would a recruiter have turned that around for less than $300? I doubt it.

In this Era of the Internet, a lot of “connector” businesses are finding themselves replaced by websites. Phone companies are facing stiff competition that has forced staff reductions in things like directory assistance, driving directions, and so on. The Internet has democratized information access and inter-personal connections to the point that middle(wo)men like recruiters are a fading industry. Want to save yourself some cash? Want a programmer who does Java? Post the job on a board and do some searching on a community site. You’ll find people who are doing interesting things and probably looking for work.

Doin’ it right

So, I hate blog posts that just complain the whole time and offer no concrete solutions. How can recruiters start actually offering value?

Learn the industry

Anyone can do what I just described above (Google and e-mail someone). The CTO, the team manager, the little HR lady who always offers you a peppermint when you visit her office: they all know how to do that. The value a recruiter can offer by knowing the tech, actually being able to evaluate candidates, talking intelligently with the client and candidates, and so on is nearly immeasurable. I really think a firm of tech-educated recruiters who have real chops (or at least some knowledge), who can connect with both sides, and who can actually make educated recommendations would be a real winner.

Don’t have the time or inclination? OK, understood, but then hire someone. I tell you what: Arcturo will pre-screen all your candidates for $500 (i.e., toss out actual crap) and technical screen them for $100 a pop. I’m sure a number of other firms would do the same. Even better, talk to the client’s current team or leadership about people and things they’re looking for outside the job description. I talked to a recruiter at Square who was totally doing it right. She had dug up a few people to talk to, and then she went to the team there (who would know who has good technical chops) and said, “What do you guys think?” They helped her narrow her list down, and she contacted each of these people personally. That is doing it right.

Contact me like a human

Don’t form-e-mail bomb me. It’s just offensive that you can’t be hassled to compose at least a semi-personal e-mail. That carelessness was the genesis of my form response: they’re taking less than a second to compose a message to me, so I’ll afford them the same courtesy while also registering my displeasure. I’ve only gotten a single response to my form e-mail, and that was simply “OK.” Usually they don’t respond, which, to be fair, is the intended effect.

But had they reached out to me like a person, made me feel like they had done any degree of research, and actually evaluated whether I would even fit the position at all, I would respond differently. If a recruiter has any familiarity with me at all (even “I saw your GitHub account” is passable in some cases), I’d be a lot more civil. The CTO at Mixbook did a great job with this. He’d looked over my blog, seen my GitHub, and contacted me because he thought I’d be a good fit (I’m guessing he didn’t have much success, because they’ve now hired a recruiter who is spamming people like DHH). But even so, I thought that was a great approach, and were I looking for a job and to relocate, I’d have definitely responded to him.

Use common sense

If my resume says nothing about SAS or SAP, then why are you e-mailing me leads dealing with those technologies? If my experience listing tells you that I haven’t touched C# in any real capacity in years, then why are you e-mailing me about a “C# Expert” position? (Well, I’ll tell you why: because they’re not reading the resume, but still.) Evaluate the information you have available to you before you even reach out. It’ll pay off for you.

I also can’t tell you how irritating it is to get an e-mail with something like “We need a developer for a Rails project. It pays $27,000 a year with no benefits and requires at least 4 years experience with Rails and 6 in web development. Oh and you’ll need to be in (Atlanta|NYC|San Francisco|Seattle)” (not an exaggeration). Who would take that position? Sometimes recruiters need to learn to say “NO” to crappy companies trying to hire like that. Candidates would respect you a lot more if you wouldn’t toss this utter crap our way. I know right now the economy is still pretty unstable and some people would be happy to have that job, but if the requirements and the compensation don’t match up at all, then that’s a huge red flag for candidates.

OK

So, that’s my spiel on recruiters. I’m sure I’ll be “blacklisted from [another recruiter’s] extensive network” as I was yesterday. I’m totally sure I’ll “regret saying such things in public.” OK, not really. I feel like I’m being fairly reasonable here given the amount of stupidity and abuse I’ve put up with over the years.

By the way: I’m on vacation right now (thanks Tumblr post-queue!). If you e-mail, comment, tweet, etc. and I don’t respond, I’m not ignoring you. Well, I sort of am, but only because I’m probably on the beach or floating in the middle of the Caribbean. Sorry, the Internet reception’s not real good out here.

Mar 5, 2012
#recruiters #work

January 2012

Let me work on-site for you! (For a few days...)

I’ve been tossing this idea around for a while, but now I’m at a point where I can actually do it thanks to things in life and business stabilizing a bit. I like to travel, I like work, and I’ve been wanting to hang out with more people in person (sitting in my office alone is great most of the time but other times it bites!), so I figure why not combine the three?

The deal

I come to your office and work for you for any number of days (up to 5) at a flat rate. I’ll hack on code, train your developers, pair program, fold your laundry, up vote all your Hacker News posts, make coffee, conduct dramatic readings from the Gang of Four book, whatever you want me to do. The options are (nearly) limitless.

If you want just 1 day, that’s OK. I plan on giving everyone a good chunk of time beforehand to familiarize myself with the code, their business, what they’ll be needing, and so on. I’m not going to walk in on the first day with no clue about your business, spend 6 hours learning stuff, 1 hour contributing, and another hour telling jokes about airline peanuts.

The fee right now will be $2,000 per day, which is basically what I charge for 2 days of time at my current rate of $125/hour. To be clear: I give you (basically) a day’s worth of time off-site reading documentation, talking to your team, looking at your code, and getting familiar with your needs, plus a day on-site actually doing the work. So, basically you’ll be paying what I charge for remote work, except, you know, on-site. This rate might go up in the next round, I don’t know, but since this is sort of an experiment, I figured I’d just stick with what works right now.

When and how?

Well, I don’t know when exactly. My plan is to go to San Francisco for a week sometime soon and work at least 4 of the days of the week. I’m also considering a run in New York City. If you want 4-5 days, we can work something out where I make a special trip just for you (possibly even to places not in NYC or SF, but we’ll have to talk about that :)), but if you want fewer, we’ll have to try to coordinate dates with others who want fewer also.

So, if you’re a company in San Francisco or New York City and could use a little extra Ruby, Rails, iPhone, or whatever muscle, then get in touch.

Jan 24, 2012
Bad (or, my unfortunately unfavorable review of Bob Martin's Ruby Midwest keynote)

Uncle Bob Martin has had a lot of influence on the software development industry over his career. His books are heralded as “landmark” and “essential tome[s].” He is credited as “legendary” (ugh) in his author biography on Amazon. I don’t doubt that he’s an incredibly smart guy from what I’ve read from him. Some of his articles are fantastic reads. But I think perhaps either I haven’t read enough to get a real impression of him, or the conference talk I recently had a chance to watch is significantly more dishonest than his writing for some reason.

I was wandering down a rabbit hole of Twitter/Hacker News discussion, and I kept seeing people linking to his keynote video from Ruby Midwest 2011 as a “very important talk to watch.” I’d sat through at least one (possibly more) of his conference talks before without paying much attention (I unfortunately often find it hard to focus on conference talks), really liked what I heard at his RailsConf 2009 keynote (missed his 2010 one), and since this particular talk was relevant to what I was reading at the time, I figured I’d give it a more attentive watch.

I realize I’m probably going to tick off a lot of people here, but what I heard was seriously troubling. (There’s that and he took time to correct everyone else’s talks at the start of his talk, so I figured turnabout is fair play. :))

I’d heard his talks described as “sermons” before, but I never realized how hand wavey they could be at times (at least this particular one). I had to watch it 3 times to get at his main point, which still (to my ears) doesn’t really have any evidence behind it or meat to it outside of “Uncle Bob says.” Even worse, as I was listening, I kept getting angrier by the minute at the gross mischaracterizations or downright mistruths he was spouting. The following list is just a collection of things I caught on my first couple of listens. Maybe there are more in there, but these were glaring enough to catch my attention.

Assertion: Architecture is about intent, and intent should be evident when looking at a software project, so the Rails directory layout sucks. (around 11:00)

He led everyone to this point by showing them blueprints of buildings, indicating that a building’s purpose should be and is evident by how it is architected. From this, he then makes the logical leap that this should absolutely be true of software, and that when you look at the top level directory of a project, the architecture should be evident, not the framework. His criticism is that when you look at a Rails application’s directories and files, you can readily see it’s a Rails application but not what the application actually does.

Disregarding the fact that having standardized file placement driven by the framework is one of the biggest wins for development teams when using a framework, that’s one of the most bizarre criticisms I have ever heard in a conference talk. I have never in my career worked on a project where I could simply glance at the file layout and discern exactly what the application does. Heck, even in things like Xcode or Visual Studio, where one can have a logical layout of the files with smart groupings, I haven’t been able to do that.

The better question is: why would you need to? You’re a developer. You’re going to be building the project out, so you’ll figure out what the app does soon enough. Which is more convenient for you: a gangly file layout/“architecture” that is non-standardized, annoying to navigate, and requires documentation for others to find their way around, or something standard that makes locating files and the important logic in those files that much easier? And as his own argument indicates, file layout doesn’t speak to the functionality of the application anyway. You could just as easily follow his suggestions but put different, unrelated code in the files, and you’d be in a worse position. It’s a foolish, silly criticism that probably sounded better on paper than when it came out in the talk.

The worst part was that at around 28:00 he advocates an alternative directory structure based on the architecture he’s describing in the talk, which has names that are just as opaque or even more so: interactors, entities, and so on. He also suggests you’d have interactor files named after use cases (e.g., create_order.rb, fill_order.rb, etc.); I would personally kill myself if I had to navigate a huge project in this structure. I get the idea here, but is the ability to quickly, sort of, discern what an application does worth making your developers’ lives a miserable existence during the other 99.9% of the project? Who would want to figure out which of the 500 use case files a particular piece lives in? Nobody, that’s who. This point was one part of the talk where he totally lost me in terms of what he was actually trying to say other than “I needed 5 more minutes of material, and this seems like a good place to start the rest of my arguments from.”

Assertion: Views should know nothing about the business objects. (Around 32:15)

Perhaps that’s his opinion on things, but if we’re going to appeal to MVC’s origins and go by standard, accepted definitions, that assertion is just patently false according to much of the authoritative MVC documentation. For example, in the paper where the MVC terminology is finalized, dated December 10, 1979, Reenskaug writes in reference to views and how they get or update data in models:

A view is attached to its model (or model part) and gets the data necessary for the presentation from the model by asking questions. It may also update the model by sending appropriate messages. All these questions and messages have to be in the terminology of the model, the view will therefore have to know the semantics of the attributes of the model it represents. (It may, for example, ask for the model’s identifier and expect an instance of Text, it may not assume that the model is of class Text.)

In the original vision of MVC, the model, view, and controller were separated but communicative. A view can request data (or even update a model (Heaven forbid!), an action which he derides at about 31:45) as needed for its functionality, so long as it doesn’t violate its role in the triad. Acting as if a view should be and always has been a “stupid piece of tiny code” that is simply fed flat data to render is false.
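
To put that in Ruby terms, here’s a little sketch of my own (not anything from the talk or the paper) of a view asking its model questions in the model’s vocabulary without caring what class the model actually is:

class OrderSummaryView
  def initialize(model)
    @model = model   # could be an Order, a decorator, a test double...
  end

  # The view asks questions in the model's own terminology; it never assumes
  # the model's class, only that it can answer these questions.
  def render
    "Order ##{@model.identifier}: #{@model.line_items.size} items, total #{@model.total_price}"
  end
end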

Assertion: You should have “hundreds” of views, not just one view. (Around 30:30)

Again, he harkens back to MVC’s roots and asserts that the Rails way of having one view (the page) is wrong, and according to the original plan, you should have hundreds of views, so MVC is a flawed model for doing things on the web. And again, he is incorrect.

Quoting from How to Use Model-View-Controller, a paper describing the original implementation of MVC in Smalltalk:

Views are designed to be nested. Most windows in fact involve at least two views, one nested inside the other. The outermost view, known as the topView is an instance of StandardSystemView or one of its subClasses.

In the original Smalltalk environments, having an overarching, top-level view for the M-V-C slice you were working with was common (and likely required in most situations). If we envision the page to be the same “object” as a window in the original implementation (which I believe is how it should be viewed), then the pattern fits quite well, especially since partials (and cells if we want to follow his assertion that all views should have an M-C piece to them) provide the same subview functionality. This fact is especially true if we get over the whole notion that the MVC pattern is a totally defined, prescribed Pattern™ that you must adhere to religiously and unwaveringly and instead take it for what it is, which is a loosely defined pattern that describes a way to reduce and manage complexity in systems (post coming about that attitude tomorrow…).
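
In Rails terms (my illustration, not his), the page template plays the role of the topView and the partials are the nested subviews:

<h1>Dashboard</h1>                                            <%# the page: this slice's "topView" %>

<%= render "dashboard/activity_feed", events: @events %>      <%# nested subview %>
<%= render "dashboard/account_summary", account: @account %>  <%# nested subview %>

<%# ...and a partial can render further partials, just as subviews nest subviews. %>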

Assertion: The point of writing tests first is to avoid coverage gaps (or just about anything else he said about TDD in the talk).

A lot of his tangent into TDD starting at around 58:00 was silly. First, he asserts that writing tests after the fact is “a waste of time.” Granted, you’re more likely to miss some coverage if you do only that, but who doesn’t write quite a few tests after the implementation? Lay down a solid, basic set of tests covering what you’re writing, then go back and cover the edge cases when you have a clearer picture of the logic and its interactions with other pieces of the system. It’s stupid to act as if writing any tests after the implementation is useless.

Secondly, he asserts that the reason everyone TDDs is to avoid coverage gaps. Now, I don’t know what sort of Magic Double Dream Hands TDD™ he’s doing, but the only “coverage” gains you’re making by TDDing are the kind that don’t matter (i.e., numbers, not quality). That’s great that you have 100% coverage, but are your tests actually robust? And, even further, if you’re requiring 100% coverage, are you over-testing things? (If I see a unit test for the existence of an attr_accessor or a constant value one more time, I will scream.) These questions don’t seem to faze him, however. TDDing leads to perfect coverage, which, of course, means impeccable quality tests! </sarcasm>
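
To be concrete, this is the kind of test I mean (a made-up example, obviously): it pads the coverage number without asserting anything about behavior, and it breaks the moment you touch the constant.

require "minitest/autorun"

class Invoice
  STATUSES = %w[draft sent paid].freeze
  attr_accessor :status
end

class InvoiceTest < Minitest::Test
  # Re-states the constant verbatim: 100% "coverage," zero protection.
  def test_statuses_constant
    assert_equal %w[draft sent paid], Invoice::STATUSES
  end

  # Tests that attr_accessor works, which is really just testing Ruby itself.
  def test_status_accessor_exists
    assert_respond_to Invoice.new, :status=
  end
end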

Assertion: MVC is meant to be used in the small, so Rails does it wrong. (Around 31:30)

This was probably the most frustrating point in the whole talk. He twists and contorts MVC’s role in a Rails application and then muddles the terms of architecture pattern and design pattern to forge a point that Rails usage of MVC is inherently flawed according to how the inventor intended the pattern to be used.

Yes, as he asserts, MVC is meant to be used “in the small” in the sense that it takes one slice of your application, separates its concerns, and then lets you independently manage the complexity of those concerns. He is correct in that it is not necessarily an architecture pattern. But his diagram of how a Rails app looks versus this architecture he’s discussing in the talk is simply disingenuous.

https://img.skitch.com/20111231-n9qswtxgswfmmb1qq1qgj6c75y.jpg

Not only does he conveniently rearrange the pieces so that it seems disjointed, he also completely pulls it out of the proper place in the architecture diagram to make it seem sloppier than it really is.

Even further, Rails uses MVC in the exact way that the original creator of the pattern intended it to be used. It doesn’t use MVC to handle the entire cycle of interaction in the application (e.g., it doesn’t treat the web as part of the MVC mechanism). When a request comes in (i.e., user input), the input is passed to the controller, which decides what should be done with it, how models should be updated, and which views should be rendered in response to that particular input. This is nearly exactly how it’s done in Smalltalk, exactly how it’s been done in nearly every other implementation of MVC, and this is exactly the “small” that it’s meant to be used in. It’s not being used to build the framework (i.e., your app isn’t treated as some weird model plugged into one giant MVC mechanism or something), it’s not used as the framework/application “architecture” (that’s actually something akin to a Model2 architecture pattern), and it’s not being shoved somewhere it doesn’t belong. It’s exactly where it’s supposed to be.
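
Concretely, that whole “small” cycle in a Rails app is just this (an illustrative controller, not pulled from any particular codebase):

class OrdersController < ApplicationController
  # User input (the request) arrives here...
  def create
    @order = Order.new(params[:order])   # ...the controller hands it to the model layer...

    if @order.save
      redirect_to @order                 # ...and picks the next view to render.
    else
      render :new
    end
  end
end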

In reality, Rails is fairly close to the architecture he discusses. It’s not as decoupled and interface driven as he’d like it to be, but that’s the real rub with this entire talk: he’s complaining about Rails “flaws” that aren’t part of its DNA. It’s like complaining about how a sweater isn’t a very good conversationalist. He ends the talk by harping on the fact that good architecture lets you defer decisions for as long as possible. But here’s a PROTIP: if you’re using Rails, you’ve already let someone else make a lot of decisions for you. That’s kind of the point since Rails is largely a curated set of Rack extensions that help you build web applications. They’ve decided your app layout. They’ve decided you’re going to be using MVC. They’ve decided you’re going to be piping things through a router of some sort and dispatching those requests to objects. All of these decisions and many, many more are already made. So, why waste the effort to whine and complain and hand wave that it’s bad, when you’re doing it to yourself? Pick a different framework or build your own, problem solved.

Seriously, wtf?

Those are just the major points I had issue with. There were several other minor things that grated me:

  • At 17:00, he makes the remark that only classes derived from ActiveRecord::Base go in app/models, not the business objects he’s describing. I know he was making a sarcastic remark, since that’s a general practice he disagrees with, but it (a) fell flat because a bunch of people yelled “models” when he asked the baiting question and (b) ignores the fairly well-known fact that these days you can put anything in there that’s a business object.
  • Around 25:00, he takes issue with “web stuff” like session IDs and so on getting into your business logic. The problem with that is that sometimes you need that stuff. There are many times when I need to know how to handle some logic because of a header or some other payload information from the web.
  • Around 33:00 he makes a snide argument that “you gotta know a ton of languages to write a web app.” Seriously? You really only have to know one programming language. If you want your pages to look decent, you need to know one markup language (and optionally a second programming language if you want some fanciness). What about desktop apps? You need to know a form designer or a markup like XAML/WPF, or how to manage frames and such in code (which isn’t any easier than just learning a stupid markup language). The criticism is weird because the best part of the web is that we have standard, interpretable languages/markup usable by clients that don’t require client knowledge of any specific programming language. I can write a web app in Java or Rails and the client, which could be a mobile phone, desktop computer, mainframe in the tundra of Russia, or whatever, doesn’t give a crap which one it is. That’s awesome, not annoying.
  • Around 40:00, someone mentions they have “too many tests.” He goes on to dismiss that attitude (2 or 3 times, actually) as a symptom of slow tests, and continues to point out that you should never get rid of tests, just write faster ones. I hope he’s kidding. I can tell stories of many apps I’ve inherited that were way over-tested. I’m talking 5,000 Cucumber scenarios for an internal, non-mission-critical application over-tested. I mean 22,000 unit tests for 30 models over-tested. I mean if I tweak the content of a constant array, the right tests fail, but I also have to clean up 5 other tests that simply tested the content of the array. That’s over-testing. I could rant about this for a while, so I’ll stop. :)

So, seriously, what happened? I think I’m just so disappointed because I’ve seen better stuff from him. How have people been pointing to this talk as a really important talk that everyone should watch? I get that he wants us to DECOUPLE ALL THE THINGS, but do we have to look past all this crap to get to a point he could have made much more directly and honestly (and in only about 10 minutes)? Or am I missing some grand overarching sarcasm that has placed me in the unenviable position of being part of the conference session equivalent of Punk’d?

Jan 2, 2012

November 2011

Introducing gem_git: tiny tools for working with gems' code via Git

We’ve all been there. You’re plowing through your app, in your groove, and then you notice an issue with a gem you’re using. In some cases you can work around it (or, if you’re desperate/crazy, just monkey patch over it and move on), but more often than not, you want to fork and fix it and/or send a pull request back to the original author. Likewise, I’ve been hankering to hack on some open source stuff lately, and while browsing GitHub for stuff to hack on is cool, more often I’m in the middle of something and think, “Hey, it would be cool if this gem did (x)!”

Tracking down a gem’s source usually isn’t terribly difficult, but it’s kind of annoying to go find the URL for the repository, pop that into my Terminal, clone it, and so on. The friction is even more irritating if after hacking a bit I decide to fork it and keep my changes separate. So I decided I’d make things a bit easier.

I hacked out gem_git. Right now it’s just a couple of gem commands to help with hacking on gems. The first one is gem clone, which hits the RubyGems API to find the gem’s source and clones it. So, if you want to clone paperclip:

$ gem clone paperclip
Cloning paperclip from https://github.com/thoughtbot/paperclip...
Cloning into paperclip...
remote: Counting objects: 5231, done.
remote: Compressing objects: 100% (2292/2292), done.
remote: Total 5231 (delta 3582), reused 4377 (delta 2822)
Receiving objects: 100% (5231/5231), 798.34 KiB | 1.25 MiB/s, done.
Resolving deltas: 100% (3582/3582), done.

The next one builds on that and lets you actually create a Github fork. So if I wanted to create a fork of pakyow, I’d do this:

$ gem fork pakyow
Forking pakyow from https://github.com/metabahn/pakyow...
Repository forked, now cloning...
Cloning into pakyow...
remote: Counting objects: 1109, done.
remote: Compressing objects: 100% (461/461), done.
remote: Total 1109 (delta 730), reused 977 (delta 598)
Receiving objects: 100% (1109/1109), 139.93 KiB | 230 KiB/s, done.
Resolving deltas: 100% (730/730), done.

Now I have a shiny fork of pakyow for my own hacking.
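
Under the hood, the clone command boils down to something like this (a simplified sketch of the idea rather than the exact code): ask the RubyGems API where the gem’s source lives, then shell out to git.

require "json"
require "net/http"

def clone_gem(name)
  # RubyGems exposes gem metadata, including source_code_uri/homepage_uri, as JSON.
  info = JSON.parse(Net::HTTP.get(URI("https://rubygems.org/api/v1/gems/#{name}.json")))
  url  = info["source_code_uri"] || info["homepage_uri"]
  abort "No source repository listed for #{name}" if url.nil? || url.empty?

  puts "Cloning #{name} from #{url}..."
  system("git", "clone", url, name) or abort "git clone failed"
end

clone_gem(ARGV[0] || abort("usage: gem_clone NAME"))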

Right now there are no tests and pretty poor error handling, but I’ll be hacking on it over the next few days to improve that sort of stuff. Please file any bugs you find on GitHub Issues and I’ll get around to them.

Also, be sure to clone/fork the gem and send me patches. That would be awesome. :)

Nov 21, 2011
Attention API Provider: How to make people using your API love you

At Arcturo, we’ve been working with a lot of remote APIs and big data lately. The more APIs from all over the web I work with, the more I realize how much some companies really get how to build an API that developers love and use all the time, but at the same time, I’m beginning to realize how little thought some teams really put into their API and how it will be used. I rant about this often to Ryan, so I thought I’d go ahead and write up a little list of things that API consumers would really appreciate if you’re providing an API.

Keep it consistent.

The number one most annoying thing I’ve encountered is inconsistent treatment of API calls. For example, let’s say I’m working with an API to a library. If I pull a book from the main collection and then a book from the reserve collection, both should contain the same citation information. If both of them, from my end, look like a book but contain different information (or annoyingly differently formatted information), that’s a big usability problem.

Much like we obsess over the user experience on the client side, investing time in your API’s user experience will yield big results in terms of adoption and engagement from developers. Think about what they will be doing and make the path from where they are to where they want to be as frictionless as possible. Giving me inconsistent data is a huge blocker to actually getting things done because not only am I wrestling with the data itself, I’m also trying to figure out what your assumptions about the data are and how they may affect something else I’m doing.
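
To make that concrete, here’s a made-up version of the problem: the “same” book coming back in two shapes from two calls, which forces every consumer to carry mapping code for one concept.

# The same book, fetched from two different calls of the same (hypothetical) API:
main_collection_book = {
  "id"           => 42,
  "title"        => "Refactoring",
  "author"       => "Martin Fowler",
  "published_on" => "1999-07-08"
}

reserve_collection_book = {
  "book_id" => "42",
  "name"    => "Refactoring",
  "authors" => ["Martin Fowler"],
  "year"    => 1999
}
# Two shapes for one concept means every client grows translation code (and
# guesses about your assumptions) before it can do anything useful.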

Make it general.

I can’t count how many times I’ve talked to teams who are building an API for internal use and are “just going to give people access to that.” While that turns out well sometimes, in general your internal, product- and case-specific logic will end up being annoying to someone attempting to adopt it.

I was working with an API once that kept returning boolean data in different ways among the different calls, and when I inquired as to why that was, I was told that their iPhone app interpreted the data one way, their internal web services another way, and the Ajax calls they were making a different way. Of course, I was building something that cross-cut all the calls used by these services, so it made my life incredibly difficult (I eventually, literally, built a BooleanParser class or some such silliness to handle all the different states). If you’re building an API for something internal, then keep it internal; just because you can offer an API based on some internal thing doesn’t mean you should!
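
For the curious, the shim was roughly this flavor of silliness (reconstructed from memory, not the original class): normalize every spelling of “true” the API happened to emit across its different callers.

module BooleanParser
  # Every representation of "true" observed across the API's different callers.
  TRUTHY = [true, 1, "1", "true", "TRUE", "yes", "Y"].freeze

  def self.parse(value)
    TRUTHY.include?(value)
  end
end

BooleanParser.parse("1")     # => true
BooleanParser.parse("false") # => false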

Keep your information up to date.

Please. Please. I’m begging you. The first thing you should do after changing anything in your API is to ask, “Has this been documented and/or covered in our client libraries?” Having your API documentation out of sync with your actual running API is a death sentence. This situation is becoming more and more of a problem as more sites add APIs as an afterthought rather than as core functionality. Go ahead and compare your average web application’s API documentation to someone like Twitter, who has made their API a core competency (though I could have offered them as a counter-example to that about 2 years ago…).

Even more important is to make sure that the code developers are running and working with jibes with what you’ve got running on your servers. If the documentation is wrong, that’s not such a huge deal if I can dig around in your client’s source and figure out what changed. But if the client is wrong and the documentation is right (or both are wrong), you’re going to drive me to Bedlam before I figure out you haven’t updated something. Keeping your API releases synced with documentation and client releases will go a LONG way toward keeping your API users very happy.

Don’t gimp it.

If you offer an API feature, offer it to your clients evenly. I understand that “premium API features” and all that jazz make total business sense. I’m all about people monetizing their APIs as much as it makes sense. Where I think you, as a provider, cross a line is in offering certain APIs only to your in-house, product-bound clients and no others. Not only is it sort of gratingly protectionist to the point of turning off a lot of potential developers, it just doesn’t make any sense. If you’re going to arbitrarily limit the degree to which I can engage with your product via the API, you obviously don’t want my competition or my contribution to your ecosystem very badly.

I understand limits in terms of not allowing blatant scraping of data or something that’s core to the viability of your business, but disabling convenient features and neat additions in the name of arbitrary limits is a problem. Twitter, while being fantastically open in a lot of respects, really put a bad taste in developers’ mouths with the whole OAuth/XAuth split for example. Limits like that can easily build up a lot of bad press and kill developer goodwill for very little benefit.

Be prepared for customer service.

As API usage grows, developers will inevitably have questions, requests, problems, and want to chat about their favorite beers from this really cool microbrewery in South Carolina that you just have to try when you get a chance ZOMG. Many API providers simply tell a couple of their developers to answer all of these inquiries just sort of as they have time between all the Real Important Work™. E-mails and support tickets pile up, people get upset, and things explode. No one wants that.

Instead, be prepared to offer actual customer support. Even if it’s one developer whose primary workload is flipped (answer support inquiries first, fix bugs and add features second), then that’s better than treating your existing developers as second class to building stuff. Your developers will thank you and sing the praises of your API team all over. Look no further than Twilio to see this in action; these guys really get it in terms of working with developers directly to make sure they succeed.

TL;DR

Your API developers will appreciate you at least considering their sanity in how you build and operate your API. Thanks.

Nov 18, 2011 · 1 note

July 2011

The Hoedown 2011 Experience, Part 1: Lodging

Well, it’s that time of year again: another Ruby Hoedown coming at you. Registration is open, so head over and register now before we sell out (we’re not THAT far from doing it…).

When I started planning this year, I honestly didn’t know if I wanted to do it. This is the fifth year in a row that I’ve done this conference (and the seventh conference I’ve had a hand in planning over the past few years), and it had become a bit of rote repetition for me. So, like a few years ago when I got bored with charging people, I decided to change it up again. I started pondering a few things. First, what makes a great conference? Secondly, what makes a great regional conference? These aren’t easy questions to answer, especially for someone so close to the experience as I’ve been for the past few years. So this year I sat out all conferences save for RailsConf (and that was because I was speaking) and MagicRuby (for obvious reasons). This distance gave me some time to think about it a bit, and I’ve come up with a couple of core things (if you don’t really give a crap about what I think about conferences in general, you can slide down to the part about the hotels).

First, it needs to be an experience. Not just an experience within itself (i.e., “I went to RailsConf and it was what it was.”), but a truly memory-creating, impact-enhancing, honest-to-goodness experience for attendees. Too many regionals basically try to be a mini-(Ruby|Rails)Conf. I was guilty of falling into that for sure: try to shove as many people in as possible, give them what you expect in a conference (a badge, a t-shirt, a piece of paper telling them what they’ll hear over the next 2-3 days), and hope that you can at least muster a good review from the attendees. I won’t point fingers, because I doubt the organizers who I think have fallen into this trap even realize it as a problem, and that’s OK. If that’s how their conferences want to roll and it’s successful for them, that’s awesome. I’m just tired of trying to do that.

So instead, this year we’re trying to create a really memorable experience for attendees. The one conference that I think nailed this was Ruby Fringe. I attended (and spoke) there, and from the very start, everything had a very nice handcrafted approach. The organizers obviously put a lot of thought into how things progressed and the things that their attendees would see and do during the conference. That sort of thoughtfulness really matters because people tweeted, blogged, and talked about that conference for years after (and still do at times). We’re trying to put that same degree of thoughtfulness into the conference this year, and we hope it shows.

Second, I think regional conferences especially have to play up the “flavor” of their region. I’m always sort of disappointed when I go to a regional conference and I’m not wrapped in the culture and experiences of the region it’s representing. I’ve never put much effort into that. The closest I’ve come has been the general “feel” of the media (effusing the “dirty south” aesthetic that’s popular in a lot of Southern art these days) and hosting at the OpryLand one year. One conference that I think nails this year after year (I’ve only been once but heard from others how great it is at this) is GoRuCo. From the badge (which is handwritten by a local graffiti artist) to the parties (which are held in totally NYC locales), everything screams New York, and it’s awesome.

We’re trying to capture that this year. These few blog posts (today, tomorrow, and Friday) will lay out some of the things we have in store for everyone this year. We’re going FULL NASHVILLE, and everyone knows you never go FULL NASHVILLE. But we’re doing it this year: from the food to the venue to the music (that’s right!), it’s going to steep you in the South like we never have before. So, let’s take a look at what we have in store so far…

The Hotels

So in the past, we’ve usually had the conference at a hotel or at the least picked a hotel as “the” conference hotel. This year we’re sort of doing both.

First, the venue has lodging on-site that’s nice and super affordable. Like, $40 a night and you won’t get accosted by a hooker while staying there nice and affordable. Now, the arrangements are sort of spartan (see photo below) and they were previously a dorm, so the bathroom sharing situation might not be ideal for everyone.

But, they were a freaking dorm. What does that mean? Late night hackfests? I think so. Camaraderie not seen since your college days? You betcha. I’m going to place a Dean of Nerds in the dorms to plan activities and answer any questions you may have about the conference (I can’t do this but if you’d like to volunteer, ping me on Twitter or e-mail). I think that people will not only enjoy the price point, but it’ll create a great environment for cool stuff to happen.

For those of you to whom that doesn’t appeal, I plan on staying at the nearby Hilton Garden Inn. It’s only about half a mile away, only $89 a night, and should prove to be a little more luxurious.

Beyond that, there are more luxurious options if you’d prefer to travel just a bit further. Within a mile or so, there’s a super nice Loews hotel and a great boutique hotel named Hutton Hotel. My wife and I stayed there about 2 weeks after they opened and it was really chic and interesting (they used sustainable materials in the construction, so you have bamboo floors, etc.). There are other hotels near that area, too, such as a Doubletree or Hotel Indigo, if those are your thing.

So there you have it. That’s our lodging strategy for this year. Stay tuned for tomorrow’s post on the food and venue!

Jul 14, 2011

June 2011

Authoring eBooks is on sale right now

I dropped the price of my eBook on writing eBooks, Authoring eBooks, to its lowest price ever, $19, today. Not sure how long I’ll keep it there, so grab it while it lasts!

Jun 6, 2011

May 2011

RubyRescue.tv - Answering Ruby questions live and on the air every Tuesday (submit a question and win $50 to Amazon)

If you don’t follow me on Twitter and hang on every word I tweet, you may not be aware of RubyRescue.tv (ignore the inaugural episode language; we were too lazy this week to change it ;)). We did an episode last week that went awesomely well.

So awesome, in fact, that we want to make sure this one is super awesome. Submit your questions today and tomorrow using one of the methods on the website and you could win a $50 Amazon Gift Card. We’ll announce the winner on air tomorrow at 2p.m. Eastern.

May 9, 2011

March 2011

The Compleat Rubyist is coming to Boston real soon

If you haven’t seen it, the training set that I do with David A. Black and Gregory Brown is called The Compleat Rubyist. Don’t let the word “training” conjure up images of boring lectures and spurious example problems that take you 30 seconds to complete. It’s an intimate, interactive experience wherein we do lecture, but we also provoke discussions, debate amongst ourselves, and give you a one-on-one tour through some of the topics we discuss (which are not presented at a novice level for the most part). No matter your skill level, you can get something out of it because of the way we scale the content.

So back to my point: we’re coming to Boston soon, and we’d love to see you there. Head over to the signup page on http://thecompleatrubyist.com and use discount code FRIENDINBOSTON to get $50 off.

Hope to see you there!

Mar 28, 2011
FREAK OUT (or, I quit my job and what I'm doing next)

It’s my birthday today. I am now 26, and I’ve decided to have a quarter-life crisis.

Most people have these a little earlier in life, but I was too busy or missed out on some of the typical milestone triggers. I don’t drink, so I missed out on my first epic hangover at 21. I haven’t graduated college yet, so I missed out on that at 22. I was already married for 5 years when 25 rolled around, the age when some people decide they are sick of being alone in their terrible, worthless existence and get married. If I wait much longer, the moniker “quarter-life” probably won’t make much sense, so I’ve decided this year is the year.

This birthday also marks the end of my first decade of employment in the software industry (that’s weird to say I’ve done anything for a decade straight!). It’s the only job I’ve ever had. I fudged my age to sign up for one of those remote contractor websites when I was 16, and I’ve been hacking code for money ever since.

So what does that mean? Well, I quit my job, plan on moving to Belize to open a writer’s commune/coffee shop/moped repair shop, and have adopted 13 Malaysian children that I will teach to tumble and spin plates in an effort to create “Cirque du Soleil: Belize” some day.

You’ll be glad to know I haven’t given up sarcasm just yet.

I did actually quit my job, though. March 25, 2011 will be my last day working at Intridea, and if all goes according to plan, my last day “working at” anywhere (aside: working at Intridea is pretty neat; if you’re looking for a job, they’re hiring). But I’ve decided that I’m going to quit talking about doing what I’ve always wanted to do, and instead, actually, you know…do it.

I’ve been working in and with consultancies for years now. I’ve had some great experiences (and some not so great), and every time I leave one, I come away with ideas about what a great business can look like and what could really be done better. I’ve also developed some pretty wild ideas, but nowhere I’ve worked has been willing to experiment enough to test them out.

So, I’m doing two things. I’m spinning up my own business with a good friend (Ryan Waldron, excellent Rails developer and killer biz dude), and we’re going to change the world. OK, not actually, but we are going to try some interesting business practices and work processes.

We’re named Arcturo, and you can see our quick cardfolio site I put together if you want to contact us about work (and you know you do); we’re currently open to pretty much whatever sort of opportunity you have available (working directly for clients, working on your product, working with your consultancy, and so on). Our full site is going to be awesome, but I’m just waiting on our illustrator (Steve Thomas) to finish up the graphics.

Secondly, I’m starting a blog to talk about our experiences/experiments and others like them. The blog is The Business Hypothesis. I’m going to blog about what we’re doing at Arcturo and what others are doing in their business.

So, yes. Keep an eye out. I have some other fun things planned, but I don’t want to talk about them until I know, you know, that I can actually do them. :)

Mar 18, 2011

February 2011

My books are on sale today!

Authoring eBooks is on sale for $15, and it will finally reach its full $49 price this weekend.

The Rails Upgrade Handbook is $6, and it will return to its full price of $12 this weekend.

These sales end TODAY, so you’d better snap them up if you want them.

Feb 25, 2011

January 2011

MagicRuby is nearly full and the group room rate goes away TODAY

MagicRuby, the FREE Ruby conference at Walt Disney World Resort® on February 4-5, 2011, is nearly full! If you want to hang out with over 300 other Rubyists and see talks by Chad Fowler, Dave Thomas, Kyle Neath (Github), Gregg Pollack (Ruby5, Envy Labs, Rails for Zombies), and a host of other great speakers, all at a convention center literally steps from the Magic Kingdom park, then you’d better run over to http://magic-ruby.com and grab a ticket right now. They’re going very fast!

Also, if you plan to stay at the five-star resort where the conference is being held (for about 60% off the normal price), you need to book your room TODAY. You can book your room using the number from the reservations link on the website. Usually they allow you to pay for one night up front and pay the balance when you arrive (I don’t know if that holds true with group reservations or not…). In any event, the rate expires today, so get those reservations in!

Hope to see you in February!

Jan 3, 2011
#magicruby

December 2010

Holy crap! I made $40,000 this year with my eBook. And you (probably) can, too.

Caution: I am going to try to sell you something at the end of this post. If that offends you, skip the last few paragraphs. :)

When Rails 3 RC1 hit earlier this year, I also released my Rails 3 upgrade guide in its final form. I had been shopping it to a few people since early alphas/betas of Rails 3, but I finally let it out of the bag in early January. It’s been about a year since I first released it, so I thought it’d be neat to look at the sales numbers and see how far I’ve come since then. So, I popped the console open on the Rails app and ran a couple of queries. “That can’t be right,” I thought. “I made how much?”

Holy crap. I’ve made $40,967.

How does that break down? Well, here are the numbers:

  • Direct sales - $34,017
  • Peepcode sales - $5,950
  • Other stuff - about $1,000

I’ve been overwhelmed by the purchases and the great comments I’ve gotten about the book. Plainly, I was on to something here. The sales from both avenues have been excellent (in my opinion), and the “other stuff” constitutes things like speaking gigs and so on that I’ve gotten as a direct result of writing this book.

You can see from this graph that sales have actually been fairly consistent month-to-month. They ebb and flow, of course, but there haven’t been any major crashes.

It should stay like that for some time since I think a lot of developers are just now actually getting confident enough in Ruby > 1.8.6 and Rails 3 to really make the leap.

So, the question is: how did I do it? Is the book really that good? Or is it something else at work? There was a recognition factor at play here for sure given my past work (my 2 books previous to this one, open source, speaking, and so on), but there are a few other things at play here I think…

  • Timeliness The book hit right when the Rails 3 hype was starting to build substantially. I managed to release the book early enough to catch everyone from early adopters to Johnny (or Jilly) Come-lately.
  • Niche target It’s a very concentrated topic, but it’s a topic with a huge audience. There is a boatload of Rails 2 code out there now, and many of those projects will want to migrate to Rails 3 at some point in the near future. That means they’ll probably want some help…
  • Partnerships I built some great affiliate and sales partnerships with places like Peepcode. They didn’t constitute the bulk of my sales, but they certainly didn’t hurt.
  • Audience monopoly No one else had even released a significant blog post at the time when I started writing about it, so the market was wide open for a guide like it. Since then, many blog posts have come out, but none of them are as cohesive or exhaustive. Even so, they’re still excellent contributions to the literature on the subject and certainly have taught me a few things that I did not include in the book.

There are, of course, more factors at play, but I think those are the big ones. I’ll be blogging more about this stuff soon because I think there are some interesting lessons.

But enough about me.

Would you like to make some excellent passive income while getting a few fringe benefits while you’re at it (speaking gigs, consulting jobs, and so on)? *dons Matthew Lesko jacket* I CAN HELP YOU WITH THAT! I’ve been toiling on an eBook project for almost a year now; I’ve written most of it, rewritten it, tore most of it out, changed the format, and written it again. I think I finally have something good.

I’ve titled it Authoring eBooks, a thorough eBook guide to (you guessed it) writing and marketing eBooks. It takes you through the writing process from concept development to structuring your outline to actually writing the book and into the marketing and sales process. It’s structured as a discrete set of essays so you can pick and choose the topics that are relevant to you. Some of them are short thoughts on a particular topic; others are multi-page treatises on an in-depth topic. You can check out the sample pages to get a feel for how it’s written, or jump over to the informational site to get more discussion of what’s inside.

It currently costs $29 for over 100 pages of information with access to future bonus content (topics in the hopper so far: running a successful affiliate program, A/B testing, and more). I’m also in the process of setting up a private discussion forum for readers. The price will be jumping up to $49 for the full package after the New Year, so if you want it cheap, you’d better grab it now! Unlike my other book project, you can pay via PayPal or Google Checkout thanks to e-Junkie!

I hope to see tons of eBooks popping up as a result of readers using this information. I also plan on starting an eBook “coaching” program (crappy, overloaded, market-droid term, but it’s the best I have right now) where I will work with a small group for 6 weeks to at least begin writing a tech book (it doesn’t have to be an eBook); it will include discussions, worksheets, one-on-one consulting, and so on. If you’d like more information on that, sign up on this form and I’ll ping you when it’s ready.

(P.S. If you’re not into writing but still want to make some cash, I have an affiliate program also. It pays 30%, so right now you’d make about $10 per copy sold and about $15 a copy after the New Year.)

Dec 16, 2010 · 4 notes
#ebooks #money #affiliates #writing

November 2010

Road to MagicRuby: Getting Here

I’ve heard from a few people who’d really like to make it to MagicRuby, but think that even though the conference is free, the travel might be killer. I traveled a lot to Orlando before moving here and travel to and fro quite a bit now, so I thought it’d be a great idea to put together a little guide for those who’d like to come join us in February.

TLDR: You can do it affordably.

This post is about getting here, the next one is about what to do when you’re here, and the last will be about things near-but-not-in Orlando that are cool if you have a few extra days.

Traveling to Orlando

If you’re not driving, there are a few ways to get to Orlando (some you may not have thought of…).

Fly

Obviously, you can fly. But what’s the best deal? Without question, if you can, catch one of these airlines domestically (ranked by usual price from lowest to highest):

  • AirTran
  • Southwest
  • US Airways
  • Delta
  • Frontier

AirTran is usually the cheapest by far; I believe that Orlando is a fairly large hub for them, so they tend to have a lot of routes for really cheap. If you do fly AirTran, I recommend you spend the extra $20 and get an exit row seat. If you’re planning on business class, they don’t typically fill it up, so if you wait until about 24 hours before and check in online, you can usually bag a biz class seat for about $50 rather than the $300 they want to add on if you purchase it with your ticket.

If you’re international, Orlando actually has a ton of good airlines to choose from. For the UK, Virgin Atlantic has a route here, as do Aer Lingus and British Airways. We also have routes from AirCanada, AeroMexico, AirFrance, and a few more. You can check out the whole list of airlines (major ones; there are more smaller routes that aren’t listed there if I recall correctly), on this page.

Catch a bus

Not feeling the whole scan your junk/TSA Grope of Doom™ scenario? No problem. You have a few other options.

Catching a bus is one option. Of course, Greyhound probably goes from your city, but those tend to be sort of grimy and unpleasant to travel on (even if they are cheap). If you don’t mind the possibility of urination, crazy drunken homeless men attempting to grope you, arguments about smoking around babies, and other tomfoolery (these are all incidents my wife or I have experienced, by the way), then grab a Greyhound. They will be much cheaper than other bus lines. If that doesn’t sound like a wild party to you (and trust me, it is), there are a lot of other options, but you’ll have to search for your city. I know NYC has service via GotoBus, but I don’t know who else has buses. Many of these smaller bus lines are actually quite nice; my brother-in-law recently took a trip from NYC to D.C. to meet me during a conference via Megabus (who unfortunately does not go to Orlando) and claimed it was really clean and cushy. Your mileage may vary, though.

Take a train

My preferred way to travel sans aeroplane is definitely the train (Amtrak being the only option). Orlando has a nice train station and Amtrak offers robust service here. You have two options:

  • The Autotrain - Pack up your car and yourself and roll past all the sucky driving on the way down here. You put your car on the train and they take you here. Nice if you don’t want to rent a car, but I don’t know how expensive this service is. You can find out more on the route page.
  • The Silver Service / Palmetto line - This is a standard train line that goes from NYC all the way to Miami, stopping in Orlando. If you aren’t on the line, you can catch it in D.C. or NYC, so you could transfer from another line. Get more info here.

Both lines offer full rooming services, which is highly recommended. If you can sleep in the coach area, by all means go for it. But it’s not too much more for a roomette or bedroom and you get a nice bed, a private restroom (no shower in the roomette), and, best of all, your meals are included. If you elect to eat a steak every meal as I do when traveling by train, then you actually save money by getting the room (depending on the length of your route).

Staying in Orlando

So once you’re here, where should you stay?

The conference hotel

Of course I’d prefer you stay at the conference hotel. A lot of attendees will be there as it is. It’s ridiculously cheap for the hotel (like 55% off or something), crazy nice (one of the nicest hotels I’ve stayed in and I stay in a lot of hotels), and will be really convenient for you while you’re at the conference. It’s also extremely close to the Magic Kingdom. How close? Check out this map; the conference hotel is that resort in the middle right (The Contemporary) and the Magic Kingdom is obviously in the middle left. It’s so close that I actually often park at The Contemporary and walk over to the Magic Kingdom because its parking lot is closer to the park than the one dedicated to the Magic Kingdom. You can get rooms that overlook the park so that you can relax in the comfort of your room during the fireworks show for instance and get a killer view.

It’s only $189 a night and you would be doing me a giant favor by booking a room so I don’t have to pay for it. We’re currently still pretty far away from our room commitment, but we have until January 3rd, 2011 to hit it, so I’m pretty sure we’ll make it happen.

Cheaper on-property options

If that’s still too much of a stretch for you, Disney offers some more affordable options. Given the urban sprawl-y nature of Orlando (they built the city to be that way so it could accommodate the crazy amounts of tourists guaranteed by the presence of the Disney parks), I highly suggest you stay somewhere within Disney World. Otherwise you’ll just be asking for a world of bus/taxi/transportation induced pain (unless you rent a car, but then you aren’t saving much money are you? :)). So what’s available?

  • Moderate resorts - Resorts in this tier include Coronado Springs, Port Orleans, and a couple of others. I very much suggest Coronado Springs if you go this route. It is very recently renovated and the decor is almost on-par with the more expensive resorts. Room rates in this tier will run slightly lower than the conference price.
  • Budget resorts - These resorts include the All-Star Movies/Music/Sports resorts and Pop Century Resort. Don’t let the word budget turn you off: these places are actually pretty nice for the price. They won’t be as luxurious as the other resorts, but they will be a place to sleep that’s really clean, has great service, and has bus links to all the parks and Disney stuff.

Most of the conference and surrounding time is in the “value season” so room rates should be rather affordable. The only catch with these resorts is that you don’t have Monorail access, which is sort of a bummer. With the Monorail (which runs through the main concourse/lobby of the conference hotel…so cool!) you can just hop on and go to Magic Kingdom and Epcot really easily. The buses from the other resorts aren’t too big of a hassle really (only about a 10-15 minute wait max and travel times aren’t terribly bad), but you can easily get spoiled by the easy access of the Monorail.

Off-Disney options

If you stay off-property, you have a ton of options. There are star ratings etc. to look for, but if you want to know what’s really good and really close to Disney World, you only need to look for one thing: The Disney Good Neighbor Hotel rating. This tells you a few important things:

  • It’s close to Disney World.
  • It’s super nice. Disney has really high standards of quality and customer service that they check constantly. You won’t find a crappy hotel in this program.
  • They offer free shuttle services to the Disney property.

I would suggest picking one of these hotels if you go off-property. I know that the Doubletree and Hilton in Lake Buena Vista are nice, as is the Radisson near Downtown Disney. Outside of those, you’re on your own to discern what’s good from TripAdvisor and Yelp.

Soooo…

When are you booking? :) Get your conference ticket now!

If you have any more questions about travel, please do ping me via e-mail or Twitter. I’d be glad to answer anything (or find someone who can).

Nov 29, 2010 · 5 notes
#magicruby #travel #conferences #cheap

October 2010

Pay no attention to the code behind the curtain: the tech behind tldr.it

tl;dr: Thanks for helping me win the Solo Division of the Rails Rumble! Also, the tech behind this is pretty sweet.

My last post talked a bit about the story behind the application, but this time, I want to give you guys and gals a little bit of detail on the tech behind the application.

The app

The application is a Rails 3.0.1 application. It has 3 controllers and 2 models, with almost 1000 lines of application code. I’m making use of about 10 third-party gems, mostly for fetching and parsing tasks.

The general architecture of the app centers around two distinct pieces. The Rails application really just kind of accepts and displays data: the real magic happens in the background jobs (currently powered by delayed_job).

The background jobs

Once a job is fired off, a job runner grabs it and fetches the content. If it’s a feed, then the feed is fetched by Feedzirra, summarized down (more details on this in a bit), and stored back in the record. I persist all 3 versions of the feed along with the original content. I did that because I intended to show the length differences on the page (e.g., “This feed is 70% shorter!”), but I didn’t have time.
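
To give you a feel for the shape of it, here’s a rough sketch of the feed branch; the model, field, and job names are all made up, and the real tldr.it code is more involved:

# delayed_job will run anything that responds to #perform.
class SummarizeFeedJob < Struct.new(:record_id)
  def perform
    record = Summary.find(record_id)                  # "Summary" is a hypothetical model
    feed   = Feedzirra::Feed.fetch_and_parse(record.url)
    text   = feed.entries.map { |entry| entry.content || entry.summary }.join("\n\n")
    record.update_attributes(:original_content => text)
    # ...the three summarized versions get generated and stored here as well
  end
end

# queued when the user submits a feed URL
Delayed::Job.enqueue SummarizeFeedJob.new(record.id)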

If it’s a web page, then the content is fetched by RestClient. I then use Nokogiri to extract the main content out of the page. The algorithm I’m using is pretty complex and clever, but since it’s sort of half the “secret sauce” of tldr.it, I’m not going to describe it in detail. I will say that it uses some things from my own research, some refinements from the Readability bookmarklet’s techniques, and some HTML-specific (and HTML5-specific) additions. It’s nowhere near perfect, but then again I did build it in 48 hours. :)
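
Here’s a deliberately naive stand-in for that fetch-and-extract step, just to show the moving parts (the real algorithm is staying secret, and these names are made up):

html = RestClient.get(record.url)
doc  = Nokogiri::HTML(html)
doc.css('script, style, nav, header, footer').remove   # strip the obvious chrome
content = doc.css('p').map(&:text).select { |t| t.strip.length > 80 }.join("\n\n")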

The summarizer

Next, it’s passed to the summarizer. The summarizer is largely powered by libots, an open-source word-frequency powered text summarizer. This library works quite well, but I hit the obstacle that it was written in C. I had planned to just pipe out to its command line utility, but its utility doesn’t take input from stdin very well (and by not very well, I mean it segfaulted every time). So, at that point I wanted to just write a Ruby extension or use ffi. Neither of those approaches worked out (good C programmer, I am not), so I just opted to write my own C shell app to pipe to and get info back from. The way the summarizer works is to use the Ruby standard library’s Shell class (I bet you’ve never heard of that one!) to pipe out the text content of the page (with some smart additions and such from my code) to my C summarizer with the summarization ratio as an argument to the utility. It captures the output on stdout (if there’s an error for any reason like encoding, then it just returns blank) and places that back in the record. I do this 3 times for each web page and each feed.
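
Here’s a sketch of that pipe-out step. The app really does use the stdlib Shell class as described; Open3 is shown here as a more familiar stand-in, and the ./bin/summarize utility name is made up:

require 'open3'

def summarize(text, ratio)
  output, status = Open3.capture2("./bin/summarize #{ratio}", :stdin_data => text)
  status.success? ? output : ""   # any failure (encoding issues, segfault) falls back to blank
end

short  = summarize(extracted_text, 15)   # extracted_text comes from the step above
medium = summarize(extracted_text, 30)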

Once the summarized text is captured, then the record is updated by the background job and the action that’s polled by the user’s browser returns the right JSON and HTML to update the user’s view to show that it’s been fetched.
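
A hypothetical sketch of what that polled action might look like (controller, model, and partial names are all made up):

def status
  summary = Summary.find(params[:id])
  if summary.summarized?
    html = render_to_string(:partial => 'summary', :locals => { :summary => summary })
    render :json => { :done => true, :html => html }
  else
    render :json => { :done => false }
  end
end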

Places to improve

I want to replace libots with a library of my own creation. I wanted to do this during the Rumble (or at least enhance ots’s output with it), but I didn’t have time at all. I’m still not totally sure which algorithm I’m going to use, but word frequency doesn’t work the best in every situation. I also need to refine the content extraction algorithm, working on more special-case parsers (currently there’s only one for NYTimes and Blogspot blogs). I see why many of the URLs people try aren’t working, but I didn’t have a chance to add a second-pass algorithm for when we miss the content on the first run. I also want to make the extraction content-aware, since right now it just does some analysis on page structure and loose content detection.

Anyhow, that’s the technical background. Feel free to ask any questions; I’ll answer to the best of my ability.

Oct 23, 2010
#rails #ruby #delayed_job #libots
From tl;dr to Techcrunch: my Rumble app's story

tl;dr - Go vote for tldr.it on http://railsrumble.com!

(That’s the screencast I was going to embed where the “How it works” section now sits on the page, but I didn’t have time. View it on Vimeo, if you’d like.)

So, wow, it’s really been a wild ride for my Rumble app so far. I thought maybe I’d share a little bit about the app’s story today, and then maybe share a bit about how it works in another entry tomorrow-ish.

Idea cometh

I never really used RSS until last year. I knew what it was, why people used it, and so on, but I just figured I’d rather (a) read things in a visually attractive environment and (b) give ad revenue to the sites that I really like reading. While I still hold the latter sentiment, I discovered two things. As I expanded my reading list and started reading more and more blogs and news sites every day, it became really time-consuming to go to each site individually, and only about half of the sites I read are actually attractive (the other half are really ugly). I also realized that most of the feeds that are worth my time and effort to support actually put ads in their feeds. So my RSS usage ramped up.

After using RSS for a while, I realized that while I could get to more information faster, a lot of it (a) wasn’t worth my time or (b) said a little that was excellent but wrapped it in a lot of crap. After suffering with the annoyance for a while, I deleted a lot of feeds from my list, but that didn’t work for me either, since I really did like reading some of the ones whose signal-to-noise ratio wasn’t quite up to my standards. So it dawned on me: there must be a way to dig out what’s good. If you could summarize what’s in a feed, you could see whether reading the full article is worth the effort. I formulated some ideas about how it would work, played with a few names, and drew up some plans.

And then that sat there for months.

But one day when talking to a good friend of mine about news and RSS and such, I shared the idea I had. He thought it was such a good idea, he instantly bought the domain for me and threatened to lock me in his basement until I’d finished building it. Of course, I claimed I’d try to do it soon, got wrapped up in the holidays and changing jobs, and then forgot about it.

Until the Rumble this year.

The Rumble

So the Rails Rumble rolled around this year, and this idea seemed prime for some 48-hour construction action. I lost most of my notes from my previous planning, so I drew up new ones and phased everything out for 48 hours. I made a few choices so that I could definitely fit it into two days:

  • Decided I would start with RSS feeds and add URLs if I had time.
  • Picked libots for the heavy lifting on summarization unless I had time to write my own stuff.
  • Picked Apache and Passenger because a StackScript existed for it already.
  • Picked MySQL because it was what I was used to.
  • Picked Feedzirra for feed fetching because it was easy and worked well.
  • I’d leave optimization (caching, DB indexes, etc.) for last because this is a Rumble app. It’s not like you get tons of traffic, right? RIGHT?

Some of these decisions were great (Feedzirra being a prime example of that), some of them ended up changing (I did have time for URLs), and some of them came back to bite me (more on that later).

So, I built it. I did not spend the whole 48 hours glued to my keyboard, but I spent a significant portion of my weekend working on it. I think most people look at the app and think “gee, that’s really simple” but they don’t see all the code that goes into the content extraction, the preparations for summarization, the background processing, and so on. So the front end code is quite simple, but the backend is pretty complex (I’ll discuss it more in another post).

So I launched the app about 5 minutes before the end of the competition. I’d have launched it sooner, but my dj runners were giving me fits and required some extra fiddling.

What does it do?

The app essentially summarizes text. So, for example, let’s say you’re reading this Guardian story on DADT. It’s fairly long and something I’ve read a lot of articles on already. It’d be great if I could figure out if this says anything new without having to waste time reading a ton of text. If you plug it in, you’ll get this back as the medium summary:

The lifting of a ban on gays serving openly in the US military proved shortlived after a federal appeals court ruled late on Wednesday in favour of granting the Obama administration a temporary delay.

Although President Barack Obama favours an end to the ‘don’t ask, don’t tell’ policy in which gays could serve in the military, as long as their sexual orientation remained secret, his justice department went to the courts on Wednesday seeking a temporary delay to allow the military time to prepare for the end of the gay ban, and, possibly, allow Congress to legislate.

OK, not bad, but nothing new. Skipped!

Now, imagine you can do that with almost any page or RSS feed. With tldr.it, you can. It summarizes the text of articles down to 15%-30% of the original length. Of course, the algorithms aren’t perfect (they were built in 48 hours after all and word frequency isn’t the best summarization algorithm), but that’s the idea that the app hopes to build on.

Aftermath

I posted it on Hacker News (like any good Rumble competitor) and tweeted it. A few people retweeted it, but then I noticed that Robert Scoble (@scobleizer) favorited my tweet about it. “Hm. Maybe people will be interested in it,” I thought. That was quickly followed by the app hitting the front page of Hacker News, and my first real technical problem.

You see, we were given a 512MB Linode instance for use with the Rumble. Serving up normal traffic and traffic from judging wouldn’t even cause that sort of box to break a sweat. Unfortunately, as it started getting tweeted around and featured in different places, Passenger and my background workers chewed through the RAM quite quickly, causing paging and serving things really, really slowly. Once I adjusted those settings, it started hitting the CPU boundary instead because so much stress was being put on so few workers. So I decided to deal with the paging until I could get a bigger box set up. I set up a 1024MB instance, copied the app over, and pointed the main domain at it. Crisis averted for the time being.

Then @nickbilton of the NYT was nice enough to tweet about it, causing another flood of traffic to hit the box. It actually withstood that storm fairly well, but the way it handled it made me hope it didn’t get hit with anything bigger.

Then I got an e-mail from TechCrunch. “Great,” I thought. “That’ll be excellent press, especially when voting opens.” So I answered the questions and asked that they please, please hold the story for another day so I could get my box up to speed and fix my stack.

They didn’t.

So, when the TechCrunch traffic came, the box went into total fail mode. At first, the load overwhelmed it. Then it seemed to just give up (no resources were being taxed but Apache wasn’t serving; a restart of the box did nothing to fix this). Unfortunately, I was at a Florida Creatives/ORUG meetup that night, so I couldn’t do anything about it for a while. Once I got home and realized Apache was being an epic pile of fail, a fellow Rumbler (@vertis) was nice enough to jump in and get nginx going for me in about 20 minutes (it would’ve taken me a few hours easily). It started serving requests (fast, might I add), and all was well.

The app hit Techcrunch JP the next morning without even blinking, and it’s served up nicely ever since.

From there, it’s been featured in a lot of places…

  • The hot list on Delicious
  • PSFK featured it on their home page
  • It lived a little while on MetaFilter (unfortunately that was when I was having server trouble, so they killed the story)
  • news.com.au featured it on their Facebook page
  • A lot of blogs and such have written posts on it
  • Probably a few more I’m missing (and so sorry if I did miss it!)

The attention has been incredible and humbling. It’s a little hack for me, and it seems a lot of people are interested in it.

The stats

So, what are the numbers? Press is great, but if you aren’t getting users, it doesn’t matter, right?

  • 600,000+ requests served through the app since switching to nginx (about 1,000,000 since I released the app). These include AJAX requests and feed requests.
  • 120,000 page views since switching to nginx (about 160,000 since I launched the app)
  • Over 10,000 URLs summarized
  • Over 3,000 feeds summarized
  • Three venture capitalist inquiries

And they’re still growing. The attention and usage has been very exciting, and I hope moving forward it continues.

What’s next?

I don’t know what next steps are exactly. I haven’t quite decided. Intridea has signaled that they would be open to continuing development on it and maintaining it as one of our products. That option is attractive since I think it is a really cool product. I have a lot of good ideas for monetization and further product development (including a battery of immediate fixes I need to get in…) that could easily turn into a solid product roadmap.

Then again, I may list it on Flippa and turn it over to a team that I know can really invest a lot of time and interest in it. I don’t know how much interest I would find there, but it is something I’m thinking about.

Voting

So, it’s up to vote now. The only problem is I’m not doing so well! I don’t know if I’m being trolled (I was at 3.4 and after one of the earliest refreshes in the competition, I obviously got a slew of 2/2/2/2 or lower votes for some reason) or (more likely) maybe the app isn’t as good as I thought it was. Either way, I’d really appreciate your votes.

I’ve been extremely humbled by how people have spoken of my app, and I hope people continue to use it. Let me know if you have any ideas or feedback. Next up is a post on the more technical aspects of the application.

Oct 22, 2010 · 1 note
#tech #tldr.it #tldr #rumble #railsrumble #rails

September 2010

$3 off The Rails Upgrade Handbook ENDS TODAY

In celebration of the release of Rails 3 stable, I’m running a sale on my upgrade handbook. Be sure to get it soon because the sale ends today.

Click here to get you some sweet, sweet discounted eBook action!

Sep 1, 2010 · 1 note

June 2010

Ruby Hoedown MMX is open for business (and talk submission!)

Maybe one of these days I’ll actually blog seriously and regularly. I had a few entries almost finished for Rails 3, but they now require some revision, so I’ll try to polish those up soon. I also have a few in the hopper related to tech writing, which should prove fun and interesting for all parties involved (OK, they’ll at least be informative!).

But my lack of blogging isn’t the reason that I am, in fact, blogging today. No, my friends, today I am announcing that Ruby Hoedown MMX (that’s 2010 for those of you who don’t follow Roman Numerals™ very closely or may still be running on a Pentium II) is open for registration and talk submission.

We’re holding the conference in Nashville at the Downtown Hilton on September 3-4, 2010, and the price will be a wallet-shredding, budget-busting, GDP-increasing, third-world-country-enriching $0 (that’s right, FREE). We had so much fun last year going free, we decided to go ahead and do it again this year. But we’re not just throwing old hat around this year; you’ll have to keep your eyes and ears tuned for some new things we’re adding and some old things that are coming back!

So why not mosey on over to the conference website and register up or submit a talk? The CFP closes on July 2, 2010, and talk selections will (probably) be made that weekend! And if you register now (and get one of the next few slots), you’ll be able to have your say about the talks that we select (more on that soon).

Jun 28, 2010

February 2010

The Rails Upgrade Handbook is now available

The eBook I previously mentioned is now available! It’s only $12 at http://railsupgradehandbook.com.

Inside you’ll find…

  • Almost 120 pages of upgrade information
  • A step-by-step guide to upgrading your app to Rails 3
  • High-level discussion of what’s new in Rails 3
  • Practical tips on using Rails 3’s new features to improve your code
  • Real case studies of upgrading apps and plugins
  • Detailed checklists for upgrading

Feb 22, 2010 · 4 notes
Content from my Rails 3 presentation today, and a note about RailsConf 2010

Some people asked for the links and slides from my presentation on upgrading to Rails 3 to be placed on my blog, so here they are.

Download the slides

Links

  • Rails 3 Release Notes and Guides: http://guides.rails.info/
  • EdgeRails.info http://edgerails.info/
  • List of links from railslove.com: http://is.gd/8CZN2
  • List of links from mediumexposure.com: http://is.gd/8CZWn
  • Yehuda’s blog: http://yehudakatz.com
  • Mike Lindsaar’s blog: http://lindsaar.net
  • This blog (redundant, I know): http://omgbloglol.com
  • Rails Upgrade Handbook: http://railsupgradehandbook.com

They’re also posting the slides and eventually audio over on the conference website; if you missed it, they’ve also posted some information on RailsConf 2010, including a little something about the tutorial I’ll be giving on advanced Rails 3 features!

Feb 18, 2010 · 4 notes
Coming very soon: The Rails Upgrade Handbook

I’ve been enjoying you guys’ feedback on my Rails 3 posts. I’ve gotten feedback both good and bad, and it’s been really helpful in the big project I’ve been working on, which I’m really glad to finally be able to talk about a little bit. Most of the content I’ve posted about Rails 3 has been excerpted from my new eBook: The Rails 3 Upgrade Handbook.

It’s a hair over 100 pages of information on upgrading and improving your Rails 2.x applications with Rails 3. It covers how to upgrade, what new features can improve your existing code, a few case studies of upgrades, and extremely detailed checklists for upgrading and fixing your code.

The release of this project is imminent, so if you’re interested head over to http://railsupgradehandbook.com and sign up to be notified of when it’s released. It will be priced at $12 and kept updated as things shift and change on the path to Rails 3 Final.

Next week’s posts will be more consistent. I plan on discussing upgrading to Rails 3, one of my favorite new features in Rails 3, and my toolchain for writing eBooks (it’ll make hackers happy I think).

Feb 18, 2010
Improved validations in Rails 3

Quite sorry about not getting another post up sooner; I’ve been very busy lately with a few different things (some of which shall be revealed very soon!). Today I want to talk about validations. Validations received quite a bit of love in Rails 3, but if you don’t go looking for the new shiny features, you won’t find them. The old API is still around, so nothing will break if you keep using it, but there’s now a nice variation on that theme:

validates :login, :presence => true, :length => {:minimum => 4},
          :uniqueness => true, :format => { :with => /[A-Za-z0-9]+/ }

This new form is excellent since you can compress what would have previously been 4 lines of code into 1, making it dead simple to see all the validations related to a single attribute in one place. The valid keys and value types for this form are:

  • :presence => true
  • :uniqueness => true
  • :numericality => true
  • :length => { :minimum => 0, :maximum => 2000 }
  • :format => { :with => /.*/ }
  • :inclusion => { :in => [1,2,3] }
  • :exclusion => { :in => [1,2,3] }
  • :acceptance => true
  • :confirmation => true

As I mentioned previously, you can still use the old API, but it makes sense to switch to this form since, when scanning your code, you’re usually looking for the attribute being validated rather than the kind of validation being performed.

Another great new validation feature is the ability to have a custom validation class. It’s fairly common for Rails developers to develop their own validation methods that look something like this:

def validates_has_proper_category
  validates_each :category_id do |record, attr, value|
    unless record.user.category_ids.include?(value)
      record.errors.add attr, 'has bad category.'
    end
  end
end

These methods are really useful, especially if you use this validation in a lot of different classes, but they often add a bit of ugly code. Fortunately, in Rails 3 a lot of that nastiness can go away. Those old methods should still work, but you could make them look like this instead:

class ProperCategoryValidator < ActiveModel::EachValidator
  def validate_each(record, attribute, value)
    unless record.user.category_ids.include?(value)
      record.errors.add attribute, 'has bad category.'
    end
  end
end

Basically, create a class that inherits from ActiveModel::EachValidator and implements a validate_each method; inheriting from this class makes it available to all Active Record classes. Not only is the code a bit cleaner (you can spread it out a bit more if it’s hairy without polluting the model class), it also makes these validations easily testable without much hassle, and you can integrate them into the short-form validations like this:

validates :category_id, :proper_category => true

Note that the key name is taken from the class name (i.e., ProperCategoryValidator becomes :proper_category). A similar new feature is the ability to have validator classes that bundle validations into a single object. If you have a lot of classes that need some very complex validation logic, you can create a class like this:

class ReallyComplexValidator < ActiveModel::Validator
  def validate(record)
    record.errors[:base] << "This check failed!" unless thing(record)
    record.errors[:base] << "This failed!" unless other(record)
    record.errors[:base] << "FAIL!" unless fail(record)
  end

private
  def thing(record)
    # Complex validation here...
  end

  def other(record)
    # Complex validation here...
  end

  def fail(record)
    # Complex validation here...
  end
end

The API is basically to inherit from ActiveModel::Validator and implement a validate method that takes a record as its only argument. Then in your model classes, use it like so:

class NewsPost < ActiveRecord::Base
  validates_with ReallyComplexValidator
end

This pattern is nice for wrapping up a lot of unruly validation code, but a more interesting variation on it will be building class factories (o noez not a factory!) that generate these validation classes based on parameters (i.e., a class or method that generates validation classes). You can find a little more information on these and other Active Model validation features in the API documentation at http://api.rails.info/classes/ActiveModel/Validator.html.
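
To make the factory idea concrete, here’s one way it might look; this is purely a sketch with made-up names, not code from the Rails source or any real app:

# A method that builds an ActiveModel::Validator subclass from parameters.
# NewsPost#body is assumed to exist; everything here is illustrative.
def word_count_validator(min_words)
  Class.new(ActiveModel::Validator) do
    define_method(:validate) do |record|
      words = record.body.to_s.split.size
      record.errors[:base] << "needs at least #{min_words} words" if words < min_words
    end
  end
end

class NewsPost < ActiveRecord::Base
  validates_with word_count_validator(50)
end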

Feb 16, 2010 · 14 notes
#rails3 #validations #ruby #rails
The Path to Rails 3: Greenfielding new apps with the Rails 3 beta

Upgrading applications is good sport and all, but everyone knows that greenfielding is where the real fun is. At least, I love greenfielding stuff a lot more than dealing with old ghetto cruft that has 1,900 test failures (and 300 errors), 20,000 line controllers, and code that I’m pretty sure is actually a demon-brand of PHP.

Building a totally new app in Rails 3 is relatively simple (especially if you’ve done it in previous Rails versions), but there are a few changes that can trip you up. In the interest of not missing a step someone may need, this post is a simple walkthrough of building a new app with Rails 3. I would have simply posted about the Rails 3 version of the Getting Started guide, but it’s actually a bit out of date now. I’ve committed each step in its own commit on GitHub so you can step through it (the repository is here: http://github.com/jm/rails3_blog).

An aside: Installing the Rails 3 beta

Installing the Rails 3 beta can be sort of tricky since there are dependencies, it’s a prerelease gem, and RubyGems basically poops the bed when those two scenarios collide. Hopefully that’ll be fixed soon, but in the meantime, install Rails’ dependencies like so:

gem install rails3b
gem install arel --pre

# or if that gives you hassle...

gem install i18n tzinfo builder memcache-client rack \
            rack-test rack-mount erubis mail text-format thor bundler

Once all those lovely gems are installed (add --no-ri and --no-rdoc if you want to skip those/speed up your install), then install the prerelease version of Rails:

gem install rails --pre

Now you’re ready to roll on with the Rails beta!

Using the new generator

The application generator is basically the same with two key differences:

  • The parameter that was formerly the app name is now the app path. You can still give it a “name,” and it will create the folder like normal. But you can also give it a full path (e.g., ~/code/my_application rather than just my_application) and it will create the application there.
  • All parameters for the generator must go after the app path. So, previously one could do rails -d mysql test_app, but now that has to be rails test_app -d mysql. This change is largely due to the major refactoring of the Rails generators, so even though it’s somewhat of a temporary annoyance, it’s definitely worth it for the flexibility and power that the new generators bring (more on that soon).

So, let’s generate a blog application (really original, I know, right?):

rails rails3_blog -d mysql

If you get an error like “no value provided for required arguments ‘app_path’”, then you’ve gotten your parameters out of order. If you’d like to use another database driver, you can provide postgresql or sqlite (or nothing, since sqlite is the default). You’ll see a lot of text scroll by, and now we have a nice, fresh Rails 3 application to play with [4b6b763ac9378c6cde95b0815d2a4c2619a0e403].

Let’s crank up the server (note that it’s different now!)…

rails server

Rails went the “Merb way” and has consolidated its many script/* commands into the rails binary. So things like generate, server, plugin, etc. are now rails generate and so on. Once the server’s booted, navigate over to http://localhost:3000 and you should see a familiar friend:

Click on “About your application’s environment” to see more information about the app you’ve generated.

Configuring an app

Now comes the task of configuration. Again, not a whole ton of changes from previous versions, but navigating them can trip up the novice and journey(wo)man alike. First, setup all your database settings in database.yml; it’s just like previous versions of Rails, so no surprises there (and plenty of information abounds if you’re new to it).

Next, pop open config/application.rb. This is where much of the configuration information that once lived in config/environment.rb now lives. The portion you probably want to pay attention to most when making a new application is the block that defines your options for ORM, template engine, etc. Here’s the default:

config.generators do |g|
  g.orm             :active_record
  g.template_engine :erb
  g.test_framework  :test_unit, :fixture => true
end

I’m going to stick with the defaults, but you could substitute in something like :datamapper or :sequel for :active_record, :haml for :erb, or :rspec for :test_unit (once they get it working with Rails 3). Doing so will set the generators for models, views, etc. to use your tool of choice (remember that whole technology agnosticism thing?); I don’t know if all these generators are available yet, but there are some available here.

The config/application.rb file also houses some configuration for other things.

  • If you need to configure internationalization, it’s been moved to application.rb. Rails 3 comes equipped with a really powerful i18n toolkit; if you haven’t seen it, you can learn a little more about it here. The defaults that Rails sets up will work for most people (default locale is en and all translations in the default directory are automatically imported), so you may not need to touch anything, but if you need to customize, this is the place to do it.
  • You may want to set a default timezone. I usually stick with UTC since it’s easy to convert on a per-user basis to their desired timezone, but you might want to set it your timezone or the server’s timezone.
  • Your favorite old haunts from config/environment.rb such as config.plugins, config.load_paths, etc. are still there (even though config.gems is not).

Other configuration bits like custom inflections, mime types, and so on have been moved out into their own initializers that you can find under config/initializers. [b613cef6f92ff7d3304da84dba530196ba51371d]
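
As a quick sketch, the timezone and i18n tweaks mentioned above end up looking something like this in config/application.rb (the option names are the stock Rails 3 ones; the values are just examples):

# config/application.rb (inside the Application class)
config.time_zone = 'UTC'            # store everything in UTC, convert per-user in the view layer
config.i18n.default_locale = :en    # the default; change it if your audience isn't English-first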

The last big piece of configuration you’ll need to add is a Gemfile for bundler (get more information on Gemfiles and bundler here and here). We already have a basic Gemfile that has the following:

# Edit this Gemfile to bundle your application's dependencies.
source 'http://gemcutter.org'

gem "rails", "3.0.0.beta"

## Bundle edge rails:
# gem "rails", :git => "git://github.com/rails/rails.git"

gem "mysql"

## Bundle the gems you use:
# gem "bj"
# gem "hpricot", "0.6"
# gem "sqlite3-ruby", :require => "sqlite3"
# gem "aws-s3", :require => "aws/s3"

## Bundle gems used only in certain environments:
# gem "rspec", :group => :test
# group :test do
#   gem "webrat"
# end

Notice that it has added mysql as a dependency since that’s what we set as the database (or whatever driver you selected, for example, pg or sqlite3-ruby). Since I want to write blog entries in Markdown, I’m going to add rdiscount as a dependency. To do so, I simply have to add this:

gem "rdiscount"

As I’ve said before, bundler is much more powerful than config.gem, and one of the great features it adds is the concept of a gem “group.” For example, let’s say you want to use mocha, but only when testing (obviously). You would add this to your Gemfile:

group :test do
  gem "mocha"
end

Now this gem will only be added in when testing. This will also be useful for production-only gems related to caching and whatnot. [598652fa49634eaa9d23ab8df652faf73dfd07f4]
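For example (not something this app actually needs), a production-only group for a caching gem might look like:

group :production do
  gem "memcache-client"   # only loaded in the production environment
end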

Next, run bundle pack if you want to vendor everything or bundle install to install the gems to system gems. After you’ve combed through this stuff and set whatever you need, you’re done configuring your application. Now on to actually building something.

Building it out

So, we’re going to build a very simple blog (and expand it later). First, let’s generate a scaffold for posts, since that’ll generate a lot of boilerplate code that we’ll go back and tweak:

rails generate scaffold post title:string body:text
      invoke  active_record
      create    db/migrate/20100202054755_create_posts.rb
      create    app/models/post.rb
      invoke    test_unit
      create      test/unit/post_test.rb
      create      test/fixtures/posts.yml
       route  resources :posts
      invoke  scaffold_controller
      create    app/controllers/posts_controller.rb
      invoke    erb
      create      app/views/posts
      create      app/views/posts/index.html.erb
      create      app/views/posts/edit.html.erb
      create      app/views/posts/show.html.erb
      create      app/views/posts/new.html.erb
      create      app/views/posts/_form.html.erb
      create      app/views/layouts/posts.html.erb
      invoke    test_unit
      create      test/functional/posts_controller_test.rb
      invoke    helper
      create      app/helpers/posts_helper.rb
      invoke      test_unit
      create        test/unit/helpers/posts_helper_test.rb
      invoke  stylesheets
      create    public/stylesheets/scaffold.css

Next, run rake db:migrate to create the database table for Post. Now if you go to http://localhost:3000/posts, you should see the standard scaffold interface. [8f27fe53282de70343afadaedd583ecc279d535d]

Let’s take a look at the controller code; you’ll see a lot of actions that look sort of like this:

def show
  @post = Post.find(params[:id])

  respond_to do |format|
    format.html # show.html.erb
    format.xml  { render :xml => @post }
  end
end

That’s some clean code, but in Rails 3, we can compress it down even further with the Responder. This class wraps very common rendering logic up into some really clean helpers. To use it, you’ll need to tell the class what formats your actions respond with:

class PostsController < ApplicationController
  respond_to :html, :xml

  .
  .
  .
end

So your show action goes from the above to this:

def show
  @post = Post.find(params[:id])

  respond_with(@post)
end

Now the action will automatically look at the state of the object, the format requested, and respond accordingly. So, for example, if you successfully create an object in create, it will redirect to show; if it fails, it will render new (this is assuming, of course, you’re requesting HTML). Of course, if you need custom logic, you’ll want to do something else, but these helpers make already clean, RESTful code even easier and cleaner. Run rake to make sure you refactored it right! [53846f92393e10146fbf2d9b43b530a244d0137e]
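To make that create behavior a little more concrete, a responder-ized create comes out looking roughly like this (a sketch, not the exact scaffold code):

def create
  @post = Post.new(params[:post])
  @post.save

  # If the save worked, this redirects to the post; if it didn't,
  # it re-renders the new template with the validation errors.
  respond_with(@post)
end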

Next, open up config/routes.rb. It should look something like this (with oodles of extra commented out routes):

Rails3Blog::Application.routes.draw do |map|
  resources :posts
end

To set PostsController’s index action to the root, we need to do two things. First, remove public/index.html; otherwise it’ll always overtake any root route you set. Next, add a root route to config/routes.rb like this:

Rails3Blog::Application.routes.draw do |map|
  resources :posts

  root :to => "posts#index"
end

Now going to http://localhost:3000 should show the posts index page. [120c377c8ec1c138d600f9b9bc39bedf1d43afd4] OK, so now that most of the functionality is in place, let’s make it look presentable; here’s my version of the index template:

<% @posts.each do |post| %>
  <h2><%= link_to post.title, post %></h2>
  <p>posted at <%= post.created_at.strftime('%D') %></p>
  <p><%= post.body %></p>
<% end %>

<%= link_to 'New post', new_post_path %>

You can see what other design edits I made in this commit [03b2c39d65331f7dfeb4ada89cf65604f7130e2d].

Now we need to add the Markdown functionality to the Post model. First, let’s generate a migration [2cbb0b04411ac1712a4f5039ed93bdad0cb6e76e]:

rails generate migration AddRenderedBodyToPosts rendered_body:text
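If you peek inside the generated migration, it should look more or less like this:

class AddRenderedBodyToPosts < ActiveRecord::Migration
  def self.up
    add_column :posts, :rendered_body, :text
  end

  def self.down
    remove_column :posts, :rendered_body
  end
end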

Migrate your database, and now we’re ready to move on to testing. Write a simple test to make sure it renders the body after a save [af83a5a2e85a1679896e989f6828d1f5ee4aa7d3]:

require 'test_helper'

class PostTest < ActiveSupport::TestCase
  test "renders Markdown after save" do
    post = Post.create(:title => "This post rocks.", :body => "Now *this* is an awesome post.")

    assert_equal "<p>Now <em>this</em> is an awesome post.</p>", post.rendered_body.chomp
  end
end

If you rake now, that test should fail. So, let’s make it pass:

class Post < ActiveRecord::Base
  before_save :render_body

  def render_body
    self.rendered_body = RDiscount.new(self.body).to_html
  end
end

You should be all green [4730cd4e8601c74a05b9763d307b462f76e44b26]! Now we’ll need to go back and change the instances of body to rendered_body on the index and show views.
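One thing to keep in mind: Rails 3 escapes output by default, so you’ll want raw (or html_safe) when printing the rendered HTML. The loop body in the index would end up looking something like this:

  <h2><%= link_to post.title, post %></h2>
  <p>posted at <%= post.created_at.strftime('%D') %></p>
  <p><%= raw post.rendered_body %></p>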

That’s pretty standard Rails stuff, so let’s do something Rails 3-specific now. First, let’s add some validations; we’ll want to make sure that every post has a title and a body.

test "requires title" do
  post = Post.create(:body => "Now *this* is an awesome post.")
  assert !post.valid?
  assert post.errors[:title]
end

test "requires body" do
  post = Post.create(:title => "This post rocks.")
  assert !post.valid?
  assert post.errors[:body]
end

Note the new API for Active Record errors (i.e., [] rather than on) [930e8868b0e4d8904d6f5090f6b445b0c428f71f]. Now, of course, we have to make them pass…

class Post < ActiveRecord::Base
  before_save :render_body

  validates :title, :presence => true
  validates :body, :presence => true

  def render_body
    self.rendered_body = RDiscount.new(self.body).to_html
  end
end

As you probably noticed, the API for Active Record validations I’ve used here is different (the validations shown are equivalent to a validates_presence_of validation, which is still around) [178bb06839bb44978f42c922c9348bfe783da8b1]. You can read a little more about the new style of validations here. So, now if you try to create a post without a title or body, it’ll reject it.
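One handy thing about the new style: you can hang several validations off a single call. For instance (just an illustration, not something this blog needs):

validates :title, :presence => true, :length => { :maximum => 255 }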

More later…

I realize this introduction is extremely simple, but I’ll expand on it very soon (including authentication, commenting, post drafts, an API, spam protection, feeds, caching, etc. with a separate entry after it on deployment). I’ll get to that sort of stuff very soon, but my next post is going to be a walkthrough of upgrading an app step by step (very similar to this entry). Look for it in a few days!

Feb 5, 2010 · 8 notes
#rails3 #ruby
rails-upgrade is now an official plugin

I apologize for not getting another Rails 3 upgrade post up this weekend, but I spent this weekend working on a few things. First, I contributed a few little pieces to the Rails 3 release notes, which should be showing up on the Rails blog soon (edit: or view them here right now), but most of my time was devoted to a bigger project.

My little gem rails-upgrade is now rails_upgrade, an officially blessed upgrade tool that will be maintained by me and the Rails team. You can get it from here: http://github.com/rails/rails_upgrade.

To use it now, simply install the plugin:

script/plugin install git://github.com/rails/rails_upgrade.git

The plugin adds the following tasks:

rake rails:upgrade:check      # Runs a battery of checks on your Rails 2.x app
                              # and generates a report on required upgrades for Rails 3
rake rails:upgrade:gems       # Generates a Gemfile for your Rails 3 app out of your config.gem directives
rake rails:upgrade:routes     # Create a new, upgraded route file from your current routes.rb

Simply run those tasks in the same way you ran the commands with the rails-upgrade gem. In the near future, I plan on expanding the checks for deprecated pieces to handle some of the less obvious changes, adding some generators for other changes (like config/application.rb), and adding some extra tools (ideas/suggestions certainly welcome).

Anyhow, I’m really looking forward to seeing this project become a dependable upgrade tool. If you have any ideas or find any bugs, please contact me via e-mail or Twitter or, even better, fork it and handle it yourself!

Feb 1, 2010 · 9 notes

January 2010

rails-upgrade: Automating a portion of the Rails 3 upgrade process

If you’re looking for more info on upgrading, don’t miss out on my other posts on Rails 3 starting here.

NOTE: This is now an official, blessed plugin, so use that rather than this gem. More info here.

I’ve been playing with upgrading some apps to Rails 3 (some open-source, some not), and I’ve sort of gotten some of the process down to a science. So what does a developer do when something is down to a process? Automate!

I’ve created a (pretty hacky) gem named rails-upgrade (installable by a simple gem install rails-upgrade) to automate some of the more annoying parts of the upgrade from Rails 2.x to Rails 3. So far, it has three parts…

Find out what parts need to be upgraded

I’ve assembled a battery of checks to run on your app for obvious things that need to be upgraded. To get a report, simply run this in a Rails root:

rails-upgrade check

It checks over some things, then generates a report like this:

named_scope is now just scope
The named_scope method has been renamed to just scope.
More information: http://github.com/rails/rails/commit/d60bb0a9e4be2ac0a9de9a69041a4ddc2e0cc914

The culprits: 
    - app/models/group.rb
    - app/models/post.rb

Deprecated ActionMailer API
You're using the old ActionMailer API to send e-mails in a controller, model, or observer.
More information: http://lindsaar.net/2010/1/26/new-actionmailer-api-in-rails-3

The culprits: 
    - app/controllers/application.rb
    - app/controllers/feedback_controller.rb

Old ActionMailer class API
You're using the old API in a mailer class.
More information: http://lindsaar.net/2010/1/26/new-actionmailer-api-in-rails-3

The culprits: 
    - app/models/post.rb
    - app/models/user.rb

It shows and explains the issue, where to get more information on it, and which files the issue was found in. It checks a lot more than that report shows (e.g., looks for old generators, busted plugins, environment.rb conversion requirements, old style routes, etc.). It probably doesn’t cover everything, but I’ve found it’s great for quickly identifying some low-hanging fruit in upgrading.
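If you’re wondering how a check like that might work, the idea is dead simple; here’s a rough sketch of it (this is not the gem’s actual code, just the general idea):

def check_for_named_scope
  culprits = Dir["app/models/**/*.rb"].select do |file|
    File.read(file) =~ /\bnamed_scope\b/
  end

  unless culprits.empty?
    puts "named_scope is now just scope"
    puts "The culprits:"
    culprits.each { |file| puts "    - #{file}" }
  end
end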

Upgrading routes

The gem will also upgrade your routes as best it can. To generate a new routes file, simply run this inside of a Rails application:

rails-upgrade routes

I’ve tested it on some quite complicated routes files and it did fine, but it does have some minor quirks (i.e., it flattens with_options blocks currently…that might change if I feel like putting the effort into it…or if one of you patches it :). It takes a routes file like:

ActionController::Routing::Routes.draw do |map|
  map.resources :posts, :collection => {:drafts => :get, :published => :get}

  map.resources(
    :users,
    :groups,
    :pictures
  )

  map.login  '/login',  :controller => 'sessions', :action => 'new'
  map.logout '/logout', :controller => 'sessions', :action => 'destroy'

  map.connect '/about', :controller => 'static', :action => 'about'

  map.connect ':controller/:action/:id.:format'
  map.connect ':controller/:action/:id'
end

And makes a new one like this:

YourApp::Application.routes do
  resources :posts do
    collection do
      get :drafts
      get :published
    end  
  end

  resources :users
  resources :groups
  resources :pictures

  match '/login' => 'sessions#new', :as => :login
  match '/logout' => 'sessions#destroy', :as => :logout
  match '/about' => 'static#about'
  match '/:controller(/:action(/:id))'
end

The formatting isn’t quite this nice when it comes straight out of the script (I’m working on that), but you get the idea. I’m still tweaking/adding things to this script, but as far as I know it supports every feature of the Rails 2.x router. Fixing the formatting bugs is my first priority, simply because they’re really annoying.

Creating Gemfiles

The last piece is a Gemfile generator; it takes your config.gem directives and generates a nice Gemfile (even including the required Rails stuff). To run it, simply execute:

rails-upgrade gems

That will take an environment.rb with these config.gem calls:

config.gem "bj"
config.gem "hpricot", :version => '0.6', :source => "http://code.whytheluckystiff.net"
config.gem "sqlite3-ruby", :lib => "sqlite3"
config.gem "aws-s3", :lib => "aws/s3"

And generate this Gemfile:

# Edit this Gemfile to bundle your application's dependencies.
# This preamble is the current preamble for Rails 3 apps; edit as needed.
directory "/path/to/rails", :glob => "{*/,}*.gemspec"
git "git://github.com/rails/arel.git"
git "git://github.com/rails/rack.git"
gem "rails", "3.0.pre"

gem 'bj', 
source 'http://code.whytheluckystiff.net'
gem 'hpricot', '0.6'
gem 'sqlite3-ruby', :require_as=>"sqlite3"
gem 'aws-s3', :require_as=>"aws/s3"

Then it’s just as simple as gem bundle. Again, I’ve tested this on some fairly complex sets of gem requirements, so it should stand up to most sets.

If you find a bug or want to expand the checks and upgrade scripts or, like, add some tests (please do!), then hit it up on GitHub, fork it, and send me a message. If you want to simply install the gem and run it, then just run gem install rails-upgrade then rails-upgrade <whatever> inside the Rails application directory.

NOTE: This is now an official, blessed plugin, so use that rather than this gem. More info here.

If you’re looking for more info on upgrading, don’t miss out on my other posts on Rails 3 starting here.

Jan 29, 2010 · 10 notes
The Path to Rails 3: Approaching the upgrade

Now that we’ve looked at some of the core architecture, I’d like to shift my focus first to upgrading an application. Originally I had planned on writing about upgrading plugins first, but apparently that API isn’t quite stable. So rather than write a blog post that will be deprecated in 2 weeks, I figured I’d write one that will be deprecated in 3-6 months instead. This post will focus on getting your app bootable, and it will be followed by a succession of articles that contain tips and scripts to help you upgrade the various components (i.e., routes, models, etc. are topics I’m working on right now).

The first step you need to take toward an upgraded app is actually getting Rails 3. As noted in the previous post, you can follow Yehuda’s directions or use Bryan Goines’s great little script. Once you’ve got it up and running, I suggest you “generate a new app” on top of your current one (i.e., run the generator and point the app path to your current Rails 2.x app’s path). Running the generator again will actually update the files you need to update, generate the new ones, and so on.

ruby /path/to/rails/railties/bin/rails ~/code/my_rails2_app/

Note that the argument is a path, not a name as in previous Rails versions. If you get an error about your Ruby version, upgrade it! If you use rvm it’ll be totally painless. Now, be careful which files you let Rails replace, since a lot of them can be edited much more simply (I’ll show you how here) than they can be reconstructed (unless you really like digging around in git diff and previous revisions), but do take note of what they are since you will likely need to change something in them. As a general list, it’s probably safe to let it update these files:

  • Rakefile
  • README
  • config/boot.rb
  • public/404.html (unless you’ve customized it)
  • public/500.html (unless you’ve customized it)
  • public/javascripts/* (if you don’t have a lot of version dependent custom JavaScript)
  • script/* (they probably wouldn’t work with the new Rails 3 stuff in their old form anyhow)

And, you probably don’t want to let it update these files since you’ve likely made modifications:

  • .gitignore (unless you don’t really care; the new standard one is pretty good)
  • app/helpers/application_helper.rb
  • config/routes.rb
  • config/environment.rb
  • config/environments/* (unless you haven’t touched these as many people don’t)
  • config/database.yml
  • doc/README_FOR_APP (you do write this, don’t you?)
  • test/test_helper.rb

Of course, these lists won’t apply in every situation, but in general I think that’s how it’ll break down. Now, on to the things you’ll need to change…

config.gem is dead, long live bundler

Everyone and their brother complained about Rails’ handling of vendored/bundled gems since config.gem was added some time ago (just search for “config.gem sucks” or “config.gem issues OR problems” and you’ll see). From issues with requiring the gems properly to problems with gem detection (I can’t tell you how many times I nixed a gem from the list because it kept telling me to install it even though it was already installed), Rails seriously needed a replacement for such a vital piece of infrastructure. These days we have Yehuda Katz’s excellent bundler, which will be the standard way to do things in Rails 3.

Essentially, bundler works off of Gemfiles (kind of like Rakefiles in concept) that contain a description of what gems to get and how to get them. Moving your gem requirements to a Gemfile isn’t as simple as copying them over, but it’s not terribly difficult:

# This gem requirement...
config.gem "aws-s3", :version => "0.5.1", 
           :lib => "aws/s3", :source => "http://gems.omgbloglol.com"

# ...becomes:
source "http://gems.omgbloglol.com"
gem "aws-s3", "0.5.1", :require_as => "aws/s3"

As you can see, it’s not too hard. It’s basically just removing the config object and moving some keys around. Here’s a specific list of changes:

  • Remove the config object
  • :lib key becomes the :require_as key
  • The :version key becomes a second, optional string argument
  • Move :source arguments to a source call to add it to the sources

Once you create a Gemfile, you simply have to run bundle pack and you’re done!

The bundler is much more powerful than config.gem, and it helps you do more advanced tasks (e.g., bundle directly from a Git repository, specify granular paths, etc.). So, once you move your config.gem calls over, you may want to look into the new features; they may be something you had wished config.gem had but didn’t!
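For example, pulling a gem straight from a Git repository or a local path looks something like this (the gem names here are just placeholders):

gem "some_gem",  :git  => "git://github.com/someone/some_gem.git"
gem "local_gem", :path => "vendor/gems/local_gem"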

Note: I’ve noticed some activity in Yehuda/Carl’s Githubs to do with a bundler replacement called gemfile; I’ll watch that closely to make sure there are no major breaking changes in the API/operation. If there are, I’ll definitely post here!

Move to Rails::Application

In all previous Rails versions, most configuration and initialization happened in config/environment.rb, but in Rails 3, most of this logic has moved to config/application.rb and a host of special initializers in config/initializers. The config/environment.rb file basically looks like this now:

# Load the rails application
require File.expand_path('../application', __FILE__)

# Initialize the rails application
YourApp::Application.initialize!

Simple: the application.rb file is required and then the Application is initialized. The YourApp constant is generated based on the folder name for your app (i.e., rails ~/code/my_super_app would make it MySuperApp), so name it wisely! It doesn’t have any special relationship to the folder the app lives in so you can rename it at will (so long as you do it everywhere it’s used), but you’ll be using this constant in a few places so make it something useful.

Now you need an application.rb; if you generated the files using the Rails 3 generator, you should have one that looks something like this:

module TestDevApp
  class Application < Rails::Application
    # ...Insert lots of example comments here...

    # Configure sensitive parameters which will be filtered from the log file.
    config.filter_parameters << :password
  end
end

For the most part, your config.* calls should transfer straight over: just copy and paste them inside the class body. There are a few new ones that I’ll be covering later on in this series that you might want to take advantage of. If you run into a config.* method that doesn’t work (other than config.gem which obviously won’t work), then please post in the comments, and I’ll add it into a list here.

You’ll also notice that many things that were once in environment.rb have been moved out into new initializers (such as custom inflections). You’ll probably want to/have to move these things out of application.rb and into the proper initializer. If you opted to keep any custom initializers or specialized environment file during the generation process, you’ll probably need to go in there and update the syntax. Many of these (especially the environment files) now require a new block syntax:

# Rails 2.x
config.cache_classes = false
config.action_controller.perform_caching = true

# Rails 3.x
YourApp::Application.configure do
  config.cache_classes = false
  config.action_controller.perform_caching = true
end

All configuration happens inside the Application object for your Rails app, so these, too, need to be executed inside of it. As I said previously, most things in there should still work fine once wrapped in the block, but if they don’t please comment so I can post about it/figure out the issue.
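As for those custom inflections I mentioned, they now live in config/initializers/inflections.rb and look something like this (the inflection itself is just an example):

ActiveSupport::Inflector.inflections do |inflect|
  inflect.irregular 'octopus', 'octopi'
end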

Ch-ch-chaaange in the router

You’ve probably heard a lot of talk about Rails and routes and new implementations and this and that. Let me tell you: the new router is pretty awesome. The problem is that it’s not exactly easy to migrate existing routes over to the new hotness. Fortunately (for now, at least) they have a legacy route mapper so your routes won’t break any time soon. Of course, you should always try to update things like this to keep up with the version you’re running (i.e., never depend on the benevolence of the maintainers to keep your ghetto legacy code going while using a new version for everything else).

But don’t worry. Upgrading your routes is fairly simple so long as you haven’t done anything complex; it’s just not as easy as copying and pasting. Here are a few quick run-throughs (a detailed guide is coming later)…

Upgrading a basic route looks like this:

# Old style
map.connect '/posts/mine', :controller => 'posts', :action => 'index'

# New style
match '/posts/mine', :to => 'posts#index'

A named route upgrade would look like:

# Old style
map.login '/login', :controller => 'sessions', :action => 'new'

# New style
match '/login', :to => 'sessions#new', :as => 'login'

Upgrading a resource route looks like this:

# Old style
map.resources :users, :member => {:ban => :post} do |users|
  users.resources :comments
end

# New style
resources :users do
  member do
    post :ban
  end

  resources :comments
end

And upgrading things like the root path and so on looks like this:

# Old style
map.root :controller => 'home', :action => 'index'

map.connect ':controller/:action/:id.:format'
map.connect ':controller/:action/:id'

# New style
root :to => 'home#index'

match '/:controller(/:action(/:id))'

I’ll be writing another entry later on about the router’s new DSL and looking at some common patterns from Rails 2 apps and how they can work in Rails 3. Some of the new methods add some very interesting possibilities.
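Just to whet your appetite, here are a couple of quick (made up) examples of the kind of thing the new DSL lets you do:

# Group routes under an /admin prefix (and Admin:: controllers)
namespace :admin do
  resources :posts
end

# Constrain a route segment with a regular expression
match '/users/:id' => 'users#show', :constraints => { :id => /\d+/ }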

Some minor changes

There are a few minor changes that shouldn’t really mess things up too much (except perhaps the first one here…).

Constants are out, module methods are in

Ah, nostalgia. Remember when RAILS_ROOT and friends were cool? Well, now they’re lame and are going away in a flare of fire and despair. The new sexy way to do it: Rails.root and its module method pals. So, remember. Old and busted: RAILS_ROOT and its depraved, constant brethren. New hotness: Rails.root and its ilk.

Rack is Serious Business™

You might have noticed that the Rails 3 generator gives you a config.ru in your application root. Rails 3 is going gung ho on Rack, everyone’s favorite web server interface, and as such, a config.ru is now required in your application to tell Rack how to mount it. Like I said, the Rails 3 generator will spit one out for you, but if you’re doing a manual upgrade for some reason, then you’ll need to add one yourself.

Interesting note: Remember that YourApp::Application class you created earlier in application.rb? That’s your Rack endpoint; that’s why your config.ru looks like this:

# This file is used by Rack-based servers to start the application.

require ::File.expand_path('../config/environment',  __FILE__)
run YourApp::Application.instance

Neat, eh? That’s why I suggested you pick a meaningful name for that class: its touch runs quite deep in the stack.

.gitignore to the rescue

Rails also now automatically generates a .gitignore file for you (you can tell it not to by providing the --skip-git option to the generator). It’s fairly simple, but it covers the 95% case for most Rails developers, and it’s certainly a welcome addition to the toolbox. It was always annoying having to create one every time and either dig up a previous one to copy or try to remember the syntax of how to make it ignore the same stuff.

After upgrading this stuff, you probably have a booting application. Of course, there are a lot of moving parts that could derail this plan: old and busted plugins, gem stupidity, weirdness in your config files, lib code that’s problematic, application code that needs upgrading (I can almost guarantee that), and so on. In any event, these are a few steps in the right direction; subsequent posts will show you the rest.

Posts in this series

I’m posting a whole series on Rails 3; be sure to catch these other posts!

  1. Introduction
  2. Approaching the Upgrade
Jan 26, 2010 · 22 notes
The Path to Rails 3: Introduction

Wow, over half a year with no blog post. That may be a new record for blog laziness for me, but fear not! This bout of sloth shall not last, and the dearth of blog entries shall come to an end! This cure should come partially because I’ve switched to Tumblr and can now compose my entries in Markdown, and partially because that’s part of my whole Get a Better Life New Year’s Resolution Package 2.0™ (coming to a burned out programmer near you in 2011!). Let’s hope this pans out, or, at least, I don’t end up strung out in the gutter tapping out entries into Textmate. Anyhow, to catch you up, here’s what’s happened since my last post:

  • The Ruby Hoedown 2009 was a smashing success. The whole free conference thing went off without a hitch, and I think everyone had a great time (at least I did).
  • I’ve left entp and now work at Intridea, which has been fabulous thus far. I get to dabble in a few different technologies, and our office (which I don’t work out of but wish I did) is about 2 blocks north of the White House. Epic win.
  • Ruby in Practice was featured on Slashdot, making it jump about 650,000 slots in the Amazon sales ranks. Unfortunately, it has once again floated back down into obscurity, below such fine volumes as “What’s Your Poo Telling You?” and just above vital classics like “Much Ado About Nothing: The Restored Klingon Text”

But that’s not what this post is about. This post is kicking off a series that I’m doing about moving your skills and migrating your code to Rails 3. I’ll be sharing some practical insights and covering some pretty in-depth topics as we go along (I’ve got some notes for entries about upgrading plugins, taking advantage of new features like the agnosticism, migrating applications, and so on), but before I go into a lot of specifics, I thought it might be useful to go over some of the high-level philosophical and architectural changes that have gone on in the Rails code between versions 2 and 3.

The Big Picture

When the Merb/Rails merge was announced, I was worried that we were going to end up in some weird tangle of Merbilicity and Railsishness when the final product came around. I don’t think anyone wants some Brangelina of the web framework world all up in their business. Fortunately, the gents on the Rails core team are smart and classy and have navigated the waters of cooperation and compromise extremely well. We’re getting the best of both worlds here, folks: the ease of use and packaging of Rails with the juicy technical bits of Merb. Who can argue with that?

But to make that Epic Code Merge of Awesome™ happen, of course there had to be some changes. These big picture changes have concentrated on a few key areas:

  • Decoupling Rails components from one another as much as possible, making things more modular and a la carte.
  • Pulling in improvements from Merb and rewriting/refactoring much of the internals to improve performance.
  • Exposing explicit, documented APIs for common tasks and for integrating wider ecosystem components (testing, ORM, etc.).

In order to hit these objectives, DHH, Yehuda, Josh, and the rest of the Rails team have extracted things into some new components, expanded others, and removed others to allow for agnosticism.

The general movement seems to be from a monolithic, one-stop shop approach to a looser ecosystem of code that works together with a straightforward set of sensible defaults. You’re no longer “locked in” to ActiveRecord or made to use code injection and hacks and such to get your testing framework integrated. Instead, there are hooks all over the place to cover this sort of stuff (which I will cover later on in the series!) that let generators generate things for the various options or helpers include different modules. It’s a great way to support an ecosystem with an established API.

Lifecycle changes

One of the biggest movements in the codebase has been a shift towards using simple, composed components and a lot of Rack in the request chain rather than specialized, one-off classes. This has affected a lot of things, but one of the major changes has been the addition of Action Dispatch.

Action Dispatch is a “new” component in Action Pack (extracted and expanded from the previous logic) that handles a number of things related to requests and responses:

  • Request handling and parameter parsing
  • Sessions, Rails’ flash, and cookie storage
  • File uploads
  • Routing, URL matching, and rescuing errors
  • HTTP conditional GETs
  • Client response and HTTP status code

Breaking this functionality out into its own component and decoupling much of it creates a much more flexible call stack for requests, meaning you can jack into the process more easily with your own logic or improve the existing functionality. I’m sure we’ll see a lot of plugins taking advantage of this to create interesting middleware hacks, improve callbacks and lifecycle methods, hack in their own middlewares to handle specialized logic, or even plug in improved or application-specific routers. This is one of the pieces I’m most interested in seeing develop, since it opens a lot of possibilities that were previously much more difficult to reach.
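To make that a little more concrete, here’s a trivial, made-up middleware and how you’d slot it into the stack:

# A tiny middleware that stamps each response with how long it took
class RequestTimer
  def initialize(app)
    @app = app
  end

  def call(env)
    started = Time.now
    status, headers, body = @app.call(env)
    headers['X-Example-Runtime'] = (Time.now - started).to_s
    [status, headers, body]
  end
end

# Then, in config/application.rb:
#   config.middleware.use RequestTimer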

Making controllers flexible

As a result of the changes in the request chain, the controller stack has also seen a significant overhaul. Previously, every controller inherited from ActionController::Base (either directly or by inheriting from ApplicationController) and slimming down the call stack was accomplished by either (a) previous to Rails 2.3, building a smaller app with Sinatra or Rack to sit next to your main Rails application or (b) post-Rails 2.3, using Rack Metal/middlewares.

In Rails 3.0, this concept of middleware plays an even more central role to how the controller hierarchy is arranged.

The bottom of the stack is AbstractController, a very low level “controller.” Rails uses this class to abstract away essentials like rendering, layouts, managing template paths, and so on, while leaving more concrete implementation details to its subclasses. AbstractController exists only to provide these facilities to subclasses; that is, you should not use this class directly (if you want something super-slim, create a subclass and implement render and a few other pieces).

Each subsequent jump up the hierarchy is actually a class that inherits from the previous, each including modules to compose its behavior. So, if you want to create something slim without implementing a lot of plumbing, use the next rung on the compositional ladder: ActionController::Metal. Metal essentially exposes super simple Rack endpoints that you can then include extra modules into to add more ActionController functionality (check out an example here). These little classes are excellent for replacing those Rack/Sinatra apps for file uploads or what have you while still having the power to easily build out to rather rich controller objects.
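For example, a bare-bones (and completely made up) Metal endpoint is about as simple as it gets:

class PingsController < ActionController::Metal
  def index
    self.response_body = "pong"
  end
end

# Routed like any other Rack endpoint:
#   match '/ping' => PingsController.action(:index)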

Finally, if you need the full monty (i.e., like a controller in Rails 2), then you’ll need to inherit from ActionController::Base. This class inherits from ActionController::Metal and includes a slew of modules to handle things like redirecting the user, handling implicit rendering, and a number of helpers for other stuff like caching.

The advantage of taking this approach is that you can take one of the base classes like Metal and include your own modules to create specialized controllers. I foresee someone using this to create a simple way to serve up resources (e.g., PostsController < ResourcesController(:posts) or something like that) much like people have done previously (José Valim’s inherited_resources jumps to mind) or using it as a way to quickly build API backends. This is the other piece of the major refactor that excites me, since we’re looking at a new way to construct reusable code and assemble it into usable applications.

Where models are concerned

Though the public API for models is generally the same (with a few additions and changes that I’ll cover in a subsequent post), Active Record is now powered by the brain-melting Active Relation, a powerful relational algebra layer.

What does that mean for you? Well, basically it means that Active Record will be smarter and more powerful. Rather than fairly naïve SQL generation, it uses some fancy mathemagical approach that should generate smarter queries. Frankly, I haven’t had a lot of time to research these features for myself, but when I do, I’ll be sure to post (or if you’ve posted about this stuff somewhere, then by all means let me know).
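What you will notice up front, though, is the new chainable, lazily evaluated query interface sitting on top of it; here’s a quick example against a hypothetical Article model:

# Nothing hits the database until the relation is actually enumerated
recent = Article.where(:published => true).order("created_at DESC").limit(5)
recent.each { |article| puts article.title }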

The second big change in Model Land is the extraction of much of the rich logic in Active Record objects like callbacks, validations, serialization, and so on into the Active Model module.

You can use this module to make any object behave like an Active Record object; for example, let’s say you wanted to add some validations to a PORO representing a host on a network:

class Host
  include ActiveModel::Validations

  validates_presence_of :hostname

  attr_accessor :ip_address, :hostname, :operating_system
  def initialize(hostname, ip_address, operating_system)
    @hostname, @ip_address, @operating_system = hostname, ip_address, operating_system
  end
end

h  = Host.new("skull", "24.44.129.10", "Linux")
h.valid?    # => true
h.hostname = nil
h.valid?    # => false

To get this functionality, simply include ActiveModel::Validations and start implementing the methods. It’s possible to exercise fine-grained control over how the validations operate, how the validator gets the object’s attributes, and so on. To get the other functionality like observing or callbacks, just include the relevant module (e.g., ActiveModel::Observing) and implement the required methods. It’s fantastically clever.

Other pieces

ActionMailer is also getting some love in Rails 3. A new API pointed out by DHH in this gist is looking especially delicious; it’s much more like a controller with some excellent helpers mixed in just for mailing.

Rails is also getting a rather robust instrumentation framework. In essence, an instrumentation framework lets you subscribe to events inside of a system and respond to them in meaningful ways (e.g., an action renders and the logger logs its result). Internally the framework is used for things like logging and debugging, but you could easily repurpose the code for other things. For example, let’s say you want to log to the system logger when a particular e-mail is sent out:

# Set up a place to collect events, then subscribe to them...
@events = []
ActiveSupport::Notifications.subscribe do |*args|
  @events << ActiveSupport::Notifications::Event.new(*args)
end

# Fire the event...
ActiveSupport::Notifications.instrument(:system_mail, :at => Time.now) do
  #SystemMailer.important_email.deliver
  log "Important system mail sent!"
end

# Do something with it...
event = @events.first
event.name        # => :system_mail
event.payload     # => { :at => Wed Jan 16 00:51:14 -0600 2010 }
event.duration    # => 0.063
system_log(event) # => <whatever>

Of course, this is arbitrary, but it adds a really powerful way to respond to certain events in your application. For example, someone could probably rewrite exception_notification to use the instrumentation framework to handle and send error e-mails.

Getting started

So, how does one get a piece of this sweet, sweet Rails 3 action? You can install the old pre-release gems, but they’re pretty out of date. To get rolling on edge, I’ve found two ways that work well. First, you can use Yehuda’s directions; these worked great for me and they’re not a lot of hassle (and how awesome is the bundler?). If that seems a bit much or you want to automate it, Bryan Goines has made a pretty awesome script to handle installing and bundling all you need to make it work.

So, go ahead, install Rails 3, get set up, and I’ll meet you on the other side. I’ll be dropping another post this week about how to get started upgrading an existing application/plugin to Rails 3.

Posts in this series

I’m posting a whole series on Rails 3; be sure to catch these other posts!

  1. Introduction
  2. Approaching the Upgrade
Jan 20, 2010 · 22 notes
#rails3