Our dev team’s grown to twenty-three and when the summer intern zenith hits we’ll be — this is weird to type — forty for a few frightening days.
My biggest personal challenge now, by far: How do you stay connected to a product amidst all this scary managing? Do you launch the damn editor? Write a single line of code?
I’ve been trying to squeeze every last bit of advice out of role-models I respect. I’m seeking stories from folks who’ve successfully navigated these waters. From personally writing tons of code in v1 to scaling and managing teams who then receive profuse apologies for all those v1 hacks. If this is you, plz send tips.
Here’re the various mentor voices in my head so far.
You must stop coding. But you must not stop coding. Stay very connected to your product. Make mockups and decisions and docs. Scale your team. Intimately know your tech and your tools. Don’t get out-of-date. Be a coder, but don’t waste time in the code.
You absolutely without a doubt need to keep coding, but under no circumstances should you be coding.
Got it. ✔ Chhhheck!
As confusing as that seems, there is solid agreement for how a technical manager should spend, say, 85% of her time. Scaling, recruiting, empowering, refining some teams’ focus, blah blah blah BLAH we’ve all heard it. Everybody agrees.
It’s the other 15% of the day that’s up for debate. This is that short bit of time when you do something creative that magically keeps you tied to your product and on top of your tech stack and immunized against pointy-haired-boss disease. Some suggest mockups, some suggest small production bug fixes, some suggest adding unit tests, some suggest writing code that you never intend to check in.
There’s leeway here, so for anybody else who’s trying to figure it out, here’s how I’ve chosen to spend my intentionally non-managerial time.
Rands’ Technicality is the authoritative piece on this search for maker/manager balance. He opens with “Stop coding.” By the end he commands, “Don’t stop developing.”
What’s clear is these waters are murky. Any good technical manager must go through the difficult transition from coding to spending almost their entire day empowering smarter teammates. But how you maintain that certain je ne sais quoi that separates you from the non-technical manager you never want to be is a pretty personal choice.
Me? I have to launch the damn editor. Just for 30 minutes. Not to code, but to code a single line.
Heard a bit of an interview with Dick Costolo while driving home. He was telling the story of why he started his “Managing at Twitter” course that he personally teaches to all new managers. It went something like so:
An engineer came up to me who had recently switched from one Twitter dev team to another.
He said, “My old manager used to hold 1:1 meetings with me. My new one doesn’t believe in them. Which is it? Should I be having 1:1 meetings with my manager at Twitter?”
Supposedly this kicked off a wave of realizations in Dick’s head about all the inconsistencies in Twitter’s management styles. He wanted to fix the problem, started his “Managing at Twitter” course, and you’d have to ask some manager at Twitter what happened next because how the hell would I know if it’s working.
Working on v1.1 of “Managing at Twitter” that I teach to managers at Twitter. - thought it would end up being a 2hr class & it’s 6 hours.
The story did strike a chord with me. As Khan Academy’s pool of new mentors grows while interns simultaneously flood in the door (13 confirmed so far this summer, join up), it feels healthy to head off any mentorship inconsistencies at the pass.
I want simple and uncharacteristically terse guidelines for Khan Academy mentors. Here goes.
Mentorship is nuanced, and the above is purposefully far from exhaustive. If we’re gonna have the same old shockingly high expectations for mentorship quality that we had when we were a much smaller team (we do!), we’ll need to keep building artifacts like the above for our growing team (we just did!).
The hover effects on Amazon’s big ol’ “Shop by Department” mega dropdown are super fast. Look’it how quick each submenu fills in as your mouse moves down the list:
See the delay? You need that, because otherwise when you try to move your mouse from the main menu to the submenu, the submenu will disappear out from under you like some sort of sick, unwinnable game of whack-a-mole. Enjoy this example from bootstrap’s dropdown menus:
I love bootstrap, don’t get it twisted. Just a good example of submenu frustration.
It’s easy to move the cursor from Amazon’s main dropdown to its submenus. You won’t run into the bootstrap bug. They get away with this by detecting the direction of the cursor’s path.
If the cursor moves into the blue triangle the currently displayed submenu will stay open for just a bit longer.
At every position of the cursor you can picture a triangle between the current mouse position and the upper and lower right corners of the dropdown menu. If the next mouse position is within that triangle, the user is probably moving their cursor into the currently displayed submenu. Amazon uses this for a nice effect. As long as the cursor stays within that blue triangle the current submenu will stay open. It doesn’t matter if the cursor hovers over “Appstore for Android” momentarily — the user is probably heading toward “Learn more about Cloud Drive.”
And if the cursor goes outside of the blue triangle, they instantly switch the submenu, giving it a really responsive feel.
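For the geometrically curious, the slope-based flavor of this check (the approach I mention using below) can be sketched in a few lines. This is a toy illustration, not jQuery-menu-aim’s actual code — the function names and the backwards-movement guard are mine, and the real plugin layers timers and tolerance on top:

```python
def slope(p, q):
    # Rise over run between two (x, y) points; tiny epsilon guards verticals.
    return (q[1] - p[1]) / ((q[0] - p[0]) or 1e-9)

def aiming_at_submenu(prev, curr, upper_right, lower_right):
    """True if the cursor's move from prev to curr stays inside the triangle
    formed by prev and the submenu's two right-hand corners.
    Screen coordinates: y grows downward."""
    if curr[0] <= prev[0]:
        return False  # moving away from the submenu, not toward it
    upper = slope(prev, upper_right)  # slope toward the top corner (negative)
    lower = slope(prev, lower_right)  # slope toward the bottom corner (positive)
    return upper < slope(prev, curr) < lower
```

A move straight toward the submenu passes the test; a move that dives below the bottom corner (headed for a different menu item) fails it, and the submenu can switch instantly.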
So if you’re as geeky as me and think something this trivial is cool, I made a jQuery plugin that fires events when detecting this sort of directional menu aiming: jQuery-menu-aim. We’re using it in the new Khan Academy “Learn” menu:
I think it feels snappy. I’m not ashamed to copy Amazon. I’m sure this problem was solved years and years ago, forgotten, rediscovered, solved again, forgotten, rediscovered, solved again.
If anyone else on the planet ends up finding a use for jQuery-menu-aim, I’d be grateful to know what you think.
Thanks go to Ben Alpert for helping me understand the linear algebra / cross-product magic Amazon uses to detect movement inside the “blue triangle.” I ended up going w/ a cruder slope-based approach, mostly b/c I’ve lost all intuitive understanding of linear algebra. Sad. Need to watch more KA videos.
I’ve been the jerk who denies a candidate their hopes for an internship hundreds of times. That’s not a point of pride – just a consequence of helping build eight (nine?) classes of interns at two small companies that both happen to receive up to 500 intern applications every month.
I’ve obsessed over finding ways to build internship classes full of the best developers in an industry where the phrase “best developers” is so overused it’s become meaningless. In all cases, the male:female ratio has been exactly what you’d expect in tech.
So when Jessica penned her post the other day celebrating the fact that Khan Academy’s inbound Summer ‘13 class somehow has twice the number of females as males, I smiled big. Even though we all knew it wouldn’t last as acceptances come in (it already hasn’t held), our team enjoyed a brief moment basking in the future that we (and all educators around the world) hope to build: a world in which being a female developer ceases to be a novelty.
Here’s the top-voted comment on her post:
Here are some selected comments from sharing it on Facebook:
Here’s John Resig on Twitter’s equally frustrating if less-accusatory response:
Some really disappointing replies to my last tweet about Khan Academy’s improved female:male ratio. imgur.com/tnmgl7B
As the person who stands at the end of our hiring process’s pipeline, I find “Mark”’s idea that we’re sacrificing quality to fill some quota deeply insulting. If I were one of the women who has successfully navigated our brutal interview process, I’d be furious.
It doesn’t help to be an industry obsessed with meritocracy if the first reaction to an altered status quo is that OUR MERITOCRACY MUST BE BROKEN, SOUND THE ALARMS. That’s an a-hole old-timer’s club.
We should celebrate shifts we see in the historically depressing numbers of women in tech, especially when validated by rigorously competitive hiring environments. Why celebrate if we’re aiming for perfect equality? Because we’re not there yet and should encourage change we want to see.
As I was pacing around our company parking lot trying to decide whether or not to curse in this post I kept thinking about the hubbub raised when a 14-year-old posted his iPhone juggling game on Hacker News. He told the community his age. Because of that, he was met with accusations of “emotional manipulation” and “why can’t we leave age/sex/race out of it all.”
It requires a dangerously simple view of the world to decide that we don’t want to encourage a 14-year-old who’s pushing his limits by building mobile apps because we’re scared of risking our oh-so-perfect meritocracy.
It’s not a meritocracy until all 14-year-olds know that they can build mobile apps. It’s not a meritocracy until we don’t have one gender wondering if tech companies are just stomping grounds for another.
So we’ll continue encouraging females, children, and just about anybody to get into tech by celebrating milestones along the way. To those this threatens – oh.
Paul Graham claims nonprofit startups are similar to their for-profit counterparts. As soon as I read the headline I realized I’ve been answering a different version of this question in every recruiting conversation I’ve had for the past two years:
“What’s it like to be a hacker at a nonprofit?”
I’ll answer candidly based on my stories on each side of the profit divide. I won’t claim objectivity. I hope to prove this is the wrong question to be asking. Great companies share qualities far more important than 501(c)(3)-ness.
Judging by the questions I get asked by candidates, many would think the for-profit vs. nonprofit issue makes for a striking difference in these otherwise comparable stories.
Truth is both journeys have been eerily similar. I’ll spell out the specifics.
Real stick extends far beyond camera frame.
1. Day to day, you’d never be able to tell a difference. Walk in every day, hack on products, hack on the dev team and its culture, hack on recruiting, battle the hybrid maker-manager schedule, walk out. Realize I forgot my keys. Walk back in.
2. Technical challenges don’t care about your corporate structure. I’ve been stumped at both companies more times than I care to admit.
3. Speed and autonomy are celebrated; red tape and bureaucracy are reviled. The image of a political, slow (or at least less fast) organization is one of the most typical FUDs thrown at nonprofits. I can’t speak for other orgs, but I can guarantee that with the right people at the helm, “nonprofit” != “slow”.
4. Recruiting with a capital R is a cultural cornerstone at both Khan and Fog Creek. Some assume nonprofits wouldn’t approach recruiting with the same fervor and dedication as their counterparts. Not so. I learned everything I know from much wiser coworkers at Fog Creek, and I consider my current team a once-in-a-lifetime experience.
5. Compensation could be at the top of this list of similarities, but I don’t like to emphasize it. If you think working at a nonprofit means you can’t command highly competitive comp, you’re just wrong.
1. The lottery ticket doesn’t exist at Khan Academy. Nobody’s getting filthy rich off an exit, and we know it. This is perhaps the most obvious distinction when compared to for-profit startups where nobody’s getting filthy rich and they don’t know it. I keed, I keed!
If you need to have that lottery ticket, a nonprofit may not be for you.
Of course, my Fog Creek experience isn’t very differentiating when it comes to the typical startup lottery ticket. It’s entirely bootstrapped by principled and generous founders who’ve set up a company that immediately shares success instead of pinning hopes on an exit.
2. Mission is baked in at nonprofits. It’s easy to think of a for-profit startup that pivots left and right in an attempt to find traction. For better or worse, it’s harder to imagine a nonprofit completely abandoning its founding mission.
Once again, “Fog Creek-vs-Khan” isn’t very interesting here. Great companies will have meaningful missions whether they’re for-profit or not. Fog Creek is on a mission. Google certainly has one. I wish Stack Exchange would answer all the world’s questions faster. There’s nothing about 501(c)(3) status that dictates whether or not each teammate shows up every day aimed at the same epic purpose.
But there is something unique about bold nonprofit causes like Khan’s or Watsi’s that makes it highly likely you’ll find mission-driven culture lurking in every corner of the company. Sal ends our weekly company meetings by reminding us what Carlos Slim told him about our product’s future possibilities: “Billions of people are waiting.”
Epic mission: sleep 90% of day
So I completely agree with Paul Graham. “Nonprofit or for-profit?” is not a very useful question for a hacker trying to decide between two teams.
Far more relevant:
I’m only one data point. But in my experience, answering yes to all three of these questions will leave you feeling incredibly lucky, regardless of the corporate label.
*Standard “depends how you count” rule applies here.
**My memory sucks. Please correct me if this number is off, some Creeker w/ a better noggin.
Thx @jason for the read-through and tips.
Sometimes it feels like we’re playing a rousing game of guess’n’check when adjusting our App Engine performance settings. The request scheduler is a bit of a black box, so it can be hard to know which knobs we should twist and how far.
But there are two really straightforward signs any App Engine dev should watch out for. Both are big red flags signaling a configuration not optimized for performance:
You can identify whether or not your requests are wasting time in the pending queue by looking at your request logs and spotting high pending_ms values. Here’s an example:
126.96.36.199 - - [16/Jan/2013:16:03:12 -0800]
"POST /api/v1/user/exercises/distributive_property/problems/1/attempt HTTP/1.1"
200 2558 - "Mozilla/5.0 (Windows NT 5.1; rv:17.0) Gecko/20100101 Firefox/17.0"
"www.khanacademy.org" ms=877 cpu_ms=649 cpm_usd=0.000000 pending_ms=195
If you see lots of that, it’s probably a sign that you don’t have enough instances running to serve your traffic. The symptom is that requests are getting queued up behind other requests, waiting for other requests to finish before even starting to run your code. Hence the pending_ms.
The causes are trickier to spot. If you think you’ve already optimized your number of idle instances and pending latency settings, you may want to focus on making your loading requests faster. If App Engine thinks it takes a really long time for a new instance to load your app’s code on the first request, it may be less willing to give you that new instance when a request comes in. Have fun tapping your toe while pending_ms ticks up. You should also be aware of any requests that are abnormally slow and consider moving them to backends or dedicated task queues. Otherwise App Engine might queue up some normal request like a user loading your homepage (“I swear, it’s usually so fast!”) right behind some long-running bruiser of an API call. And all of a sudden your user is waiting an extra 500 millis just to get out of the pending queue before your homepage can even start to be served.
Loading requests can be an equally bad sign. You want your instances to stay up and stay healthy for as long as possible. They should be serving thousands and thousands of requests before needing to be recycled. Otherwise your users just sit around on their butts waiting for App Engine instances to reload all that beautiful code you’ve spent so much time writing.
These requests are spotted via loading_request=1 in your request logs. They’re going to be slow, so you don’t want to see ‘em much.
If you’re seeing an abnormally high number of loading_request=1’s given how much traffic you serve, chances are that you’ve also been noticing bigger problems. This could be a sign that your requests are timing out or simply crashing when a new instance starts up. This’ll cause App Engine to kill the instance being loaded, which causes another loading request to be fired for the next request, which…well, you get the point. You might also have memory leaks that are causing instances to get shut down after serving just a few requests. Bottom line: something is causing your instances to thrash off and on too often.
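If you want to scan a log dump for both red flags at once, a few lines of Python will do. This is just a sketch — the regexes assume the log format shown above, and the 100ms threshold is an arbitrary number of my choosing, not any official App Engine recommendation:

```python
import re

# Fields as they appear in App Engine request logs (see the sample line above).
PENDING_RE = re.compile(r"pending_ms=(\d+)")
LOADING_RE = re.compile(r"loading_request=1")

def flag_log_line(line, pending_threshold_ms=100):
    """Return a list of warning strings for a single request-log line."""
    flags = []
    m = PENDING_RE.search(line)
    if m and int(m.group(1)) > pending_threshold_ms:
        flags.append("high pending_ms: %s" % m.group(1))
    if LOADING_RE.search(line):
        flags.append("loading request")
    return flags
```

Run it over a day’s logs and count the flags; lots of either kind means it’s time to revisit your performance settings.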
As of today, you can spot both of these problems when browsing around your site if you’re using the App Engine Mini Profiler. If you’re wondering why a request is slow, open up the profiler and take a gander:
You should, of course, still keep track of aggregate loading_request appearances via your request logs. The mini profiler just makes it easy for you to take a glance at the current request’s health.
I’d been getting a bit frustrated trying to figure out where time was being spent during a few of Khan Academy’s longer running requests. App Engine Mini Profiler usually works great, but these specific requests seemed to be spending a lot of time serializing objects into JSON representations. JSON serialization, by its nature, involves tons and tons of nested function calls. Anathema to an instrumented CPU profiler like the Python profiling tool included in Mini Profiler. So I added a simple sampling tool.
I am Fleetwood, the new star of this blog.
The ugly problem w/ instrumented profilers is they overestimate how much time is spent in frequently called functions or those with deep call stacks. Since “instrumenting” your code basically means turning this…
def monkey_print(s):
    print("ooh ooh aah %s ahh" % s)

…into this:

def monkey_print(s):
    profiler.start_timing("monkey_print")
    print("ooh ooh aah %s ahh" % s)
    profiler.stop_timing("monkey_print")
…you can imagine how all those calls to profiler.stop_timing will add a lot of unexpected overhead if your code happens to monkey_print thousands of times.
This subtle problem can make it really, really frustrating to figure out where time’s being spent in your program. Even after years I’m still often tricked into chasing mirages. When you look at the output of a request that includes thousands and thousands of deeply nested function calls, you can never really trust how much time was actually spent in those functions.
Sampling profilers have always been around to tackle this problem. They work by periodically halting your program, grabbing the stack trace of whatever code happens to be currently running, and then letting your code go on its merry way.
By doing this enough and combining the sampled stack traces, these profilers do a pretty good job answering “Where is time being spent in my program?” by periodically asking the question, “Well, what is your program doing right now?”
I’m a really nervous car rider.
Right around the time when I started to think I’d trade my kingdom for a sampling profiler that works on App Engine, I stumbled across a Stack Overflow user who’s been on a bit of a personal crusade to get people to embrace a very simple form of profiling: smashing Ctrl+C at random points to halt your running program, then examining the resulting call stack.
He persuasively argues that even manually gathering a small number of call stack samples in this way does a great job answering the most important profiling question, “Where is time being spent?”
I figured, well, if that’s true, then to get some interesting App Engine performance info we might not need a fully-featured, built-by-the-pros sampling profiler. We only need something that can briefly inspect the call stack of the currently running request.
Unburdened by worries of perfection, the hacks began. Stack Overflow taught me how to inspect the stack of all running Python threads. Alpert gave me the tip of spawning a separate thread to periodically perform these stack inspections (since the signal library that I’d prefer to use is off-limits in App Engine).
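To give a flavor of the approach (this is a simplified toy, not the actual App Engine Mini Profiler code — error handling, App Engine’s threading restrictions, and smarter stack aggregation are all glossed over):

```python
import sys
import threading
import time
import traceback
from collections import Counter

class Sampler:
    """Toy sampling profiler: a background thread periodically grabs the
    creating thread's stack via sys._current_frames() and tallies it."""

    def __init__(self, interval=0.005):
        self.interval = interval
        self.samples = Counter()  # formatted stack trace -> times observed
        self.target_id = threading.get_ident()  # thread we'll be sampling
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._sample, daemon=True)

    def _sample(self):
        while not self._stop.is_set():
            frame = sys._current_frames().get(self.target_id)
            if frame is not None:
                stack = "".join(traceback.format_stack(frame))
                self.samples[stack] += 1
            time.sleep(self.interval)

    def __enter__(self):
        self._thread.start()
        return self

    def __exit__(self, *exc):
        self._stop.set()
        self._thread.join()
```

The stacks that show up most often in self.samples are, statistically, where your request is spending its time — no instrumentation overhead on the code being measured.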
A little duct tape here and some apologetic comments there, and a simple sampling profiler for App Engine was born.
Here it is. Far from perfect, needs more work, but we’re learning from it already.
The sampling profiler is bundled into App Engine Mini Profiler along w/ the instrumented profiler and RPC statistics it already included. Profile your App Engine app and let me know how it goes. Or try the (intentionally slow) demo here (just click the little profile link in the upper left corner).
A trail of zebras in my wake.
P.S. We’ve been getting in the habit of giving thanks at the end of Khan Academy company meetings. Whenever I talk about the mini profiler I’d like to send thanks to the Stack Exchange team for their inspiration. If you’re on .NET or Ruby, stop wasting time and check out their original. Given the timing of recent events, I’d also like to send our team’s thanks to Aaron Swartz, the brilliant creator of web.py, which inspired the design of App Engine’s webapp, which, in turn, helps power Khan Academy.
I’ve noticed that the most mature and accomplished developers I’ve worked with are also those who most frequently say “I don’t understand” when they’re listening to a technical explanation. This has been the case with coworkers both at Fog Creek and at Khan Academy.
In one way, it’s counterintuitive. Shouldn’t the senior devs already know everything? But it makes a lot of sense. Those who are most secure in their own abilities are the most comfortable to admit when they haven’t fully wrapped their minds around something. Newer devs assume that their confusion is their own fault. They don’t want to interrupt others due to their own perceived shortcomings.
“I don’t understand” is the perfect response. You’re not insulting anybody. You’re not showing weakness. You’re building a culture of respect for how smart everybody is, because you know that after a few minutes of explanation you will get it.
Either that or you’ll find a bug. I like to think of “I don’t understand” as a kind of reverse rubber ducking. Except in this version, the duck comes alive and quacks and stomps and “I don’t understand”s all over your keyboard while forcing you to explain various things.
It’s most said by the best, decades after they’ve become a master. We newer devs should follow their lead and get rid of any stigma associated with those words.
We ran into a hairy problem with our App Engine A/B testing framework a few months ago. It’s a problem unique to those relying on memcache to increment counters with statistical meaning in a shared resource environment like App Engine.
Since there are more live-action teenage mutant ninja turtle films than people who fit that bill (sorry, one person on the internet, I know you matter), this is more a story about an interesting problem than a service post.
Some of our A/B tests started looking a little…surprising. “500+ users mastered Dividing Decimals in 20 problems or less, and 200+ users mastered it in 30 problems or less,” our dashboard claimed. Hmmm. Impossible, inconsistent results like that started popping up all over. I had that awful feeling you get in your gut when you square yourself to the possibility that the metrics you’ve been using to make decisions are straight-up broken. Trust in our A/B experiment results plummeted.
Our A/B framework uses memcache as the fastest way to increment its internal counters every time somebody new participates in an experiment or triggers a conversion. For example, when a new user first comes to our exercise that helps build intuition for one step equations, they may be handed one of two alternatives for each problem they encounter. A counter (alternative_A_participants) is incremented accordingly. Another counter keeping track of conversions (alternative_A_conversions) is incremented similarly when we think they’ve mastered the skill. This is how we keep track of our A/B alternatives and their relative performances.
Now, there are two properties of these counters that aren’t negotiable. They must be fast, because A/B testing shouldn’t degrade performance for users. And they must be atomic, because otherwise we’d lose tons of data when we get hundreds or thousands of new participants per second, say during traffic events.
In App Engine land, memcache is the clear winner when in need of a fast, atomic counter. You could build a sharded datastore counter for atomicity, but you’ll sacrifice speed because the datastore can’t keep up with memcache. You could try something like kicking off a task queue for every increment, but you may quickly breed a single user’s HTTP request into 5 separate requests for your instances to swallow. Other tools exist, of course, but they aren’t available in App Engine.
memcache.incr is hard to beat.
The astute among you are already concerned. “Memcache shouldn’t be used to store data you can’t afford to lose,” you chide, “you never know when memcache will decide to evict your data”. True. In a situation like this, you have to be constantly running a process that persists data from memcache into something more permanent, like the App Engine datastore. We do. But you still don’t have any guarantee that a malevolent memcache won’t evict unsaved data before it gets persisted. So you also have to be willing to occasionally lose some data to bad luck. We are.
We knew that we’d occasionally lose count of an experiment’s participant or two or three or twenty. This is a relatively rare problem we’re willing to swallow. Assuming there is no statistical bias in memcache’s tendency to evict the counter for alternative_A_conversions vs. the counter for alternative_B_conversions, we felt comfortable comparing an experiment’s alternatives and making meaningful decisions.
That assumption was way off. Think about the problem that a PaaS like App Engine has to solve when offering memcache to all its applications. They can’t just run one big, honkin’ instance of memcache on one bigger, honkin’er machine somewhere. They have to distribute their load across multiple completely independent instances of memcache. Which means when you call memcache.incr(key), a value could be incremented on one of any number of different App Engine memcache instances…and the instance chosen depends on a hash of the key being incremented.
In simpler words, memcache.incr("alternative_A_conversions") stores its counter on a different machine than memcache.incr("alternative_B_conversions"). Since memcache is a service App Engine shares among all its applications, it’s highly likely for one memcache machine to be under considerably different memory pressure than another.
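To picture the routing, imagine something like this toy model. The constant and the hashing scheme are made up for illustration — App Engine’s real key-to-machine mapping is internal — but the punchline holds: the key alone decides which machine holds your counter:

```python
import hashlib

NUM_MEMCACHE_INSTANCES = 32  # hypothetical; the real fleet size is Google-internal

def memcache_instance_for(key):
    """Toy model of key-based routing: a stable hash of the key picks the
    machine, so two different keys can land on machines under wildly
    different memory pressure."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_MEMCACHE_INSTANCES
```

The mapping is deterministic per key, which is exactly why the eviction bias is consistent rather than random noise that would wash out.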
Long story short? Some memcache keys (alternative_A_conversions) can survive in memcache for 10 minutes before being evicted, giving our background persistence task plenty of time to run. Others (alternative_B_conversions) may only last 10 seconds under high pressure. This behavior is consistent for each key…creating a statistical bias that favors one A/B alternative over another by repeatedly evicting only one of the key’s counters before we have a chance to persist its data.
What to do? We could switch away from memcache for our counters, but as mentioned above the alternatives aren’t great. We could give up on our realtime dashboard for A/B results, stop worrying about atomic counters entirely, log events for every participant and conversion, and periodically analyze our A/B alternatives offline. This may be the future for us, but we’ve been pressed for time (pretty sure we’re unique in that respect) and hoped for a quicker fix.
We had one hack available to us thanks to App Engine’s treatment of the namespace parameter available in all memcache calls. If you provide a namespace in your call to memcache.incr, App Engine would use that namespace to determine which memcache instance to send the value to, *not* the key. This meant we could keep all keys for an experiment in a single memcache instance, which would remove the statistically biased eviction patterns. While I’m normally a lover of such simple solutions, we heard this would soon be changing, and even identically-namespace’d keys would be spread among multiple instances in coming weeks. No go.
Luckily, Google’s Fred Sauer is brilliant and had a better idea. He suggested using bit offsets to store multiple counters’ values within a single memcache key. Since memcache.incr’s ints have 64 bits, we could split them into four 16-bit counters, each counting up to a maximum of 2^16−1 = 65,535 before needing to be persisted over to the datastore.
This ended up working beautifully. Each time we want to increment alternative_A_conversions, we grab the memcache value stored by key all_conversions and increment the counter stored in bit positions 0-15. When we want to increment alternative_B_conversions, we grab the exact same memcache key and increment the counter in bit positions 16-31.
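The packing math itself is simple. Here’s a toy model of it — the names are mine, and on App Engine the increment would actually be applied atomically via something like memcache.incr(key, delta=1 << offset) rather than this read-modify-write:

```python
COUNTER_BITS = 16
COUNTER_MASK = (1 << COUNTER_BITS) - 1  # 65,535, the per-counter ceiling

# Bit offsets within the single 64-bit packed value (names are illustrative).
OFFSETS = {
    "alternative_A_conversions": 0,
    "alternative_B_conversions": 16,
}

def incr_packed(packed, counter):
    """Return the packed value with one 16-bit counter incremented."""
    offset = OFFSETS[counter]
    if (packed >> offset) & COUNTER_MASK == COUNTER_MASK:
        raise OverflowError("persist to the datastore before this counter wraps")
    return packed + (1 << offset)

def read_packed(packed, counter):
    """Read one 16-bit counter out of the packed value."""
    return (packed >> OFFSETS[counter]) & COUNTER_MASK
```

The overflow guard is the one real constraint of the scheme: each counter now maxes out at 65,535, so the background persistence task has to drain counters well before they get there.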
Even the most trivial bit twiddling makes web devs like me feel all hardcore. It’s a little sad.
Using a single memcache key for all of an experiment’s counters means we never have to worry about statistical bias in eviction policies affecting our ability to compare alternatives A and B. If one of an experiment’s counters is evicted at an unfortunate time, they all are.
Our A/B experiment data has looked consistent since this change, and trust in the framework is largely restored. Hats off once again to Fred and the App Engine team.
Teaser: we’re full steam ahead hiring for the summer ‘13 class of interns. Apply.
We had fifteen interns this summer and around that many full-timers.
It’d be silly for me to try to write a summary of what was accomplished. That task was doable last summer and fall, but this time too much was built. If I made a list of it all, I’d ramble even worse than normal.
Luckily, some interns took up the mantle (minus the rambling). If you’re here to get a feel for the type of work done by a Khan Academy intern, you can’t do better than Dylan Vassallo’s, Ben Alpert’s, Jamie Wong’s, Omar Rizwan’s, and Ankit Ahuja’s firsthand summaries. From launching a brand new way to explore computer science (*cough*alongside John Resig*cough*) to shoring up our backend infrastructure (*cough*alongside Craig Silverstein*cough*), these posts give a taste of how our interns spent their time (not apologizing for namedropping, sorry, it’s relevant).
But just a taste.
So before things get any foggier in my head, I’ll share the summer themes that stick out most for me.
I prefer to just tell people to “BRAG!!!” so often they think I’m more repetitive than Inigo Montoya, but I suppose “do things, tell people” does have a less abrasive ring to it.
The intern posts linked above create immense value…for Khan Academy, for wide-eyed future KA candidates, and especially for the careers of their authors (Alpert and Vassallo are already full-timers, though, so mitts off). They’re one of my absolute favorite parts of every internship.
But they don’t tell the full story.
On the issue of gender, they actually warp it. Four of our fifteen interns were female, and they each made serious dents in the world. Heck, Jessica Liu single-handedly created an entire library of computer science content which is now teaching hundreds of thousands of learners. It was the most gender-balanced group of interns I’ve ever worked with, and I’m more than convinced this is a big reason why it was also one of the best.
Just one example of a story easily lost. It’s important enough to let stand on its own.
The top student-created spin-offs of Jessica’s Nyan Cat program
Far more important stories go untold than boring stories are shared. My rambling blog may often be on the wrong side of that battle, but hopefully you and your team won’t be. We use dedicated intern demo days, hipchat rooms for the purpose of sharing screenshots, and “You Killed My Father, Prepare to Die”-like persistence when encouraging interns to share.
I haven’t met an intern yet who complained, “I’m getting a little tired of all the mentorship around here.” I hope I do one day. No matter how much we focus on mentorship, it requires constant, conscious effort. Making sure all mentors are communicating well. Making sure interns are having a consistent code review experience. Making sure ownership is being handed out. Trying to learn from any and every frustration voiced by a previous summer’s intern. It’s a full-time job for every dev on our team, and they already have full-time jobs.
But it’s worth it. See below. You can bet we’ll be gathering next summer to discuss how to be even better mentors.
Our interns ship to real users. Whether it’s from deploying quick fixes on day one or sprinting all summer for a splashy launch in August, they will have the unmistakable feeling earned by sending their creations off into the world.
I honestly believe this is one of our biggest competitive advantages when recruiting. One of the most talented interns I’ve ever worked with told me about his previous internship’s code — it’s still rotting in source control somewhere. The first week of his summer, he told me that wouldn’t be happening again. I have immense respect for that.
There was another intern, one we lost in a recruiting battle to a big company. He emailed me a week after he started, full of regret, after being shoved away in a corner to work on some internal-only documentation tool. What a waste.
I’ve seen frustrations of all varieties melt away when a dev, full of pride, pushes a new piece of work out to hordes of hungry users. It’s cathartic and a capstone experience in any real-world development effort. To not include it in an internship is a crime.
Our team can proudly say we nailed this theme this summer.
Most tech leads or managers that I talk to think a 1:1 ratio of full-timers to interns is simply too crazy. Maybe.
One thing we know for sure? Working with interns blows the normal interview process right out of the water. As Desmond so rightly pointed out recently, if you’ve ever had an argument about whether or not to give an intern a full-time offer after three months of working side-by-side with them, you’ve really gotta question the effectiveness of your normal, oh-so-vaunted five-hour interview process.
And it’s well understood now that the best devs are only on the job market two, maybe three times in their lives. One of those is right out of college. For long-term recruiting health, you can’t beat internships. The new full-timers joining our team from the class of summer ‘12 are a testament to that.
Punchline? We’re hiring up again for next summer. We’ll be even better prepared and accomplish even more than last time. If you’ve read this far and want to be a part, apply for an internship and include “I read Kamens’s blog, and the monkey swings at midnight” — you may squeak through resume reviews a tad faster.