Smelling code versus tasting it.

Trying to teach people the difference between “good” code and “bad” code is sometimes tricky. The science vs. art conundrum often gets in the way. One metaphor that many people turn to is that of “code smells”. The idea is that sub-par coding practices – like stale food in a refrigerator – can give off distinctive odors which software developers should become sensitive to.

But I just came across a slightly different metaphor that communicates similar principles in a different way: taste.

I was listening to an interview with Linus Torvalds (the genius behind Linux and Git) in which he demonstrated the difference between what he considers “good” taste and “bad” taste in coding, offering a beautiful example of elegance in code. (Caveat: understanding this example requires an understanding of pointers and references, though nothing beyond a good C++ 101 class; the discussion itself can be found in this video.)

Linus provided two examples of code that do the same thing. The first one looks like this:
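(Reconstructed from his slide, with assumed list and list_entry types added so the snippet stands alone – treat it as a sketch of what Linus showed, not a verbatim copy:)

    /* Assumed scaffolding, not on Linus's slide: a minimal
     * singly-linked list of entries. */
    typedef struct list_entry {
        struct list_entry *next;
    } list_entry;

    typedef struct {
        list_entry *head;
    } list;

    void remove_list_entry(list *l, list_entry *entry)
    {
        list_entry *prev = NULL;
        list_entry *walk = l->head;

        /* Walk the list, remembering the node just before "entry" */
        while (walk != entry) {
            prev = walk;
            walk = walk->next;
        }

        /* Removing the first entry is a special case, because there
         * is no previous node whose next pointer we can patch */
        if (!prev)
            l->head = entry->next;
        else
            prev->next = entry->next;
    }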

Now, as he explains, this is how nearly every student is taught to work with singly-linked lists. It is straightforward, and reasonably easy to follow after a few moments’ thought. And it works. All the edge cases have been accounted for.

But this code should leave a bad taste in your mouth. One clue that it is a little “off” is the requirement for an if() statement at the end, which indicates that the code works differently depending on whether the entry being removed is in the middle of the list or at the very beginning.

In contrast to that example, take a look at this second approach (offered as an example of much better taste):
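(Again a reconstruction, reusing the same assumed types as above:)

    void remove_list_entry(list *l, list_entry *entry)
    {
        /* The "indirect" pointer holds the *address* of the pointer
         * we will update: first the list head, then each node's
         * next pointer in turn */
        list_entry **indirect = &l->head;

        /* Walk the list, looking for the pointer that currently
         * points at the entry we want to remove... */
        while (*indirect != entry)
            indirect = &(*indirect)->next;

        /* ...and simply redirect it past that entry. No special case. */
        *indirect = entry->next;
    }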

Here we have a function that accomplishes the exact same thing as the original code, but it just tastes better. It requires fewer than half the lines of code, and more importantly, there are no special cases. It just does the job that the coder intended it to do – no more and no less. Yes, it requires a clear understanding of pointers (and so it might be slightly more difficult to understand on first review), but it also provides a great example of why pointers can be so powerful. (And the useful comments take away any remaining mystery.)

So, I thought I would pass this along as a simple but helpful educational tool for communicating what good code should taste like. Because we could all stand to become better connoisseurs of code.


Radical Transparency

I just read a fascinating, counter-intuitive article about a brand-new software tool called Eyebrowse – a tool, developed at MIT, which “lets users make their browsing history public.” The tool was not designed as an accountability system (users can selectively choose which parts of their history they want to expose to the world) but rather as a socially collaborative way to browse the web in public. A way to share your online experiences with your friends, and to make new friends with similar interests. I like it.

But it occurred to me that we could take this concept even further.

I would like to develop a new tool. I would call it:

“Openize”: a radically transparent tool for using the internet.

I am envisioning a product which would be similar to CovenantEyes, only more so. CovenantEyes lets me share my web history with one or two closely trusted friends. The Openize product would record my history in the same way, but would make it publicly available to anyone and everyone: my wife, my children, my boss, my pastor, my mom, my entire world. No blocks. No white-lists. No deletions. Radical honesty about what I read, where I go, and who I am. Times. Dates. Search-terms. Eye-tracking. You name it. And I don’t want just a list. I want a graphically interactive dashboard that would provide visual aggregation and unlimited deep-dive inspection of my entire web-surfing history, from this moment forward, made freely available to anyone who cares to look.

Imagine it: Everyone would be able to see exactly how long I spend on Facebook. Or which articles on Reddit I’m drawn to. Or where I go late at night. No masks, no lies, no hypocrisy. There have been so many commentators lately decrying the loss of privacy in our society. But I say: embrace openness. For the vast majority of history, people have lived in small communities without ANY privacy. And for the most part (except for those who lived under despotic regimes), they thrived. Of course, that would be a lot scarier in a country where righteousness was outlawed by a corrupt government. But the reality is, we are only a few years away from technological advancements that will enable governmental invasions of our privacy that are orders of magnitude more intrusive than anything we have yet seen. Gnat-sized drones that can spy in our bedrooms have probably already been invented. So we might as well live as if it has already begun.

But I digress. I’m not recommending this strategy as a deterrent to the NSA. And by no means am I suggesting that this type of openness should be mandatory for everyone. I’m just saying that if the tool were available, I would be first in line to volunteer for it. WHY?

Because I want to be FREE.

Free to be WHO I AM in front of the whole world. Free to leave a legacy – a trail in the forest that others can follow. Free to stand boldly before all who care to look, and say “Here I stand. I can do no other.” If I ever succeed in life – if I become the man that God wants me to be – I want my descendants to be able to freely trace all the footsteps that led me to victory. On the other hand, if I crash and burn, I want them to be able to analyze my mistakes with spotlights and microscopes, in the hope that they might avoid my missteps. But in any case, I don’t want to hide anything. I don’t want to be a pretender.

So who’s with me? Would you be willing to take the plunge, to rip off the covers? To boldly stand with the Apostle Paul and say “I will not be like Moses, who put a veil over his face to prevent the Israelites from seeing the fading glory” (2 Cor 3:13). To discover the power of living in Radical Transparency?

I should point out that I don’t have a tool like this. I don’t know if one exists, and I don’t yet have the technical savvy to build one myself. But I want it. And I want to know: if it were available, would you buy it? Would you use it? Will you help me build it?

“Our gospel came to you not simply with words but also with power… You know how we lived among you… And you became imitators of us” (1 Thess 1:5–6; emphasis added)

NP-complexity for Dummies (like me)

Despite all the computational theory that I have studied over the years, the definitions of P, NP, NP-hard, and NP-complete have always been fuzzy in my mind. But I think I’m finally getting the hang of it, and I want to write it out (in overly simplistic language) before I forget (again)!

P problems are those that can be solved in Polynomial time. That doesn’t mean they’re necessarily easy – they may be O(n^2), or even O(n^googol) – but they’re doable.

Then there are the other problems. The hard ones. The ones (for example) that take an exponential amount of time to solve – like cracking passwords with brute force.

There are a lot of these hard problems for which nobody has ever found a polynomial-time algorithm. That doesn’t mean one doesn’t exist; it just hasn’t been discovered yet. Now, some of these problems have a particular characteristic: although they can’t be solved quickly (yet), they can at least be verified quickly. That means that if some wizard said, “Here’s the answer to this particular problem”, you could quickly do some math and say, “Yep. He’s correct. That’s the answer all right.” (It might take magic to crack your password, but checking a single guess is instantaneous.)
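(To make “verified quickly” concrete, here’s a toy sketch using subset-sum – a classic hard-to-solve-but-easy-to-check problem. Everything in the snippet is made up for illustration: finding a subset of numbers that adds up to a target may mean trying up to 2^n subsets, but checking a claimed answer takes a single linear pass.)

    #include <stdio.h>

    /* Subset-sum: does some subset of nums[] add up to target?
     * Solving may require trying up to 2^n subsets; verifying a
     * proposed answer ("certificate") is one linear pass. */
    int verify_subset_sum(const int *nums, const int *chosen, int n, int target)
    {
        int sum = 0;
        for (int i = 0; i < n; i++)
            if (chosen[i])            /* chosen[i] == 1: nums[i] is in the subset */
                sum += nums[i];
        return sum == target;         /* 1 if the claimed answer checks out */
    }

    int main(void)
    {
        int nums[]   = { 3, 34, 4, 12, 5, 2 };
        int chosen[] = { 0, 0, 1, 0, 1, 0 };   /* the wizard claims: 4 + 5 = 9 */
        printf("%s\n", verify_subset_sum(nums, chosen, 6, 9) ? "Yep." : "Nope.");
        return 0;
    }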

Problems of this sort (with easy-to-test answers) are known as NP. (Important note: “NP” does not stand for “Non-Polynomial”. As a matter of fact, all P problems are by necessity also NP – they have easy-to-test answers. NP actually stands for Non-deterministic Polynomial, for reasons that aren’t really necessary to understand.)

OK, so now we have this set of NP problems (which includes all the P problems, plus many others that may or may not be in P, but appear so hard that we doubt they are). Now it just so happens that some brainy dudes started studying all these NP problems that seemed to be outside of P, and they noticed a curious detail about them: They all turn out to be “equivalently” hard. (That statement is not precisely accurate, but I’ll clarify shortly.) So, for all of these problems (even the really-really hard ones that would take a billion eons to solve with current algorithms), it turns out that if you can find a way to solve any one of them quickly, you could solve all of them quickly! If you find a key that unlocks one of them, that key can be used to unlock all of them! So, the brainiacs have given this quirky set of quasi-equivalent problems a special name: they call them NP-complete. Technically put: any problem in NP can be “reduced” to any problem in NP-complete; therefore, the NP-complete problems are, in a precise sense, the hardest problems in NP.

There is one more label to know: NP-hard. A problem is NP-hard if every problem in NP can be “reduced” to it – in other words, if it is at least as hard as the NP-complete problems. Some NP-hard problems sit outside of NP altogether (even their answers are not easily verifiable; the halting problem is the classic example). So NP-complete is really just the overlap: the NP-hard problems that also happen to be in NP.

SOOO, if someone wakes up tomorrow and discovers an efficient (polynomial-time) solution to just one of the NP-complete problems, the entire fortress of NP-complete problems will collapse, and they will all be “solved” (or, rather, “solvable”). So you say, “Well, of course, that’s impossible; that could never happen.” And you’d probably be right. But here’s the thing: NO ONE HAS EVER PROVEN THAT TO BE TRUE! Despite decades of trying by the brightest of the bright, and a $1 million reward. You could also win the $1 million by proving that even ONE of the NP-complete problems is for sure NOT in P, but nobody has been able to do that either.

So what are you waiting for? Get busy! Go out there and win a million dollars!


Using Outlook Emails for ToDo lists

Here’s a simple productivity tip that I came up with recently…

At work I basically live in Outlook, so I’m not interested in introducing another application to handle my “ToDo” list. However, I’ve never been particularly fond of the Task List that Outlook provides. So I’ve come up with a simpler approach: On Friday afternoons, I email myself a list of the things I want to accomplish the following week.

One thing that this approach misses, however, is a convenient way to “check off” the individual tasks. So I set up a simple way to do this with a few easy clicks, using built-in features in Outlook. My customization makes use of the AutoCorrect feature.

1. First, I entered the following two AutoCorrect entries:

  – Two square brackets (“[]”) get replaced with an empty box.

  – Right-bracket + backslash (“]\”) get replaced with a “checked” box.

2. So now, if I type brackets + Tab, I get a checkbox. (Pressing Tab rather than the spacebar turns the checkbox into a list bullet, so subsequent lines automatically get added to the checklist with their own checkboxes.)

3. To check off an item, click on the item, hit Home twice, then type R-bracket + backslash + Tab (which sounds complicated, but it only requires a simple roll of the fingers; you don’t even have to hit Delete). Note: since the items are now list items, hitting Home twice will display formatting that makes it look like your change will apply to the whole list, but only the box with the cursor will get “checked off”.

If you haven’t used customized AutoCorrect entries before, here are step-by-step instructions for setting one up in Outlook 2013 (they work in earlier versions too):

1. In an email, use Insert -> Symbol to select and enter the symbol you want to use. (I chose Wingdings character 168 for the empty box.)

2. Now, in the email, select the symbol you just added. Then open the AutoCorrect feature by navigating here:

         FILE -> Options -> Mail -> Spelling and AutoCorrect… (button) -> AutoCorrect Options… (button)

3. Now notice that the symbol you selected in step 2 has been automatically placed in the “With” field, and the radio button should default to “Formatted Text”. In the “Replace” field, type the characters that you want to use to trigger the action (open and close square brackets, in my case). Hit the “Add” button, and then OK.

4. Repeat these steps for the marked checkbox. (I use Wingdings character 254.)

5. One last thing I have to do before using this method: Since I have emailed this list to myself, it is not automatically editable. To make it so I can check the boxes off, I must open the email in its own window (not in the Reading Pane), then in the MESSAGE tab I click Actions -> “Edit Message”.

And now, voilà! You have a very convenient way of making and using ToDo-list checkboxes in emails. Here’s a snippet from my morning list:

Now I can go check off that second box. (I love checking off boxes!)


Resume Tips (Not that I am Preparing One at the Moment)

Just read this helpful suggestion from Yvan Rodrigues over at CodeProject. I particularly like his description of a “skills grid”:

My resume is 13 pages. As ridiculous as that sounds, I get a very high interview rate and usually before I show up they know if they want me or not. I did it that way because I have been a hiring manager for many positions and you have so little to go on from a 1 page resume and cover letter. With mine, what they see is what they get. I use a skills grid to indicate my level of knowledge for various technologies using this key:

Basic knowledge: I have researched this topic; or I haven’t used this technology, but I’d like an opportunity to; or I have used this for less than 6 months; or I could discuss it at a cocktail party.
Applied knowledge: I have used this in one or more projects; or I have used this for less than 2 years.
Advanced knowledge: I use this regularly in projects; or I have been using this for 2–5 years; or with some preparation, I would feel comfortable speaking about this topic to a general audience.
Expert knowledge: This is a core technology in my projects; or I have been using this for 5+ years; or I have written or spoken to this topic; or with some preparation, I would feel comfortable speaking about this to an audience of my peers.

While I think a thirteen-page resume is excessive*, Yvan’s skills key seems useful. I have used a skills grid myself for years, but I just liked his wording, so I am preserving it here as a reference, in case I need to redo my resume someday.

*Personally, I pride myself on maintaining a dense ONE-page resume. It requires exacting discipline to strip out every unnecessary word in order to keep it concise. But I try to treat my resume exactly like an elevator speech: make every syllable count. (Unlike my blog, which conforms more to the Pascalian adage, “I would have written a shorter post, but I didn’t have the time.”)


Visual Studio 2014: A Taste of Things to Come…

While I’m not ready to jump ship from VS2012 just yet, the just-released VS2014 CTP3 has some pretty cool features (expand the Details -> Technology Improvements items on that page). Some highlights*:

· Custom Layouts – save your window layouts and reuse them. In CTP 3, these Custom Layouts roam: any custom layouts you create will synchronize across machines that have CTP 3 installed when you sign into the IDE with the same account.

· “PerfTips” – see how long your code took to execute, directly in the editor, whenever execution time exceeds a threshold. (More details here.)

· “Lightbulbs” – if you have an issue in your code, placing your editor caret on the offending line (or hovering over the issue) will present a light bulb that shows helpful actions you can take to resolve the problem, together with a preview of each action’s results.

· C# refactoring support has been completely revamped. There are two new core refactorings: Inline Temporary Variable and Introduce Explaining Variable (see the sketch below). Additionally, refactoring support for Visual Basic has been added for the first time.

* Note: Some of these features are from earlier releases of VS2014 (i.e., CTP1 & CTP2).
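To give a feel for what “Introduce Explaining Variable” does, here’s a hypothetical before-and-after. The feature targets C#, but the idea is language-agnostic, so this sketch is in C, and every name in it is made up for illustration:

    #include <stdbool.h>

    /* Hypothetical order data, purely for illustration */
    struct order { int price, qty, customer_years; };

    /* Before: the reader must decode the compound condition inline */
    bool gets_discount_before(const struct order *o)
    {
        return o->price * o->qty > 1000 && o->customer_years > 5;
    }

    /* After "Introduce Explaining Variable": each clause gets a
     * descriptive name, so the intent reads at a glance */
    bool gets_discount_after(const struct order *o)
    {
        bool is_big_order      = o->price * o->qty > 1000;
        bool is_loyal_customer = o->customer_years > 5;
        return is_big_order && is_loyal_customer;
    }

(“Inline Temporary Variable” is essentially the inverse operation: it replaces a single-use temporary variable with the expression that initialized it.)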


Forecasting for IEEE

PREDICTIONS

IEEE Spectrum invited me to contribute to their “Survey of the Future”, prognosticating how I expect technology to evolve over the next two decades. For the record, here are my predictions:

In 10 years:

Ubiquitous computing: Internet of Things + cloud + wearables = SMOG (“Sensory Manipulation Overload Grief”)

In 20 years:

Implantables: digital tattoos; optic-/aural-nerve interfaces; natural language interfaces

Details:

Over the next 10-20 years:

  1. Desktop workstations will disappear, along with:
  2. The “internet” (as we now know it). It will no longer refer to a collection of websites; websites will no longer be monolithic, static “destinations”; the data (images, text, media) currently housed in individual websites will coalesce into an amorphous “Sea” of information (navigated, perhaps, by semi-intelligent virtual-guides). We will access this Sea via:
  3. Wearables: glasses/contacts will provide hi-def 3D VR screens; nano-sized wireless ear-buds will be “tattooed” in the ear canal. We will interact with these interfaces via:
  4. Natural language (i.e., conversational interfaces; including inaudible voice recognition through direct vocal muscle detection). People won’t be talking to “the computer”; they will be conversing directly with the Sea. Early adopters will be experimenting with:
  5. Implantables: digital tattoos and neural implants allowing thought-driven interfaces (primitive at first), and direct-sensory stimulators (“sharable emotions”). All of these innovations will give rise to:
  6. Virtual Presence: the ability to interact with anybody instantaneously, in 3D VR environments.

Other changes to expect by 2040:

  1. Drones will be as ubiquitous as cars.
  2. Passwords will be obsolete, along with email, Facebook, and Google.
  3. Shopping centers will vanish, replaced by holographic VR-shopping, automated drone delivery, and nano-scale 3D printers.
  4. Offices and school buildings will empty out, deemed unnecessary with the rise of “virtual presence”.
  5. Clubs, gyms, churches, and bars will increase in popularity, satisfying people’s continuing need to connect, exercise and fellowship.