Ember Responsive Acceptance Tests

    Handwritten by Tian Davis

    You’ve spent so much time crafting a responsive experience for what was once exclusively for desktop. You can see the light at the end of the tunnel. Then you’re hit with the dreaded question, “How do you integrate the responsive UI into your acceptance test suite?”

    Panic sets in because you know the impossible has arrived.

    BREATHE.

    On our front-end team, we’re on a push to retrofit a mobile responsive UI/UX onto an existing desktop ember application.

    Using ember-responsive, we’ve made some initial progress so far.

    The goal of ember-responsive is to give you a simple, Ember-aware way of dealing with media queries. It meets this goal by providing Ember computed properties that change based on your application’s responsive breakpoints. The resulting media-* classes help us avoid writing more media queries than we need.
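
    To make that concrete, here’s roughly how the wiring looks. Treat this as a sketch; the exact file names and service API depend on your ember-responsive version, and the component here is hypothetical:

    // app/breakpoints.js -- the breakpoints ember-responsive turns into media-* classes
    export default {
      mobile: '(max-width: 767px)',
      tablet: '(min-width: 768px) and (max-width: 991px)',
      desktop: '(min-width: 992px)'
    };

    // app/components/site-nav.js -- a hypothetical component consuming the media service
    import Ember from 'ember';

    export default Ember.Component.extend({
      media: Ember.inject.service(),

      // true whenever the viewport matches the `mobile` breakpoint above
      isMobile: Ember.computed.readOnly('media.isMobile')
    });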

    Our acceptance test suite is mature, and it would be awesome if we could run those same tests against a mobile form factor (even if it’s just a desktop browser resized). Obviously, this is challenging because web browsers disable the window.resizeTo and window.resizeBy APIs, so it is not practical to programmatically resize a web browser during tests.

    We’re able to update ./testem.js to resize at least Chrome/Opera during an ember test -s run.

    module.exports = {
      "framework": "qunit",
      "test_page": "tests/index.html?hidepassed",
      "disable_watching": true,
      "launch_in_ci": [
        "PhantomJS"
      ],
      "launch_in_dev": [
        "PhantomJS",
        "Chrome",
        "Opera"
      ],
    
      "browser_args": {
        "Opera": [
          "--window-size=320,600"
        ]
      }
    };

    The tough part, unexpectedly, was getting ember-responsive to inject our media-* classes during acceptance tests. But if we could just get that piece working, then we would have a shot at getting the responsive form factor itself under test.

    I reached out to the ember-responsive maintainers and they were very helpful in understanding what options we had to inject the media-* classes during acceptance test runs. Of the two options, removing the ./tests/helpers/responsive.js helper seems to be the most flexible so far.

    With the removal of ./tests/helpers/responsive.js, media-* classes inject as expected. Now, with the new Opera instance resized to hit the media-mobile breakpoint, we’re able to run our acceptance tests against the mobile form factor.

    I think the key during the conversion is validating which class names, IDs, and so on are used for clicks, inputs and other interactions. Once we use those same classes and IDs on the responsive side, then BOOM, the acceptance tests can interact with the responsive UI and validate things are working. Missing functionality should break the tests in the Opera instance as expected.
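
    To make that concrete, here’s a minimal sketch of the kind of acceptance test we can now run against the resized Opera instance. The app name and selectors are hypothetical; the point is that the responsive templates must expose the same hooks the tests use:

    import { test } from 'qunit';
    import moduleForAcceptance from 'our-app/tests/helpers/module-for-acceptance';

    moduleForAcceptance('Acceptance | mobile navigation');

    test('the nav toggle opens the mobile menu', function(assert) {
      visit('/');

      // .nav-toggle and .mobile-menu are hypothetical selectors
      click('.nav-toggle');

      andThen(function() {
        assert.ok(find('.mobile-menu').length, 'mobile menu is rendered');
      });
    });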

    Ideally, we’d have an actual mobile browser, running on an actual mobile device, but that option doesn’t seem available to any team at the moment. So if you’re stuck figuring out how to bring your responsive ember application under test, give this approach a try.



    Git for TortoiseSVN Developers Tutorial

    Handwritten by Tian Davis

    Git for TortoiseSVN Developers

    Today I published a series of screencasts exploring how to use Git from a Subversion developer’s perspective; in particular, a Subversion developer using the ubiquitous TortoiseSVN client.

    Git is a powerful source code management platform. What many don’t know is that Git can be a very flexible and forgiving source code management tool too. If you’re coming from a Subversion client like TortoiseSVN and want to start being productive with Git immediately, then this course is for you. If you’re looking for a fun way to brush up on your Git and see SVN in action, this may be the course for you too.

    In this course, we’re going to line up SVN and Git, side-by-side and take you through everything you need to start being productive in Git immediately. The topics we’ll cover include:

    1. High-Level Differences between SVN and Git
    
    2. Organizing Branches
    
    3. Committing Code
    
    4. Merging Code
    
    5. Getting The Latest Code
    
    6. Tagging Code Releases

    Whether you’re new to a team or an established tech lead, we’ll cover all the functions teams transitioning from SVN to Git need to be successful. In this tutorial, we’ll be using Git version 1.9.3 and TortoiseSVN 1.9.2.



    Using Chance.js To Generate Sample Data

    Handwritten by Tian Davis

    It’s not like writing sample data is hard per se. I just always felt like it was unnecessary. Most of the time you’re trying to deliver business value.

    You’re certainly not trying to build the world’s greatest random number generator. And you’re definitely not trying to build the world’s greatest random email generator either. Unless you’re chance.js that is.

    What is Chance

    Chance is a JavaScript micro library, narrowly focused on generating random data. That data could be as general as a random number or random string. Or Chance can be as specific as a random last name or a random phone number.
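
    A few one-liners show the range (the values in the comments are just illustrative):

    var Chance = require('chance'); // or load chance.js via a <script> tag
    var chance = new Chance();

    chance.natural({ min: 1, max: 100 }); // e.g. 42, a random number
    chance.string({ length: 8 });         // e.g. 'k3x9p2ma', a random string
    chance.last();                        // e.g. 'Rodriguez', a random last name
    chance.phone();                       // e.g. '(494) 927-2152', a random phone number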

    Chance comes in handy during automated testing, and it’s also incredibly useful when working with peer-to-peer WebRTC apps because it helps you generate random conference rooms and other data.

    How to use Chance

    Generating Random User Data

    For example, say you’re working with a new instance of Respoke:

    var client = new respoke.Client({
        appId: '918e31a3-34aa-40f8-aa4c-b5409f9e4053',
        developmentMode: true
    });

    When connecting, you’d supply Respoke with a unique endpointId identifying the person connecting:

    client.connect({
        endpointId: endpointId
    });

    Think of your endpointId as a unique username or email address or some other unique data point. Using Chance, we have a few good options to work with.

    First, we could use Chance to generate a random email address:

    client.connect({
        endpointId: chance.email()
    });

    If you need to use a particular domain, you could set that using Chance as well:

    client.connect({
        endpointId: chance.email({domain: 'respoke.io'})
    });

    Another potential use here is to generate a random Twitter username:

    client.connect({
        endpointId: chance.twitter()
    });

    The great part is you didn’t have to write a single utility function.

    Generating Random Room Data

    Once connected to Respoke, you’d want to then create a new group or join an existing group. Here, you could use Chance to generate a random group name.

    You could use Chance to generate a random BlackBerry PIN:

    client.listen('connect', function() {
      client.join({
          id: chance.bb_pin() //'985de771'
      });
    });

    Or you could use Chance to generate a random Facebook Id:

    client.listen('connect', function() {
      client.join({
          id: chance.fbid() //'1000039460258605'
      });
    });

    Or you could use Chance to generate a random Unix timestamp:

    client.listen('connect', function() {
      client.join({
          id: chance.hammertime() //2273327300317
      });
    });

    My personal favorite is the hammertime method. How could you not love a library with a method called hammertime?

    According to startup lore, Hammertime was coined by a startup whose founder had an interesting interaction with M.C. Hammer. There was no name given to “Unix time with milliseconds” and while brainstorming ideas (because Unix time with milliseconds is a confusing mouthful), someone suggested Hammertime and it stuck.

    Chance is a pretty neat library for generating lots of random data. Data you would have otherwise had to craft yourself. Give it a chance…See what I did there? ;)



    Git: Update Multiple Repos

    Handwritten by Tian Davis

    Recently, I’ve been working with a lot of git repos at a time. I mean a lot. Like, more than twenty… That’s the bad news.

    The good news is the code is all passionately crafted and a joy to work with. That’s one of the reasons I try to make sure I’m always working with the latest-and-greatest before diving in.

    But I couldn’t see myself running git pull for every single repository. Believe me, I tried…

    Git pull itself has its issues, so I wanted to be mindful of commit merges. In addition, I’m often helping teammates validate new features or bugfixes in bleeding-edge branches, so I wanted to make sure I’m always getting the latest-and-greatest remote branches as well.

    By default, git pull does neither particularly well. That’s when I ran into the concept of git up.

    Git Up

    With all versions of Git, you can configure a git alias. So I configured an alias called git up:

    git config --global alias.up '!git remote update -p; git merge --ff-only @{u}'

    This alias downloads all of the latest commits from all upstream branches and tries to fast-forward the local branch to the latest commit on the upstream branch.

    Using this technique, I’m able to get all remote branches while keeping the repository’s commit history clean.

    But the challenge still remained of how to deal with running git up for multiple git repos.

    Going Full Shell

    I use a shell script to get the job done of updating multiple git repos:

    for repo in repo1 repo2 repo3; do
        (cd "${repo}" && git checkout master && git up)
    done

    It’s not perfect by any means, but it gets the job done. Some things to look out for…

    First, you have to add each new repo you work with to the for loop. In addition, you need to have previously cloned each listed repo. Which makes sense, because you can’t check out a branch of a nonexistent repo.

    Finally, notice the use of the git up alias we created earlier. Combining both techniques, you’re able to reliably update multiple git repos at a time.

    If you find yourself needing to update multiple git repos at a time, give this technique a try.



    WebRTC Screensharing With Respoke and Node-Webkit

    Handwritten by Tian Davis

    The Code

    I’ve been working with a lot of developers on WebRTC solutions. One, in particular, needed screensharing for their next-generation proctoring solution, so I built a proof-of-concept (PoC) screensharing app using node-webkit.

    A typical video call with Respoke starts by placing a call to another endpoint:

    var otherEndpoint = client.getEndpoint({
        id: theirName
    });
    
    otherEndpoint.startCall({
        onConnect: onConnect,
        onLocalMedia: onLocalVideo
    });
    
    To enable screensharing, we just need to pass getUserMedia MediaStreamConstraints:
    
    var otherEndpoint = client.getEndpoint({
        id: theirName
    });
    
    var constraints = {
        audio: false,
        video: {
            mandatory: {
                chromeMediaSource: 'screen',
                maxHeight: 2000,
                maxWidth: 2000
            },
            optional: []
        }
    };
    
    otherEndpoint.startCall({
        constraints: constraints,
        onConnect: onConnect,
        onLocalMedia: onLocalVideo
    });

    In addition, we need to enable usermedia screen capturing in node-webkit. To do this, we added the following chromium-args to our npm package.json file:

    {
      . . .
      "chromium-args": "--enable-usermedia-screen-capturing"
      . . .
    }

    The --enable-usermedia-screen-capturing chromium flag makes the transmitted media come from your screen instead of your camera input.

    Our Respoke screensharing demo is open source. To use it, head over to our Respoke Screensharing Node-Webkit GitHub repo. Then run the following commands from your terminal:

    git clone https://github.com/respoke/respoke-screensharing-node-webkit.git
    
    cd respoke-screensharing-node-webkit
    
    npm install
    
    # Open node-webkit instance 1
    ./node_modules/nodewebkit/nodewebkit/node-webkit.app/Contents/MacOS/node-webkit .
    
    # Open node-webkit instance 2
    ./node_modules/nodewebkit/nodewebkit/node-webkit.app/Contents/MacOS/node-webkit .

    Give it a try. If you find any issues, we welcome pull requests.

    Media Stream Constraints

    To make screensharing work, we had to pass a MediaStreamConstraints object literal. But where does all this come from? It all starts with a W3C spec. When specs are not final, vendor implementation typically starts from a draft spec; in this case, the Media Capture and Streams working draft. There is no current consensus on what every constraint should be. The working draft acts as a starting point so vendors like Google can start implementing real examples of how constraints are used.

    For example, the working draft specifies constraints like width, height, frameRate and aspectRatio. But the Chromium MediaConstraints source code specifies many more available constraints, like minAspectRatio, maxAspectRatio, minWidth, maxWidth, minHeight, maxHeight, minFrameRate and maxFrameRate:

    class MediaConstraintsInterface {
      public:
        . . .
    
        // Constraint keys used by a local video source.
        // Specified by draft-alvestrand-constraints-resolution-00b
        static const char kMinAspectRatio[]; // minAspectRatio
        static const char kMaxAspectRatio[]; // maxAspectRatio
        static const char kMaxWidth[]; // maxWidth
        static const char kMinWidth[]; // minWidth
        static const char kMaxHeight[]; // maxHeight
        static const char kMinHeight[]; // minHeight
        static const char kMaxFrameRate[]; // maxFrameRate
        static const char kMinFrameRate[]; // minFrameRate
    
        . . .
    };
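
    Those keys surface in JavaScript through the same mandatory/optional structure we used for screensharing. Here’s a sketch of what exercising them looked like against the legacy, prefixed getUserMedia API of that era:

    var constraints = {
        audio: false,
        video: {
            mandatory: {
                minWidth: 640,
                minHeight: 480,
                maxFrameRate: 30
            },
            // optional constraints are best-effort; the browser drops any it can't satisfy
            optional: [
                { minFrameRate: 15 }
            ]
        }
    };

    navigator.webkitGetUserMedia(constraints, function(stream) {
        console.log('Got a stream honoring the mandatory constraints.');
    }, function(error) {
        console.log('The mandatory constraints could not be satisfied.', error);
    });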

    The Chromium MediaConstraints even go on to define other interesting experimental constraints like echoCancellation, noiseReduction and cpuOveruseDetection:

    class MediaConstraintsInterface {
      public:
        . . .
        // Constraint keys used by a local audio source.
        // These keys are google specific.
        static const char kEchoCancellation[]; // googEchoCancellation
    
        // Google-specific constraint keys for a local video source
        static const char kNoiseReduction[]; // googNoiseReduction
    
        . . .
    
        // googTemporalLayeredScreencast
        static const char kCpuOveruseDetection[];
    };

    Of course there is no guarantee that those features will make it to the final Media Capture and Streams spec.

    Flags

    We had to use Chromium command line flags to enable screensharing. Chromium command line flags are another interesting piece of the puzzle.

    For one, they enable developers to take advantage of features not yet available to the general public. For another, peeking behind the covers is just really cool. Take, for example, the Chromium flag we used earlier: --enable-usermedia-screen-capturing.

    Its source code is defined in Chromium’s content_switches.h header file:

    // Defines all the "content" command-line switches.
    
    . . .
    
    namespace switches {
    // All switches in alphabetical order. The switches should be documented
    // alongside the definition of their values in the .cc file.
    . . .
    
    CONTENT_EXPORT extern const char kEnableUserMediaScreenCapturing[];
    . . .
    }

    And implemented in Chromium’s content_switches.cc file:

    #include "content/public/common/content_switches.h"
    
    #include "base/command_line.h"
    
    namespace switches {
    . . .
    
    // Enable screen capturing support for MediaStream API.
    const char kEnableUserMediaScreenCapturing[] =
        "enable-usermedia-screen-capturing";
    
    . . .
    }

    We then use those flags as chromium-args in our node-webkit apps. You can pass chromium flags to Chrome directly as well. On Mac OS X, you’d use:

    open -a "Google Chrome" --args --enable-usermedia-screen-capturing

    On Windows, something like: chrome.exe --args --enable-usermedia-screen-capturing

    I hope you’re as excited about the future of WebRTC as we are. It’s a great time to be a developer. Fork our demo screensharing app using Respoke and Node-Webkit. Play around and if you like it, share it with your friends.



    WebRTC Instagram Video Filters

    Handwritten by Tian Davis

    With more than a $1 Billion valuation and over 75 million daily users, Instagram is arguably one of the most popular social networks today. Did you know more than 55 million photos are posted daily on Instagram? Or that the platform sees over 8,500 likes per second?

    I’ve always had a small love affair with image processing. So I figured let’s have some fun with WebRTC and find out, “Can you apply Instagram filters to a live WebRTC video stream?” To be brutally honest, the question had been burning me for months, so I finally set out to find the answer.

    What I found is indeed you can add Instagram, or rather Instagram-ish, filters to your live WebRTC video stream. Here’s a live demo I put together of the concept (Pro Tip: Use Chrome or FF 35+). This is the story of how I went about getting it done. You can follow along as I break it down step-by-step.

    First, we need a little HTML to hang our video element and controls:

    <script src="respoke.js"></script>
    . . .
    
    <div class="video-streams">
      <video id="localVideo"></video>
      <video id="remoteVideo"></video>
    </div>
    
    <p>Instagram Filters:</p>
      
    <ul id="filters">
      <li id="NoFilter">#NoFilter</li>
      <li id="Willow">#Willow</li>
      <li id="Earlybird">#Earlybird</li>
      <li id="Mayfair">#Mayfair</li>
      <li id="Amaro">#Amaro</li>
    </ul>

    Here, I included Google’s adapter.js shim to normalize WebRTC behavior across Chrome, Firefox and Opera. Then I included a simple video tag. We’ll use that video tag to display the video stream coming from our web camera, and we’ll apply our image filters against that same video tag. Lastly, I created an unordered list of Instagram filters and a #NoFilter control to remove all filters from the video element. Interesting to note: #NoFilter is the most popular filter on Instagram. Now for the actual filters.

    Laying the foundation for the Instagram filters are CSS3 filters. Thankfully, Nick Georgiou of Design Pieces already did a fantastic job recreating every single Instagram filter using CSS3 filters. Here are a few we’ll use for this experiment:

    .ig-willow {
      -webkit-filter: saturate(0.02) contrast(0.85) brightness(1.2) sepia(0.02);
      filter: saturate(0.02) contrast(0.85) brightness(1.2) sepia(0.02);
    }
    
    .ig-earlybird {
      -webkit-filter: sepia(0.4) saturate(1.6) contrast(1.1) brightness(0.9) hue-rotate(-10deg);
      filter: sepia(0.4) saturate(1.6) contrast(1.1) brightness(0.9) hue-rotate(-10deg);
    }
    
    .ig-mayfair {
      -webkit-filter: saturate(1.4) contrast(1.1);
      filter: saturate(1.4) contrast(1.1);
    }
    
    .ig-amaro {
      -webkit-filter: hue-rotate(-10deg) contrast(0.9) brightness(1.1) saturate(1.5);
      filter: hue-rotate(-10deg) contrast(0.9) brightness(1.1) saturate(1.5);
    }

    Using a combination of the CSS3 filters sepia, saturate, contrast, brightness and hue-rotate, we’re able to approximate Instagram’s Willow, Earlybird, Mayfair and Amaro filters. Now for the really fun part: bringing the app to life with a little JavaScript.

    To do this we’ll leverage the camera access feature of WebRTC and then apply the CSS3 filters as we see fit. First, let’s take a look at how we access the camera:

    var constraints = {
      video: true, 
      audio: true
    };
    
    getUserMedia(
      constraints, 
      onMediaStream, 
      noMediaStream
    );
    
    function onMediaStream(stream) {
      localVideo = document.getElementById("localVideo");
      
      attachMediaStream(localVideo, stream);
      
      localVideo.play();
    }
    
    function noMediaStream (error) {
      console.log("No media stream for us.", error);
    }

    Here, we call getUserMedia and pass it three parameters. The first is a constraints object listing which devices to access, in this case, both the video camera and the mic. The last two parameters are callbacks. We use the first callback for when we successfully get a stream from our camera. We use the second callback if accessing the camera stream fails.

    Once we have the camera stream, we can then get the video element. Next, we attach the camera stream to the video element. We’ll need to call play on the video element, if we don’t then we’ll just see a camera still shot. Now that we have our video playing, we’ll want to implement a way to apply our Instagram filters to the video element.

    To accomplish this, we’ll need to setup an event to apply each filter. Let’s take a look at the JavaScript code to implement this:

    var ul = document.getElementById("filters"); 
    
    ul.addEventListener("click", function(e) {
      var filter = e.target.id;
      
      var filters = {
        "NoFilter": function() {
          localVideo.className = "";
        },
        
        "Willow": function() {
          localVideo.className = "ig-willow";
        },
        
        "Earlybird": function() {
          localVideo.className = "ig-earlybird";
        },
        
        "Mayfair": function() {
          localVideo.className = "ig-mayfair";
        },
        
        "Amaro": function() {
          localVideo.className = "ig-amaro";
        }
      }[filter]();
    });

    Here, we add an event listener to the list of Instagram filters. When a filter is clicked, we’ll get its corresponding filter id. Once we have the filter id, it’s an easy transition to apply the corresponding CSS3 filter to the video element. When we’re ready to remove all filters, we just remove all CSS3 class filters from the video element.

    I went ahead and pushed my webrtc video filter code to GitHub in its entirety. The instructions to run it locally are in the README. Git clone and you’ll be up and running in 30 seconds.

    So that’s my WebRTC project. As you can see, the video element is a first-class HTML5 element. That means you can manipulate it in JavaScript and style it in CSS to your heart’s delight. I just think it’s so cool that you can manipulate the look and feel of a live video stream.

    I’ve heard #Mayfair is one of the top Instagram filters. Which one did you like best? I hope you had as much fun as I did and now you can add Instagram filters to your WebRTC video too.



    JavaScript Shims Versus Polyfills

    Handwritten by Tian Davis

    Originally posted on the Respoke blog, JavaScript: Shim vs Polyfill.

    I’m currently working on a conference talk about WebRTC. One of the foundations of any WebRTC library is some form of Google’s original adapter.js, which seeks to normalize the WebRTC API across various browsers. As I was about to build this particular slide, one question started to whisper in my ear, “Is adapter.js a shim or a polyfill?” Eventually, the whisper grew louder and I had to find the answer.

    As a JavaScript developer, you’ve no doubt run across an HTML5 library and thought the same thing: “Is it really a shim, or is it really a polyfill?” If you’re like me, you stopped, looked at both terms, and then scratched your head… In this article, I’ll clarify the definitions of a JavaScript shim and a JavaScript polyfill so that you can be an informed developer. Knowing the difference will allow you to better choose the libraries you use.

    Back in 2010, Remy Sharp coined the term JavaScript Polyfill:

    A polyfill is a piece of code (or plugin) that provides the technology that you, the developer, expect[s] the browser to provide natively. Flattening the API landscape if you will.

    If the library determines a feature doesn’t natively exist in your browser, it will provide that functionality by any means necessary. That could mean replicating the functionality using any combination of JavaScript, NPAPI, ActiveX and Flash (or anything for that matter). More importantly, you won’t even know the polyfill is there because you’re still using the native JavaScript code.
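
    For instance, a bare-bones polyfill for Function.prototype.bind might look like the following. This is a simplified sketch; a spec-compliant version handles constructors and more edge cases:

    if (!Function.prototype.bind) {
      Function.prototype.bind = function (thisArg) {
        var fn = this;
        var partials = Array.prototype.slice.call(arguments, 1);

        // Return a function that calls the original with the bound `this`
        // plus any partially applied arguments
        return function () {
          var args = partials.concat(Array.prototype.slice.call(arguments));
          return fn.apply(thisArg, args);
        };
      };
    }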

    Remy went on to share his understanding of a JavaScript Shim:

    Shim, to me, meant a piece of code that you could add that would fix some functionality, but it would most often have its own API.

    This was a good start. Over the last few years, we’ve seen the JavaScript ecosystem evolve. One thing I’ve noticed is that we tend to think of shims as some smaller piece of code, whereas polyfills tend to be larger, more complex pieces of code. But there are other differences and similarities as well.

    I decided to list out those similarities and differences to give myself a better understanding of a shim versus a polyfill. Here’s what I’ve found so far:

    Similarities Between Polyfill and Shim

    - Both seek to normalize functionality across browsers

    - Both tend to extend native methods, opting for their own implementation when the native method does not exist.

    Differences Between Polyfill and Shim

    - Shims tend to be written in a single language

    - Polyfills tend to use multiple language platforms to achieve the aim of cross-browser normalization

    Polyfills often have to use multiple language platforms because sometimes a particular JavaScript API doesn’t exist in that browser at all. I believe that’s the pivotal point here.

    Sime Vidas goes on to share his thoughts on shims vs polyfills on this Stack Overflow question on the topic:

    From what I understand: A polyfill is code that detects if a certain “expected” API is missing and manually implements it. E.g.

    if (!Function.prototype.bind) {
        Function.prototype.bind = ...;
    }

    A shim is code that intercepts existing API calls and implements different behavior. The idea here is to normalize certain APIs across different environments. So, if two browsers implement the same API differently, you could intercept the API calls in one of those browsers and make its behavior align with the other browser. Or, if a browser has a bug in one of its APIs, you could again intercept calls to that API, and then circumvent the bug.

    I think this is the clearest definition of the two I’ve seen so far. Vidas mentions the shim “intercepting” the API call. This creates a clearer visual than Sharp’s “fix” definition.

    It also helps validate some of the differences I’ve noticed over the years. Namely, shims appear to be written in JavaScript only because the JavaScript API already exists (it just needs some smoothing to normalize behaviors across browsers). Whereas polyfills often have to use multiple language platforms because the JavaScript API does not exist at all.
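
    A tiny example of that JavaScript-only smoothing, in the spirit of adapter.js (a sketch, not adapter.js’s actual source):

    // The API already exists in each browser, just under different names.
    // The shim smooths the names over; it implements nothing new.
    navigator.getUserMedia = navigator.getUserMedia ||
                             navigator.webkitGetUserMedia ||
                             navigator.mozGetUserMedia ||
                             navigator.msGetUserMedia;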

    Shim and Polyfill Real-World Parallels

    In the real world a shim is a wedge-shaped piece of wood. “Polyfilla” is the name of a spackling paste from LePage. You’d typically use a shim to level off a leaning stove or other appliance. On the other hand, spackling paste would be used to fill a hole in a wall or other crevice. Notice the stove has to exist in order to use the shim at all. Whereas the polyfill (spackling paste) is used to fill a gap in the wall. The polyfill itself isn’t exactly the same as the wall, but it acts close enough to the real thing to make it work.

    Examples of JavaScript Polyfills

    In particular, take a look at the available transports in Socket.IO. There are websocket, htmlfile, xhr-polling, jsonp-polling and flashsocket. Yes, Flash. Socket.IO is a great example of a JavaScript polyfill that manually implements WebSockets in older browsers by any means necessary. Falling back to Flash allowed Socket.IO to approximate WebSockets in older browsers like IE 8 and IE 9 during a time when they held noticeable marketshare. This would not have been possible with a pure JavaScript implementation alone.

    Today, the flashsocket transport is disabled by default in Socket.IO and will not activate on Chrome or other browsers that fully support WebSockets, even if flashsocket is specified as the only transport. To test flashsocket, you have to use IE 8 or IE 9, or other browsers that don’t natively support WebSockets. Had Socket.IO been designed as a shim, it simply would not have worked in older versions of IE at all because shims don’t implement features that don’t exist.
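
    With the 0.9-era client, you could watch the fallback chain for yourself by restricting the allowed transports. A sketch; the option name and URL here are assumptions based on that era’s API:

    // Ask Socket.IO to use only the Flash fallback. In a browser with native
    // WebSockets this is ignored by default, as described above.
    var socket = io.connect('http://localhost:3000', {
      transports: ['flashsocket']
    });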

    There are many other examples of great JavaScript polyfills that use a multitude of technologies in addition to JavaScript to achieve cross-browser compatibility. Raphael falls back to VML in older versions of IE that don’t support SVG. Store.js falls back to ActiveX and IE’s non-standard userData in IE 6 and IE 7, where localStorage does not exist. Video.js falls back to Flash in older versions of IE that don’t support the HTML5 video element. I could go on all day…

    The point is that shims only normalize browser behavior, whereas polyfills both normalize browser behavior and implement functionality where it does not exist.

    Shim vs Polyfill Checklist

    So as a developer, here’s a quick checklist to figure out if you’re developing a shim or a polyfill:

    1. Does your library normalize a JavaScript API across the major browsers?

    2. Does the JavaScript API exist in some major browsers?

    3. Does your library implement the JavaScript API where it does not exist?

    Here’s a flow diagram of the decision tree:

    [Diagram: shim vs polyfill decision tree]

    So is Google’s Adapter.js a Shim or a Polyfill?

    Let’s use our decision framework to figure out the answer to the question:

    1. Does adapter.js normalize a JavaScript API across the major browsers?

    Yes. Adapter.js normalizes the WebRTC API.

    2. Does the WebRTC API exist in some major browsers?

    Yes. Currently, the WebRTC API exists in Chrome, Firefox and Opera.

    3. Does adapter.js implement the WebRTC API where it does not exist?

    No. Google’s adapter.js library does not implement the WebRTC API in either IE or Safari.

    Based on our shim versus polyfill guide, Google’s adapter.js is definitely a shim: it normalizes the WebRTC API, but stops short of implementing it in either IE or Safari.

    Why Does it Matter?

    I wrote this article because over the years using both terms has grown rather confusing. Some even go so far as to say shim is synonymous with polyfill. That’s simply not the case, nor should it be. There is both a place and a need for each term in the craft.

    This may all seem like semantics, but it’s not. Understanding the difference between a shim and a polyfill is the difference between your app working in some browsers and your app working in all browsers. If you’re like me, you go for all browsers or you die trying. Knowing your shims from your polyfills is key to getting you there.



    The History of The PSTN

    Handwritten by Tian Davis

    SUMMARY

    Respoke gives you the power to build the next Skype in the browser, on your smartphone and even on your desktop. It’s all possible because of the Public Switched Telephone Network (PSTN). Here’s a look at where the PSTN has been and where we’re taking it…

    HISTORY OF THE PSTN

    The history of the public switched telephone network (PSTN) is the history of American Bell and AT&T. In 1875, Alexander Graham Bell formed the American Bell Telephone Company. A year later, in 1876, Bell patented the first improvement in telegraphy and made the first-ever voice transmission over wire. It was hardly what we can imagine today.

    The first voice transmission used what is called a ring-down circuit. That means there was no dialing of numbers and no ringing of handsets. Instead, an actual physical wire connected two devices. Remember when you were a kid and you’d play tin can telephone? What did you do? You connected two tin cans by wire. Then you could hear your friend talk on the other end. A ring-down circuit is a lot like playing tin can telephone, just over a greater distance.

    Initially, telephone users had to whistle into the phone to attract the attention of another telephone user. Within a year of Alexander’s patent, he added a calling bell to make signaling easier.

    Over time, this simple design evolved from a one-way voice transmission, by which only one user could speak, to a bi-directional voice transmission, whereby both users could speak. Things started to get a little more complicated at this point.

    Moving the voices across the wire required a carbon microphone, a battery, an electromagnet, and an iron diaphragm. The concept of dialing a number to reach a destination still didn’t exist. The process also required a physical cable between each location that the user wanted to call. Clearly, this does not scale…

    Placing a physical cable between every household that required access to a telephone was neither cost-effective nor feasible. Bell developed another method that could map any phone to another phone without a direct connection. Bell patented the device and called it a switch.

    With a switch, telephone users only needed a connection to a centralized office. Then that centralized office could coordinate connecting the call to its final destination.

    Imagine a pair of copper wires running from every phone to a central exchange in your town. At the exchange, the operator had a big switchboard. The switchboard had a 2 pin connection socket - called a jack socket - for every pair of wires entering the exchange.

    When you wanted to talk to another person, you would ring the operator and give the name or number of the other party. Using a patch cord (a two-wire cable with a jack plug on each end), the operator would connect each party’s jack socket. Then the receiving party’s telephone bell would ring and the two parties could communicate.

    Believe it or not, the first operators were teenage boys. Surprising, I know, but they often engaged in horseplay and foul language:

    [Photo: teenage switchboard operators]

    Telephone companies soon began hiring young women in order to present a more civilized image to customers:

    [Photo: women switchboard operators]

    Women would go on to dominate the switchboard profession. Operators were well trained in switchboard technique and in deportment before starting work on the switchboards. Here is a group training in Denver, Colorado, 1910:

    [Photo: operators training, Denver, Colorado, 1910]

    Here’s another group of operators at a switchboard in Santa Fe, New Mexico 1921:

    [Photo: switchboard operators, Santa Fe, New Mexico, 1921]

    Bundles of wires called trunks ran between exchanges, forming proto-networks. Networks connected together until they connected countries across the world. This was the beginning of the PSTN.

    At first, the telephone operator acted as the switch. Fast-forward 100 years, give or take a decade, and the electronic switch replaced the human switch.

    THE PSTN TODAY

    What started as direct home-to-home connections evolved into home-to-central-switch connections. Human-powered switches, called operators, evolved into analog switches and then into electronic switches. A lot also changed along the way.

    Analog voice signals carried across the wire with amplifiers evolved into digital signals carried across the globe with repeaters. A repeater simply repeats whatever binary data it receives. If a repeater receives 010101, it passes on 010101.

    Going all-digital meant cleaner sound quality over longer and longer distances. It also meant the PSTN could release new features faster. Features like call waiting, call forwarding and conference calling were now built into the PSTN’s message-driven network.

    As technology progressed, the telephony industry found an alternative to message formats. At the dawn of the Internet, a new transport format was invented: packets. This formed the foundation of what would become a separate data network.

    Instead of being transmitted over a circuit-switched network, the digital information is packetized, and transmission occurs as IP packets over a packet-switched network. These packet-switched networks form the foundation of the Voice over IP (VoIP) technology we know today.

    Now we live in a world of two networks: one circuit-switched and the other packet-switched. When those worlds interoperate, they do so using protocols that enable packet-switched digital data to communicate with circuit-switched digital data. Those protocols include, but are not limited to, H.264, V8, H.232, H.323, SIP and MGCP.

    Currently, H.323 is the most widely deployed VoIP call-control protocol. H.323, however, is not widely seen as robust enough for PSTN networks. For those networks, other protocols such as the Media Gateway Control Protocol (MGCP) and the Session Initiation Protocol (SIP) are being developed.

    PHONE NUMBERS

    Phone numbers in particular are fascinating. Phone numbers are simply different across the globe. To bring the point home, take a look at a few numbers across locales:

    USA (NANP): +1 (555) 555-5555

    India: +91 22 555 5555

    London: +44 20 5555 5555

    The North American Numbering Plan (NANP) is an integrated telephone numbering plan serving 20 North American countries that share its resources. These countries include the United States and its territories, Canada, Bermuda, Anguilla, Antigua & Barbuda, the Bahamas, Barbados, the British Virgin Islands, the Cayman Islands, Dominica, the Dominican Republic, Grenada, Jamaica, Montserrat, Sint Maarten, St. Kitts and Nevis, St. Lucia, St. Vincent and the Grenadines, Trinidad and Tobago, and Turks & Caicos.

    Regulatory authorities in each participating country have plenary authority over numbering resources, but the participating countries share numbering resources cooperatively.

    AT&T developed the North American Numbering Plan in 1947 to simplify and facilitate direct dialing of long distance calls. Implementation of the plan began in 1951.

    The International Telecommunications Union (ITU) assigned country code “1” to the NANP area. The NANP conforms with ITU Recommendation E.164, the international standard for telephone numbering plans.

    NANP numbers are ten-digit numbers consisting of a three-digit Numbering Plan Area (NPA) code, commonly called an area code, followed by a seven-digit local number. The format is usually represented as:

    +1 NXX-NXX-XXXX

    where N is any digit from 2 through 9 and X is any digit from 0 through 9. Routing calls requires multiple switching offices. The phone number itself is a coded map for routing the call.

    In the NANP countries, for example, we have 10-digit phone numbers:

    • The first three digits are the area code or national destination code (NDC), which helps route the call to the right regional switching station.

    • The next three digits are the exchange, which represents the smallest number of circuits that can be bundled on the same switch. In other words, when you make a call to another user in your same exchange (maybe a neighbor around the corner) the call doesn’t have to be routed onto another switch.

    • The last four digits of the phone number represent the subscriber number, which is tied to your specific address and phone lines.
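
    Since this is a JavaScript blog at heart, that N/X rule translates nicely into a regular expression (illustrative only; real number validation has far more edge cases):

    // N = any digit 2-9, X = any digit 0-9, per the +1 (NXX) NXX-XXXX format above
    var nanp = /^\+1 \([2-9]\d{2}\) [2-9]\d{2}-\d{4}$/;

    nanp.test('+1 (555) 555-5555'); // true: matches the NANP shape
    nanp.test('+1 (155) 555-5555'); // false: area codes can't start with 0 or 1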

    Within a company or larger organization, each employee or department might have its own extension. Extensions from the main phone number are routed through something called a private branch exchange (PBX) that operates on the premises. To make an international call requires further instructions.

    The call needs to be routed through your long-distance phone carrier to another country’s long-distance phone carrier. To signal such a switch, you have to dial two separate numbers, your country’s exit code (or international access code) and the corresponding country code of the place you’re calling.

    Almost all exit codes are either 00 or 011, although there are a few exceptions, like Cuba (119) and Nigeria (009). Country codes are one- to three-digit prefixes that are assigned to specific countries or groups of countries.

    For example, the country code for the United States is 1, but the United States shares that country code with Canada and several smaller island nations like Jamaica, Puerto Rico and Guam.

    PBX

    No doubt you’ve heard the term PBX before. A PBX or Private Branch Exchange is a small telephone switch - think of it as a mini exchange.

    Businesses install PBXs to reduce the number of phone lines they need to lease from the telephone company. Imagine that without a PBX, you would have to rent one telephone line for every employee with a phone.

    With a PBX system, you only need to rent as many lines from your telephone provider as the maximum number of staff making external calls at one time. In most businesses this is only about 10-12% of the workforce.

    What you may not know is what came before the tangled mess of PBXs gone by:

    [Photo: a tangle of PBX wiring]

    There were human-powered switchboard operators in businesses, government and large commercial buildings:

    [Photo: switchboard operators in a commercial building]

    Of course, these were not even remotely as large as your local exchange’s switchboard. Here there were usually anywhere from two to four people at most:

    [Photo: a small office switchboard]

    In the PBX system, every telephone in a business location is wired to the PBX, using either standard cables or more recently Cat 5 ethernet cabling. When a member of staff picks up their phone and dials the outside access code (usually 9), the PBX connects that person to an outside line, and onto the PSTN.

    PBX solutions themselves have gone from a tangle of wires and frames to commodity hardware running open source software that allows you to create a virtual PBX.

    Open source software like Asterisk is an example of this paradigm shift. With Asterisk you can create a PBX, an IVR system, a conference bridge and virtually any other communications app you can imagine. Asterisk was one of the first open source PBX software packages.

    Asterisk supports a wide range of Voice over IP protocols, including the Session Initiation Protocol (SIP), the Media Gateway Control Protocol (MGCP), and H.323. Asterisk can interoperate with most SIP telephones, acting both as registrar and as a gateway between IP phones and the PSTN.

    PSTN AT RESPOKE

    The PBX has gone from a tangled mess of wires to software running on commodity boxes to a hosted PBX in the cloud. The web was only the natural progression.

    You can use Respoke to make phone calls and receive phone calls from phones on the PSTN as well as other Respoke client endpoints. It’s easy…

    // here's the App ID value from the portal:
    var appid = "DD90A374-0C06-456F-9D4F-E8038E6523D2";
    
    // create a client object using the App ID value
    var client = respoke.createClient({
        appId: appid,
        developmentMode: true
    });
    
    // listen for the 'connect' event
    client.listen('connect', function () {
        console.log("Connected to Respoke!");
    });
    
    //Now all you have to do is make a call
    client.startPhoneCall({
        number: "+15558675309"
    });
    
    //Attach listener to receive calls
    client.listen('call', function (event) {
        if (event.call.fromType === 'did') {
            // We got a call from a phone number!
        }
    
        if (!event.call.caller) {
            event.call.answer();
        }
    });

    Using a combination of the WebRTC media channel and good ol’ fashioned ingenuity, Respoke takes your IP-based voice data and converts it into digital SIP data, which can be consumed by regular phone devices.

    If you’re talking to someone on the web or a VoIP device, your voice data stays on the packet-switched network, just like a regular VoIP call. If you’re talking to someone on a cellphone carrier or a landline, Respoke takes care of the details of communicating with that person’s circuit-switched network.

    Of course, you can do a lot of other things with Respoke too, like video, voice and text communications. And now you have access to the PSTN as well. Sky’s the limit from here on out.



    JavaScript and The Monomyth

    Handwritten by Tian Davis

    In his seminal work, “The Hero with a Thousand Faces”, Joseph Campbell put forth the idea of The Monomyth. The idea is based on the observation that a common pattern exists beneath the narrative elements of most great myths, regardless of their origin or time of creation.

    Simply put, all of mankind’s myriad myths are but variations of a single great story. This is the monomyth and is most commonly expressed as a hero’s journey.

    In this journey, the hero begins in the ordinary world, and receives a call to enter an unknown world of strange powers and events. The hero who accepts the call to enter this strange world must face tasks and trials, either alone or with assistance. In the most intense versions of the narrative, the hero must survive a severe challenge, often with help.

    If the hero survives, he may achieve a great gift or “boon.” The hero must then decide whether to return to the ordinary world with this boon. If the hero does decide to return, he or she often faces challenges on the return journey. If the hero returns successfully, the boon or gift may be used to improve the world.

    The Monomyth has three arcs: The Departure, The Initiation and The Return. Each arc has several sub arcs.

    I want to stop here for a second because I want you to think about your favorite movie. Is it Frank Herbert’s Dune? Or Star Wars? Or The Highlander? Is it Braveheart? Or The Matrix? Or maybe even Quentin Tarantino’s Django?

    These are all but variations of a single story. JavaScript is no different.

    The story of JavaScript and how it came to be is an amazing story. But the people who gave it life and continue to nurture its growth are but actors in a greater story.

    Even in the telling of this story, you and I, experience the psychic unity of mankind. A unity that binds us to the craft. JavaScript is the hero here. This is its journey.

    The 17 Stages of the Monomyth

    The Departure

    In a monomyth, the hero begins in the ordinary world, and receives a call to enter an unknown world of strange powers and events. The hero who accepts the call to enter this strange world must face tasks and trials, either alone or with assistance.

    Collectively, mankind faced the beginnings of the strangest and most powerful of worlds it had ever created - The Internet. So sets the stage for our hero and the birth of a language…

    The Call to Adventure

    In the Call to Adventure, the hero begins in a mundane situation of normality from which some information is received that acts as a call to head off into the unknown.

    Early April 1995, Brendan Eich is recruited to Netscape with the promise of “doing Scheme” in the browser. Netscape recruited him because he could hack quickly and in part because he had some language chops.

    Explained Brendan, “I was ‘that guy’, not in any brag-worthy sense, just the only person who was in the position to do the deed, with (barely) enough skills to pull it off.”

    Brendan goes on to explain, “Many hackers could have done a better job with more time, or perhaps a better job had they been in my shoes. Who knows? But no one at Netscape could have, and the opportunity was there and then” (Brendan on the Origins of JavaScript).

    The opportunity was there and then, and he took it. What a lot of folks don’t know is that Brendan worked at Netscape’s founder’s previous company, Silicon Graphics Incorporated (SGI); that’s how he was recruited to Netscape.

    SGI was a high-performance computer designer and manufacturer. Think Alienware for business. SGI are also the folks who wrote and open-sourced the C++ Standard Template Library (STL).

    STL was an early standard library for C++, without which you’d have to write your own base data structures like lists, hashes, queues and the like. In his own words, his experience at SGI made him a “C/Unix fanboy”. “I knew the C grammar by heart”, he would later explain.

    Things seldom go according to plan and little did he know both Scheme and C would play heavy on his design of JavaScript.

    Refusal of the Call

    Often when the call is given, the future hero first refuses to heed it. This may be from a sense of duty or obligation, fear, insecurity, a sense of inadequacy, or any of a range of reasons that work to hold the person in his or her current circumstances.

    Brendan never did put Scheme in the browser. Instead, he created a new language called JavaScript. As he remembers it, “[t]he diktat from upper engineering management was that the language must ‘look like Java’.”

    After partnering with Sun Microsystems, the creators of Java, Netscape management was firm that whatever language created should be “Java’s kid brother”.

    The plan was to bring Java to the Netscape browser in the form of Java Applets. Then use JavaScript to tie everything together. The Netscape/Sun deal was to make Java for “professional” developers and JavaScript, well, for everyone else. The express goal was to embed that programming language in the source HTML of Netscape Navigator.

    Management only cared about two things. First, the new language had to look like Java. Second, it had to have objects without classes. In their minds, a dumbed-down version of Java. In Brendan’s mind? Challenge accepted…

    Following a subversive agenda, Brendan borrowed closures and first-class functions from Scheme. Then he borrowed the concept of prototypal inheritance from Self. Whereas Scheme was a dialect of Lisp, Self was a dialect of Smalltalk. Finally, he borrowed the look and feel of the language from C/C++.

    In class-based object oriented languages, classes define the properties and behaviors of objects. Object instances are then particular manifestations of a class.

    In Self, however, one makes a copy of an existing object and then adds additional specific properties and behaviors. Code which uses the existing objects is not changed.
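
    In JavaScript, that Self-style idea ends up looking like this:

    var cat = {
      legs: 4,
      speak: function () { return 'meow'; }
    };

    // Make a new object that delegates to `cat`, then specialize it.
    // Nothing about `cat`, or code that uses `cat`, changes.
    var kitten = Object.create(cat);
    kitten.cute = true;

    kitten.speak(); // 'meow', behavior found up the prototype chain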

    These would form the necessary ingredients for creating JavaScript, but it would take an act of God to birth the language into existence. In the software development industry, we call these acts of god a release.

    Supernatural Aid

    Once the hero has committed to the quest, consciously or unconsciously, his guide and magical helper appears or becomes known. More often than not, this supernatural mentor will present the hero with one or more talismans or artifacts that will aid them later in their quest.

    Netscape 2.0 Beta’s extremely tight release schedule of March 1996 forced Brendan to complete the first version of JavaScript in only ten days. Management at Netscape was sure Microsoft was gunning for them after turning down Microsoft’s lowball offer to buy the company earlier that year.

    What’s the old saying? “Just because you’re paranoid doesn’t mean they aren’t after you.” In this case, Microsoft was out for Netscape, and the killer was to be VBScript in Internet Explorer. Things were about to get real, real fast!

    Though still the market leader in web browsers, Netscape sped up its release of Netscape 2.0. I like to think of that release as a crucible. In went Brendan Eich; his experiences with Scheme, Self, Java and C; and his marching orders for the language to “look like Java” and “have objects without classes”.

    Those things were the raw Tamahagane.

    From that inhomogeneous mixture of concepts and directives, a finer language would reveal itself. Out of the crucible came what would be one of the most popular programming languages in the world.

    And once the soot was removed and the steel polished, JavaScript was born.

    Crossing the Threshold

    This is the point where the person actually crosses into the field of adventure, leaving the known limits of his or her world and venturing into an unknown and dangerous realm where the rules and limits are not known.

    During the fall of 1996, Brendan rewrote the JavaScript language and built the first-ever JavaScript engine. The engine was code-named SpiderMonkey and was later released as open source. JavaScript was finally decoupled from the Netscape browser and standardized as the ECMA-262 specification, called ECMAScript.

    Microsoft implemented JavaScript for the first time in Internet Explorer 3.0 in August 1996, where it was called JScript. As explained by JavaScript guru Douglas Crockford in his talk titled The JavaScript Programming Language on YUI Theater:

    [Microsoft] did not want to deal with Sun about the trademark issue, and so they called their implementation JScript. A lot of people think that JScript and JavaScript are different but similar languages. That’s not the case. They are just different names for the same language, and the reason the names are different was to get around trademark issues.

    Today, “JavaScript” is a trademark of Oracle Corporation. It is used under license for technology invented and implemented by Netscape Communications and current entities such as the Mozilla Foundation.

    Around this time, in 1997, a fork of SpiderMonkey called Rhino was developed at Netscape. Where SpiderMonkey is written in C, Rhino is developed entirely in Java. At the time, Netscape was planning to produce a version of Netscape Navigator written fully in Java, and so it needed an implementation of JavaScript written in Java.

    Of course, Rhino could be embedded into any Java application to provide scripting to end users. Say, for example, you wanted to build a version of Excel running on top of the JVM. And say you wanted to provide scripting capabilities in that Excel-like application similar to VBA.

    How would you do it? Well, you could use Rhino, and voilà, your Excel-ish application can provide users with scripting abilities written in JavaScript.

    From here on out, JavaScript would be free from confinement to any one single browser. Yet future funding of this open source language was still confined to a handful of proprietary, nominally closed-source companies. Something had to give…

    The gods must have seen the irony because it wouldn’t be too long before this situation came to a head.

    Belly of the Whale

    The belly of the whale represents the final separation from the hero’s known world and self. By entering this stage, the person shows willingness to undergo a metamorphosis.

    In early 1998, Netscape, along with Brendan, founded the Mozilla Project, named after the original code name of the Netscape Navigator browser, a blend of “Mosaic” and “Godzilla”.

    The Mozilla Project was meant to manage open-source contributions to SpiderMonkey and to Netscape’s Mozilla Suite, an open source browser and email client combined. A small group of Netscape employees was tasked with coordinating the new community. Brendan served as the group’s first chief architect.

    That same year, whispers began that AOL planned to buy out Netscape and shut down the Netscape browser. A year later, in 1999, AOL did buy Netscape. However, the shutdown didn’t come for another four years.

    Then, in July 2003, AOL officially shut down its Netscape browser unit. That same month, Brendan helped create the Mozilla Foundation as the legal steward of the Mozilla Project.

    Soon after, the Mozilla Project deprecated the Mozilla Suite in favor of creating independent applications for web browsing and email. The Firefox web browser and the Thunderbird email client were born.

    In this way, the SpiderMonkey JavaScript engine found a new home in Mozilla’s Firefox web browser. Later, in 2005, Brendan became CTO of the for-profit arm of the Mozilla Foundation, the Mozilla Corporation.

    JavaScript was now on the golden path. In addition to no longer being tied to a single browser, it would now be funded, indefinitely, by a foundation dedicated to its future and the future of us all.

    The Initiation

    In the most intense versions of the narrative, the hero must survive a severe challenge, often with help. If the hero survives, he may achieve a great gift or “boon.”

    When Java applets failed, JavaScript became the de-facto language of the Web.

    With a growing number of browser vendors, the future of JavaScript lay on a foundation of inconsistent implementations and frustrated developers.

    Yet, with a little help, the promise of JavaScript everywhere had a very real chance to succeed. But, it wasn’t easy…

    The Road of Trials

    The road of trials is a series of tests, tasks, or ordeals that the person must undergo to begin the transformation. Often the person fails one or more of these tests, which often occur in threes.

    Luke Skywalker had his Lightsaber training with Obi-Wan Kenobi. Neo had his sparring with Morpheus. They had it easy…

    Had they faced learning JavaScript, in what was our stone age, they would have failed like so many of us. Between 2003 and 2005, three major factors played a pivotal role for JavaScript as a development language and they didn’t work in its favor.

    First, the number of web browser vendors was growing. Then the Document Object Model (DOM) was a mess. Finally, JavaScript organization was an improbability.

    As far as web browser vendors go, there was Netscape of course, but there was also Internet Explorer (Windows and Mac OS X). There was the burgeoning Firefox, and growing in popularity were Safari and Opera.

    Each implemented the spec with its own slightly different vision. Some were worse than others - I’m looking at you, Microsoft. The implementation of some JavaScript objects was radically different across browser vendors.

    Some browser vendors didn’t implement certain objects at all or implemented them with another name and slightly different responsibility. Those were the best cases. At the worst end, you had browsers like Internet Explorer (IE) which implemented their own proprietary extensions and functionality.

    What that meant is if you wrote your code against IE first, your goose was pretty much cooked. A lot of enterprise companies realized this far, far too late and, carte blanche, told developers to write code against IE and IE only.

    Most developers either didn’t know the self-defeating result of such policies or simply didn’t care. For the craftsman, however, it would be years until they could stand against such tyranny with both competence and a following.

    Until then, supporting as many browsers as possible would continue to be an uphill battle. Later, we would find that each browser’s Document Object Model (DOM) implementation was just as bad - if not worse…!

    The DOM was also a mess. CSS selectors were inconsistent. DOM modification from JavaScript was inconsistent. Event management was downright agonizing.

    Even the most widely used DOM method - getElementById - returned inconsistent results from every browser. But it wasn’t just that method, nearly every DOM method was broken in some way, in some browser.

    Moral of the story was if there’s a DOM method, there’s probably a problem with it somewhere, in some capacity. All of this was compounded by the growing popularity of JavaScript and its frenzied, almost barbarian integration to come.

    Dynamic HTML (DHTML) techniques had grown in popularity. It wasn’t long before developers took to JavaScript like a fat kid to cake. JavaScript was all over the place like spaghetti with a toddler.

    By the time most of us joined a team, JavaScript was scattered inline throughout HTML. There were monstrous JavaScript files. Sometimes so many they would make you dizzy.

    I once showed up to a client to find JavaScript stored in database tables and spewed out to the user like a bad burrito. Those were the times when you prayed to the Gods. It was monstrous to the point of culpability. JavaScript organization was without reason. More crime than art.

    Too many browsers, the DOM was a mess and JavaScript organization was a thing unheard of. Yet, all this insanity was sort of, well - beautiful. Beautiful because it showed developers identified with JavaScript. It showed a willingness to push the language further than anyone thought possible.

    It was this willingness to go beyond that would form a fundamental cornerstone of the JavaScript community. And with such a large elephant, we began to eat it piece by delicious piece.

    The Meeting with the Goddess

    This is the point when the person experiences a love that has the power and significance of the all-powerful, all encompassing, unconditional love that a fortunate infant may experience with his or her mother. This is a very important step in the process and is often represented by the person finding the other person that he or she loves most completely.

    January 2006, John Resig released jQuery at BarCamp NYC. jQuery opens the door for JavaScript to run consistently in every browser. Up until now, no one had taken on the challenge to reconcile JavaScript behavior across disparate browser vendors.

    The promise was simple - write your JavaScript against jQuery and your code was guaranteed to run in every browser. Now, it is commonplace to use jQuery. But, back then, not many people stopped to wonder what it could be to have JavaScript just work.

    Not surprising when you think about it, because most code was server-side. In fact, Ajax techniques had just barely started to become known. All this meant was folks simply didn’t care about the frontend. All that changed with jQuery. Because of jQuery, developers started to wake up and notice JavaScript.

    Now, clearly John is not a Goddess and neither is he a God. But it’s funny, because The Matrix had the Oracle and the Oracle was able to see a person’s destiny. Well, I felt like John was our Oracle and he saw not just what JavaScript was, but what it could be.

    Because of John and jQuery, collectively, we realized the dream of cross-browser development. We had finally normalized JavaScript development and laid a solid foundation for innovation. During this time, John taught us about the DOM in all its shabby glory.

    Unobtrusive JavaScript (no inline JavaScript) started to become a very real possibility. JavaScript code organization would unfortunately remain a mess for quite some time. But, at least it worked as expected.

    It wouldn’t be long before jQuery became synonymous with cross-browser JavaScript development, but it didn’t get there without challenges.

    Temptation away from the true path

    In this step, the hero faces those temptations, often of a physical or pleasurable nature, that may lead him or her to abandon or stray from his or her quest.

    You don’t have to wonder where JavaScript would be without jQuery. For me, the answer is clear…Nowhere! Without cross-browser compatibility, JavaScript would have been of very little use.

    So, for but a moment, I dare you to imagine a world without jQuery. What if I told you that was almost the case? It was…

    With jQuery and the dream of cross-browser development in its infancy, many developers were courted by and tempted away by other JavaScript frameworks like MooTools, Prototype and ExtJS. So began The Framework Wars…

    In hindsight, all three brought classical object oriented programming to JavaScript. Genius really… But it’s hard to say whether the larger development community was ready for these concepts on the frontend.

    It wasn’t like today where domain logic is steadily moving frontward. Back then, domain logic was firmly planted server-side and most developers were struggling with object oriented concepts even there, much less in a prototypal environment. For these early frameworks, classical OOP turned out to be a deadly weakness in spite of its overall strength.

    You see, most JavaScript consumers at the time really had no idea of the history of the language or how powerful it could be. To them it was a scripting language, and “everyone” knew you didn’t do heavy lifting with those things. So hitting them with classes and design patterns right off the bat was like opening a first date with, “So…I met this amazing wedding planner.” It was simply too much.

    Not bad in and of itself - just too much too fast. Besides, most developers didn’t come for code organization and design patterns, they came for the plugins - calendars, accordion menus and the like were all the rage…

    In the beginning, jQuery looked really promising but the plugins were ugly. MooTools had some really beautiful plugins, but the Community believed you should only use one JavaScript framework and they weren’t budging. ExtJS simply felt too heavy.

    No one knows how the war started. As with all wars, the issue was about assets and who would go on to own the dollar ($) alias. It seems silly now, but folks risked mixing multiple frameworks to use widgets they liked from each.

    Developers were used to mixing widget libraries from the DHTML days. Now, they were just happy these widgets would work in multiple browsers. But as with things too good to be true, there was a catch…

    Atonement with the Father

    In this step the person must confront and be initiated by whatever holds the ultimate power in his or her life. In many myths and stories this is the father, or a father figure who has life and death power. This is the center point of the journey. All the previous steps have been moving into this place, all that follow will move out from it.

    Around May 2008, almost two years after the initial release of jQuery, The Framework Wars peaked. But with developers firmly in either camp, this war would be won with converts.

    As integral as the dollar ($) alias was, no framework wanted to change their use of the dollar ($) alias just to suit “some other framework”.

    This made it difficult to mix and match components from different frameworks without them clobbering each other. This was the turning point for the future of JavaScript and its proliferation to the masses.

    Then came jQuery.noConflict()…

    Using jQuery.noConflict(), John and the jQuery team allowed developers to continue to use jQuery’s dollar ($) sign while still allowing other frameworks to work properly.

    What John Resig and the jQuery Core Team realized was that people used the dollar ($) alias for efficiency not identity. So why not let other frameworks use the $ alias and give jQuery developers the best of both worlds.

    As a result, jQuery developers didn’t have to choose efficiency over flexibility. While the other frameworks pandered, “One Framework to Rule Them ALL”, jQuery introduced jQuery.noConflict(), played its hand and moved aside.
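
    Here is a minimal sketch of what that looked like in practice (the element IDs are invented for illustration). jQuery.noConflict() releases the global $ back to whichever framework claimed it, and returns the jQuery object so you can keep a private alias:

    <script type="text/javascript">
      var $j = jQuery.noConflict();  // hand $ back to Prototype/MooTools
      $j("#menu").addClass("ready"); // jQuery keeps working through the alias
      $("pageMenu").show();          // Prototype's $ carries on untouched
    </script>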

    After trying jQuery, many developers never went back to MooTools, Prototype or ExtJS. Instead, they opted to rewrite their favorite plugins in jQuery. What came next was the Golden Age of jQuery.

    Apotheosis (becoming god-like)

    This step is a god-like phase where the hero earns a period of rest, peace and fulfillment before the hero begins the return.

    May 2008, Douglas Crockford published his seminal book “JavaScript: The Good Parts”.

    Here, Crockford identified the abundance of good ideas that make JavaScript an outstanding object-oriented programming language - ideas such as functions, loose typing, dynamic objects, and an expressive object literal notation.

    Later that year, in September 2008, Microsoft switches out its own AJAX library for jQuery and ships jQuery with Visual Studio. A few years later, in March 2011, the popular Ruby web framework, Ruby on Rails, switched out Prototype for jQuery.

    As JavaScript had become the de-facto programming language of the web, jQuery had become the de-facto JavaScript normalization framework of the web.

    The Ultimate Boon

    The ultimate boon is the achievement of the goal of the quest. It is what the person went on the journey to get. All the previous steps serve to prepare and purify the person for this step, since in many myths the boon is something transcendent like the elixir of life itself, or a plant that supplies immortality, or the holy grail.

    jQuery opened cross-browser JavaScript development to all developers. Later libraries either made jQuery the foundation for interacting with the browser or implemented a compatible (sometimes identical) API.

    JavaScript was becoming a language for “real” developers. Consistent behavior in every browser was our version of the elixir of life…It was our holy grail and led to many innovations.

    JSON or JavaScript Object Notation became the standard for transmitting objects between a server and web application using Ajax.

    The brainchild of Douglas Crockford, JSON uses human-readable text to transmit data objects consisting of attribute–value pairs. It is used primarily to transmit data between a server and web application and steadily became an alternative to XML.
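
    A quick sketch shows why the notation won (the object and fields here are invented for illustration) - JSON is just JavaScript, so serializing and reviving data is one call each way:

    // Serialize a JavaScript object into a JSON string for an Ajax request...
    var jedi = { name: "Anakin", rank: "Knight" };
    var payload = JSON.stringify(jedi); // '{"name":"Anakin","rank":"Knight"}'

    // ...and parse the server's JSON response back into a live object.
    var master = JSON.parse('{"name":"Obi-Wan","rank":"Master"}');
    master.rank; // "Master"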

    Libraries like Raphael show us we never needed Flash for beautiful interactive graphics. It also showed us what a commitment to cross-browser consistency can yield.

    Raphael is a cross-browser JavaScript library that draws Vector graphics for web sites. It will use SVG for most browsers, but will use VML for older versions of Internet Explorer.
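
    For a flavor of the library (the coordinates and colors here are arbitrary), the canonical usage pattern creates a drawing surface and adds shapes to it, and Raphael decides whether SVG or VML does the rendering:

    // Create a 320x200 drawing surface at page coordinates (10, 50).
    var paper = Raphael(10, 50, 320, 200);

    // Draw a circle at (50, 40) with radius 10 and paint it red.
    var circle = paper.circle(50, 40, 10);
    circle.attr("fill", "#f00");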

    Though the DOM remained a mess, JavaScript normalization libraries like jQuery shielded us from its quirks. However, JavaScript code organization remained messy for some time. Yet, as innovation grew, so grew the potential for sane JavaScript with it.

    The Return

    The hero must then decide whether to return to the ordinary world with this boon. If the hero does decide to return, he or she often faces challenges on the return journey. If the hero returns successfully, the boon or gift may be used to improve the world.

    Refusal of the Return

    Having found bliss and enlightenment in the other world, the hero may not want to return to the ordinary world to bestow the boon onto his fellow man.

    John leaves the Mozilla Corporation, and work on jQuery, to join the Khan Academy. The project is left in the hands of a core group of developers.

    JavaScript is now cross-browser, but code organization is still a mess and often turns into spaghetti code. Lighter-weight versions of jQuery optimized for mobile devices, like Zepto, show up, but they only support IE10+.

    But something else was brewing under the covers. Something very few people saw coming. The very way some started to write JavaScript changed entirely. The age of the Designer Language began…

    The Magic Flight

    Sometimes the hero must escape with the boon, if it is something that the gods have been jealously guarding. It can be just as adventurous and dangerous returning from the journey as it was to go on it.

    A Designer Language is a programming language created to avoid the perceived shortcomings of an existing language, usually by creating a superset of the existing language by modifying syntax or modifying programming constructs.

    In late 2009, Jeremy Ashkenas quietly committed CoffeeScript to the JavaScript world. CoffeeScript was a programming language that transcompiled to JavaScript. It was also the first designer language to bring back the concept of class-based programming to JavaScript. In a short two years, CoffeeScript managed to influence legions of developers.

    Eventually, CoffeeScript made its way to Brendan Eich - The Creator of JavaScript - inevitably influencing the future of the language.

    Inspired by Ruby, Python and Haskell, CoffeeScript went on to become the default JavaScript language in the popular Ruby on Rails web framework.

    With those results, CoffeeScript is (arguably) the most successful designer language ever released. But that was just the beginning of JavaScript’s designer languages.

    Google Web Toolkit or GWT attempted to transpile Java to JavaScript (Oh how the mighty had fallen). Dart, GWT’s successor, was Google’s attempt to make GWT more like JavaScript. Effectively, Dart was Google’s answer to CoffeeScript.

    Later, Microsoft would attempt to answer CoffeeScript with their own language called TypeScript. Both TypeScript and Dart would attempt to bring static typing to JavaScript.

    ClojureScript transpiled Clojure, a dialect of Lisp written for the JVM, to JavaScript. Objective-J, though short-lived, had identical syntax to Objective-C and transpiled to JavaScript as well.

    What’s interesting is most designer languages still allowed you to use all your favorite frontend frameworks like jQuery. Some, like GWT and Objective-J, did not, because they provided their own underlying frameworks.

    Times were changing rapidly. The grassroots investment in JavaScript by the development community didn’t go unnoticed. JavaScript became the most popular language used on GitHub.

    Rescue from Without

    Just as the hero may need guides and assistants to set out on the quest, oftentimes he or she must have powerful guides and rescuers to bring them back to everyday life, especially if the person has been wounded or weakened by the experience.

    Around this time, Apple dropped support for Flash on its new iPhone in favor of more open standards like HTML5, CSS and JavaScript. In subsequent years, Adobe itself would drop support for Flash on mobile devices altogether.

    Microsoft drops support for IE6 as the countdown to the death of IE6 continues. jQuery would later create a project fork that drops support for IE6 so as to decrease the library’s bloat.

    The Crossing of the Return Threshold

    The trick in returning is to retain the wisdom gained on the quest, to integrate that wisdom into a human life, and then maybe figure out how to share the wisdom with the rest of the world.

    Micro MVC libraries arise to offer JavaScript code organization along with jQuery integration. Some of those libraries include Backbone, Knockout, Spine and JavaScriptMVC (CanJS).

    Backbone, written by the creator of CoffeeScript - Jeremy Ashkenas, is a JavaScript library with a RESTful JSON interface and is based on the model–view–presenter (MVP) design pattern.

    Backbone is lightweight and its only hard dependency is Underscore - also written by Jeremy. Underscore is a JavaScript library which provides utility functions for common programming tasks and delegates to native browser implementations when present or a compatible version when absent.
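
    A minimal sketch of that organization (the model and endpoint here are invented for illustration) - state lives in a model, and the RESTful JSON interface maps save and fetch onto HTTP verbs:

    // A Backbone model with defaults and a RESTful endpoint.
    var Jedi = Backbone.Model.extend({
      urlRoot: "/api/jedi",
      defaults: { name: "Unknown", rank: "Padawan" }
    });

    var anakin = new Jedi({ name: "Anakin" });
    anakin.set("rank", "Knight"); // change state through the model...
    anakin.save();                // ...and persist it as JSON over HTTP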

    Knockout, Spine and JavaScriptMVC (CanJS) all aimed to organize JavaScript as Backbone did, but each with its own unique approach.

    Around this time, Google open sources its V8 engine. Like its ancestor, SpiderMonkey, V8 is an open source JavaScript engine. V8 is the JavaScript engine behind Google’s Chrome web browser.

    Later, projects like NodeJS use the V8 engine to turn JavaScript into a legitimate server-side language. This wasn’t the first server-side JavaScript runtime. But it was the first to realize a thriving ecosystem.

    Node gave us real-time, two-way connections using the HTML5 WebSocket protocol. The primary method to take advantage of HTML5 WebSockets was through Socket.IO.

    Socket.IO is a JavaScript library for realtime web applications. It has two parts: a client-side library that runs in the browser, and a server-side library for NodeJS. Both components have a nearly identical API. Like NodeJS, it is event-driven.

    Socket.IO primarily uses the HTML5 WebSocket protocol, but if needed can fallback on multiple other methods, such as Adobe Flash sockets, JSONP polling, and AJAX long polling, while providing the same interface.

    Socket.IO normalized WebSockets across disparate browsers. Using Socket.IO, you can guarantee WebSocket consistently across all the major browsers. Socket.IO is to HTML5 WebSockets as jQuery was to JavaScript.
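
    As a sketch of that symmetry (the port and event names here are invented for illustration), the server and browser sides read almost identically:

    // Server-side (NodeJS)
    var io = require("socket.io").listen(8080);
    io.sockets.on("connection", function(socket) {
      socket.emit("news", { hello: "world" }); // push to the newly connected client
    });

    // Client-side (browser)
    var socket = io.connect("http://localhost:8080");
    socket.on("news", function(data) {
      console.log(data.hello); // "world" - over WebSockets, or a fallback
    });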

    What’s interesting about this is as the HTML5 spec grew nearer to completion, micro libraries grew to fill the consistency gap as browser vendors normalized functionality.

    JavaScript begins to show up on single-board computers and microcontrollers like the Raspberry Pi and Arduino. JavaScript starts showing up as the programming language of flight drones like the Parrot AR.Drone. Even document databases like MongoDB store objects as JSON.

    It was starting to become clear to everyone that JavaScript was now simply everywhere…

    Master of Two Worlds

    Mastering two worlds is usually represented by a transcendental hero like Jesus or Buddha. The person has become comfortable and competent in both the inner and outer worlds.

    With the advent of NodeJS, it was now possible to use JavaScript throughout the entire application stack. JavaScript had mastered the server-side and the frontend as well. Core business logic moved from the server to the frontend.

    Essentially, the server becomes just another API which is consumed by various frontends. So the Single Page Application (SPA) arose. Now the debate isn’t which server-side web application framework to use; instead, many developers ask which frontend framework to use to organize their code.

    Frontend MVC frameworks like Ember and Angular arise to answer that question. Ember provides an opinionated framework with coverage for common use cases and unobtrusive jQuery integration. Ember is the Rails of JavaScript.

    Angular has elements of inline JavaScript called directives. Most common scenarios must be implemented by the developer. But it’s not really inline JavaScript, just custom Angular attributes embedded in HTML. If jQuery is not present in your script path, Angular falls back to its own implementation of a subset of jQuery that Google calls jqLite.
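
    For instance, here is a tiny sketch of that declarative style (the form itself is invented for illustration) - the ng-* attributes are Angular directives that wire two-way binding into plain HTML, no inline JavaScript required:

    <div ng-app>
      <input type="text" ng-model="name" placeholder="Enter a name">
      <span>Hello {{name}}!</span>
    </div>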

    The major competition to both Ember and Angular is the micro MVC library Backbone. Most developers combine Ember for code organization with jQuery for DOM manipulation.

    Some developers mix in a designer language like CoffeeScript, TypeScript or Dart. However, the majority of developers continue to use and refine their knowledge of JavaScript.

    JavaScript and how best to use it is now a central consideration for custom web applications. Much thought is now needed to make sound engineering decisions.

    JavaScript was now for serious engineers…

    Freedom to Live

    Mastery leads to freedom from the fear of death, which in turn is the freedom to live. This is sometimes referred to as living in the moment, neither anticipating the future nor regretting the past.

    Frontend development is now a legitimate software profession. Developers get to choose whether to focus on frontend development or server-side development. No longer is frontend development a nice-to-have with server-side being core.

    JavaScript is now the most popular programming language on the planet. But, where do we go from here?

    That I leave to you…

    Common Mythic Elements

    There are common mythic elements in all monomyths. Here’s how JavaScript compares to Star Wars and The Matrix:

    Each mythic element below is listed with its Star Wars / The Matrix / JavaScript counterparts:

    Two Worlds (mundane and special): Planetside vs. The Death Star / Reality vs. The Matrix / Server-side vs. Frontend
    The Mentor: Obi-Wan Kenobi / Morpheus / Douglas Crockford
    The Oracle: Yoda / The Oracle / John Resig
    The Prophecy: Luke will overthrow the Emperor / Morpheus will find (and Trinity will fall for) “The One” / JavaScript will run everywhere
    Failed Hero: Biggs / In an early version of the script, Morpheus once believed that Cypher was “The One” / MooTools, Prototype
    Wearing Enemy’s Skin: Luke and Han wear stormtrooper outfits / Neo jumps into an agent’s skin / GWT uses Java to write frontend code
    Shapeshifter (the Hero isn’t sure if he can trust this character): Han Solo / Cypher / CoffeeScript (Designer Languages generally)
    Animal familiar: R2-D2, Chewbacca / N/A / SpiderMonkey
    Chasing a lone animal into the enchanted wood (the animal usually gets away): Luke follows R2 into the Jundland Wastes; the Millennium Falcon follows a lone TIE fighter into range of the Death Star / Neo “follows the white rabbit” to the nightclub where he meets Trinity / Brendan Eich joins Netscape to write Scheme for the browser, where he invents JavaScript instead


    Foundation Is Everything

    Handwritten by Tian Davis

    I remember when I first realized some Universities were no longer teaching Computer Engineering and Computer Science students C++. Realizing the sad truth in it, I cringed. Flushed with sadness, I was utterly sick to my stomach.

    You see, I spent my University years watching developers do things with C/C++ that would blow your mind today. Granted, these people were utter freaks of nature, and I mean that in a good way.

    Hercules

    One of them was a very good friend. I never got to tell him, but sometimes I felt as though he was a best friend. He was a proverbial programming demi-god since his toddler years; my very best aspiration was to be half as good a developer as he is.

    Let’s call him Hercules:

    Hercules created a welcoming environment where I could explore C++ and literally ask him any questions I wanted. Talk about lucking out!

    Artificial Intelligence, OpenGL and Game Theory were Hercules’ ideas of a good Friday night. I soaked it all up - osmosis has its place.

    Hercules would always ask, “Did you check the MSDN examples?” That was the back-in-the-day speak for, “Did you Google it?”

    And if impending doom wasn’t lurking around the corner, he’d say, “Give it five minutes.” Give it five minutes. That meant learn to sleep on problems - a technique I use almost every single day.

    For kicks we’d code trivia programs on paper. Or hop on over to the International Obfuscated C Code Contest to check out the latest entries. C/C++ can be a powerful binding force.

    You see, what I learned most from Hercules was less about being a coding BAMF and more about being an effective mentor, teacher and guide. And there isn’t a semi-colon in the world I wouldn’t write to learn those lessons.

    Cerberus

    Then there were the Twins. I affectionately called them Cerberus:

    The title wasn’t as glamorous as Hercules. Naturally, they didn’t care for it. I understood. It’s not like they were guarding anything in particular. Quite the opposite actually.

    You see, Cerberus shared their knowledge of C++ and object oriented programming freely, albeit not always openly. Picture this…

    What if you met a guy who could code you under the table? I’m talking six feet under the table. Like, get-ready-to-pray-to-whatever-God-you-worship time.

    Ok.

    Now clone this guy. Then imbue each clone with the ability to telepathically share code and communicate.

    Exactly.

    Now you’re starting to get a clear picture of the coding beast that is Cerberus. Programming Gods? Yes. Productivity Ninjas? OMFGBBQ. Freaks of nature? Absolutely.

    Now mind you, I was already a pretty accomplished C++ developer under Hercules’ apprenticeship - way above my peers - before I started pairing with Cerberus on big projects. Yet, the sheer speed with which Cerberus shipped product continues to haunt me ‘til today.

    My time with the Twins, bonded by our shared love for C++, taught me so many things I remain grateful for. Cerberus taught me how to break down domains into understandable and digestible models.

    They taught me how to find out which code libraries were best and how to realize when it was time for new blood. But, most importantly, my time with Cerberus taught me to ship. And shipping continues to have value beyond measure. Indeed, priceless.

    Bit by the Software Engineering Bug

    I worked on my first professional grade application with Cerberus:

    It was a Flight Management System - aptly code named Krull. Unmanaged and free C++ with an OpenGL and OpenAL core.

    A few rough edges, no doubt, but we knocked the socks off our competition. And right then, right there, I saw the inevitable success that comes from combining competent code with an eye for design.

    Not everyone will be as lucky as I was. Not everyone will have the chance to be mentored by an Olympian and to pair program with the sons of Echidna and Typhon.

    But that’s the whole point, isn’t it?

    Fundamentals should prepare you for greatness whether you’re fighting code with the sons of Olympus or simply trying to meet your project deadlines.

    C++ is one of those foundations.

    Live it. Breathe it. And you’ll be prepared for whatever comes your way.

    I mean, what do you think the Ruby interpreter is written in? Or the PHP interpreter for that matter? The roots of iOS? Nginx? This stuff isn’t magic.

    Hell, while you’re at it, throw Node.js and its V8 engine into the mix. And since we’re talking Google lineage, check out Chrome Native Client. All the tools many have come to know and love have a single language at their core. You guessed it - C++.

    Should knowledge so crucial to the many be vested in so few? Hell No! The first chance you get to learn C++, do it…

    But not because I said so.

    Do it because you love open source. Do it because you cherish progress. Do it because, like me, you live and breathe this shit!



    Hustle and Code

    Handwritten by Tian Davis

    revolution

    Slow down. Learn the Fundamentals. And Hustle. Because learning to develop good software takes ten years. A lot of people will disagree with me on this. A lot of big people already have and many of us are still cleaning up that mess.

    In How to Design Programs, the authors said it well, “Bad programming is easy. Idiots can learn it in 21 days, even if they are Dummies.”

    But this isn’t 1991 and the tragedy that was Visual Basic (VB). This isn’t a time when the 21 days paradigm turned everyone - from Help Desk Support to Accountants to Managers to Lawyers - into overnight programmers. Yet, still this notion of learn to program overnight lingers more tragically than even The Bard himself could imagine.

    Why is everyone in such a rush?

    In Teach Yourself Programming in Ten Years, author Peter Norvig posits the quintessential question in this knowledge arms race, “Why is everyone in such a rush?”

    To bring the point home, Norvig did an Amazon search to see how many book titles promised to teach you something in X amount of days. He received 248 hits. Later he swapped days with hours with similar results at 253 hits. Norvig goes on to note that 96% of the titles returned were computer and programming related. That was 2001.

    Fast-forward to 2011 and the results are even more astounding. If you want to teach yourself something in X days, you’ll find 950 titles at your disposal. Swap days with hours and you’ll find 675 titles.

    That’s 3 to 4 times more titles than just a decade ago. In both cases, programming titles are the overwhelming majority. In particular, the hours category has a higher density of computer programming books than the days category.

    Hours? Learn to program in Hours? You can’t learn the necessary fundamentals in hours, nor days. It takes years! It takes Ten Years!

    The Fundamentals - You can’t rush good wine!

    Like good wine, you can’t rush good programming. It will take time. You won’t learn it in 24 hours. But with passion and purpose and lots and lots of practice, you can get good - really good!

    You don’t necessarily have to finish a four-year Computer Engineering or Computer Science program. But you do need drilled, hands-on fundamentals. For everyone, where you get those fundamentals will be - guess what - fundamentally different.

    For some, you will be an apprentice to a parent, older sibling or mentor. For others, you might start programming games as a youngling. Still, for others, you will go to University for the fundamentals and if you are lucky, leave before the other nonsense.

    But each of you will have a different, diverse story and that’s Ok! That’s the DNA that will shape your decisions and the projects you pursue.

    The point is there are fundamentals in the development game and no matter how you get them, you better have them! You can’t rush the fundamentals either - they need time to mature and course through your veins. The fundamentals need time to learn you!

    You’ve got to Hustle to get Good!

    Sorry to say, but you’re not going to get good grinding assignments in a grey cube for eight hours a day. Nothing about the color grey inherently stops you, either. It’s just that mentorship seldom takes place in corporate life.

    If you have the opportunity to learn from a master developer in a corporate environment - consider yourself lucky and thank him or her. But most importantly, when it’s your turn, pass on what you’ve learned.

    But for the other 98% of you, the mentorship isn’t coming so you’ve gotta hustle. You’ve gotta get out of your comfort zone and work on different teams shipping different types of software.

    You’ve gotta work for the megalomaniacal failed bed bug researcher turned IT Executive since he can tell a great story and once taught a call girl Excel.

    You’ve gotta work for the spoiled rich kid who will make you Director of Technology (or pick your own title) if you promise to work 21 hours a day and hook an intravenous (IV) of Red Bull to your medulla oblongata.

    You’ve gotta work for people whose agenda is everything but making good software. It’s in those darkest hours that your judgement is intensified. It is in those darkest nights that your character is solidified and your religion hardened.

    Those are the times where you’ll learn what’s right, what’s wrong and what needs more time to simmer. Those are the times when you’ll get to see the effects of bad architecture and bad design and how that makes your Users feel. Those are the times when you’ll learn which decisions make sense and which don’t. Those are the times when you’ll learn to say, “Hell No!” and mean it!

    But most importantly, those are the times when you’ll either learn to take responsibility for the software decisions you make or you’ll join the countless many who’ve decided to never take responsibility for the decisions they make. These are the experiences that make great product designers and you will draw on them time-and-time again.

    Spartan or Arcadian? Warrior or Brawler?

    The way I see it, you have two choices. Learn from Dummies and be an Arcadian - a brawler at best. Or face your personal Agoge and emerge a Spartan - an Elite Warrior.

    You can make the cleanest light saber you want. But you can’t be a Jedi until you’ve faced your Vader. You’ve gotta hustle! And if you survive, you will have joined the elite and be able to smell bullshit and bad code a mile away - sometimes at the same time.

    And in that time, you’ll learn to appreciate Design Patterns, Software Architecture and Modern Object Oriented Design. In that time, you’ll learn to love Object Relational Mappers (ORM), Separation of Concerns (SoC) and User Experiences (UX).

    Programming for Dummies won’t teach you these - Dummies don’t care about these! They care about getting things done the quickest so they can look smarter than you. They don’t care about the end User. They don’t care about how the software will be installed. They don’t even care about how they will maintain the software. But you must!

    Dummies care about the latest IDE and the latest shiny runtime features. They don’t care about the deep mentorship and bonding that comes from Code Reviews and discussing best practices with an open mind.

    Programming for Dummies won’t teach you to appreciate data integrity or domain modeling. Programming for Dummies won’t teach you how to value your software team. You’ve got to learn to value these precepts on your own!

    Don’t back down - Step up!

    You don’t want to be an Arcadian, you want to be a Spartan. We need you to be a Spartan. And that’s not happening in 30 or 21 days. It’s not happening in 24 or 12 hours. And it’s certainly not going to happen learning from Sams or any other Dummy.

    But you can do it! You can learn the proper Software Development Life Cycle (SDLC). You can learn to care about your users and your team. It’s a lot to learn and a lot to take in. I know this and I know it can seem like a mountain to climb. But always remember this: It Ain’t the Size of the Dog in the Fight, It’s the Size of the Fight in the Dog.

    What’s the rush? Slow down. Learn the Fundamentals. And Hustle. Cause learning to develop good software takes ten years. Hustle and Code. Get used to it.



    Rails 3 and The jQuery Effect

    Handwritten by Tian Davis

    Ruby on Rails 3.0 was a big evolution of the Rails DNA. Architectural changes were hard to miss - true. But, more happened - more changed.

    In Rails 3.0 I saw a strategic change in the way the Rails Core Team viewed itself. Prototype went from being the Official JavaScript framework for Rails to being the Recommended JavaScript framework for Rails. That’s a really big deal!

    That opened the door for JavaScript options when using Rails. And dare I say, it was a venerable extension of diplomacy to the rest of the JavaScript community.

    The jQuery Effect

    There I stood as a Rails developer - proud as ever - because above all this move showed that the Rails Core Team leads. I call this The jQuery Effect and it’s about getting out of the way of progress:

    <!--The jQuery Effect-->
    <script type="text/javascript">jQuery.noConflict();</script>

    jQuery has built a vibrant community of developers. And dare I say it was this single line of code that turned the tides in the JavaScript Framework Wars.

    The JavaScript Framework Wars

    Not too long ago, the JavaScript Framework Wars were armed for battle and in conflict. Who threw the first punch? We’ll never know. One thing’s for sure, it was on!

    I consider these frameworks the major Contenders: Prototype, jQuery, MooTools and Ext JS. As with all Wars the issue is about assets and who owns what. Here, the asset is the infamous dollar ($) sign.

    All this over a dollar ($) sign? Yes! Only one framework at a time could use the dollar ($) sign. Since the $() function was an integral part of each of these frameworks, no one wanted to change their use of the dollar ($) sign just to suit “that other framework.”

    Finding Common Ground

    What John Resig and the jQuery Core Team realized was that people used the $ for efficiency not identity. So let other frameworks use the $ and give jQuery developers the best of both worlds:

    <script type="text/javascript">jQuery.noConflict();</script>
    <script type="text/javascript">
      jQuery(document).ready(function($) {
        $("#button").click(function() {
           alert('Hello World!');
        });
      }); //document.ready
    </script>

    As a result, jQuery developers didn’t have to choose efficiency over flexibility. While the other frameworks pandered, “One Framework to Rule Them ALL,” jQuery played its hand and moved aside.

    One Developer’s Story

    jQuery was young and I was an accomplished Prototype developer. I had my favorite Prototype modal dialog I just had to use for a recent project - I had to. But I also wanted to broaden my horizons and try another framework as my main.

    At that time, jQuery looked really promising but the plugins were ugly. MooTools had some really beautiful plugins, but the Community believed you should only use one JavaScript framework and they weren’t budging - I agree with them today ;) Ext JS felt too heavy for me and I passed.

    I really liked how polished MooTools plugins were, but choosing MooTools meant I was locked into using MooTools and only MooTools. And as I stated earlier, “I just had to have the Prototype Modal Dialog plugin.”

    So I went with jQuery. I went with jQuery because it gave me options. And in a world of uncertainty, options are the currency of progress. With a single line of code, I was able to use jQuery as my foundation and still got to use my “favorite” Prototype Modal Dialog.

    The Aftermath

    After that project, I never used that Modal Dialog again! And you know what? Something surprising happened - something I didn’t even expect to happen: I never used another JavaScript framework again either. I was bit by the jQuery bug and I was a convert for life.

    How did that happen? How? A single line of code? Really? Yes. jQuery.noConflict(); allowed me to test the framework and not have to choose between some of my favorite plugins and jQuery. It was not an all or nothing transaction.

    The jQuery team knew its JavaScript framework was great and so it decreased the friction needed to adopt jQuery and let evolution play its course. It was a success! Today jQuery is one of the most popular and widely used JavaScript frameworks.

    I often wonder how many developers converted to jQuery in that fashion. How many picked up jQuery and never looked back because of The jQuery Effect. We may never know the true numbers, but I’d gamble the numbers would be both shocking and impressive.

    Rails 3.1 and jQuery Sitting in a Tree

    When Rails 3.0 was released, I believe DHH and the Rails Core Team took a similar stance as the jQuery team did so long ago: Give developers options and get out of the way of progress - let evolution play its course.

    Earlier today DHH announced:

    I believe Ruby on Rails is the most powerful and beautiful web application framework today. If you doubt that, I dare you to look around the ecosystem. You WILL NOT find a single - respectable - web framework, written in any language, that hasn’t adopted the paradigms and philosophies of Rails. Not a single one!

    DHH and the Rails Core Team know Rails is the best framework out there and it is up to them to reduce the friction needed to adopt the framework. With the countless number of developers using jQuery, shipping Rails 3.1 with jQuery is a powerful move indeed.

    The Lesson

    The lesson here is simple. Reduce the friction between your Product and potential Customer. Make it easy for your customer to access your product. Then the product can stand on its own and wow the customer.

    But until you get the product in your customer’s hand, what do you have? Nothing. Not even a chance. That’s a powerful thing that John Resig and the jQuery Core Team understood well and it paid off big time. First by jQuery’s adoption by Microsoft and ASP.NET MVC, and now by Ruby on Rails.

    I commend both the jQuery Core Team and the Rails Core Team. And many thanks for teaching this powerful lesson indeed. I can’t wait to get my hands on the Rails 3.1 release.



    Silly Rabbit, Parsing HTML is for Kids!

    Handwritten by Tian Davis

    trix

    If I had a nickel for every time a developer says, “Don’t worry, I’ll just whip up a RegEx and parse that HTML in no time.” I’d retire right this very second. I’ve grown less and less shocked over the years. Still I’m left surprised every time.

    To be honest, I’ve been there, done that. That’s why I know it’s generally a bad idea. Granted, there are times when you have to suck it up and use a Regular Expression. But experience has shown me that this is rarely one of those times.

    Red Pill - Do you even have to ask?

    Whether it’s reading HTML or stripping HTML tags out of user generated content, the feeling is always the same - utter shock! Talking a developer out of using RegEx to parse HTML is like talking a good friend off the ledge. Curiously, no one ever says, “Hey, let’s use RegEx to parse XML!” So what’s the deal with HTML? Is it the familiarity?

    Hell, even Jon Skeet cannot parse HTML using regular expressions and the last guy to try went batshit crazy.

    Time is too valuable to start down this path. Spend what little time you have implementing your core requirements and business logic. This is not a fight you need to pursue. Just say no and get back to the primary task at hand.

    The problem

    Where do I begin? Where do I even begin? What can I say that hasn’t already been said.

    Honestly, I don’t know where to begin. The truth is there will be hundreds of scenarios that you didn’t or couldn’t or will never think about. Maybe even thousands.

    What this boils down to is simple: You will set your project in a never-ending cycle of fix-break-fix because chances are there will always be HTML to break your little RegEx parser.

    You don’t want that! Believe me, you don’t need that! Technical Debt is not something to take on lightly. It’s not something to take on at all. But for God’s sake, don’t take it on to parse HTML.

    The solution

    Don’t take on Technical Debt by rolling your own RegEx HTML parser. Don’t do it because the solution is simple - use an HTML parser library. Preferably, one with XPath and CSS3 Selector support.

    Depending on your language and platform, I understand this is easier said than done. But if you can pull it off, the benefits will far, far outweigh the thrill of slinging on your mouse and riding into the sunset with your trusty Aeron deluxe chair.

    In fact, I’ll do you one better. Instead of just saying, “Grab an HTML Parser.” I’ll point out fine HTML Parsers for different platforms:

    Ruby: Try Nokogiri
    JavaScript: Try jQuery
    PHP: Try PHP5 DOMDocument
    .Net (C#): Try Html Agility Pack
    VB6: Try MSHTML (the DOM parser used in IE)
    Python: Try lxml
    Perl: Try HTML::Parser
    Java: Try HTML Cleaner
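
    To make that concrete, here is a sketch using the JavaScript option above (the markup and selector are invented for illustration) - let a real parser handle the nesting, entities and attributes, then query with CSS selectors:

    // Parse an HTML fragment into a detached DOM, then query it.
    var html = '<ul id="specials"><li class="price">$4.99</li><li class="price">$9.99</li></ul>';
    var prices = $(html).find("li.price").map(function() {
      return $(this).text();
    }).get();

    console.log(prices); // ["$4.99", "$9.99"]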

    There, now all you have to do is get up to speed. Then see if one of these full-fledged HTML Parsers will do the trick.

    Using an HTML Parser isn’t some new bag of tricks either. Don’t be surprised to see jQuery-like syntax for some of the more syntactically pleasing libraries.

    Toss that cowboy hat out the door

    This is not about being a Cowboy Coder. You do not have to be a Hero. I’m pretty sure you have a business task to accomplish. And I’m equally sure that task isn’t to write an HTML Parser. So don’t do it - just say no!

    You’ll thank yourself in a couple weeks when the Red Bull and adrenaline wear off. It’s really not worth it! Granted, we all have to start from somewhere. Think of this as your new beginning.



    Object Oriented Ruby: Classes, Mixins and Jedi

    Handwritten by Tian Davis

    Objects, Ruby Classes and Mixins are the topics today. Ruby is a dynamic language with a focus on simplicity and productivity. Object Oriented programming in Ruby is at once elegant and simple. Ready for more power and flexibility? Let’s get started…

    Most academics approach object oriented discussions with Tonka toys and furry kittens. Well, academia can take a backseat. From here on out, dump the Tonka toys and put the kittens to bed - we’re talking Lightsabers and Jedi baby!

    Objects

    An object is an instance of a class. Each instance having its own unique attributes and state.

    dojo.rb

    require 'Jedi'
    
    anakin = Jedi.new("Anakin")
    anakin.duel
    anakin.lightsaber("Ruby")
    anakin.juyo

    Here, the anakin object is an instance of the Jedi class. Apparently, Anakin is having trouble with some galactic fraktards and fires up his Ruby lightsaber using the lightsaber method. Then Anakin proceeds to bring the hurt using a lightsaber combat form called Juyo.

    Using the Jedi class is deceptively simple. That simplicity is possible because of a well designed domain model (a collection of Ruby classes that model a particular system).

    In this article, I’ll take you behind the scenes and show you the techniques you need to write clean, maintainable and awesome Ruby code.

    Ruby Classes

    In Ruby, a class encapsulates a group of attributes (state) and methods (operations). A developer manipulates an object’s state (instance variables) only through class accessors and methods.

    Jedi.rb

    Here’s how a Jedi class might look:

    require 'Padawan'
    require 'Forms'
    
    class Jedi < Padawan
      include Forms
    
      def initialize(name = 'Unknown')
        @name = name
        puts "Jedi.initialize"
      end
    
      def duel
        puts "Only a Sith deals in absolutes."
      end
    end

    Padawan.rb

    And the Padawan class might look like this:

    require 'Force'
    require 'Lightsaber'
    
    class Padawan
      attr_accessor :name
    
      include Force
      include Lightsaber
    
      def inspect
        puts "Force-Sensitive: " + @name
      end
    end

    Padawan would be derived from maybe a Sentient class. But for simplicity I’m including in the Padawan class attributes you might have seen in a class like Sentient. For example, the @name attribute.

    Constructor (Initialization)

    The initialize method is Ruby’s constructor method for class implementations. It’s called whenever a new instance of a class is created. For example, here a new instance of the Jedi class is created. The result is the anakin object:

    #dojo.rb
    anakin = Jedi.new("Anakin")

    This example illustrates a beautiful feature of Ruby - Flexible Initialization. This means you have the option to pass a parameter or not:

    #Jedi.rb
    def initialize(name = 'Unknown')
        @name = name
        . . .
    end

    You can choose not to pass a parameter. In that case, creating an instance of the Jedi object might look like this:

    sith = Jedi.new

    And the default value of the @name attribute would be Unknown.

    Inspect

    You’ve noticed by now that when we try to look at an object directly, we are shown something cryptic like #<Jedi:0x101237498 @name="Anakin">. This is just a default behavior, and we are free to change it.

    All we need to do is add a method named inspect. You should return a string that describes the object in some meaningful way - including the states of some or all of its instance attributes.

    Here, we return the value of the @name attribute via the inspect method defined on the Padawan class. In this instance of the Jedi class, the value of the @name attribute is Anakin. So I set up the inspect method to return Force-Sensitive: Anakin.

    Run irb from Terminal. Then load 'dojo.rb'. Then type anakin. Type exit to leave irb.

    Accessors

    A class may have many attributes. It really all depends on the domain you’re trying to model. Accessing those attributes directly is not a good practice. Instead, you should create accessors to read and write to those attributes.

    To accomplish this in some languages is a pain. You would have to define a getter method and setter method for each class attribute. Imagine having 10 attributes. That’s 20 different accessors you would have to write - in addition to writing your attributes!

    Ruby has a more elegant solution - the attr_accessor shortcut:

    class Padawan
      attr_accessor :name
      . . .
    end

    This gets you the following Ruby instance methods for FREE:

    #attr_reader
    def name
      @name
    end
    
    #attr_writer
    def name=(value)
      @name = value
    end

    Awesome! This is at once elegant and beautiful. You can add as many attribute accessors as you like: attr_accessor :name, :weight, :etc.

    You’re not going to always want a getter and setter. Sometimes you might only need one. You can do that in Ruby:

    attr_reader :midi_chlorian
    attr_writer :heart_rate

    You get the picture!

    Methods

    Methods are how we interact with our Ruby classes. Methods allow us to encapsulate activities specific to the particular class:

    #Jedi.rb
    def duel
      puts "Only a Sith deals in absolutes."
    end

    Methods sometimes perform work on a Ruby class’s attributes. Sometimes a method just performs work specific to the class. In our Jedi class, we have a duel method. This method is used to convey some Jedi axiom just before battle begins.

    Like an initialize constructor, a method can take parameters. A Ruby method can even have a default parameter. So our duel method could have looked like:

    #Jedi.rb
    def duel(opponent = 'Sith')
      puts "You disappoint me " + opponent + "."
      puts "Only a Sith deals in absolutes."
    end

    Coming from static languages, Ruby’s minimalistic approach to object oriented programming is at once refreshing and inspiring.

    Still with me? Great! Let’s move on to more advanced topics.

    Inheritance

    Eventually, you’ll reach a situation where different Ruby classes have the same attributes and perform the same basic actions.

    We know that a Jedi evolved from a Padawan. Therefore, a Jedi should possess the skills of a Padawan along with the further developed skills of a Jedi Knight.

    In Ruby, such relationships can be expressed using inheritance. Here, the Padawan class is called the parent class:

    class Jedi < Padawan
    . . .
    end

    In this way we can reuse the Padawan class for other force-sensitives that we wouldn’t consider Jedi. For example, we could have a Sith class:

    class Sith < Padawan
    . . .
    end

    Darth Vader was once a Padawan until he turned to the Dark Side and became a Sith Lord. As such, you would expect him to have the knowledge and training of a Padawan with the learning of a Dark Knight.

    Polymorphism

    There will be cases where certain methods of the parent class will need to be implemented differently in the inheriting class. For example:

    class Sith < Padawan
      def inspect
        puts "Peace is a lie. There is only passion. - " + @name
      end
    end

    Here, the inspect method from the Sith class will be used instead of the inspect method from the Padawan class. Object oriented languages (like Ruby) that facilitate this type of behavior are said to be polymorphic. Therefore, the above is an example of Polymorphism.

    Rather than exhaustively define every characteristic of every new class, we only need to redefine the differences between the parent class and the child class.

    Does Ruby support multiple inheritance?

    No. Ruby was designed with single inheritance. This was on purpose. Single inheritance encourages you to develop a deeper understanding of your domain model.

    That said, there are times when a domain model could benefit from sharing methods that do not require a full blown class. That’s where Ruby Mixins come in.

    Mixins

    Mixins are Ruby modules. Modules are a collection of methods. You cannot create an instance of a module. Therefore, modules do not maintain state.

    After requiring a Ruby module, you would then include that module in your Ruby class. This is called a Mixin:

    require 'Lightsaber'
    
    class Padawan
      . . .
      include Lightsaber
      . . .
    end

    Lightsaber.rb

    Here is the Mixin Lightsaber module:

    module Lightsaber
      def lightsaber(crystal = "Jade")
        puts "Lightsaber.initialized: " + crystal
      end
    end

    So when I instantiate an instance of the Jedi class, I get access to the Lightsaber module - pure Mixin love:

    #dojo.rb
    require 'Jedi'
    
    anakin = Jedi.new("Anakin")
    . . .
    anakin.lightsaber("Ruby")
    . . .

    Modules are for sharing behavior (methods), while classes are for modeling relationships between objects. Ruby classes can Mixin a module and receive all its methods for free.

    Active Record - Base Class or Mixin?

    We might think Active Record should have been included rather than extended by a subclass. You could use your own parent class at that point, right?

    So should Active Record have been implemented as a Mixin instead of a Base Class? Hell No!

    Saying it should is a half-truth at best. What never follows such statements is a disclosure that you will spend exorbitant amounts of time mapping your class attributes to a database DSL (domain specific language).

    This behavior really adds up to your detriment when dealing with larger domain models. But this is never said. It’s that little gotcha that, not surprisingly, never gets mentioned.

    I believe for the best speed, simplicity and maintainability, Active Record is best implemented as a base class that you inherit with a single line.

    Compare that to the many lines you will spend to map your class attributes to database types. It really is a no-brainer.

    I could stop here, but I believe you’ll be able to make better judgments when you have a fuller picture of the issue. So I’m going to tell you when implementing Active Record as a mixin makes sense.

    It makes sense when you’re dealing with legacy code that already has a base class coupled with an inefficient, custom data access layer. Then I recommend using a Ruby ORM like DataMapper.

    You will spend a lot of time wiring up your domain models, but you’ll get a mature and flexible data layer - just the type of sprucing up any legacy app could use.



    Rails 3.0 rescue from Routing Error Solution

    Handwritten by Tian Davis

    Well, I’ve got good news and I’ve got bad news. As of Rails 3.0.1, using rescue_from in your ApplicationController to recover from a routing error is broken! That’s the bad news.

    The good news is I have a solution that will keep you in unison with the Rails Core Team. The Rails team has promised a fix some time in Rails 3.1. Until then, I’ve got readers and I’ve got customers and I shudder at the thought of showing them a generic error page.

    The Situation

    It’s bad enough an error has occurred in the first place. At that point I want to take control of the situation and rescue my audience from a bad experience back to enjoyment!

    Previously in Rails 2.3.8 and below you could handle routing errors elegantly using rescue_from ActionController::RoutingError:

    class ApplicationController < ActionController::Base
      rescue_from ActionController::RoutingError, :with => :render_404

      private

      def render_404(exception = nil)
        if exception
          logger.info "Rendering 404: #{exception.message}"
        end

        render :file => "#{Rails.root}/public/404.html", :status => 404, :layout => false
      end
    end

    However, things are a little different in Rails 3. As of Rails 3.0.1, a routing error is raised in the middleware stack before any controller is ever instantiated. ApplicationController never sees ActionController::RoutingError, so we cannot take advantage of rescue_from like we used to.

    Now, for those of you who don’t know, I’m a realist. So, I’m not expecting the Rails team to spring a solution overnight.

    Personally, I’m going to wait for the Official fix from the Rails Core Team. In the meantime, I need a simple, no side effects solution that I can use right now!

    Simple Solution

    This is one of those times when it’s great to be a developer. There is nothing we can’t solve with a little elbow grease and ingenuity.

    Expanding on the suggestion given by the Rails core team, here’s the solution I use to handle routing errors in Rails 3.0:

    config/routes.rb

    This code should go at the end of your routes.rb file. That way it is given the lowest priority and acts as a wildcard catch-all for all those rogue URL requests.

    Yourapp::Application.routes.draw do
      # Last route in routes.rb
      match '*a', :to => 'errors#routing'
    end

    NOTE: The “a” is actually a parameter in the Rails 3 route globbing technique. For example, if your URL was /this-url-does-not-exist, then params[:a] equals “/this-url-does-not-exist”. So be as creative as you’d like handling that rogue route.

    app/controllers/errors_controller.rb

    Here, I handle my routing errors. I leverage previous 404 handling code from my original ApplicationController mentioned above. So, my errors_controller.rb looks like this:

    class ErrorsController < ApplicationController
      def routing
        # render_404 is private in ApplicationController, but private
        # methods are visible to subclasses in Ruby, so this just works
        render_404
      end
    end

    However, feel free to modify to fit your individual needs. Everyone’s situation will be slightly different. For example, if you’re not going to reuse your 404 error handling logic, then here’s the full ErrorsController without inheritance:

    class ErrorsController < ApplicationController
      def routing
        render :file => "#{Rails.root}/public/404.html", :status => 404, :layout => false
      end
    end
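
    If you want a quick sanity check that the catch-all behaves, a minimal functional test might look like this (Test::Unit style; the test name and rogue path are hypothetical):

    class ErrorsControllerTest < ActionController::TestCase
      test "rogue routes render the 404 page" do
        get :routing, :a => "/this-url-does-not-exist"
        assert_response :not_found
      end
    end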

    I’m big on keeping things simple. I believe solutions should be simple without being simplistic. Like I said earlier, I look forward to an Official solution from the Rails Core Team. Until then, this gets the job done!

    Well, I encourage you to dig into Rails 3 and have a little fun. And if you have another solution, post here so we can discuss. Until next time Beloved, take care!