Abstract Sequential

Stephanie Hobson's abstract thoughts in the sequence she has them.

Flipping Tables at CSSDevConf

Here are my slides for Flipping Tables: Displaying Data on Small Screens, which I had the pleasure of giving at CSS Dev Conf.

Links I mentioned in my talk

Writing for Every Reader

I recently spoke at #a11yTOConf, a conference by the organizers of the regular a11yTO meetup. The organizers put together a great lineup of speakers. It was the first #a11yTOConf but I hope it won't be the last!

Here are my slides for Writing for Every Reader.

Resources I mentioned:

Articles that were helpful to me:

More on accessibility:

The MDN Redesign “Behind the Scenes”

This article was originally published on Mozilla’s Hacks Blog.

Kuma, the code that produces the MDN site, is a weird mix of the old and the new. MDN turned ten in 2015 and there's still code and content around from those very first days. When I sat down to start coding the current redesign, the first thing I did was remove the last few traces of the last redesign. In contrast, we have a cutting-edge audience: 92% of our users have a browser with CSS grid support! We enabled HTTP/2, and 98% of our users have seen benefits from that.

One of the ways we deal with old code in kuma is with the campsite rule: Always leave your campsite better than you found it. A redesign touches a lot of files, and this was a great opportunity to clean up and refactor — at least until the deadline started getting close.

A redesign is also a great time to change stuff you’re afraid of breaking. People are more understanding of you working the bugs out of something new than breaking something that’s worked for years. I removed 640 lines of stale code during the redesign. (And if I broke a six-year-old XPCOM tutorial you use daily by removing the custom list-style-type, please file a bug!)

One website with two looks

Rather than working on the CSS for the redesign in a separate “redesign” folder, we duplicated all the files and added “-old” to the file name of the old files, which means that all of our redesign work is happening in the “official” files. This preserves the git history and means we don’t have to move anything around after launch. Once we’re happy with the code, we can delete the “-old” files with confidence.

To serve the new styles to our beta testers and the "-old" ones to everyone else, we use Django Waffle. Waffle can also be used to serve different content, but because there's a strong separation of presentation and content on MDN, we've made very few changes to the HTML.

Bugs our beta testers found

MDN is huge, and we can’t test every page in every locale. We’re really lucky to have active beta testers. :) Some of the biggest things they turned up for us were:

Highlighting

Problems with Zilla Highlight in Vietnamese and when there are inline code examples.

We started out replicating Mozilla's brand "highlight" effect using the Zilla Slab Highlight font, but we abandoned that pretty quickly when problems turned up in many of our locales and in combination with inline code.

The current approach puts a full-width black background on h3 and h4 headings by default, and then some JavaScript runs to add a <span> tag so that the black background hugs the heading text. Progressive enhancement at work.
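
A minimal sketch of that enhancement (the class name and selector details here are illustrative, not MDN's actual code):

// The stylesheet gives h3 and h4 headings a full-width black
// background by default. If this script runs, move each heading's
// contents into a <span> and let the span carry the background
// instead, so it hugs the text.
var headings = document.querySelectorAll('h3, h4');
for (var i = 0; i < headings.length; i++) {
    var span = document.createElement('span');
    span.className = 'highlight-span'; // hypothetical class name
    while (headings[i].firstChild) {
        span.appendChild(headings[i].firstChild);
    }
    headings[i].appendChild(span);
}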

We went back and forth about this for a while, wondering if the JavaScript and extra <span> was worth the effort. But we stuck with it because it makes the h3 and h4 headings much easier to spot when scanning the page.

What’s Taiwanese for Slab Serif?

Showing the difference between Zilla's thick strokes and the thin strokes of Taiwanese.

Previously we used Open Sans as our heading font. With the redesign, we switched to Zilla Slab. Open Sans is thin and of average height for a font, so it didn't look out of place next to the system fallbacks for Chinese or Japanese characters.

Zilla is big and thick, and we started getting feedback about the contrast with these system fallbacks. Additionally, the character set for Zilla is European Latin only at the moment, so Vietnamese, which uses the Latin alphabet plus a few characters not used in Europe, displayed font fallbacks in the middle of words.

To address both these problems, we implemented a system that allowed us to override the site fonts on a per-locale basis.

Contrast

We received many complaints about the old design’s low-contrast display. We went a bit too far the other way with this design and received complaints about the high contrast. We’ve toned it down to the ever-popular #333 now.

What’s next

We’re moving on from this to make specific improvements to the article pages:

Who is “we”?

The MDN dev team is:

We blog sporadically on the Mozilla Marketing Engineering & Operations blog.

You should also read this blog post by our Product Manager, Kadir Topal, about The Future of MDN.

HTTP/2 on MDN

This article was originally published on the Mozilla Marketing Engineering & Operations blog.

We enabled HTTP/2 on MDN’s CDN.

We didn’t do anything to optimize for HTTP/2, we just enabled it.

We’re seeing performance improvements.

You don’t have to get ready before you start using HTTP/2

While doing research to see if turning it on without doing any optimizations was a good idea, I read things like:

“It also means that all of those HTTP1 performance techniques are harmful. They will make a HTTP2 website slower, not faster – don’t use them.” – HTTP2 for front-end web developers

And:

“However, many of the things you think of as being best practices can be detrimental to performance under HTTP/2.” – Getting Ready For HTTP2: A Guide For Web Designers And Developers

Both suggest that enabling HTTP/2 on a site optimized for HTTP/1.1 could result in a slower site.

A better way to interpret those quotes is:

If you optimize for HTTP/1.1 and turn on HTTP/2, your site will not be as fast as it could be – but it might still be faster!

On MDN we concatenate a lot of our files but we don’t concatenate all of them. For example, our article pages have 9 different files coming from our CDN. I thought we could benefit from a bit of HTTP/2’s multiplexing and header compression. And we did. You can see the DNS lookup time drop off in this waterfall from Pingdom:

Waterfall showing over 0.3s of DNS lookup for each request.

Waterfall showing DNS lookup for only the first asset requested.

Some numbers

Overall, our tests don’t show a huge improvement in page load speed but there are small improvements for everyone, and a real improvement for users located far away from our servers. (Hi Australia and China!)

Service           Location   Browser  HTTP/1.1  HTTP/2  Change
Pingdom           Dallas     Chrome   1.54s     1.34s   0.20s
Pingdom           Melbourne  Chrome   2.94s     2.80s   0.14s
WebPageTest       London     IE11     2.39s     2.37s   0.02s
WebPageTest       Australia  Firefox  5.61s     5.17s   0.44s
Google Analytics  All        Chrome   3.74s     3.04s   0.70s
Google Analytics  All        Firefox  3.99s     3.71s   0.28s
Google Analytics  Australia  All      3.01s     1.69s   1.32s
Google Analytics  China      All      8.10s     6.69s   1.41s

I tried to segment our users in Google Analytics to make sure we did not have a negative impact on users relying on HTTP/1.1 and… I couldn’t find enough users to draw any conclusions. MDN is lucky like that. (It’s possible the IE11 test in the table above is on Windows 7 and does not support HTTP/2, but WebPageTest doesn’t identify the OS.) In theory, older browsers should not be affected because the protocol falls back to HTTP/1.1.

There was a lot of variation in the page speed data I examined. I recommend running your before and after benchmark tests multiple times on multiple days so you can take an average. Try to wait a week before drawing conclusions from your analytics data as well.

In a perfect world you don’t increase the amount of code on your site or invalidate anyone’s caches in the sample time period, but we don’t develop in a perfect world.

Read more on HTTP/2

Next

Get our pages into data centres around the world.

This involves changing our hosting services, not a small task, and changing our pages to serve the same content to all logged-out users.

Decrease asset size by removing support for older browsers.

If you think working on MDN sounds like a great job because we have very modern browser support requirements, remember we're also working on a ten-year-old code base.

Thanks for using MDN!

Flipping More Tables

A few weeks ago I gave my talk on displaying data tables on small screens at Cascadia Fest, a super awesome three-day conference in the Pacific Northwest focusing on CSS and JavaScript (browser and server side!).

The video of my talk has been posted:

And my slides are available here.

Google Analytics, Privacy, and Event Tracking

This article was originally published on the Mozilla Hacks blog.

Many of us use Google Analytics (GA) or similar third-party services to keep track of how people interact with our websites; telling us things like when people visit and what they click. This data can help us make important decisions, such as when to schedule maintenance or if a feature can be removed.

Because these services are installed on multiple websites, these third parties can collect a lot of information about people as they move around the internet. Not everyone is comfortable with that. Some people have started taking steps to protect their privacy by installing ad blockers like Ghostery or enabling tracking protection in Firefox.

I think this is great!

Sometimes this breaks things

When code depends on these third-party services and they're not there, stuff goes wrong.

I’m going to use tracking external links as my example because it’s a common use case, but I started investigating this problem because we accidentally broke the ability to navigate within our website for users who were choosing not to load the GA script.

I’m also going to use the universal analytics tracking code (analytics.js) for my examples here, not the classic analytics code (ga.js), so if you notice some differences between my code and yours that might be why.

What goes wrong?

The current method (Jan 2016) for tracking outbound links as recommended by Google looks like this:

<a href="http://www.example.com"
   onclick="trackOutboundLink('http://www.example.com'); return false;">
        Check out example.com
</a>

// return false above keeps the browser from following the link itself;
// hitCallback follows it once GA has recorded the event
var trackOutboundLink = function(url) {
    ga('send', 'event', 'outbound', 'click', url, {
        'transport': 'beacon',
        'hitCallback': function(){document.location = url;}
    });
}

What Google is recommending here is:

  1. add return false to your links so that clicking on them doesn’t take the user off the page
  2. when a user clicks a link, add that click to the GA queue of events to track and tell GA to take the user to the new page after that is done
  3. GA processes the event
  4. GA triggers the callback, taking the user to the new page

This guarantees that the user doesn’t leave the site before their action is recorded. However, if GA fails to process the event… it never triggers the callback, breaking all links off the website. :(

We are relying on GA to send the user to the new page, but what if the GA code never executes?

How we can avoid problems

If you're using the current recommended method of adding GA to your site, you're including something like this on all your pages:

(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');

ga('create', 'UA-xxxxx-x', 'auto');
ga('send', 'pageview');

This snippet creates a global ga object. The ga object is available in JavaScript whether the rest of the script successfully loads from Google or not.

This means two things:

  1. It is possible to add events to the queue before GA has loaded. (Cool! This is handy for tracking events that happen really quickly.)
  2. We can’t check if GA has loaded by checking for the existence of the ga object like we used to because it’s always there.
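
For example, an event can be queued before analytics.js arrives: the stub ga() function just pushes its arguments onto an array (ga.q) that GA replays once it loads. (The event names here are made up.)

// safe to call even if analytics.js hasn't loaded yet;
// the arguments sit in ga.q until GA initializes
ga('send', 'event', 'video', 'play', 'intro-video');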

So how do we know if GA has loaded? Well, clever people have figured out that when it initializes, GA alters the global ga object in a few different ways. None of these changes are officially documented by Google, which means any of them could break in the long term.

In Tracking outbound links with Google Analytics, Andreas Veithen (who works at Google but was writing on his personal blog) suggests that the most reliable check is to see if the ga object still has the q array. The array is removed after GA initializes and processes all the events waiting in the ga.q queue. Because ga.q is created by the snippet we paste on our sites, changing it would require us to make changes to our websites and so would be a major change for Google to undertake. On the other hand, it would also be possible for Google to change their code to not remove ga.q so I’m not sure this is more reliable than any other method.

In Google Analytics, Ghostery, and Event Tracking John Morton suggests checking for the create property on the ga object. This technique has been around the longest and is probably the most widely used (I say this after a quick informal glance at GitHub).

In Workaround for when the hitCallback function does not receive a response (analytics.js), Dom Sammut observes that we can check for the loaded property on the ga object. This is a method used informally in some Google documentation so it’s probably also somewhat reliable. I like this one because it is obvious what we’re checking for.
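
For comparison, here's roughly what those three checks look like (sketches based on the articles above, not code copied from them):

// Andreas Veithen: ga.q is removed once GA initializes
if (window.ga && !ga.q) { /* GA has loaded */ }

// John Morton: GA adds a create property when it initializes
if (window.ga && ga.create) { /* GA has loaded */ }

// Dom Sammut: GA sets a loaded property when it initializes
if (window.ga && ga.loaded) { /* GA has loaded */ }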

Opting for the last method, we alter Google’s suggested code like so:

var trackOutboundLink = function(url) {
    // check if the GA object exists and that it has initialized
    if (window.ga && ga.loaded) {
        // if yes, rely on GA to follow link
        ga('send', 'event', 'outbound', 'click', url, {
            'transport': 'beacon',
            'hitCallback': function(){document.location = url;}
        });
    } else {
        // if not, follow link ourselves
        document.location = url;
    }
}

Making this kind of check before relying on any third-party script is good practice. You should also be including a local fallback for any script that is not coming from your own domain – including CDNs.
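
The classic local-fallback pattern looks something like this (the library name, global, and paths are placeholders):

<script src="https://cdn.example.com/js/some-library.min.js"></script>
<script>
// if the CDN request failed, the library's global won't exist;
// load a copy served from our own domain instead
window.someLibrary || document.write('<script src="/js/some-library.min.js"><\/script>');
</script>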

Respecting our users’ privacy

Most users care about their privacy, and there are more steps we can take to protect it. Depending on where you work, you may also be required by your country's laws or your company's privacy policy to take extra care with users' data.

Configuring Google Analytics for privacy

If Google Analytics is your analytics platform of choice, there are a number of steps you can take to make it more private.

One of these is an easy configuration option to request anonymized user IP addresses. Essentially, Google Analytics changes the last octet of the user’s IP address to 0 before recording it.

This is triggered by adding one additional line to your tracking snippet, after ga('create', 'UA-xxxxx-x', 'auto'); and before ga('send', 'pageview'); so that the pageview hit is anonymized too.

ga('set', 'anonymizeIp', true);
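
In context, the tracking snippet from earlier becomes (same UA-xxxxx-x placeholder as above):

ga('create', 'UA-xxxxx-x', 'auto');
ga('set', 'anonymizeIp', true);
ga('send', 'pageview');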

Of course, Google still gets the users' IP address (that's the way the web works); we're just trusting them not to store it.

Respecting Do Not Track (DNT)

It’s also possible to detect when a user has enabled Do Not Track and be pro-active about what scripts you choose to load.

Schalk Neethling documents the process in Respect User Choice ~ Do Not Track. In a nutshell, we wrap the GA call with a check to see if the user has enabled DNT before executing the call to include the file.

function _dntEnabled() {
    // simplified sketch of the helper from
    // https://github.com/schalkneethling/dnt-helper/blob/master/js/dnt-helper.js
    // (the real helper also works around buggy user agents)
    var dnt = navigator.doNotTrack || window.doNotTrack || navigator.msDoNotTrack;
    return dnt === '1' || dnt === 'yes';
}

// Check the value of DNT before including GA
if (!_dntEnabled()) {
    // include the Google Analytics snippet
}

This means that the ga object will not be added to the global scope for our page, but that's okay: the code snippet above handles that case gracefully as well.

Do you have other techniques/suggestions for using analytics services while respecting the online privacy of visitors to your site? Feel free to share your experience.

Flipping Tables

I was very pleased to be invited back to Style and Class to give a talk about making data tables responsive. Style and Class is a great Vancouver meetup I can’t recommend enough.

Here are my slides, though the video and animated gifs don’t play :(

Flipping Tables: Displaying Data on Small Screens from Stephanie Hobson

Download the slides.

And the links I encouraged everyone to check out:

(Still) Making Websites Better for Everyone

In the next few weeks I'll be speaking at JavaScript Open Day Calgary, View Source, and JavaScript Open Day Toronto, giving an updated version of Web Accessibility: Making Websites Better for Everyone.

Edit: videos have been posted for View Source and JavaScript Open Day Toronto. I'm more on the ball in the View Source video, but the JavaScript Open Day one has some demos.

In my talk I encourage people to try out different assistive technologies themselves:

Here are two demo pages that list server status for a fictional hosting company; you should compare the bad example to the good example with one of the technology options listed above.

And here are all the resources I mention in my talk:

Conference Debrief: An Event Apart Austin 2015

Last week, because working at Mozilla is awesome, I got to attend An Event Apart Austin, the conference put on by the A List Apart organization. It’s consistently one of the highest quality conferences for front-end/user experience talks. I try to go every other year.

The conference was great and the organizers did a good job making the speakers accessible, sitting them with us at lunch and keeping them around for the mixer. The caliber of the attendees is also consistently high. I have great conversations no matter who I sit with at lunch.

Most of these speakers are pros who have given their talks in multiple places, so it's usually possible to find videos of them online with a bit of Googling, and they might be speaking at a conference near you!

Conference overview

Designing for performance and designing with empathy were common themes this year and we got good overviews of some exciting CSS and HTML features that will be widely supported in the near future.

My Favourite Talk

A CSS geek at heart, I enjoyed Jen Simmons' "Modern Layouts: Getting Out of Our Ruts" the most. "We're about to see a renaissance in design on the web," she promised at one point and I, for one, will be excited to see more than rectangles on the web. She covered CSS Shapes, Regions, Grid Layout, and the new viewport units, illustrating wonderfully with some really unique examples of layout from *gasp* print, which she advocated we translate (not transfer) to the web. A similar version of this talk is online if I've piqued your interest.

Talk Summaries

Jeffrey Zeldman
The Fault, Dear Brutus (or: Career Advice From a Cranky Old Man)
Zeldman made a compelling argument for quitting your job regularly :P He had some anecdotes about how it is the easiest path to career advancement, and this resonated with my own experience. He said the same could be accomplished by switching teams within an organization and I am glad we have so much sideways movement at Mozilla.

Yesenia Perez-Cruz
Design Decisions Through the Lens of Performance
Yesenia made a clear case for designers needing to take a lot of responsibility for the performance of a website. Designers make decisions about some of the things that add the most weight to our pages – fonts, images, and functionality – before developers ever see them. She said performance budgets should be set and understood as part of the requirements process, and that designers can be educated to stay within them and enlisted to help communicate with clients about staying within them as well. Great talk +1 would watch again.

Cameron Moll
Unified UX
Cameron talked about how a brand's presence covers more screens than just a desktop and said we should be unifying user experience across browsers, devices, apps, and accounts, citing the fact that 40% of all online US users start an activity on one screen and finish on another. He said we should pay attention to form and function (making widgets look and work similarly, respecting our users' mental models) and data symmetry (syncing shopping carts, reading places, etc.).

Karen McGrane
Content in a Zombie Apocalypse
Where Cameron spoke about the need for a unified UX, Karen spoke about the need for unified content – as much for the organization's benefit as the users'. She encouraged us to find ways to create and update content in one place to share across all platforms. She despaired about how poor a tool the WYSIWYG editor is at translating styling decisions into semantic information (it's not a list of links floated to the right with a narrow max width – it's a table of contents!) and hoped for a future with better tools for storing and updating structured text so content doesn't have to be redesigned every time a site is.

Ethan Marcotte
Laziness in the Time of Responsive Design
Ethan, who coined the term responsive web design, encouraged us to avoid the f-word for our sites – frameworks – and to solve problems in context, watching to see where content needs adjustments rather than blanket redesigning the experience at different sizes. And he didn't miss the opportunity to tell us hamburger menus are still awful.

Lara Hogan
Designing for Performance
Where Yesenia talked about process improvements for better performance, Lara talked about code improvements. She talked in depth about image compression, font subsetting (explaining how the infamous "4 and a horse stars" review bug occurred), style guides and OOCSS, and ended by talking about setting a public performance budget and how to make performance everyone's concern rather than having performance cops.

Eric Meyer
Designing for Crisis
Based on his personal experience sitting in a stranger's car barrelling down a highway, racing the med-evac helicopter carrying his young daughter to a hospital in an unfamiliar city, Eric implored us to add a new persona to our design toolbox: a user in crisis. We like to think we're designing for cheerful, calm, with-it users living in the home of the future, but we don't get to decide when, where, or how a user will access our sites. He gave examples of how this persona could help the banking, hospital, airline, and insurance industries. And he gave some creative solutions to help us do user testing for these scenarios, including using CSS to blur the page, JavaScript to transpose letters in words, or playing the Jeopardy theme song to testers. He encouraged us not to treat these users as edge cases, because defining edge cases defines the limits of what we care about, and we should care about these users. I don't see much potential for an MDN user to be in crisis, but it's easy to imagine a Firefox user who just lost all their data, or thinks an account of theirs has been hacked, could be panicking and not thinking clearly.

Brad Frost
Atomic Design
A pioneer of style guides and pattern libraries, Brad reminded us what these look like and why they are valuable, and demoed his open source tool patternlab.io for helping to create and maintain them. He has a great system; you can read about it here.

Matt Haughey
Empathy Is Good Business: How Front-Line Customer Support Can Make You A Better Designer
Continuing the empathic design theme, Matt encouraged us all to spend a couple of hours a week doing front-line customer support and explained how Slack is set up so that all employees must do that. It gives everyone a better idea of how users are using the product and what their pain points are. Rather than having personas to refer to, designers have real people they can reference. They also don't talk down to their users or judge the feedback they give. It's all data that makes the product better.

Jeremy Keith
Resilience: Building a Robust Web That Lasts
Jeremy praised the flexibility of HTML and CSS and encouraged us to embrace their resilience rather than fighting the inherent variability of the systems. He reminded us that we can build systems that work for everyone with very basic functionality and then layer enhancements on top. At its heart, Instagram should be a bunch of <img>s and an <input type="file">, and there's no reason to require JavaScript to see your friends' images or add your own. Layout, fonts, file previews, and filters are all enhancements. "If you build pages with the idea that parts other than HTML are optional, you will create a better and stronger web page." – gov.uk

How To Debug CSS

Some quick notes and links for the talk I gave at WordCamp Vancouver about Debugging CSS.

Edit: You can now watch a recording of Debugging CSS. Spot the three projector failures!

Steps

  1. Is the HTML valid?
  2. Is the CSS valid?
  3. Is the right CSS being applied?
    1. Is what you typed being applied?
    2. Is other stuff being applied?
    3. Is what’s being applied right?
  4. Does CSS support what you’re doing?
  5. Does the browser support it?
  6. Can you isolate the problem?
  7. Ask for help

Tools

Other resources
