Archive for the ‘Firefox’ Category.

A billion crashes: 2012 in review

In 2012, on the Socorro project, we:

  • Collected more than one billion crashes: more than 150TB of raw data, amounting to around half a petabyte stored. (Not all at once: we now have a data expiration policy.)
  • Shipped 54 releases.
  • Resolved 1010 bugs.  Approximately 10% of these were part of the Django rewrite, and 40% were UI bugs.  Many of the others were backend changes to support the front end work (new API calls, stored procedures, and so on).

New features include:

  • Reports available by build time as well as clock time (graphs, crashes/user, topcrashers)
  • Rapid beta support
  • Multiple dump support for plugin crashes
  • New signature summary report
  • Per OS top crashers
  • Addition of memory usage information, Android hardware information, and other new metadata
  • Timezone support
  • Correlation reports for Java
  • Better admin navigation
  • New crash trends report
  • Added exploitability analysis to processing and exposed this in the UI (for authorized users)
  • Support for ESR channel and products
  • Support for WebRT
  • Support for WebappRTMobile
  • Support for B2G
  • Explosiveness reporting (back end)
  • More than 50 UI tweaks for better UX

Non-user facing work included:

  • Automated most parts of our release process
  • All data access moved into a unified REST API
  • Completely rewrote front end in Python/Django (from old KohanaPHP version with no upgrade path)
  • Implemented a unified configuration management solution
  • Implemented unified cron job management
  • Implemented auto-recovery in connections for resilience
  • Added statsd data collection
  • Implemented fact tables for cleaner data reporting
  • Added rules-based transforms to support greater flexibility in adding new products
  • Refactored back end into a pluggable fetch-transform-save architecture (sketched just after this list)
  • Automated data export to stage and development environments
  • Created fakedata sandbox for development for both Mozilla employees and outside contributors
  • Implemented automated reprocessing of elfhack broken crashes
  • Automated tests run on all pull requests
  • Added views and stored procedures for metrics analysts
  • Opened read-only access to PostgreSQL and HBase (via Pig) for internal users
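
As a rough illustration of the last two architectural items - the rules-based transforms and the fetch-transform-save refactor - here is a sketch of the shape of that design. The class and method names are mine, invented for explanation; they are not Socorro's actual API.

    # Rough sketch of the pluggable fetch-transform-save idea: an app is a loop
    # over a source crash store and a destination crash store, with a list of
    # rules-based transforms applied in between.  Names are illustrative only.

    class FetchTransformSaveApp:
        def __init__(self, source_store, destination_store, transform_rules):
            self.source = source_store                # e.g. file system storage
            self.destination = destination_store      # e.g. HBase
            self.rules = transform_rules              # per-product tweaks supplied by configuration

        def run_once(self):
            for crash_id in self.source.new_crashes():
                raw_crash = self.source.get_raw_crash(crash_id)
                for rule in self.rules:
                    raw_crash = rule(raw_crash)        # rules keep product-specific logic out of the core
                self.destination.save_raw_crash(crash_id, raw_crash)

Swapping the storage classes or the rule list in configuration is what makes adding a new product (or a new backing store) a configuration change rather than a code change.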

I believe we run one of the biggest software error collection services in the world.  Our code is used by open source projects across the internet, as well as in the games, gaming (casino), music, and audio industries.

As well as working on Socorro, the Webtools team worked on more than 30 other projects, fixed countless bugs, shipped many, many releases, and supported critical organizational goals such as stub installer and Firefox Health Report.  We contributed to Gaia, too.

We could not have done any of this without help from IT (especially WebOps, SRE, and DB Ops) and WebQA.  A huge thank you to those teams. <3

I’ll write a part two of this blog post to talk more about our work on projects other than crash reporting, but I figured collecting a billion crashes deserved its own blog post.

Edited to add: I learned from Corey Shields, our Systems Manager, that we had 100% uptime in Q4.  (He’s still working on statistics for the whole of 2012.)

Ship it: a big week in Webtools

They say multi-tasking is hard. They also say DevOps is hard. Let me tell you about a bunch of engineers who think “hard” means “a nice challenge”.

Last week was an amazing one for the Webtools family. We pushed three releases to three major products. People inside Mozilla don’t always know exactly what types of things the Webtools team works on, so allow me to tell you about them.

1. Bouncer

Bouncer is Mozilla’s download redirector. When you click on one of those nifty “Download Firefox” buttons on mozilla.org, that takes you to Bouncer, which redirects you to the correct CDN or mirror where you can actually get the product that you want. Bouncer is also one of the oldest webapps at Mozilla, having been originally authored by my boss, Mike Morgan, many years ago.

Bouncer hadn’t had code changes in a very long time, and when we realized we needed to change it to support the new stub installer for Firefox, we had to spin up new development and staging environments. In addition, IT built out a new production cluster to the standards that have come into use since Bouncer was last deployed.

The code changes for the stub installer are mainly around being intelligent enough to understand that some products, like the stub, can only be served from an SSL CDN or mirror. We don’t want to serve all products over SSL because of cost.
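
A minimal sketch of the kind of check this involves (the function and field names below are invented purely to illustrate the logic; this is not Bouncer's actual code):

    # Illustrative mirror selection: products flagged as SSL-only (like the stub
    # installer) must only ever be served from mirrors/CDNs that support HTTPS.
    # Data shapes and names are made up for this example.
    import random

    def pick_mirror(product, mirrors):
        candidates = [m for m in mirrors if m["healthy"]]
        if product["ssl_only"]:
            candidates = [m for m in candidates if m["supports_ssl"]]
        if not candidates:
            raise RuntimeError("no eligible mirror for %s" % product["name"])
        mirror = random.choice(candidates)
        scheme = "https" if product["ssl_only"] else "http"
        return "%s://%s/%s" % (scheme, mirror["host"], product["path"])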

On Wednesday we shipped the new infrastructure and the code changes. You can read more about it in bug 800042.

Thanks to Brandon Savage (Webtools), Anthony Ricaud (Websites), Fred Wenzel (Dev Ecosystem), Jake Maul (WebOps), Chris Turra (WebOps), Corey Shields (Systems), Stephen Donner (Web QA), Matt Brandt (Web QA), Raymond Etnoram (Web QA), and Ben Hearsum (RelEng) for making this possible.

2. Air Mozilla

As you probably know, Air Mozilla is the website that broadcasts Mozilla meetings, brownbags, and presentations. On Friday we shipped a brand new version, built on top of Django. The old version was built on WordPress and was really just a simple way to present content. The new version has full calendaring integration, LDAP and BrowserID support, and better ways to find old presentations.

Thanks to Tim Mickel (Webtools Intern), Peter Bengtsson (Webtools), Richard Milewski (Air Mozilla), Zandr Milewski (SpecOps), Dan Maher (WebOps), Chris Turra (WebOps), Brandon Burton (WebOps), Jason Crowe (WebOps), and Corey Shields (Systems).

You can see details of the release in bug 799745.

3. Socorro

We also shipped a regular Wednesday Socorro release. Socorro is the crash reporting service for Mozilla products, including Firefox, Firefox for Mobile (“Fennec”), Firefox OS (“Boot to Gecko”), and Thunderbird.

In this release we shipped five bug fixes and enhancements. This number was a bit lower than usual, as most people are crunching to complete the front end rewrite (more on that in a moment).

You can read more about the release in bug 800140.

Thanks to the whole team for working on this: Adrian Gaudebert, Brandon Savage, Chris Lonnen, Lars Lohn, Peter Bengtsson, Robert Helmer, Schalk Neethling, Selena Deckelmann, and of course Matt Brandt (Web QA) and Brandon Burton (IT).

An aside: Socorro on Django

We are also very close to feature parity between the new Django-based version of the Socorro webapp and the old PHP webapp. We needed to rewrite this code because the version of the framework used in the old webapp is four years out of date, and there was no upgrade path for it - newer versions break backwards compatibility. Since we had to rewrite it anyway, we have moved to the same framework as the majority of other webapps at Mozilla. This allows for easier contributions by other Mozillians. We should reach parity in the next couple of days, and plan to ship the new code in parallel with the old, subject to secreview timing.

finally:

I am incredibly proud of the impact, quality, and sheer quantity of our work over the last weeks. These projects will enable many good things throughout Mozilla. Good work, people, stand tall.

Webtools is a small team, and we could not do what we do without incredible support from IT and QA. I like to think of this as the Webtools family: we are all one team; we all work together to get the job done come hell, high water, or zombies in the data center.

Just remember, there’s a reason the Webtools mascot is Ship It Squirrel.

Rapid releases: one webdev’s perspective

People still seem to be very confused about why Mozilla has moved to the new rapid release system. I thought I’d try and explain it from my perspective. I should point out that I am not any kind of official spokesperson, and should not be quoted as such. The following is just my own personal opinion.

Imagine, now, that you work on a team of web developers, and you only get to push new code to production once a year, or once every eighteen months. Your team has decided to wait until the chosen twenty new features are finished, and not ship until those are totally done and have passed through a long staging and QA period. The other hundreds of bugs/tickets you closed out in those 12-18 months would have to wait too.

Seems totally foreign in these days of continuous deployment, doesn’t it?

When I first heard about rapid releases, back at our December All Hands, I had two thoughts. The first was that this was absolutely the right thing to do. When stuff is done we should give it to users. We shouldn’t make them wait, especially when other browsers don’t make them wait.

The second thought was that this was completely audacious and I didn’t know if we could pull it off. Amazingly, it happened, and now Mozilla releases use the train model and leave the station every six weeks.

So now users get features shortly after they are done (big win), but there’s been a lot of fallout. Some of the fallout has been around internal tools breaking – we just pushed out a total death sprint release of Socorro to alleviate some of this, for example. Most of the fallout, however, has been external. I see three main areas, and I’ll talk about each one in turn.

Version numbers

The first thing is pushback on version numbers. I see lots of things like:
“Why is Mozilla using marketing-driven version numbers now?”
“What are they trying to prove?”
“How will I know which versions my addons are compatible with?”
“How will I write code (JS/HTML/CSS) that works on a moving target?”

Version numbers are on the way to becoming much less visible in Firefox, like they are in webapps, or in Chrome, for that matter. (As I heard a Microsoft person say, “Nobody asks ‘Which version of Facebook are you running?’”) So to answer: it’s not marketing-driven. In fact, I think not having big versions full of those twenty new features has been much, much harder for the Engagement (marketing) team to know how to market. I see a lot of rage around version numbers in the newsgroups and on tech news sites (HN, Slashdot, etc), which tells me that we haven’t done a good job communicating this to users. I believe this is a communication issue rather than because it’s a bad idea: nowhere do you see these criticisms of Chrome, which uses the same method.

(This blog post is, in part, my way of trying to help with this.)

Add-on compatibility

The add-ons team has been working really hard to minimize add-on breakage. In realistic terms, most add-ons will continue to work with each new release; they just need a version bump. The team has a process for bumping the compatible versions of an add-on automatically, which solves this problem for add-ons that are hosted on addons.mozilla.org. Self-hosted add-ons will continue to need manual updating, and this has caused problems for people.

The goal is, as I understand it, for add-on authors to use the Add-on SDK wherever possible, which will have versions that are stable for a long time. (Read the Add-ons blog on the roadmap for more information on this.)

Enterprise versions

The other thing that’s caused a lot of stress for people at large companies is the idea that versions won’t persist for a long time. Large enterprises tend not to upgrade desktop software frequently. (This is the sad reason why so many people are still on IE6.)

There is an Enterprise Working Group working on these problems: we are taking it seriously.

finally:

Overall, getting Firefox features to users faster is a good thing. Some of the fallout issues were understood well in advance and had a mitigation plan: add-on incompatibility for example. Some others we haven’t done a good job with.

I truly believe that if we had continued to release Firefox at the rate of a major version every 18 months or so, we would have been on a road to nowhere. We had to get faster. It’s a somewhat risky strategy, but it’s better to take that risk than quietly fade away.

At the end of the day we have to remember the Mozilla mission: to promote openness and innovation on the web. It’s hard to promote innovation within the browser if you ship orders of magnitude more slowly than your competitors.

Notice that I mention the mission: people often don’t know or tend to forget that Mozilla isn’t in this for the money. We’re a non-profit. We’re in it for the good of the web, and we want to do whatever’s needed to make the web a better and more open place. We do these things because we’re passionate about the web.

I’ve never worked anywhere else where the mission and the passion were so prominent. We may sometimes do things you don’t agree with, or communicate them in a less-than-optimal way, but I really want people who read this to understand that our intentions are positive and our goal is the same as it’s always been.

The future of crash reporting

This post first appeared in the Mozilla Webdev Blog on August 5 2010.

In recent blog posts I’ve talked about our plans for Socorro and our move to HBase.

Today, I’d like to invite community feedback on the draft of our plans for Socorro 2.0. In summary, we have been moving our data into HBase, the Hadoop database. In 1.7 we began exclusively using HBase for crash storage. In 1.8 we will move the processors and minidump_stackwalk to Hadoop.

Here comes the future

In 1.9, we will enable pulling data from HBase for the webapp via a web services layer. This layer is also known as “the pythonic middleware layer”. (Nominations for a catchier name are open. My suggestion of calling it “hoopsnake” was not well received.)
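
The idea, roughly, is that the webapp never talks to the data store directly; it asks an HTTP service instead. Here is a toy sketch of that shape - the endpoint path and storage call are invented for the example, not the real middleware's API:

    # Toy illustration of a web services ("middleware") layer in front of crash
    # storage.  The real middleware is more capable; this just shows the shape.
    import json
    from wsgiref.simple_server import make_server

    def fetch_processed_crash(crash_id):
        # Stand-in for the real lookup, e.g. a read from HBase by crash id.
        return {"uuid": crash_id, "signature": "example::signature"}

    def app(environ, start_response):
        path = environ.get("PATH_INFO", "")
        if path.startswith("/crash/"):
            body = json.dumps(fetch_processed_crash(path[len("/crash/"):])).encode("utf-8")
            start_response("200 OK", [("Content-Type", "application/json")])
            return [body]
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]

    if __name__ == "__main__":
        make_server("localhost", 8000, app).serve_forever()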

In 2.0 we will expose HBase functionality to the end user. We also have a number of other improvements planned for the 2.x releases, including:

  • Full text search of crashes
  • Faceted search
  • Ability for users to run MapReduce jobs from the webapp
  • Better visibility for explosive and critical crashes
  • Better post-crash user engagement via email

Full details can be found in the draft PRD. If you prefer the visual approach you can read the slides I presented at the Mozilla Summit last month.

Give us feedback!

We welcome all feedback from the community of users - please take a look and let us know what we’re missing. We’re also really interested in feedback about the best order in which to implement the planned features.

You can send your feedback to laura at mozilla dot com - I look forward to reading it.

Moving Socorro to HBase

This post first appeared in the Mozilla Webdev Blog on July 26 2010.

We’ve been incredibly busy over on the Socorro project, and I have been remiss in blogging. Over the next week or so I’ll be catching up on what we’ve been doing in a series of blog posts. If you’re not familiar with Socorro, it is the crash reporting system that catches, processes, and presents crash data for Firefox, Thunderbird, Fennec, Camino, and SeaMonkey. You can see the output of the system at http://crash-stats.mozilla.com. The project’s code is also being used by people outside Mozilla: most recently Vigil Games are using it to catch crashes from Warhammer 40,000: Dark Millennium Online.

Back in June we launched Socorro 1.7, and we’re now approaching the release of 1.8. In this post, I’ll review what each of these releases represents on our roadmap.

First, a bit of history on data storage in Socorro. Until recently, when crashes were submitted, the collector placed them into storage in the file system (NFS). Because of capacity constraints, the collector follows a set of throttling rules in its configuration file in order to make a decision about how to disseminate crashes. Most crashes go to deferred storage and are not processed unless specifically requested. However, some crashes are queued into standard storage for processing. Generally this has been all crashes from alpha, beta, release candidate and other “special” versions; all crashes with a user comment; all crashes from low volume products such as Thunderbird and Camino; and a specified percentage of all other crashes. (Recently this has been between ten and fifteen percent.)
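
Put into code, the decision the collector makes for each incoming crash looks roughly like this. The real rules and sample rate live in the collector’s configuration file; the conditions below are a simplified sketch of the ones just described.

    # Simplified sketch of collector throttling: decide whether a crash goes to
    # standard storage (queued for processing) or deferred storage (kept but not
    # processed unless specifically requested).
    import random

    LOW_VOLUME_PRODUCTS = {"Thunderbird", "Camino"}
    PRE_RELEASE_MARKERS = ("a", "b", "rc")      # alpha, beta, release candidate builds

    def route_crash(crash, sample_rate=0.15):
        version = crash.get("Version", "")
        if any(marker in version for marker in PRE_RELEASE_MARKERS):
            return "standard"                   # "special" versions are always processed
        if crash.get("Comments"):
            return "standard"                   # crashes with user comments are always processed
        if crash.get("ProductName") in LOW_VOLUME_PRODUCTS:
            return "standard"                   # low volume products are always processed
        if random.random() < sample_rate:       # a configured percentage of the rest
            return "standard"
        return "deferred"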

The monitor process watched standard storage and assigned jobs to processors. A processor would pick up crashes from standard storage, process them, and write them to two places: our PostgreSQL database, and back into file system storage. We had been using PostgreSQL for serving data to the webapp, and the file system storage for serving up the full processed crash.

For some time prior to 1.7, we’d been storing all crashes in HBase in parallel with writing them into NFS. The main goal of 1.7 was to make HBase our chief storage mechanism. This involved rewriting the collector and processor to write into HBase. The monitor also needed to be rewritten to look in HBase rather than NFS for crashes awaiting processing. Finally, we have a web service that allows users to pull the full crash, and this also needed to pull crashes from HBase rather than NFS.

Not long before code freeze, we decided we should add a configuration option to the processor to continue storing crashes in NFS as a fallback, in case we had any problems with the release. This would allow us to do a staged switchover, putting processed crashes in both places until we were confident that HBase was working as intended.
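
Conceptually the fallback was just a config-controlled dual write, something like this sketch (the option and class names are illustrative, not Socorro’s real ones):

    # Staged switchover sketch: always write processed crashes to HBase, and
    # optionally also to NFS so we could fall back if HBase misbehaved.
    class ProcessedCrashStore:
        def __init__(self, hbase_store, nfs_store, keep_nfs_fallback=True):
            self.hbase_store = hbase_store
            self.nfs_store = nfs_store
            self.keep_nfs_fallback = keep_nfs_fallback   # config flag; switched off once HBase proved stable

        def save(self, crash_id, processed_crash):
            self.hbase_store.save(crash_id, processed_crash)
            if self.keep_nfs_fallback:
                self.nfs_store.save(crash_id, processed_crash)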

During the maintenance window for 1.7 we also took the opportunity to upgrade HBase to the latest version. We are now using Cloudera’s CDH2 Hadoop distribution and HBase 0.20.5.

The release went fairly smoothly, and three days later we were able to turn off the NFS fallback.

We’re now in the final throes of 1.8. While we now have crashes stored in HBase, we are still capacity constrained by the number of processors available. In 1.8, the processors and their associated minidump_stackwalk processes will be daemonized and move to run on the Hadoop nodes. This means that we will be able to horizontally scale the number of processors with the size of the data. Right now we are running fifteen Hadoop nodes in production and this is planned to increase over the rest of the year.

Some of the associated changes in 1.8 are also really exciting. We are introducing a new component to the system, called the registrar. This process will track heartbeats for each of the processors. Also in this version, we have added an introspection API for the processors. The registrar will act as a proxy, allowing us to request status and statistical information for each of the processors. We will need to rebuild the status page (visible at http://crash-stats.mozilla.com/status) to use this new API, but we will have much better information about what each processor is doing.
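
The heartbeat-tracking part of the registrar is conceptually simple; here is a toy sketch (names invented for illustration, not the actual implementation):

    # Toy registrar: processors check in periodically, and the registrar knows
    # which ones look alive and can proxy status requests to them.
    import time

    class Registrar:
        def __init__(self, timeout_seconds=300):
            self.timeout_seconds = timeout_seconds
            self.last_heartbeat = {}           # processor name -> unix timestamp

        def heartbeat(self, processor_name):
            """Called by a processor to say it is still alive."""
            self.last_heartbeat[processor_name] = time.time()

        def live_processors(self):
            cutoff = time.time() - self.timeout_seconds
            return [name for name, seen in self.last_heartbeat.items() if seen >= cutoff]

        def status(self, processor_name, fetch_status):
            """Proxy a status request; fetch_status is whatever callable actually
            talks to that processor's introspection API."""
            if processor_name not in self.live_processors():
                return {"processor": processor_name, "state": "unreachable"}
            return fetch_status(processor_name)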

Update: we’re frozen on 1.8 and expect release later this month.

Socorro: Mozilla’s Crash Reporting System

(Cross-posted from the Mozilla WebDev blog.)

Recently, we’ve been working on planning out the future of Socorro.  If you’re not familiar with it, Socorro is Mozilla’s crash reporting system.

You may have noticed that Firefox has become a lot less crashy recently - we’ve seen a 40% improvement over the last five months.  The data from crash reports enables our engineers to find, diagnose, and fix the most common crashes, so crash reporting is critical to these improvements.

On our peak day each week we receive 2.5 million crash reports and process 15% of those, for a total of 50 GB.  In total, we receive around 320 GB each day!  Right now we are handicapped by the limitations of our file system storage (NFS) and our database’s ability to handle really large tables.  However, we are in the process of moving to Hadoop, and currently all our crashes are also being written to HBase.  Soon this will become our main data storage, and we’ll be able to do a lot more interesting things with the data.  We’ll also be able to process 100% of crashes.  We want to do this because the long tail of crashes is increasingly interesting, and we may be able to get insights from the data that were not previously possible.

I’ll start by taking a look at how things have worked to date.

History of Crash Reporting

[Diagram: Current Socorro Architecture]

The data flows as follows:

  • When Firefox crashes, the crash is submitted to Mozilla by a part of the browser known as Breakpad.  At Mozilla’s end, this is where Socorro comes into play.
  • Crashes are submitted to the collector, which writes them to storage.
  • The monitor watches for crashes arriving, and queues some of them for processing.  Right now, we throttle the system to only process 15% of crashes due to capacity issues.  (We also pick up and transform other crashes on demand as users request them.)
  • Processors pick up crashes and process them.  A processor gets its next job from a queue in our database and invokes minidump_stackwalk (a part of Breakpad), which combines the crash with symbols, where available.  The results are written back into the database.  Some further processing to generate reports (such as top crashes) is done nightly by a set of cron jobs.  (A sketch of this loop appears just after this list.)
  • Finally, the data is available to Firefox and Platform engineers (and anyone else that is interested) via the webui, at http://crash-stats.mozilla.com
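
Here is the promised sketch of the processor loop. The queue and storage interfaces are illustrative; minidump_stackwalk is the real Breakpad tool, shown here with a simplified invocation.

    # Simplified processor loop: claim a queued crash, run minidump_stackwalk
    # over the dump with the available symbols, and store the result.
    import subprocess

    def process_next_crash(job_queue, crash_store, symbols_path):
        crash_id = job_queue.claim_next()       # next job from the database queue
        if crash_id is None:
            return False                        # nothing waiting right now
        dump_path = crash_store.raw_dump_path(crash_id)
        result = subprocess.run(
            ["minidump_stackwalk", dump_path, symbols_path],
            capture_output=True, text=True, check=True,
        )
        crash_store.save_processed(crash_id, {"uuid": crash_id,
                                              "stackwalk_output": result.stdout})
        job_queue.mark_done(crash_id)
        return True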

Implementation Details

  • The collector, processor, monitor and cron jobs are all written in Python.
  • Crashes are currently stored in NFS, and processed crash information in a PostgreSQL database.
  • The web app is written in PHP (using the Kohana framework) and draws data both from Postgres and from a Pythonic web service.

Roadmap

Future Socorro releases are a joint project between Webdev, Metrics, and IT.  Some of our milestones focus on infrastructure improvements, others on code changes, and still others on UI improvements.  Features generally work their way through to users in this order.

  • 1.6 - 1.6.3 (in production)

    The current production version is 1.6.3, which was released last Wednesday.  We don’t usually do second-dot (x.y.z) point releases, but we did 1.6.1, 1.6.2, and 1.6.3 to get Out Of Process Plugin (OOPP) support out to engineers as it was implemented.

    When an OOPP becomes unresponsive, a pair of twin crashes are generated: one for the plugin process and one for the browser process.  For beta and pre-release products, both of these crashes are available for inspection via Socorro.  Unfortunately, Socorro throttles crash submissions from released products due to capacity constraints.  This means one or the other of the twins may not be available for inspection.  This limitation will vanish with the release of Socorro 1.8.

    You can now see whether a given crash signature is a hang or a crash, and whether it was plugin or browser related.  In the signature tables, if you see a stop sign symbol, that’s a hang.  A window means it is crash report information from the browser, and a small blue brick means it is crash report information from the plugin.

    If you are viewing one half of a hang pair for a pre-release or beta product, you’ll find a link to the other half at the top right of the report.

    You can also limit your searches (using the Advanced Search Filters) to look just at hangs or just at crashes, or to filter by whether a report is browser or plugin related.

  • 1.7 (Q2)

    We are in the process of baking 1.7.  The key feature of this release is that we will no longer be relying on NFS in production. All crash report submissions are already stored in HBase, but with Socorro 1.7, we will retrieve the data from HBase for processing and store the processed result back into HBase.

  • 1.8 (Q2)

    In 1.8, we will migrate the processors and minidump_stackwalk instances to run on our Hadoop nodes, further distributing our architecture.  This will give us the ability to scale up to the amount of data we have as it grows over time. You can see how this will simplify our architecture in the following diagram.

    [Diagram: New Socorro Architecture]

    With this release, the 15% throttling of Firefox release channel crashes goes away entirely.

  • 2.0 (Q3 2010)

    You may have noticed 1.9 is missing.  In this release we will be making the power of HBase available to the end user, so expect some significant UI changes.

    Right now we are in the process of specifying the PRD for 2.0.  This involves interviewing a lot of people on the Firefox, Platform, and QA teams.  If we haven’t scheduled you for an interview and you think we ought to talk to you, please let us know.

Features under consideration

  • Full text search of crashes
  • Faceted search: start by finding crashes that match a particular signature, and then drill down into them by category.
    Which of these crashes involved a particular extension or plugin?  Which ones occurred within a short time after startup?
  • The ability to write and run your own Map/Reduce jobs (training will be provided)
  • Detection of “explosive crashes” that appear quickly
  • Viewing crashes by “build time” instead of clock time
  • Classification of crashes by component

This is a big list, obviously. We need your feedback - what should we work on first?

One thing that we’ve learned so far through the interviews is that people are not familiar with the existing features of Socorro, so expect further blog posts with more information on how best to use it!

How to get involved

As always, we welcome feedback and input on our plans.

You can contact the team at socorro-dev@mozilla.com, or me personally at laura@mozilla.com.

In addition, we always welcome contributions.  You can find our code repository at
http://code.google.com/p/socorro/

We hold project meetings on a Wednesday afternoon - details and agendas are here
https://wiki.mozilla.org/Breakpad/Status_Meetings

Foxes

We have a litter of fox kits in the back field at our house.  Today we managed to catch them on film.  Please enjoy our very own foxkehs.  :)

[Photo: Foxkeh?]

Edited to add: Some people apparently don’t know Foxkeh…for comparison:
[Image: Foxkeh, © 2006 Mozilla Japan]

You can view the whole set here:
http://flickr.com/photos/lauraxthomson/sets/72157605003262452/

Firefox 3 Beta 3 Add-ons Manager and Add-ons API

Yesterday, beta 3 of Firefox 3 was released to the world.  This beta contains the new Add-ons Manager, and people seem to be liking it so far - Ars Technica says:

    One of the most promising and impressive new features in beta 3 is an integrated add-on installer system that allows users to search for and install add-ons from addons.mozilla.org directly through the Add-ons Manager user interface.

The new Add-ons Manager is the result of collaboration between a bunch of smart Mozilla people - Madhava Enros and Dave Townsend, to name two - with a small contribution from yours truly.

The Add-ons Manager pulls data about recommended add-ons and search results from the main addons.mozilla.org (AMO) website via the AMO API, which is my project.  When you ask for a recommendation, the Add-ons Manager pulls a RESTian URL like

https://services.addons.mozilla.org/en-US/firefox/api/list/recommended/all/

checks that list against the add-ons you already have installed, and displays details of the ones you don’t yet have.
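
If you’re curious, a client of the list API does little more than the following sketch. The XML element names are my guess at the shape of the response; the draft documentation linked below has the real format.

    # Sketch of an API client: fetch the recommended add-ons feed, drop anything
    # already installed, keep the rest for display.  Element names illustrative.
    import urllib.request
    import xml.etree.ElementTree as ET

    API_URL = "https://services.addons.mozilla.org/en-US/firefox/api/list/recommended/all/"

    def recommended_addons(installed_guids):
        with urllib.request.urlopen(API_URL) as response:
            tree = ET.parse(response)
        addons = []
        for addon in tree.getroot().iter("addon"):
            guid = addon.findtext("guid")
            if guid and guid not in installed_guids:
                addons.append({"guid": guid, "name": addon.findtext("name")})
        return addons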

The API is already available to the community as well, and will be promoted once testing is complete.  If you’d like to experiment with the API, draft documentation is available at
http://wiki.mozilla.org/Update:Remora_API_Docs
(This will move to the Mozilla Developer Center once it’s more fleshed out.)  Please file any bugs you find.

I’m still working on tweaks and bug fixes: I’ve already fixed a bunch of character encoding issues in different languages, and applied some performance tweaks. (Some of these are still to go into production.)  Right now, I’m working on speeding up search.  Search is slow on the whole of AMO, and later this year I plan to implement a full text search.  Right now it’s just tweaking - it’s slow because when you search, all the possible translations are searched (think many left joins), and the plan is to rejig the database to search only your local translation plus English (since many add-ons are only available in English, and we wouldn’t want you to miss out).
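
To make that concrete, the narrowed query would have roughly this flavour. The schema and SQL here are invented to illustrate the idea; AMO’s real tables are different.

    # Hypothetical illustration: search only the requesting locale plus en-US,
    # instead of left-joining every translation of every add-on.
    def build_search_sql(user_locale, query):
        sql = """
            SELECT a.id, t.name, t.summary
            FROM addons a
            JOIN translations t ON t.addon_id = a.id
            WHERE t.locale IN (%s, 'en-US')
              AND (t.name LIKE %s OR t.summary LIKE %s)
        """
        pattern = "%" + query + "%"
        return sql, (user_locale, pattern, pattern)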

Anyway, it’s been great fun working on this project so far, and it’s incredibly rewarding to think that something I wrote is incorporated into my favorite browser.