Monday 30 November 2009

Additional functionality for Google Chrome's Developer Tools

Over the last few weeks, Google Chrome's developer tools have become much more useful. Besides benefiting from the work the WebKit team has done to improve Web Inspector (our developer tools are partially based on Web Inspector), we also recently released the heap profiler and the timeline tab in Google Chrome's Developer Channel.

With the heap profiler you can now take a snapshot of the JavaScript heap at any point in time. A heap snapshot helps you understand memory usage, and by comparing snapshots you can also follow memory usage over time. You will find the heap profiler in the profiles tab along with the sample-based CPU profiler.
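For example, one pattern that comparing two snapshots makes visible is a collection that grows forever because nothing releases its entries. A minimal sketch (the code and element id below are hypothetical, purely for illustration):

    // Hypothetical leak: an ever-growing cache that nothing ever clears.
    // Taking a heap snapshot before and after a few clicks would show the
    // number of retained strings (and the total heap size) climbing.
    var cache = [];

    function onButtonClick() {
      // Each click retains another large string forever.
      cache.push(new Array(10000).join('x'));
    }

    // Assumes an element with id="button" exists on the page.
    document.getElementById('button').onclick = onButtonClick;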

The new timeline view gives you a complete overview of where time is spent when loading a web app. All events -- from loading resources, through parsing and executing JavaScript, to calculating styles and repainting -- are plotted on a timeline.

Besides these product improvements, we've tried to make Google Chrome's Developer Tools easier to find and understand by putting together a mini site with tutorials and videos.



To take our newest release for a spin, get Google Chrome from the Developer Channel and you'll automatically be brought up to date. We welcome your feedback and your contributions to improve developer tools in WebKit and Google Chrome even more.

Friday 20 November 2009

Captions available for all Google I/O videos

We work hard to make sure that the videos on the GoogleDevelopers channel on YouTube are captioned, but when I/O added over a hundred hours of video content, we got a little behind. I'm happy to announce that we're finally caught up! Every English and Spanish video from I/O now has captions that you can turn on in YouTube.

Didn't know we had captions? Just click to select captions from the menu in the lower right corner of the video player.

Some caption and subtitle-related news:
  • A group of volunteers from Russia used the translated.by software to crowdsource translation for Google Wave video captions. Thank you, habratranslation! Check out one of the Wave videos with Russian subtitles. (You have to choose Russian from the caption menu in YouTube to see them.)

  • If you'd like to help translate captions for any of our videos, please email google-video-captions@googlegroups.com with a request. We'd be happy to share any caption files that you might be interested in under a Creative Commons Attribution license. If you send us the translation, we'll credit you in the video caption track and blog about how awesome you are.

  • In addition to machine translation for captions, YouTube now provides experimental automatic caption transcription using the same speech recognition algorithms found in Google Voice. The GoogleDevelopers channel is part of the initial pilot, so this feature is available on many of our videos. To learn more, check out the blog post on the Official Google Blog.

Wednesday 18 November 2009

The latest addition to Google's open source projects

Did you know Google has released more than 300 open source projects to date? Yesterday, we announced the latest addition to Google's open source projects - YouTube Direct, a new tool that enables any developer to solicit video submissions, then moderate and display them on their website, all powered by YouTube. We recognize the role that open source plays at Google in helping us create better applications, and we try to give back to the community as much as possible.

YouTube Direct was built on top of YouTube's public APIs and is designed to run on Google App Engine - Google's highly scalable platform. To date, several media organizations like ABC News, The Huffington Post, and Politico have taken advantage of the open platform to deploy their own versions of YouTube Direct to empower citizen journalism and enrich their sites in the process. We look forward to seeing more creative uses of the tool.

Welcome to Google Developer Relations, Don!

A couple of days ago, Google welcomed Don Dodge to our Developer Relations team, where he joins us as a Developer Advocate working with developers, startups, and other Google Apps partners. We expect Don to be a fantastic addition to our team. He's already a prominent voice in the developer community, well known and highly regarded among entrepreneurs, technologists, and the media.

In the TechCrunch post first announcing Don's availability, Michael Arrington wrote that Don "makes a big effort to give young startups the attention they deserve. This is a guy who gives a heck of a lot more to the community than he ever takes back." This dedication to the community of developers and the businesses they build is one of the things that excites us most about having Don on our team. These businesses have been central to Google's success over the years, so we already know that Don's attitude will fit right in with our efforts. Don has deep experience working in startups from his days at companies like AltaVista, Napster, and Groove Networks, and he has maintained his connection to, and passion for, that community since leaving their ranks to join Microsoft, and now Google. We are eager for Don to share his personal experience and professional insights with developers and small businesses integrating with Google Apps, and to be an advocate for developers and partners inside the company.

Don already wrote about his first day on the job at Google. Tomorrow you can hear him speak on the Enterprise Cloud Summit Panel in New York City. You can follow Don on his personal blog, email him at dondodge at google.com, or follow @dondodge on Twitter.

Tuesday 10 November 2009

Go: A New Programming Language

Have you heard about Go? We released a new, experimental systems programming language today. It is open source and we're excited about sharing it with the development community. For more information, check out the Google Open Source blog.

Monday 9 November 2009

Use compression to make the web faster

Every day, more than 99 human years are wasted because of uncompressed content. Although support for compression is a standard feature of all modern browsers, there are still many cases in which users of these browsers do not receive compressed content. This wastes bandwidth and slows down users' interactions with web pages.

Uncompressed content hurts all users. For bandwidth-constrained users, it takes longer just to transfer the additional bits. For users on broadband connections, even though the bits are transferred quickly, it takes several round trips between client and server before the two can communicate at the highest possible speed; for these users, the number of round trips is the larger factor in determining the time required to load a web page. Even for well-connected users, these round trips often take tens of milliseconds and sometimes well over one hundred milliseconds.

In Steve Souders' book Even Faster Web Sites, Tony Gentilcore presents data showing the increase in page load time when compression is disabled. We've reproduced, with permission, the results for the three highest-ranked sites from the Alexa top 100:

Data, with permission, from Steve Souders, "Chapter 9: Going Beyond Gzipping," in Even Faster Web Sites (Sebastopol, CA: O'Reilly, 2009), 122.


The data from Google's web search logs show that the average page load time for users getting uncompressed content is 25% higher than for users getting compressed content. In a randomized experiment where we forced compression for some users who would otherwise not get compressed content, we measured a latency improvement of 300ms. While this experiment likely did not capture the full difference, that is probably because users who do not normally receive compressed content tend to have older computers and older software.

We have found that there are four major reasons why users do not get compressed content: anti-virus software, browser bugs, web proxies, and misconfigured web servers. The first three modify the web request so that the web server does not know that the browser can uncompress content. Specifically, they remove or mangle the Accept-Encoding header that is normally sent with every request.
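To make that negotiation concrete, here is a minimal sketch of a server that compresses only when the Accept-Encoding header survives the round trip. (Node.js and its bundled zlib module are an assumption for illustration, not something the measurements above depend on.)

    // Serve gzip-compressed content only when the client's Accept-Encoding
    // header advertises gzip support. If anti-virus software or a proxy
    // strips the header, the server falls back to uncompressed content.
    var http = require('http');
    var zlib = require('zlib');

    http.createServer(function (req, res) {
      var body = '<html><body>Hello, compressed world!</body></html>';
      var acceptEncoding = req.headers['accept-encoding'] || '';

      if (/\bgzip\b/.test(acceptEncoding)) {
        zlib.gzip(body, function (err, compressed) {
          res.writeHead(200, {
            'Content-Type': 'text/html',
            'Content-Encoding': 'gzip'
          });
          res.end(compressed);
        });
      } else {
        // No (surviving) Accept-Encoding header: send plain content.
        res.writeHead(200, {'Content-Type': 'text/html'});
        res.end(body);
      }
    }).listen(8080);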

Anti-virus software may try to minimize CPU operations by intercepting and altering requests so that web servers send back uncompressed content.  But if the CPU is not the bottleneck, the software is not doing users any favors.  Some popular antivirus programs interfere with compression.  Users can check if their anti-virus software is interfering with compression by visiting the browser compression test page at Browserscope.org.

By default, Internet Explorer 6 downgrades to HTTP/1.0 when behind a proxy, and as a result does not send the Accept-Encoding request header. The table below, generated from Google's web search logs, shows that IE 6 represents 36% of all search results that are sent without compression.  This number is far higher than the percentage of people using IE 6.

Data from Google Web Search Logs

There are a handful of ISPs where the percentage of uncompressed content is over 95%. One likely hypothesis is that an ISP proxy or a corporate proxy removes or mangles the Accept-Encoding header. As with anti-virus software, a user who suspects an ISP is interfering with compression should visit the browser compression test page at Browserscope.org.

Finally, in many cases, users are not getting compressed content because the websites they visit are not compressing their content.  The following table shows a few popular websites that do not compress all of their content. If these websites were to compress their content, they could decrease the page load times by hundreds of milliseconds for the average user, and even more for users on modem connections.

Data generated using Page Speed

To reduce uncompressed content, we all need to work together.
  • Corporate IT departments and individual users can upgrade their browsers, especially if they are using IE 6 with a proxy. Using the latest version of Firefox, Internet Explorer, Opera, Safari, or Google Chrome will increase the chances of getting compressed content. A recent editorial in IEEE Spectrum lists additional reasons - besides compression - for upgrading from IE 6.
  • Anti-virus software vendors can handle compression properly by no longer removing or mangling the Accept-Encoding header in upcoming releases of their software.
  • ISPs that use an HTTP proxy which strips or mangles the Accept-Encoding header can upgrade, reconfigure, or install a better proxy which doesn't prevent their users from getting compressed content.
  • Webmasters can use Page Speed (or other similar tools) to check that the content of their pages is compressed.
For more articles on speeding up the web, check out http://code.google.com/speed/articles/.

Thursday 5 November 2009

Introducing Closure Tools

Millions of Google users worldwide use JavaScript-intensive applications such as Gmail, Google Docs, and Google Maps. Like developers everywhere, Googlers want great web apps to be easier to create, so we've built many tools to help us develop these (and many other) apps. We're happy to announce the open sourcing of these tools, and proud to make them available to the web development community.

Closure Compiler
Closure Compiler is a JavaScript optimizer that compiles web apps down into compact, high-performance JavaScript code. The compiler removes dead code, then rewrites and minimizes what's left so that it will run fast on browsers' JavaScript engines. The compiler also checks syntax, variable references, and types, and warns about other common JavaScript pitfalls. These checks and optimizations help you write apps that are less buggy and easier to maintain. You can use the compiler with Closure Inspector, a Firebug extension that makes debugging the obfuscated code almost as easy as debugging the human-readable source.
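To make the effect concrete, here is a small illustrative input and the kind of output the compiler produces (the exact output varies by compiler version and flags, so treat the compiled line as a sketch):

    // Input to the compiler:
    function sayHello(name) {
      var greeting = 'Hello, ' + name + '!';
      alert(greeting);
    }
    sayHello('Closure');

    // With ADVANCED_OPTIMIZATIONS, the compiler can inline the call, fold
    // the constant, and drop the now-unused function, leaving roughly:
    //   alert("Hello, Closure!");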

Because JavaScript developers are a diverse bunch, we've set up a number of ways to run the Closure Compiler. We've open-sourced a command-line tool. We've created a web application that accepts your code for compilation through a text box or a RESTful API. We are also offering a Firefox extension that you can use with Page Speed to conveniently see the performance benefits for your web pages.

Closure Library
Closure Library is a broad, well-tested, modular, and cross-browser JavaScript library. Web developers can pull just what they need from a wide set of reusable UI widgets and controls, as well as lower-level utilities for the DOM, server communication, animation, data structures, unit testing, rich-text editing, and much, much more. (Seriously. Check the docs.)

JavaScript lacks a standard class library like the STL or JDK. At Google, Closure Library serves as our "standard JavaScript library" for creating large, complex web applications. It's purposely server-agnostic and intended for use with the Closure Compiler. You can make your project big and complex (with namespacing and type checking), yet small and fast over the wire (with compilation). The Closure Library provides clean utilities for common tasks so that you spend your time writing your app rather than writing utilities and browser abstractions.
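As a small taste of the library, here is a minimal sketch (it assumes Closure's base.js is already loaded on the page):

    // Pull in only what we need; the dependency system resolves goog.dom.
    goog.require('goog.dom');

    function showGreeting() {
      // createDom builds an element cross-browser: tag, attributes, children.
      var greeting = goog.dom.createDom('div', {'class': 'greeting'},
                                        'Hello from Closure Library!');
      goog.dom.appendChild(document.body, greeting);
    }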

Closure Templates
Closure Templates grew out of a desire for web templates that are precompiled to efficient JavaScript.  Closure Templates have a simple syntax that is natural for programmers.  Unlike traditional templating systems, you can think of Closure Templates as small components that you compose to form your user interface, instead of having to create one big template per page.

Closure Templates are implemented for both JavaScript and Java, so you can use the same templates both on the server and client side.
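A minimal sketch of what this looks like in practice (the file name, namespace, and element id are illustrative):

    // greet.soy:
    //   {namespace examples}
    //
    //   /** Greets a person. @param name The name to greet. */
    //   {template .greet}
    //     Hello {$name}!
    //   {/template}
    //
    // After compiling greet.soy to JavaScript with the Closure Templates
    // compiler, the template is callable as a plain function:
    var html = examples.greet({name: 'Closure'});
    document.getElementById('output').innerHTML = html;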


Closure Compiler, Closure Library, Closure Templates, and Closure Inspector all started as 20% projects and hundreds of Googlers have contributed thousands of patches. Today, each Closure Tool has grown to be a key part of the JavaScript infrastructure behind web apps at Google.  That's why we're particularly excited (and humbled) to open source them to encourage and support web development outside Google. We want to hear what you think, but more importantly, we want to see what you make. So have at it and have fun!

Wednesday 4 November 2009

New personalization features in Google Friend Connect

Today, we're excited to announce several new features for Google Friend Connect that make it possible for website owners to get to know their users, encourage users to get to know each other, and match their site content (including Google ads) to visitors' interests. Check out the Google Social Web Blog for an overview of these new features.

We also want to point out that there are APIs for developers who want to work with the interests data programmatically. As described on the Social Web Blog, developers can write custom polls and access the interests data directly. Friend Connect provides API-level access to both individual interests information and aggregate information for all users of a site. Interests information can be added programmatically for the signed-in user or via the poll gadget, and it can be accessed via both the JavaScript API and the OpenSocial REST API. The Guitar Universe example site should give you an idea of some of the things that are possible with this new launch.
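Since Friend Connect exposes the standard OpenSocial JavaScript API, the general request pattern looks like the sketch below; the interests-specific calls are covered in the API documentation, and this only shows fetching the signed-in viewer:

    // General OpenSocial request pattern used in Friend Connect gadgets:
    // fetch the signed-in viewer, then work with the returned person.
    function loadViewer() {
      var req = opensocial.newDataRequest();
      req.add(req.newFetchPersonRequest(opensocial.IdSpec.PersonId.VIEWER),
              'viewer');
      req.send(function (response) {
        if (!response.hadError()) {
          var viewer = response.get('viewer').getData();
          alert('Hello, ' + viewer.getDisplayName() + '!');
        }
      });
    }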

As always, feel free to ask technical questions related to the Friend Connect APIs in the developer forum.

Tuesday 3 November 2009

OAuth Enhancements

Google has recently added three important enhancements to our OAuth support:
  1. The ability to use OAuth without registration
  2. Support for software apps installed on a computer or mobile phone
  3. Additional controls for our Google Apps Premier and Education customers, allowing administrators to give another web application access to a subset of the data Google stores for that organization
Below is an overview of each enhancement, or you can refer to our updated OAuth documentation.

1. The ability to use OAuth without registration

Based on consistent feedback from our developers, we added the ability to use OAuth without having to register the website ahead of time. This change is especially helpful for developers working on test servers that cannot be accessed directly from the Internet.
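A rough sketch of the first leg of the flow for an unregistered consumer follows. Unregistered applications sign with the literal consumer key and secret "anonymous" (Google then warns the user that the application is unverified); Node.js and the simplified percent-encoding here are assumptions for illustration.

    // Sign an OAuthGetRequestToken call for an unregistered consumer.
    // (Strict OAuth percent-encoding differs slightly from
    // encodeURIComponent; a real client should use an OAuth library.)
    var crypto = require('crypto');

    var url = 'https://www.google.com/accounts/OAuthGetRequestToken';
    var params = {
      oauth_consumer_key: 'anonymous',
      oauth_nonce: String(Math.random()).slice(2),
      oauth_signature_method: 'HMAC-SHA1',
      oauth_timestamp: String(Math.floor(Date.now() / 1000)),
      oauth_version: '1.0',
      oauth_callback: 'http://localhost:8080/callback',  // test servers work
      scope: 'https://www.google.com/calendar/feeds/'    // example scope
    };

    // Base string: method, encoded URL, and sorted, encoded parameters.
    var query = Object.keys(params).sort().map(function (k) {
      return encodeURIComponent(k) + '=' + encodeURIComponent(params[k]);
    }).join('&');
    var base = ['GET', encodeURIComponent(url),
                encodeURIComponent(query)].join('&');

    // The signing key is the consumer secret ("anonymous") plus '&' plus
    // the (empty) token secret. The signed request is then sent, and the
    // flow continues with OAuthAuthorizeToken and OAuthGetAccessToken.
    params.oauth_signature =
        crypto.createHmac('sha1', 'anonymous&').update(base).digest('base64');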

2. Support for software apps installed on a computer or mobile phone

Many of the larger enterprises that use the Google Apps service choose to run their own login system. They accomplish this by leveraging our support for the SAML protocol, which defines a way for Google to redirect the user to the company's login system to be authenticated before accessing their mailbox at Google. However, in this situation Google normally does not have a password for the user — especially if the enterprise authenticates the user with a password and a second factor of authentication (such as a token generator they carry on a keychain).

Unfortunately, there are many installed software applications created by both Google and ISV developers that use Google's APIs, and those applications are hardcoded to ask a user for their email and password using Google's ClientLogin API. With this new OAuth feature, the software application can instead launch a web browser and start a process that both logs the user in through their central SAML login system and gets the user's consent to access their data hosted at Google. Because the user authentication is done in the web browser, it works with the enterprise's existing login system. Google is encouraging any ISV that uses the ClientLogin API to add support for this new OAuth flow, enabling usage by the large enterprise customers described above. Google is also planning to enhance our Google Apps Sync for Microsoft Outlook to support this feature, so that Outlook can be used with both Google Apps and an enterprise's central login system.

3. Additional controls for our Google Apps Premier and Education customers, allowing administrators to give another web application access to a subset of the data Google stores for that organization

This feature for our Google Apps Premier customers enhances our existing OAuth support for Google Apps domain administrators, also known as 2-legged OAuth. It enables domain administrators to allow specific IT apps or third-party web services limited access to user accounts via a centralized permissions system under the control of the domain administrator. For example, with this new system, an administrator can use the Google Documents API to configure every user in the domain to have a Google Docs folder named "Human Resources" that is automatically populated with common employee forms. The company might also sign up with an Enterprise SaaS vendor such as Manymoon and specify that Manymoon can access the Google Calendars of all of their users, providing tighter integration with Manymoon's project scheduling features. Previously, this feature required giving the third-party vendor access to all of the data that Google stored for that organization, but with this new feature, administrators can limit access to particular data sources (Calendar, Documents, etc.). Refer to our documentation for more information.
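As a rough sketch, a 2-legged request differs from the 3-legged flow in that there is no per-user token: the domain's consumer key and secret sign the request, and an xoauth_requestor_id parameter names the user being acted for (the domain, user, and feed URL below are illustrative):

    // 2-legged OAuth: the domain's key and secret sign the request, and
    // xoauth_requestor_id identifies the user to act on behalf of.
    // Signing works exactly as in the earlier sketch, with an empty
    // token secret.
    var url = 'https://www.google.com/calendar/feeds/default/private/full';
    var params = {
      oauth_consumer_key: 'example.com',           // the Apps domain
      oauth_signature_method: 'HMAC-SHA1',
      // ...nonce, timestamp, version, and signature as before...
      xoauth_requestor_id: 'employee@example.com'  // the user to act as
    };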

Hybrid Onboarding

Do you operate a website and wish you could increase the percentage of users who finish the registration process? As discussed on Google's main blog, Google has been working with Plaxo and Facebook to improve the registration success rate for Gmail users. We now see success rates as high as 90%, compared to the 50-60% rate that most websites see with traditional registration mechanisms. This result was achieved using a combination of our OpenID, OAuth and Portable Contacts APIs. While those APIs have been available for over a year, we have added a number of refinements based on our experience with Plaxo and Facebook. Our documentation now has information on those new features, including:
  • OpenID User Interface Extension 1.0 (including the ability to display the favicon of the website)
  • x-has-session, an enhancement to checkid_immediate requests via the UI extension. If the request includes "openid.ui.x-has-session", it will be echoed in the response only if Google detects an authenticated session
  • Support for the US Government's GSA profile for OpenID
  • PAPE (Provider Authentication Policy Extension) to support forced password reprompts
  • Support for not only Google Accounts, but also our Google Apps customers, as discussed on the Enterprise blog

For more details, please refer to our OpenID documentation.
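To tie several of these together, here is a sketch of the query parameters for a checkid_immediate request to Google's OpenID endpoint that uses both the UI extension's x-has-session flag and the hybrid OpenID/OAuth extension (the consumer, return URL, and scope values are illustrative):

    // Parameters for a checkid_immediate request to Google's OpenID
    // endpoint (https://www.google.com/accounts/o8/ud).
    var params = {
      'openid.ns': 'http://specs.openid.net/auth/2.0',
      'openid.mode': 'checkid_immediate',
      'openid.claimed_id': 'http://specs.openid.net/auth/2.0/identifier_select',
      'openid.identity': 'http://specs.openid.net/auth/2.0/identifier_select',
      'openid.return_to': 'https://www.example.com/openid/return',

      // UI extension: x-has-session is echoed back only if Google
      // detects an authenticated session for the user.
      'openid.ns.ui': 'http://specs.openid.net/extensions/ui/1.0',
      'openid.ui.x-has-session': 'true',

      // Hybrid OpenID/OAuth: also request an OAuth token scoped to the
      // user's address book (Portable Contacts).
      'openid.ns.oauth': 'http://specs.openid.net/extensions/oauth/1.0',
      'openid.oauth.consumer': 'www.example.com',
      'openid.oauth.scope': 'http://www-opensocial.googleusercontent.com/api/people/'
    };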

While these technologies are all standards-based, the methods for how to combine them to achieve this success rate are not obvious, and took a while for the industry to refine. More information is available in the Hybrid Onboarding Guide, but below is a quick summary of some of the best practices for this hybrid onboarding technique:
  • The technique is primarily for websites with an existing login system based on email addresses.
  • It also assumes the website will send email to users who are not yet registered, whether it is through traditional email marketing or social network invitations.
  • The website owner then needs to choose a small set of email providers such as Yahoo and Google that support these standards.
  • Whenever the website sends email to a user at one of those providers, any hyperlinks that promote registration at the website should be modified to communicate the email address (or at least domain) of the user back to the website's registration page.
  • If the registration page detects a user from one of these domains, it should NOT start the traditional process of asking the user to enter a password, password confirmation, and email. Instead, it should prominently show a single button that says "Sign up with your Google Account" — where Google is replaced with the name of the email provider.
  • If the user clicks that button, the website should use the OpenID protocol to ask the email provider to authenticate the user, provide their email address, and optionally ask for access to their address book using the hybrid OpenID/OAuth protocol and the Portable Contacts API. More details about this flow are available on the OpenID blog.
  • Once the user returns to the website, it can create an account entry for the user. The website can also mark the email address as verified without having to send a traditional "email verification" link to the user. If the website received the user's permission to access their address book, it can now download it and look for information about the user's friends.
    • In the unusual case where an account already exists for that email address, the website can simply log the user into that pre-existing account. 
  • For any newly registered user, the website should then display a page that confirms the user is registered and that indicates how they should sign in in the future.
  • To make the login process simple, the website should modify their login box to include a logo for each of the trusted email providers it supports, or use one of the other user experiences for Federated Login.
  • If a user clicks the email provider button, they can again be sent to that provider's site using the OpenID protocol. When the user comes back, the website can either detect that they previously registered, or if it is a new user, the website can create an account for them on the fly.
    • In some cases the account may already exist for that email address, but it was not initially registered using OpenID. In that case, the website can simply log the user in to that pre-existing account.