Thursday 30 September 2010

WebP, a new image format for the Web

Cross-posted from the Chromium Blog

As part of Google’s initiative to make the web faster, over the past few months we have released a number of tools to help site owners speed up their websites. We launched the Page Speed Firefox extension to evaluate the performance of web pages and get suggestions on how to improve them, we introduced the Speed Tracer Chrome extension to help identify and fix performance problems in web applications, and we released the Closure Tools to help build rich web applications with fully optimized JavaScript code. While these tools have been incredibly successful in helping developers optimize their sites, our progress reviews keep pointing to a single component of web pages that is consistently responsible for the majority of latency across the web: images.

Most of the common image formats on the web today were established over a decade ago and are based on technology from around that time. Some engineers at Google decided to figure out if there was a way to further compress lossy images like JPEG to make them load faster, while still preserving quality and resolution. As part of this effort, we are releasing a developer preview of a new image format, WebP, that promises to significantly reduce the byte size of photos on the web, allowing web sites to load faster than before.

Images and photos make up about 65% of the bytes transmitted per web page today. They can significantly slow down a user’s web experience, especially on bandwidth-constrained networks such as a mobile network. Images on the web consist primarily of lossy formats such as JPEG, and to a lesser extent lossless formats such as PNG and GIF. Our team focused on improving compression of the lossy images, which constitute the larger percentage of images on the web today.

To improve on the compression that JPEG provides, we used an image compressor based on the VP8 codec that Google open-sourced in May 2010. We applied the techniques from VP8 video intra frame coding to push the envelope in still image coding. We also adapted a very lightweight container based on RIFF. While this container format contributes a minimal overhead of only 20 bytes per image, it is extensible, allowing authors to save metadata they would like to store.
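To make that container concrete, here is a minimal sketch (not code from the WebP release itself) that inspects the RIFF wrapper of a .webp file using only the Python standard library. It assumes the standard RIFF layout: a four-byte 'RIFF' tag, a little-endian size field, a 'WEBP' form type, and then the first payload chunk; those 12 bytes plus the 8-byte chunk header account for roughly the 20 bytes of overhead mentioned above.

```python
import struct

def read_webp_header(path):
    """Sketch: inspect the RIFF wrapper around a WebP image."""
    with open(path, "rb") as f:
        header = f.read(16)
    # Layout assumed: 'RIFF' | uint32 size (little-endian) | 'WEBP' | first chunk tag
    if len(header) < 16 or header[:4] != b"RIFF" or header[8:12] != b"WEBP":
        raise ValueError("not a RIFF/WEBP file")
    riff_size = struct.unpack("<I", header[4:8])[0]   # bytes remaining after this field
    first_chunk = header[12:16].decode("ascii")       # e.g. 'VP8 ' for lossy image data
    return riff_size, first_chunk

# Hypothetical usage:
# print(read_webp_header("photo.webp"))
```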

While the benefits of a VP8 based image format were clear in theory, we needed to test them in the real world. In order to gauge the effectiveness of our efforts, we randomly picked about 1,000,000 images from the web (mostly JPEGs and some PNGs and GIFs) and re-encoded them to WebP without perceptibly compromising visual quality. This resulted in an average 39% reduction in file size. We expect that, in practice, developers will achieve even better file size reductions with WebP when starting from an uncompressed image.

To help you assess WebP’s performance against other formats, we have shared a selection of open-source and classic images along with file sizes so you can visually compare them on this site. We are also releasing a conversion tool that you can use to convert images to the WebP format. We’re looking forward to working with the browser and web developer community on the WebP spec and on adding native support for WebP. While WebP images can’t be viewed until browsers support the format, we are developing a patch for WebKit to provide native support for WebP in an upcoming release of Google Chrome. We plan to add support for a transparency layer, also known as an alpha channel, in a future update.

We’re excited to hear feedback from the developer community on our discussion group, so download the conversion tool, try it out on your favorite set of images, and let us know what you think.

Monday 27 September 2010

New OAuth support for Google Apps APIs

Cross-posted from the Google Enterprise Blog

Google Apps is designed to provide a secure and reliable platform for your data. Until today, Google Apps administrators had to sign requests for calls to Google Apps APIs using their username and password (this is called ClientLogin Authorization).

Yet sharing passwords across sites can pose security risks. Furthering our commitment to make the cloud more secure for our users, today we are pleased to announce support for OAuth authorization on Google Apps APIs.

There are several advantages to using OAuth instead of the username/password model:

  • OAuth is more secure: OAuth tokens can be scoped and set to expire by a certain date, making them more secure than using the ClientLogin mechanism.
  • OAuth is customizable: Using OAuth, you can create tokens that scripts may use only to access data of a particular scope when calling Google Apps APIs. For instance, a token scoped to the Email Migration API could not be used to access the Google Apps Provisioning API.
  • OAuth is an open standard: OAuth is an open, widely adopted standard, making it a familiar choice for developers to work with.

The Google Apps APIs that support the OAuth signing mechanism are:

  1. Provisioning API
  2. Email Migration API
  3. Admin Settings API
  4. Calendar Resource API
  5. Email Settings API
  6. Audit API
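
As a rough illustration of what OAuth-signed access to one of these APIs looks like (a sketch, not code from this announcement), the snippet below builds a two-legged OAuth 1.0 HMAC-SHA1 Authorization header using only the Python standard library. The consumer key/secret and the example Provisioning API URL are placeholders, and a real integration would normally rely on an existing OAuth library.

```python
import base64, hashlib, hmac, random, time, urllib.parse

def _encode(s):
    # RFC 3986 percent-encoding, as required for OAuth 1.0 base strings.
    return urllib.parse.quote(str(s), safe="-._~")

def oauth_header(method, url, params, consumer_key, consumer_secret):
    """Sketch: build a two-legged OAuth 1.0 (HMAC-SHA1) Authorization header."""
    oauth_params = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": str(random.getrandbits(64)),
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    # The signature base string covers the HTTP method, the base URL, and all
    # query + oauth parameters, sorted and percent-encoded.
    all_params = dict(params, **oauth_params)
    param_str = "&".join(
        "%s=%s" % (_encode(k), _encode(v)) for k, v in sorted(all_params.items())
    )
    base_string = "&".join([method.upper(), _encode(url), _encode(param_str)])
    # Two-legged requests have no token secret, hence the trailing '&'.
    key = _encode(consumer_secret) + "&"
    signature = base64.b64encode(
        hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    ).decode()
    oauth_params["oauth_signature"] = signature
    return "OAuth " + ", ".join(
        '%s="%s"' % (k, _encode(v)) for k, v in sorted(oauth_params.items())
    )

# Hypothetical usage against the Provisioning API (placeholder domain and secret):
# url = "https://apps-apis.google.com/a/feeds/example.com/user/2.0"
# header = oauth_header("GET", url, {}, "example.com", "CONSUMER_SECRET")
```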

OAuth support for Google Apps APIs is another step towards making Google Apps the most secure, reliable cloud-based computing environment for organizations. To learn more about OAuth support and other administrative capabilities launched in Google Apps this quarter, join us for a live webinar on Wednesday, September 29th at 9am PT / 12pm EST / 5pm GMT.

Administrators for Google Apps Premier, Education, and Government Editions can use OAuth authorization for Google Apps APIs starting today. For more information about the OAuth standard, visit http://oauth.net.

Thursday 23 September 2010

Our first ever G-days in Egypt and Jordan

Google is dedicated to making the Internet relevant and useful to Arabic speakers, and to developing meaningful and local products for the Middle East. We fully realise that we cannot foster this growing Internet ecosystem alone, and we therefore believe that tech entrepreneurs and developers have the opportunity to transform the Web for the world and for the Middle East.

So, for the first time ever in Egypt and Jordan, Google is very excited to host its G-days: G-Egypt in Cairo from December 8th to 10th, and G-Jordan in Amman from December 12th to 14th.

Each day of the 3-day conference will cater to a different audience, spanning computer science students and professors, professional developers, webmasters, entrepreneurs, small businesses and tech marketers. Take a look at our sites (G-Egypt and G-Jordan) to learn more about the G-day that might fit your appetite. You must pre-register on the websites as space is limited - you will then be fully registered as soon as we send you a confirmation.

Some of Google’s best and most engaging engineers, product managers, business managers and leadership will be speaking about Google’s open web and mobile technologies. Attendees will have the chance to interact with Googlers and explore Google’s technologies through a combination of tech talks and breakout sessions. We’re getting ready to make these events fun, insightful and interesting, so we hope to see you there!

On Twitter: #gegypt #gjordan @GoogleDevMENA

Tuesday 21 September 2010

Google Developer Day registration open for Munich, Moscow and Prague

Registration opens today for Google Developer Day in Europe and Russia! As you saw from our agenda announcement, these events promise to be jam-packed with great speakers and fantastic content.

Register to attend on the following dates, in the following places:

Stay updated on Developer Day news by following us at:

Our official hashtags are #gddde, #gddru and #gddcz.

Look forward to seeing you there!

Thursday 16 September 2010

Google Relaunches Instantiations Developer Tools - Now Available for Free

(Cross-posted from the Google Web Toolkit blog)

In early August, Google acquired Instantiations, a company known for its focus on Eclipse Java developer tools, including GWT Designer. We're happy to announce today that we're relaunching the following former Instantiations products under the Google name and making them available to all developers at no charge:

  • GWT Designer
    Powerful Eclipse-based development tools that enable Java developers to quickly create Ajax user interfaces using Google Web Toolkit (GWT)

  • CodePro AnalytiX
    Comprehensive automated software code quality and security analysis tools to improve software quality, reliability, and maintainability

  • WindowBuilder Pro
    Java graphical user interface designer for Swing, SWT, GWT, RCP, and XWT UI frameworks

  • WindowTester Pro
    Test GUI interactions within rich Java client applications built on the SWT and Swing UI frameworks

Now that these products are available again, we hope you’ll start using them within your GWT projects. Meanwhile, our next step is to more deeply unify them into the GWT family of tools by blending the fantastic Instantiations technology into the Google Plugin for Eclipse (GPE). So, there’s much more to come, including things we’re pretty sure you’ll like, such as UiBinder support in GWT Designer.

You can download any of the tools from the GWT download page. If you have questions or comments we’d love to hear from you. The best place to discuss the tools above is at http://forums.instantiations.com. As always, continue to discuss GWT and GPE at the main GWT Group.

We would love to stay in better touch with you as we have more news about how we are integrating the Instantiations products into the Google Web Toolkit suite. Sign up if you’d like to receive email updates on these products and other developer tools.

Wednesday 15 September 2010

Google Developer Day Registration Now Open for Brazil

As promised, registration for our Sao Paulo Google Developer Day is now open. Developer Day in Brazil takes place on October 29, 2010. More details on sessions and speaker bios are now available on the site.

Follow @googledevbr (hashtag: #gddbr) to stay updated on developer news in Brazil.

UPDATE: Registration closes on Sep 28, 2010.

Prediction API: Make smart apps even smarter

Since its announcement at Google I/O, the Google Prediction API has seen an outstanding response from the developer community. Developers participating in the Prediction API preview are already using it to identify spam, categorize news, and more.

Today we’re adding new features to the Prediction API to make your apps even smarter:

Multi-category prediction: Imagine you’re writing a news aggregator that suggests articles based on the kinds of stories the user has read before. Previously, using the Prediction API, each article could only be tagged with one label - the most pertinent one. For example, an article about a new truck might be labeled as “truck,” but not “roomy” or “quiet.” Now articles can be tagged with all of those labels, with the labels ranked by pertinence, enabling your app to make better recommendations.

Continuous Output: You’d like to create a wine recommendation app. Matching a wine to personal preferences is a tricky task, dependent on many factors, including origin, grape, age, growing environment, and flavor presence. Previously, your app could only label wine as “good,” “decent,” “bad,” or some other set of pre-defined values. Using the new continuous output option, your app can provide a fine-grained ranking of wines based on how well they fit the user’s preferences.

Mixed Inputs: You’re creating an automatic moderator for your blog. You could already classify incoming posts automatically based on comment text and the username of the poster (text inputs), but not the number of times they’ve posted before or the number of users that have liked their posts (numeric inputs). We’ve now added support for mixed inputs, so both numeric and text data can be incorporated in your moderation helper, greatly improving accuracy and letting you get back to making content rather than managing it.

Combining Continuous Output with Mixed Inputs: To further enhance your automatic moderator, you can use continuous output to set thresholds for automatic posting, automatic rejection and manual moderation, further reducing your workload.
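
To make those features concrete, here is a heavily simplified sketch of the moderation example. The endpoint, payload field names, and auth header below are illustrative placeholders rather than the documented request format (see the Prediction API website for the real details); the point is simply how mixed text/numeric inputs and a continuous output score could combine into a moderation decision.

```python
import json
import urllib.request

# Placeholder endpoint and payload shape, for illustration only.
PREDICT_URL = "https://www.googleapis.com/prediction/v1/query/comment-model/predict"

def moderate_comment(comment_text, username, prior_posts, likes, auth_header):
    """Sketch: mixed text + numeric inputs, continuous output used for moderation."""
    body = {"data": {"input": {"mixture": [comment_text, username, prior_posts, likes]}}}
    req = urllib.request.Request(
        PREDICT_URL,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": auth_header},  # placeholder auth scheme
    )
    with urllib.request.urlopen(req) as resp:
        score = json.loads(resp.read())["data"]["output"]["value"]  # continuous score

    # Thresholds turn the continuous score into an action, reducing manual work.
    if score >= 0.8:
        return "publish"
    if score <= 0.2:
        return "reject"
    return "hold for manual review"
```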

You can get all the details about these and other new features on the Prediction API website. We are continuing to offer the Prediction API as a preview to a limited number of developers. There is no charge for using the service during the preview. To learn more and sign up for an invitation, please join the waitlist.

Tuesday 14 September 2010

Increase your productivity with the Google Analytics API

(Cross-posted from the Google Analytics Blog)

Back in Episode 10 of Web Analytics TV (32:00), Lisa C from Melbourne asked how to pull a trending report from Google Analytics for the top organic search landing pages. This was such a great question that we wrote two articles and released sample code describing how you can automate retrieving this data from the Google Analytics Data Export API. But first, let’s look at the results.

Here is a graph plotting traffic to the top 100 landing pages for organic search for all of June for www.googlestore.com.

Let’s Analyze:
This is the typical trend graph you can find across the Google Analytics web interface. By itself, all you can tell is that something happened during the spike. What you can’t figure out is which page actually increased in traffic; to do so would require lots more digging.

Now let’s try again:
Here is a stacked area graph of each of the top 100 landing pages for organic search.

Let’s Analyze:
Awesome, right? It’s obvious why this is cooler, but let me explain.

Lisa’s original graph, above, presents significantly simplified insights; notice how much more we can get from this one. We can see that the green page is what caused the big spike. We can also see that the blue and orange pages had interesting changes in traffic patterns, changes we couldn’t identify from the first graph. Being able to break down the totals graph is indeed a gold mine for analysis.

Typical actions you, or Lisa (!), can take from this data are to identify the organic search keywords sending traffic to the blue page, and then to identify the keywords sending traffic to the green and orange pages to see if they can be used to increase traffic to other pages.

Exporting the Data from the web interface:
Anybody can pull this data from the Google Analytics web interface. You simply create a custom report with landing pages and entrances, then drill into each landing page and export the data to a CSV file. Finally, you go through all the CSV files and compile them into a single file for analysis.

Going through each report individually is a LOT of manual work, but we can automate all of this using the Data Export API, reducing hours of work to a few minutes!

Using the Data Export API to Automate:
In part one of our series, we demonstrate how to use the Data Export API to automate the exact task above. A user specifies one query to determine the top landing pages. Then, for each landing page, a separate query is used to get the data over time.

This is great, and we built it to work with any query with a single dimension. But notice that the number of queries grows with the number of dimension values: the program requires n + 1 queries, so if you want data for 1,000 dimension values, it will take 1,001 queries.

This is bad because there is a daily quota of 10,000 queries for the Data Export API. So if you ran this program 10 times with 1,000 dimension values, it would require 10,010 queries, completely using up your quota. Ouch!

Optimizing Data Export API Requests:
To reduce the number of queries required, the second part of this series describes an alternate approach that retrieves the same data while minimizing the number of queries. In this second approach, we use Data Export API filter expressions to return data for multiple dimension values in each request.

This approach dramatically reduces the amount of quota required. In the best case, only 2 queries are required.
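
As a sketch of the idea (separate from the released sample code), the snippet below batches several landing pages into a single Data Export API query by OR-ing them together in a filter expression, instead of issuing one query per page. The table ID and authentication are placeholders; the data feed URL and the ',' OR operator are as documented for the Data Export API.

```python
import urllib.parse

DATA_FEED = "https://www.google.com/analytics/feeds/data"

def batched_trend_queries(table_id, landing_pages, start, end, batch_size=10):
    """Sketch: one query per batch of landing pages instead of one per page."""
    urls = []
    for i in range(0, len(landing_pages), batch_size):
        batch = landing_pages[i:i + batch_size]
        # ',' is the OR operator in Data Export API filter expressions.
        filters = ",".join("ga:landingPagePath==" + page for page in batch)
        params = {
            "ids": table_id,                         # e.g. "ga:12345" (placeholder)
            "dimensions": "ga:landingPagePath,ga:date",
            "metrics": "ga:entrances",
            "filters": filters,
            "start-date": start,
            "end-date": end,
            "max-results": "10000",
        }
        urls.append(DATA_FEED + "?" + urllib.parse.urlencode(params))
    return urls

# Hypothetical usage: 100 landing pages become 10 queries rather than 100.
# for url in batched_trend_queries("ga:12345", pages, "2010-06-01", "2010-06-30"):
#     ...fetch with an authorized request and merge the results...
```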

Using this second approach, analysts can now run this report to their hearts’ content. They can do this for different time frames and different dimensions: comparing organic vs. paid traffic, trending keywords by search engine, or even comparing traffic by geography.

As we mentioned, we wrote two articles describing both approaches and released the sample code for the application. Let us know the amazing insights you find through using this tool.

Have fun!

Monday 13 September 2010

DevFest Asia Pacific Tour -- Registrations Open!

We’re gearing up for the next round of Google DevFest events, and we’re excited to be back in Asia! Android, HTML5/Chrome, Social Web, and Geo are among the topics we’ll be covering. Our Developer Advocates, along with local speakers, will be on hand to give sessions, answer questions, and check out what each of you are building.

Visit the DevFest site to see the full list of DevFest events.

You can also follow us on Buzz and Twitter.

Space is limited at each location, so register early. Please note that registering does not guarantee you a spot at the event, so check for the email confirmation.

Hope to see you all there!

Get your HTML5 game on

This has been an exciting year for web developers, with all the new features being made possible by HTML5 and browsers getting faster by the day. One of the big surprises has been the rise of HTML5 gaming, with the open technology stack of HTML, CSS, and JavaScript becoming a viable platform for games on the web. That’s why, next month, SPIL Games and Google will be running an HTML5 Game Jam event on both sides of the Atlantic, and you’re invited!

In the Netherlands, we’ll be hosting a sleepover event at the Hilversum headquarters of SPIL Games. SPIL recently converted their 47 mobile portals to HTML5 and is running a $50,000 HTML5 games contest. Hilversum is a quick train journey from Amsterdam, and the spacious premises are the perfect setting for hardcore games hacking, which means we’ll be coding HTML5 games all weekend. Participants will be able to sleep over on-site; if you do, please bring a sleeping bag and a change of clothes, and don’t forget your toothbrush! We’ll also mail out a list of hotels in the Hilversum area for those who’d rather book a room instead (at your own cost).

Meanwhile, we’ll be running a parallel event at Google’s office in downtown San Francisco. We won’t quite be pulling an all-nighter like our friends in the Netherlands, but we will keep our doors open till midnight.

In both locations this will primarily be a hands-on hackathon, running from Saturday, October 9th at 10am to Sunday, October 10th at 6pm. We’ll kick off with short talks on the technology, followed by pitches from anyone with ideas for a great game. Then it will be hacking all day. We’ll pick up again on Sunday at 10am and wrap up at 6pm with presentations and judging. There will also be chillout areas with games and diversions, and food and drinks to fuel your frenetic hacking.

The event is free of charge and places are limited. Sign up here, and we’ll mail back with confirmations soon. We’re looking forward to seeing what games you can build using HTML5!

Friday 10 September 2010

License Evolution and Hosting Projects on Code.Google.Com

Nearly six years ago, when we first started thinking about doing project hosting on code.google.com, we noticed something particular about the other open source project hosting sites. They either accepted all Open Source Initiative (OSI) approved licenses, like SourceForge, or they accepted only one, like the Free Software Foundation's Savannah project, which only accepted GPL'd projects.

In our day-to-day work looking after open source licensing, we lamented the proliferation of licenses and decided that we would split the difference and offer only a very limited subset of the OSI-approved license choices to our users, as a stand against that proliferation. You see, we felt then, and still feel now, that the excessive number of open source licenses presents a problem for open source developers and for those who adopt that software. Thus, when we launched project hosting on code.google.com, we launched with only a small subset of licenses.

This was hardly a barrier to adoption. While there were some complaints from some corners, in the intervening 5+ years we've grown to become one of the largest hosts while allowing that ethic behind license choice to persist.

What's changing and why change now?

We've added an option to the license selector to allow any project to use an OSI approved license. Simply select “other open source” and indicate in your LICENSING, COPYING or similar file which license you are using.

Public domain projects are still only allowed on a case by case basis, as true public domain projects are quite rare and, in some countries, impossible. We encourage those that want to truly ship public domain to look at how D. Richard Hipp does things around SQLite and emulate his style. Email google-code-hosting@googlegroups.com if you’d like to request that option for your project.

(Please note: we will continue to hunt down and kill non-open source projects or other projects using Google Code as a generic file-hosting service.)

Why change now? The TL;DR version is that we think we've made our point, and that this new way of doing things is a better fit for our goal of supporting open source software developers.

The longer form of the reason is that we never really liked turning away projects that were under real, compatible licenses like the zlib or other permissive licenses, nor did we like turning away projects under licenses that serve a truly new function, like the AGPL. We also think there were inconsistencies in how we handled multi-licensed projects (for instance, a project that is under the Apache license but has a zlib-licensed component).

To rectify this, we decided to add an additional option to the license selector that would accommodate some flexibility around open source licenses. We hope you find it useful and look forward to seeing how you use the site!

Wednesday 8 September 2010

Apps Script Hackathon in Mountain View, CA

Google Apps Script is a JavaScript cloud scripting language that provides easy ways to automate tasks across Google products and third party services. If you want to learn more about Google Apps Script and meet the Apps Script team, here’s your chance! We will be holding an Apps Script hackathon in Mountain View, CA on Thursday, September 23 from 2pm - 8pm.

After we cover the basics of Apps Script, you can code along with us as we build a complete script, or you can bring your own ideas and get some help and guidance from the team. There will be food, power, and Apps Script experts available to help throughout the day. Just bring your laptop, ideas, enthusiasm, and basic knowledge of JavaScript. Check out the details of the event and be sure to RSVP to let us know you’re coming!

Tuesday 7 September 2010

Sign up with Google using OpenID

Some websites use the OpenID standard so that users don’t even need to type a password to sign in. While Google does not yet support the usage of OpenID for replacing passwords on its own sites, we are involved in the OpenID community’s efforts to research how to best implement that type of support.

As a next step in those community efforts, we announced today the use of OpenID for the Google signup process.

Currently, Google only offers this feature for Yahoo! users. However, as it is based on an Internet standard, we plan to use it in the future with other email providers that add support for this usage of OpenID and related standards like OAuth, such as in the Microsoft Live identity APIs.

Other websites that need to verify a user’s email address can also implement this technique using Yahoo!’s OpenID API. In addition, it can be used to verify the addresses of Gmail and Google Apps users because those email systems expose the necessary APIs for OpenID. For example, Plaxo is one of the many websites that takes advantage of this feature of Gmail and Yahoo! Mail.

Friday 3 September 2010

New Sidewiki “Sidebar” web element




We are very pleased to announce a new Sidewiki “sidebar” web element. Google Sidewiki allows visitors to your website to contribute helpful information and read other visitors’ insights alongside the pages of the website. The new web element is a Sidewiki button, which, when clicked, displays a fully functional Sidewiki sidebar to the left of the page content. This means that your visitors can see the Sidewiki content for your page even if they don’t have Google Toolbar or the Sidewiki Chrome extension installed.

You can choose from several different looks and feels created by Google, or even create a custom one of your own. Use our wizard to choose the desired look and behavior, embed the generated code in your page, and you’re done. Here's a sketch of what it looks like when a visitor is viewing the Sidewiki content.

Go to http://www.google.com/webelements/sidewiki/ to get started. If you'll be using the element on your site, we’d love to hear about it via @googlesidewiki on Twitter.

Deep dive articles for the Analytics Data Export API

(Cross-posted from Google Analytics Blog)

On the Google Analytics API Team, we’re fascinated with what people create using the Data Export API. You guys come up with some really amazing stuff! Lately, we’ve also been paying a lot of attention to how people use it. We looked at whether the API has stumbling points (and where they are), what common features every developer wants in their GA applications, and what tricky areas need deeper explanations than we can give by replying to posts in our discussion group.

As a result of identifying these areas, we’ve written a few in-depth articles. Each article is meant as a “Deep Dive” into a specific topic, and is paired with open-source, sample reference code.

In no particular order, the articles are as follows:

Visualizing Google Analytics Data with Google Chart Tools
This article describes how you can use JavaScript with the Data Export API and Google Chart Tools to pull your data and dynamically create and embed chart visualizations in a web page.

Outputting Data from the Data Export API to CSV Format
If you use Google Analytics, chances are that your data eventually makes its way into a spreadsheet. This article shows you how to automate all the manual work by printing data from the Data Export API in CSV, the most ubiquitous file format for table data.

Filling in Missing Values In Date Requests
If you request data displayed over a time series, you may find that dates are missing from your results: when requesting multiple dimensions, the Data Export API only returns entries for dates that have collected data. This can lead to gaps in a time series, and this article describes how to fill in those missing dates.
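
The core of that technique is simple enough to sketch independently of the article’s own sample code: walk every day in the requested range and default the days the API didn’t return to zero.

```python
from datetime import date, timedelta

def fill_missing_dates(rows, start, end):
    """Sketch: pad a sparse (date, value) series so every day appears."""
    by_date = dict(rows)
    filled, day = [], start
    while day <= end:
        filled.append((day, by_date.get(day, 0)))  # missing days default to 0
        day += timedelta(days=1)
    return filled

# Example: the API only returned entries for days that collected data.
sparse = [(date(2010, 6, 1), 12), (date(2010, 6, 4), 7)]
print(fill_missing_dates(sparse, date(2010, 6, 1), date(2010, 6, 5)))
# -> five rows, with June 2nd, 3rd and 5th filled in as 0
```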

We think this article format makes for a perfect jumping off point. Download the code, follow along in the article, and when you’re done absorbing the material, treat the code as a starting point and hack away to see what you can come up with!

And if you’ve got some more ideas for areas you’d like us to expound upon, let us know!

Thursday 2 September 2010

Drupal 7 - faster than ever

This is a guest post by Owen Barton, partner and director of engineering at CivicActions. Owen has been working with Google's “Make the Web Faster” project team and the Drupal community to make improvements in Drupal 7 front-end performance. This is a condensed version of a more in-depth post over at the CivicActions blog.



Drupal is a popular free and open source publishing platform, powering high profile sites such as The White House, The New York Observer and Amnesty International. The Drupal community has long understood the importance of good front-end performance to successful web sites, being ahead of the game in many ways. This post highlights some of the improvements developed for the upcoming Drupal 7 release, several of which can save an additional second or more of page load times.



Drupal 7 has made its caching system more easily pluggable - to allow for easier memcache integration, for example. It has also enabled caching HTTP headers to be set so that logged-out users can cache entire pages locally, as well as improving compatibility with reverse proxies and content distribution networks (CDNs). There is also a patch waiting that reduces both the response size and the time taken to generate 404 responses for inlined page assets. Depending on the type of 404 (CSS 404s have a larger effect than image 404s, for example), the slower 404s were adding 0.5 to 1 second to the calling page's load time.



Drupal currently has the ability to aggregate multiple CSS and JavaScript files by concatenating them into a smaller number of files, reducing the number of HTTP requests. There is a patch in the queue for Drupal 7 that could allow aggregation to be enabled by default, which is great because the large number of individual files can add anything from 0 to 1.5 seconds to page loads.



One issue that has become apparent with the Drupal 6 aggregation system is that users can end up downloading aggregate files that include a large amount of duplicate code. On one page the aggregate may contain files a, b and c, whilst on a second page the aggregate may contain files a, b and d - the “c” and “d” files being added conditionally on specific pages. This defeats the benefits of browser caching and slows down subsequent page loads. Benchmarking on core alone shows that avoiding duplicate aggregates can save over a second across 5 page loads. A patch has already been committed that requires files to be explicitly added to the aggregate, and fixes Drupal core so that the appropriate files are added to the aggregate unconditionally.



Drupal has supported gzip compression of HTML output for a long time; however, CSS and JavaScript files are delivered directly by the web server, so Drupal has less control over them. There are web server based compressors such as Apache’s mod_deflate, but these are not always available. A patch is in the queue that stores compressed versions of aggregated files on write and adds rewrite and header directives to .htaccess so that these files are served correctly. Benchmarks show that this patch can make initial page views 20-60% faster, saving anything from 0.3 to 3 seconds in total.
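
To illustrate the idea behind that patch (Drupal itself is PHP, so this is only a conceptual sketch in Python with made-up file names): the aggregated file and a pre-compressed .gz sibling are written at the same time, so the web server can later serve the compressed copy directly instead of re-compressing it on every request.

```python
import gzip
from pathlib import Path

def write_aggregate_with_gzip(source_files, out_path):
    """Conceptual sketch: write an aggregate and its gzipped sibling on write."""
    data = b"\n".join(Path(f).read_bytes() for f in source_files)
    out = Path(out_path)
    out.write_bytes(data)                              # e.g. aggregate.css
    with gzip.open(str(out) + ".gz", "wb") as gz:      # e.g. aggregate.css.gz
        gz.write(data)

# Hypothetical usage; rewrite/header rules on the web server would then point
# gzip-capable clients at the .gz file.
# write_aggregate_with_gzip(["a.css", "b.css", "c.css"], "aggregate.css")
```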



The Drupal 7 release promises some real improvements from a front-end performance point of view. Other performance optimizations will no doubt continue to appear and be refined in contributed modules and themes, as well as in site building best practices and documentation. In Drupal 8 we will hopefully see further improvements in the CSS/JS file aggregation system, increased high-level caching effectiveness, and more tools to help site builders reduce file sizes. If you have yet to try Drupal, download it now, give it a try, and tell us in the comments if your site performance improves!



Google Developer Day 2010 Agenda: Android, Chrome & HTML5 and Cloud Platform



We are now ready to share the Google Developer Day agendas for Tokyo, Sao Paulo, Munich, Moscow and Prague. We have so much technical content to share but alas, Developer Day is a one-day event. There may still be changes to the agenda, but here is a sneak peek at where we are.

Globally, we will feature three major tracks:
  • Android - With the continued momentum and growth of the platform, we would like to continue the conversation with you at Developer Day. We will feature sessions on Android performance, mobile user experience and best practices on building apps, and we will also take a deep dive into a new feature, Cloud to Device Messaging (C2DM).

  • Chrome & HTML5 - We will discuss how to build an app for the Chrome Web Store and how to improve its development and performance. We’ll show which aspects of HTML5, Chrome Developer Tools and Native Client can be most useful to you. Finally, we will cover everything auth-related to show you when and where to use various authentication tools and how they integrate with our APIs and products.

  • Cloud Platform - Building off of our series of announcements at Google I/O, we will feature sessions on App Engine, App Engine for Business, Spring integration, Google Web Toolkit, Google Storage for Developers, BigQuery and Prediction API. Be prepared for code samples, tips on how to optimize performance, and a glimpse into what else is on our roadmap.
We are happy to announce that Eric Tholome, Product Management Director for Developer Products, will be a keynote speaker in Sao Paulo, Munich, Moscow and Prague. In addition, we are happy to welcome a second keynote speaker in each city:
  • Sao Paulo, Brazil - Mario Queiroz, VP Product Management

  • Munich, Germany - Dr. Wieland Holfelder, Engineering Director

  • Moscow, Russia - Dr. Gene Sokolov, Head of Moscow Engineering
Due to the success of the Venture Capital sessions at Google I/O and the growing VC activity in our global markets, a new addition this year is a Venture Capital panel at most of our Developer Days. Come hear from your local VCs about what they look for in startups.

The Sao Paulo and Moscow keynote presentations will have live translation, and for sessions, check the FAQ section of your Developer Day site. We will have savvy gurus available to answer your questions during Office Hours, and you will have a chance to meet Googlers and each other over Happy Hour.

Registration will open on September 15th for Sao Paulo and on September 22nd for Munich, Moscow and Prague. Tokyo’s registration is now closed.

In the meanwhile, please follow us on this blog and on Twitter to keep up-to-date with the latest news on Google Developer Day and other development topics: @googledevjp (Japan), @googledevbr (Brazil) and @gddru (Russia).

Hashtags: #gdd2010jp, #gddbr, #gddde, #gddru, #gddcz

SVG documents searchable on Google

Just a heads up that it should now be easier for users to find SVG files when searching on Google. That’s right, we’ve expanded our indexing capabilities to include SVG. Feel free to check out our Webmaster Help Center for the complete list of file types we support, and our Webmaster Blog for more information on our SVG announcement.