February 10, 2009
Google.org, the philanthropic arm of Google, has today announced Google PowerMeter, a tool that will take energy consumption data from smart household energy meters and make the data available and easy to understand.
This will be very useful to bring social energy measurement alive, where you and I can compare our energy use and work out how to reduce it. It helps that Google.org are also pushing for free and open access to energy data for consumers. This from their December submission to Californian energy regulators:
Accordingly, Google urges the Commission to include the following principles in its smart grid policy, discussed in greater detail below:
- Consumers should have direct access to real-time electricity usage information.
- Electricity usage information should be freely available to consumers.
- Electricity usage data should be made available in a standardized, open format, freely available to third-parties with permission from the consumer.
Freely available, standardised, open access to real-time energy data. Once consumers have that, they can close the loop and easily reduce consumption.
Google PowerMeter looks like smart meter billing information fed into some energy visualization tools, plus what appears to be detection of the energy-use signatures of particular appliances.
Here’s an introductory video:
That all looks very cool.
The part that really interests me is that this gives a big push forward for open access to energy data, which then allows a whole ecosystem of tools and applications to develop to help people reduce their energy consumption, CO2 emissions, and money spent on energy.
Once we can make these energy measurements available, we can make them social, compare with each other, learn and save energy.
For a long time the big energy industries haven’t been too interested in opening up and giving us information, especially real-time information.
Let’s hope PowerMeter comes out of testing soon, and we get to see it operating here in the UK. And let’s get these open standards up and running ASAP. We’ve got a lot of measuring to do and changes to make to bring our energy consumption down.
February 9, 2009
You may recall news stories last month claiming that a Google search results in 7g of CO2 emissions. This story resulted in a storm of comment and reporting, a clarification from Google (0.2g per search), and something of a clarification from the original study’s author. But all the resulting hoo-ha goes to show:
- The original claim was woefully unclear as reported
- Releasing research headlines without the research is troublesome and results in misunderstandings
- We’ll need to get a lot better at identifying what we are actually measuring when talking energy and CO2
I want to break this story down and inject some facts in, and hopefully we’ll learn something in the process.
So, starting at the beginning:
The Sunday Times reported on January 11 that a Google search produced about 7g of CO2. In their words:
Performing two Google searches from a desktop computer can generate about the same amount of carbon dioxide as boiling a kettle for a cup of tea, according to new research.
While millions of people tap into Google without considering the environment, a typical search generates about 7g of CO2. Boiling a kettle generates about 15g.
Now, you’d hope the rest of the article would go on to clarify this a bit. That 7g per search: what does it include? Where are the boundaries drawn around what a search is? Not explained. So that gets left to individual interpretation, and that’s where this sort of measurement and claim gets messy and there is a resulting storm of voices claiming this and that.
Google quickly posted a blog post saying that the energy used by Google’s servers to handle one search corresponds to about 0.2g of CO2.
Supposedly measuring the same thing, but we have over an order of magnitude difference? This comes down to what is actually being measured, as later clarifications revealed.
The original 7g of CO2 per search was actually made up from several searches and a few minutes of time sitting at a PC, and it included the energy of the PC used to start the search, not just Google’s servers. Here’s the Jan 16 clarification by The Times:
A report about online energy consumption (Google and you’ll damage the planet, Jan 11) said that “performing two Google searches from a desktop computer can generate about the same amount of carbon dioxide as boiling a kettle” or about 7g of CO2 per search. We are happy to make clear that this does not refer to a one-hit Google search taking less than a second, which Google says produces about 0.2g of CO2, a figure we accept. In the article, we were referring to a Google search that may involve several attempts to find the object being sought and that may last for several minutes. Various experts put forward carbon emission estimates for such a search of 1g-10g depending on the time involved and the equipment used.
Bingo. That’s the detail we originally needed. It ain’t about Google’s servers or the search itself, but about you sitting down in front of a foot-warming PC with a big, bright screen and tapping away for a bit trying to find something out. And we now have a range of 1g to 10g depending on circumstances.
So, the mention of Google at all in the story is pretty spurious; they claim 0.2g for their part of the search, and the rest is elsewhere. A more correct statement could be something like… “Using a PC and the Internet produces CO2 at the rate of between 1 and 10g CO2 per few minutes depending on your computer setup and what you are doing” (or something like that, please don’t quote this statement).
Basically, the Sunday Times got it wrong. They did the classic lazy blame-somebody-else story, blaming the CO2 on Google, when it is really much more about a home PC and how it is used, and the rest of the Internet equipment used to move all that data around.
One more quote, from a follow-up article in TechNewsWorld, basically put it to rest:
One problem: the study’s author, Harvard University physicist Alex Wissner-Gross, says he never mentions Google in the study. “For some reason, in their story on the study, the Times had an ax to grind with Google,” Wissner-Gross told TechNewsWorld. “Our work has nothing to do with Google. Our focus was exclusively on the Web overall, and we found that it takes on average about 20 milligrams of CO2 per second to visit a Web site.”
And the example involving tea kettles? “They did that. I have no idea where they got those statistics,” Wissner-Gross said.
An average 0.02g of CO2 per second. That’s 1.2g per minute, or 72g CO2 per hour.
Contrast that to driving your car, which likely produces 200g CO2 per km or more. Drive 1km, or browse the net for nearly three hours?
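Those figures are easy to sanity-check. Here’s a quick back-of-the-envelope script (the 200g/km car figure is my own round assumption, not from the study):

```python
# Back-of-the-envelope check of the figures above.
# Assumption: 0.02 g CO2 per second of web use (Wissner-Gross's average);
# 200 g CO2/km is a rough figure for a typical car, not from the study.

BROWSING_G_PER_SEC = 0.02
CAR_G_PER_KM = 200

per_minute = BROWSING_G_PER_SEC * 60      # grams of CO2 per minute of browsing
per_hour = per_minute * 60                # grams of CO2 per hour of browsing

# How long could you browse for the CO2 cost of driving 1 km?
hours_per_km = CAR_G_PER_KM / per_hour

print(per_minute, per_hour, round(hours_per_km, 1))
# -> 1.2 g/min, 72 g/hour, about 2.8 hours of browsing per km driven
```

Which is where the “nearly three hours” comes from.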
April 8, 2008
Thanks Google, what a tasty birthday present. I’m thinking the Google App Engine is going to be a lot of fun. I guess the devil is going to be in the scaling. Haven’t read that bit of the docs yet.
January 2, 2008
It is traditional. Writing some predictions for 2008. I’m going to focus on the Internet, social media and associated technology.
Google Search: Trust
I think 2008 will be the year when we’ll realize that we can’t have search being a closed algorithm any more. I get the feeling that it is going to be just too easy for a couple of folk at Google to work out how to pervert search a tiny bit and make a couple of billion extra in revenue. Given that you can do that, it is going to happen eventually, isn’t it, despite the ‘Don’t Be Evil’ thing, which is just sounding more and more defensive these days.
Time to dust off that Wikia vision of open search and get moving on it. Ooooh look, they are launching something on Jan 7th. We’ve only got the one Internet, and it would be a pity if we lost trust in our search results.
Also, I’ve always been really uneasy about the whole SEO thing. It feels to me like the SEO gurus are like high priests claiming to know what God is thinking.
2008 will be the year we collectively forget about facebook. And give up on social networking for the sake of social networking. My hope is that Open Social and similar will help make possible really useful applications that are socially enabled.
Web 2$
You pronounce that web two dollars. I predict the end of the Web 2.0 rounded-corner build-it-and-think-of-a-business-model bubble. Why? Because with weakening economies in the US and Europe, VCs’ belts are going to tighten and there will be less money lying around for the high-risk punt of gathering a few million members and somehow making money later.
Those that have collected the few million members will start the money-making machines. I’d predict some good old-fashioned outrage as fun Web 2.0 sites start to sell their members’ data or attention to stay afloat.
I’m hoping the focus goes back on to decent revenue-making businesses and some really good ideas emerge and start and work. And people actually pay for it and are happy doing that. People don’t mind paying for stuff, as long as they can really see the value. You need more than (another plain old) social network to pass that test.
The answer to the question that twitter is
I think this year we’ll see the answer to the question “What is twitter for?” And I’m not sure we are going to like the answer. See Web 2$ above. I’d love twitter to stay its lovely simple self, but I’m just a little worried it can’t be.
A new A-List :-)
The old A-listers will collapse en masse from spending too many long nights mumbling into seesmic and will be replaced with a new widgetized microblogging A-list who say nothing useful but say it all the time all over the place. Oh hang on, has this already happened? :-)
October 10, 2007
My trusty Powerbook g4 is back in the shop again. Sigh. After a couple of days of fairly weird behaviour, it got to the point of presenting a black screen and running the fan full blast.
Dead logic board suspected.
This is the third major fault since July.
I’ve only had it back for a couple of weeks since the last logic board replacement. We’ll see what happens.
What this practically means is that I move all my non-online files to Lib’s old iMac g4, which works great but is a bit slow these days. Still a pleasure to use. And reinstall the current set of apps I’m using: MySQL, PHP5, gCal.app.
Thankfully, a lot of my working tools and files are held online these days:
* email in gmail
* calendar in google calendar
* more and more documents and spreadsheets in google docs
* code in subversion archives at online service providers
and so on.
What isn’t online: local apps, experiments in code not yet in subversion, offline writing. *My Todo List* grr.
I’m hoping Apple will come through with something, given how many failures I’ve had lately.
May 31, 2007
Today I’m heading into London to join in the fun at the 2007 Google Developer Day.
Pretty much all the day is going to be focussed on getting the most out of Google’s APIs. I’m most familiar with youtube’s APIs at this point, but I’m pretty intrigued with what is going on in the mapping and mobile areas.
November 24, 2006
So I’m sitting on a train on the way home from London. It takes an hour or so to get from London Bridge to Brighton. I climb on the train into one of the carriages that has the t-mobile wifi stickers on the doors. But as happens at least half the time, there’s no wifi. Oh well. When there is wifi, at least it’s free.
So I have the laptop out anyway, writing and replying to emails and you know what? I flip over to the browser and get a popup from my Google Calendar reminding me of an event in my Calendar coming up shortly.
Resilience: Keep working even when the network goes away for a short while.
Not forever, obviously, but at least be resilient through a few minutes of outage. The standard web page (GET/POST) normally handles this pretty well, but with AJAX you have the ability to tie your application closely to the server. Don’t, if you can avoid it.
I typed a bunch of appointments into my Google calendar here on the train with no network. I wonder if they’ll get committed to the server when I reconnect. There’s no reason why not, really, and if it works, I’ll always keep Google calendar in a browser window and use it even offline.
[Update: I got home and no, it didn't remember appointments entered while offline. I can sort of understand that from a transactional point of view -- what if the window closes or the PC shuts down.]
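The behaviour I was hoping for is basically queue-and-replay: buffer writes locally while the network is away, then replay them in order on reconnect. A minimal sketch of that pattern (plain Python rather than browser AJAX; the fake `send` stands in for the real call to the server):

```python
# Queue-and-replay sketch: buffer writes while the network is down,
# replay them in order when it comes back.

class OfflineQueue:
    def __init__(self, send):
        self.send = send      # callable; raises ConnectionError while offline
        self.pending = []     # writes waiting for the network

    def write(self, item):
        try:
            self.send(item)
        except ConnectionError:
            self.pending.append(item)   # keep it locally, try again later

    def flush(self):
        # On reconnect: replay queued writes in the order they were made.
        while self.pending:
            self.send(self.pending.pop(0))

# Simulate typing appointments into the calendar on a train with no wifi.
online = False
delivered = []

def send(item):
    if not online:
        raise ConnectionError("no network")
    delivered.append(item)

q = OfflineQueue(send)
q.write("dentist, tuesday 10am")
q.write("call mum, wednesday")
print(delivered)      # nothing has reached the server yet

online = True         # wifi is back
q.flush()
print(delivered)      # both appointments arrive, in order
```

The transactional worry in the update above is real, of course: anything in the pending queue is lost if the window closes before the flush, unless it’s also written to local storage.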
February 28, 2006
The Ethical Hacker has a set of top ten searches you can perform with Google to see if your site leaks security information. It all starts with using the site: prefix to target your site, then seeing what you can find.
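I won’t reproduce their top ten, but the general shape is easy to sketch. A few illustrative queries of my own (not the article’s list; example.com is a placeholder), built from standard Google operators:

```python
# Build a few "does my site leak?" queries of the kind the article
# describes, each scoped to one site with the site: operator.
# The checks below are illustrative examples, not the article's top ten.

def leak_queries(domain):
    checks = [
        'intitle:"index of"',   # open directory listings
        'filetype:sql',         # stray database dumps
        'filetype:log',         # server or application logs
        'inurl:admin',          # exposed admin pages
    ]
    return [f'site:{domain} {check}' for check in checks]

for query in leak_queries("example.com"):
    print(query)
# e.g. site:example.com intitle:"index of"
```

Paste each one into Google and anything that comes back is information you’re leaking to the world.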