Wednesday, November 30, 2011

Is Windows 8 For Tablets Already Dead?


A few shots were fired at Windows 8 and its tablet edition today. The thought is that Microsoft may have missed the best opportunity to introduce Windows 8 for tablets, or at least an opportunity to grow the interest in Windows tablets and make you lust after a juicy Windows 8 tablet. With the iPad, Amazon and Barnes & Noble, the market could be saturated, and the overall mood in the tablet market appears to agree. Should Microsoft simply scrap the idea of running Windows 8 on tablets?

Windows 8 Start Screen
Windows 8 Start Screen: Tiles can be rearranged

How the tablet expectations have changed. We are still talking about an opportunity of apparently gazillions of tablets per year and an environment where we generally believe the market is everyone’s to lose – despite the fact that we have not seen a single successful Android tablet yet. If anyone needed confirmation that tablets are not about hardware, but about platform value, that evidence was provided by the Amazon tablet, which underwhelms on specs, but sells solely on brand and platform perception. Not that it has been a secret, but Amazon and Barnes & Noble have been the only major companies that followed through with this thought and offer devices that now occupy the tablet opportunity in the $250 range and below.

The impact was strong enough to convince traditional PC makers that there is no significant profit left and that a quick exit may be a good idea. In a best-case scenario, market analysts in Taiwan now believe that non-iPads and non-Kindles may be able to capture 10 to 15% of the tablet market. Forrester’s opinion adds to that depressed mood and states that “Microsoft has missed the peak of consumer desire for a product they haven’t yet released.” While 46% of consumers wanted a Windows 8 tablet in Q1 2011, only 25% wanted one in Q3. Measuring consumer interest in an imaginary product that lacks the perception value of the Apple product is a stroll on thin ice and offers little indication of a potential market, so those 46% should be taken with a grain of salt.

However, Forrester has reason to criticize Microsoft for not maintaining the Windows 8 momentum, and Microsoft marketing has clearly failed to keep Windows 8 tablets in our minds. The spicy part of that failure is not so much that Microsoft did not market Windows 8 or its port to ARM or any potentially great hardware. The failure is that we have no idea what experience Windows 8 will offer on a tablet. Forrester reminds us that latecomers to a market, which Windows 8 clearly is, need differentiators to succeed. The Kindle Fire succeeds by offering the Amazon cloud platform and most of the Android tablet experience for a relatively low price that is backed by a strong brand. Forrester says that Microsoft has to take a lesson from Amazon to turn the corner. That, of course, would require that Microsoft is failing at this time.
Sure, the marketing was underwhelming, but is Microsoft really failing? I believe that this is a questionable assumption to make.

Tablets have platform value. To understand a Windows 8 tablet, we need to understand the Windows 8 platform, and the public has not seen this platform. We don’t know how integrated the platform will be and how Microsoft will connect PCs, tablets, phones and its strangely under-marketed Xbox Live platform. If Windows 8 seamlessly bridges PCs, tablets and phones as well as its video game and entertainment service, Microsoft has, conceivably, the most powerful platform with a very compelling value proposition. Windows 8 could easily become the fabric that bridges the gaps Microsoft currently has to deal with. Imagine a tablet that accesses content on Xbox Live; imagine you can play the same games on your tablet that you play on your PC or TV. Imagine a phone that can access private data in the same way a PC and a tablet do – via Microsoft’s SkyDrive cloud service. In many ways, Microsoft is further along than Apple. However, Microsoft needs to find a way to connect the loose ends and create a platform value that will convince you to buy a Windows tablet and phone.

What we tend to forget about Windows 8 is that it is a bet on touchscreens, and touchscreens have never worked on vertical screens such as notebook and desktop displays. Touchscreens are made for horizontal use. Unlike the majority of analysts, we believe that the Windows 8 touch interface is a pretty risky strategy for traditional PCs, but makes sense for tablets. Very few users want to reach across a keyboard and touch the screen with one hand while supporting it with the other. If the Metro interface works, it will work best on tablets.

Microsoft should be thinking about some tablet marketing, but it will be more important to create an integrated platform experience, not just software that happens to run on a tablet. If Windows 8 is integrated across phones, tablets, Xbox Live, and PCs, Windows 8 tablets have a big opportunity to make a big impact.

Wolfgang Gruener in Business on November 29

Tuesday, November 29, 2011

SPDY: How The Kindle Fire May Inspire A Much Faster Internet


Google developer Mike Belshe posted some thoughts on the future of SPDY, which addresses shortcomings of HTTP and accelerates Internet connections. While not confirmed by Google or any other company, a SPDY gateway could enhance Internet connections dramatically in the future.

Google

It appears that Amazon’s Kindle Fire tablet and its integrated Silk browser, which leverages acceleration via Amazon’s cloud services, could prompt some new thoughts on how mobile web browsers could get faster. An intriguing idea is offered by Mike Belshe, who envisions SPDY becoming much more available than it is today. Instead of requiring individual web servers to be configured for SPDY, ISPs could install SPDY gateways, which would automatically support the technology for Chrome users as well as, sometime in 2012, Firefox users.

SPDY is designed to deal with some of the problems in HTTP, which was first documented in 1995 and relates to web content that was much simpler than what we are developing and consuming today. Both TCP and HTTP have evolved into a bottleneck for data downloads and are constantly under scrutiny over how they can be made much more efficient in today’s world. HTTP is especially criticized for latency issues, since HTTP can only fetch one resource at a time and servers cannot communicate with a client without a client request. HTTP also uses uncompressed and redundant request and response headers. SPDY uses TCP as the underlying transport layer and runs alongside HTTP, but offers far less latency.

SPDY supports unlimited connection streams, can prioritize and even block requests if a communication channel gets overloaded and supports header compression. SPDY also allows the server to communicate with a client without a client request. SPDY still uses HTTP methods, headers and “other semantics.” However, the connection management and data transfer formats are modified. According to Google, SPDY decreases the number of open connections per page, from 30-75 to just 7 or 8.
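The header compression mentioned above is one of SPDY’s most concrete wins: HTTP/1.x resends near-identical headers with every request on a connection, while SPDY runs the header block through zlib (the real protocol also primes the compressor with a preset dictionary and works per stream, both omitted here). A toy sketch, with entirely made-up header values, of why that redundancy compresses so well:

```python
import zlib

# Ten requests' worth of typical, nearly identical headers on one connection.
# The header content below is invented for illustration only.
headers = (
    b"GET /index.html HTTP/1.1\r\n"
    b"Host: www.example.com\r\n"
    b"User-Agent: Mozilla/5.0 (Windows NT 6.1) Chrome/17.0\r\n"
    b"Accept: text/html,application/xhtml+xml\r\n"
    b"Accept-Encoding: gzip,deflate\r\n"
    b"Cookie: session=abcdef0123456789; prefs=compact\r\n"
    b"\r\n"
)
raw = headers * 10
compressed = zlib.compress(raw)
print(len(raw), len(compressed))  # compressed stream is a small fraction of the raw bytes
```

Since every repeated request adds almost nothing to the compressed stream, the per-request header cost on a long-lived SPDY connection shrinks dramatically compared to plain HTTP.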

If Belshe has his way, a future SPDY would allow users to run multiple tabs via a single SPDY connection stream through the carrier, the network address translation table and the SPDY gateway into the Internet: “Because SPDY can multiplex many connections, the browser can now put literally every request onto a single SPDY connection,” he writes. “Now, any time the browser needs to fetch a request, it can send the request right away, without needing to do a DNS lookup, or a TCP handshake, or even an SSL handshake. On top of that, every request is secure, not just those that go to SSL sites.”

SPDY today is limited largely to Google sites, all of which support the protocol (you can monitor SPDY connections via chrome://net-internals in Chrome), and it gives Google a noticeable performance advantage in its applications when they are accessed with Chrome. Mozilla said that it will be adding SPDY to Firefox in 2012, but there is no information from Microsoft on whether IE will get SPDY support as well. If Belshe is right, the implications of SPDY for the mobile web could be much greater than on the desktop.

Daniel Bailey in Products on November 29

Chrome First To Get Much Anticipated Gamepad API


Google continues to quickly develop its browser to work well with HTML5 applications. There is now a Gamepad API, which was initially suggested by Mozilla as a “Joystick API”. Back in August, Google developer Scott Graham started a discussion thread on W3C’s site promoting the API. In September, Nokia’s Art Barstow officially informed the W3C that Mozilla and Google had draft specs for a Joystick API.

The idea of the Gamepad API is to make the browser much more appealing to video gamers, especially since Google’s NaCl can run traditional video games in a browser – and joysticks should be working in those scenarios already. This should become even more interesting when NaCl 3D becomes available. However, the bigger view is input support beyond mouse and touch. For example, TV and video remote controls could suddenly work inside the browser as well.

The Gamepad API can be enabled via flag at about:flags.

Chrome 17 nightly builds have received a few other interesting additions over the past week. Profile icons can now be moved directly to the Windows desktop, Chrome Frame now works on IE7 as well, and WebGL has been enabled for the WebKitGTK port.

Gruener in Products on November 28

Saturday, November 26, 2011

Apple iTV May Crash CES Party


Rumors of an upcoming Apple iTV have been flying for quite some time, but the latest batch of information reaching us from China has a level of detail that makes us wonder just how close this new device really is. Word has it that Apple is wrapping up negotiations with cable providers, but the hardware appears to be finalized and may be in production at this time.


Specs
Speaking on condition of anonymity, our source described the “iTV” as a device that looks similar to a Mac desktop LCD and will become available in 42-inch, 46-inch, and 52-inch sizes. The 240 Hz LED TVs appear to be targeted at the U.S. only and could get smaller screen sizes to accommodate other geographies as well. The TVs integrate Apple’s A5 dual-core processor, Wi-Fi as well as Bluetooth wireless networking.
The note also mentioned up to 64 GB of flash memory as local storage, but we are not so sure about this one, as iCloud will be a killer application that enables the iTV to become the information hub of the consumer’s digital life. Not only will all information be available in one place, but there will also be apps that are bridging the iPhone, the iPad and iTV as the iTV will also run iOS.

Standout Features: Siri, gestures
Our source was rather passionate about this TV and mentioned that the iTV “will revolutionize” the TV viewing experience. One of the key components will be Siri, which will allow for natural language voice control. Similar to the iPhone 4S experience, consumers will be able to retrieve information based on voice input, and Siri will also be aware of information that is stored on an iPhone or iPad and connect information such as calendar data.

There was a note of gesture recognition, similar to what Microsoft’s Kinect does. However, we have doubts about this one, as Microsoft has patented Kinect left and right and we don’t believe that Microsoft would grant Apple access to this technology, given its substantial investments. However, Apple received an open-air gesture patent some time ago, which leveraged light beams to recognize the position of body parts such as hands relative to the screen. From a user perspective, this makes a lot of sense, but we don’t know whether this is a feature that will really be available.

Consumption vs. creation
Our source mentioned that there will be no keyboards, as these TVs are designed to be pure content consumption devices and not content creation devices. While content can be controlled via iPhones, iPods and iPads, we were told, Apple does not envision these TVs to be web browsing products or email clients. Makes sense to us, as email works much better on personal devices and does not tend to be a public application whose content you would want to share with anyone who walks into your living room. We have no information about any control devices, but are told that gestures are the main input method. Subjectively, we could imagine simple trackpads or touchscreen devices that serve as controllers for those who do not have an iPod, iPhone or iPad.

We have no information on the price of the TV.

However, we were told that the TV is in production and could be shipping in the near future and Apple could be targeting a January announcement – which would coincide with the Consumer Electronics Show in Las Vegas.

Needless to say, there is no way to verify whether the information we received is, in fact, true. However, we give our senior management source a reasonable level of credibility.

However, whether true or not, we are hearing more chatter about a TV built by Apple. The specs are rather secondary as Apple CE devices have never been about hardware specs. If such a device will be offered, we know that it will be another device that focuses on a cohesive consumer experience that is in line with the experience offered by the iPod, iPhone and iPad. We know that it can’t be just a TV – it needs to be a device that redefines video content consumption just like the iPhone redefined the cellular phone.

Siri, gestures and iCloud have every potential to do just that.

Wolfgang Gruener in Products on November 25

Thursday, November 24, 2011

Xbox Live Scam: How Can People Be So Stupid?


If you missed it: Evidently, a bunch of folks, largely in Europe, got tricked by an email offering free Microsoft points by going to a fake website and disclosing their credit card information. The cost to each person is estimated to be between $150 and $400. So, how can people be so stupid? Actually, what you should be asking is: How can you make sure you don’t make the same embarrassing mistake?

Phishing scams work on three principles: greed, convincing you that the attacker can be trusted, and our tendency to have tunnel vision when we see something we want. Anyone can be tricked; my own Xbox Live account was compromised after someone social-engineered Xbox support into resetting my password so they could get access. Apparently, I permanently lost my original gamer tag.

Red Flags
The first step in protecting yourself is to set up red flags that trigger you to stop and think about what you are doing. The first red flag is when someone contacts you, rather than you contacting them, through email or a phone call (before the internet, phones were used to get this information). Whenever you get an email or call from a service, your bank, or a vendor, immediately consider that they may not be who they say they are.

The second red flag is if they ask for your unique ID, as they should know it – given they are calling you. But even if they have your ID, remember that this is often public information; it doesn’t mean they actually are who they say they are either.

Xbox 360 - Kinect Bundle

The third red flag is any request for unique personal information like your birthdate, mother’s maiden name, or the last 3 digits of your Social Security number. They may need these to identify you, but at this point you should consider taking down their number, verifying that this number actually goes to them, and calling them back. If you don’t verify the number, anyone can answer the phone and say they are someone else.

Any request for your credit card information in its entirety should cause you to immediately stop and reconsider the call. They should already have your credit card information and there should be no reason for them to ask for it again, unless this is a subscription renewal call and the card they have on file is out of date. Personally, I recommend going to the subscription web site (from your bookmarks, not by clicking a link in an email), entering that information yourself, and never giving it over the phone.

Finally, the biggest red flag of all is anyone asking for your password. If they are who they say they are, they don’t need to log into your account to get anything done. They have administrator access, and even asking for your password should violate their own policies and open them to liability. There is no legitimate reason for them to ask for your password, none. Hang up the phone, then call the vendor and report that you may have been attacked.

Don’t Be Stupid

One final warning about all scams: they depend heavily on your own dishonesty. Often we’ll see a deal that looks too good to be true and we’ll go for it like a starving dog that sees a raw hamburger. The other day I saw an ad for a motorcycle that was priced at about 25% of its market value, and I damn near had to sit on my own hands before my brain kicked in and noticed the guy was using a generic email address and had misrepresented what city he was in. I am positive it was a scam, but I got “great deal blindness”.

If something sounds too good to be true, bet that it is, and rather than thinking you are taking advantage of some idiot, consider that they are betting you are the idiot. Here is another thought: if they really are stupid and sell something so cheap, why hasn’t someone else bought it already? And consider what they’ll do if you do, in effect, cheat them. Sometimes the aggravation really isn’t worth taking advantage of others, particularly when there is a good chance they are taking advantage of you.

Anyone can be cheated; the trick is to assure you aren’t the target.

Rob Enderle in Business on November 23

Wednesday, November 23, 2011

Chrome Gets Improved Memory Performance, But Firefox Leads



Chromium Logo

According to Google, garbage collection pause times previously depended on the amount of memory used. The result was an effect that Google describes as “hiccuping”. The new garbage collector in Google’s V8 JavaScript engine now reduces those pause times “dramatically while maintaining great peak performance and memory use,” Google said.

The company proves its point with a WebGL benchmark in which Chrome’s score increases from 6 in the current stable version 15 to 34 in the developer and nightly version (Chrome and Chromium 17) that integrate the incremental garbage collector (on our test system). However, Chrome is not the best browser in this test. Our Opera 11.60 checked in with a score of 46, the latest Firefox 11 nightly build with 48 and the current stable Firefox 8 with 126. Compared to the 590 frames Chromium was able to paint within 10 seconds, Firefox achieved 617.

Google said that the new garbage collector “improves interactive performance and opens up new possibilities for the interactive web.” There are no real-world examples of the performance changes of the incremental garbage collector yet.

You can try the improvements in a Chrome 17 dev build.

Wolfgang Gruener in Products on November 22

Tuesday, November 22, 2011

Aging Your Digital Pictures: There Is A Patent For That


Photos that look just as good after 10 years as they did on the day they were taken are something we have come to expect in times of digital photography. Historically, however, that expectation is questionable at best and plainly wrong in most cases: environmental influences degrade the quality of a printed picture over time. IBM believes that digital images should follow suit and has filed a patent application for an automatically aging file system.

Let’s be honest: searching through shoeboxes filled with old family pictures can be a much more enjoyable activity than sitting in front of a screen and searching through file folders of potentially tens of thousands of pictures. Even more, the emotional connection to pictures that gracefully age, especially noticeable via discolored content, can be much greater with printed pictures than with their digital counterparts. Someone at IBM felt that the boring, never-changing state of digital images, at least until the data corrupts, is something that needed to be changed.

The resulting idea was submitted as a patent application in May of 2010: A file system that would, for example, work with doc, jpg and gif files and dynamically change the content of the picture or document over time to simulate the environmental effects on a real printed picture. According to the inventors, there “is a need for a new kind of filing system that automatically and selectively ages files contained therein such that the files themselves are caused to age with time and are not maintained in their originally stored state.” And: “There is a need to provide such an aging function to apply automatically to all files stored on the filing system without requiring a continuing user monitoring effort.”


Makes sense to me. Would you deliberately destroy your own files and complain if the degradation has not progressed enough in a certain time frame? Of course not. An aging feature should do that for you – conveniently, swiftly and without silly questions. Who would want to maintain the same quality of the original digital picture over an extended period of time? Taken to the extreme, imagine the ginormous dilemma we’d be in had we been able to preserve all those fragile documents in the library of Alexandria in their original quality. The inventors claim that the visual effect seen in an aging file system would be immediately indicative of the age of the photograph, which would probably support those of us who are too lazy to check the properties of the image file and discover the year, day, hour, minute and second a picture was taken – and when it was transferred to its permanent storage graveyard.

I am actually wondering why no one else had this idea before. It could be a solution for the huge pile of information we are creating every day: natural decomposition can only help reduce the burden we are placing on succeeding generations. But is this patent application really a complete thought? Why stop at digital aging? What about accidental loss of images in, say, simulated fires, floods or any other unfortunate event that can cause property loss?


In all seriousness, I am not quite sure how the average user would react if a file system automatically aged a digital image or any other document. Of course, I loosely understand what the inventors are trying to do with this technology, but the basic idea already exists and is called a sepia filter in your favorite image editing software. Even better, these filters leave users a choice of how much a picture should be aged and don’t force them into accepting that a picture just gets old and degrades over time. Sorry, IBM, this patent is a pure waste of energy and space. Perhaps this aging file system could be demonstrated on its own patent application page?
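For reference, the sepia look those filters produce usually comes down to a small per-pixel color matrix. A minimal sketch using the commonly cited sepia weights (the exact coefficients vary by editor):

```python
def sepia(r, g, b):
    """Apply the commonly cited sepia weighting to one 8-bit RGB pixel."""
    tr = int(0.393 * r + 0.769 * g + 0.189 * b)
    tg = int(0.349 * r + 0.686 * g + 0.168 * b)
    tb = int(0.272 * r + 0.534 * g + 0.131 * b)
    # The weights can push channels past 255, so clamp to the 8-bit range.
    return tuple(min(255, c) for c in (tr, tg, tb))

# A neutral gray drifts toward warm brownish tones:
print(sepia(128, 128, 128))  # → (172, 153, 119)
```

Scaling the coefficients toward the identity matrix is what gives those filters their adjustable “how aged” slider, something IBM’s automatic approach would take out of the user’s hands.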

Kurt Bakke in Business on November 22

Google’s Sergey Brin Donates $500,000 To Wikipedia

The Wikimedia Foundation announced that the Brin Wojcicki Foundation, launched by Google co-founder Sergey Brin and his wife, 23andMe co-founder Anne Wojcicki, has given half a million dollars to the Wikimedia Foundation, which operates Wikipedia.


Wikipedia finances itself exclusively through donations and does not generate revenue through advertising. Wikimedia said that it currently reaches 477 million visitors globally, which makes Wikipedia the fifth most-popular web site in the world. According to Google’s DoubleClick ad planner, Wikipedia reaches about 23.9% of the Internet population and counts about 7.1 billion page views every month. Wikimedia said that Wikipedia currently holds more than 20 million articles in more than 280 languages. Its volunteer community has more than 100,000 contributors.

“This grant is an important endorsement of the Wikimedia Foundation and its work, and I hope it will send a signal as we kick off our annual fundraising campaign this week,” said Sue Gardner, executive director of the Wikimedia Foundation, in a prepared statement. “This is how Wikipedia works: people use it, they like it, and so they help pay for it, to keep it freely available for themselves and for everyone around the world. I am very grateful to Sergey Brin and Anne Wojcicki for supporting what we do.”

Ethan McKinney in Business on November 21

Google’s New Chrome Aura Window Manager Surfaces

Google’s Aura hardware-accelerated window manager remains a mystery to the outside world, but there is now a video that gives a first impression of one Aura feature: translucent windows.
Chromium Logo
François Beaufort, a Chrome developer, posted the note about Aura on Google+ with a link to a YouTube video. There isn’t much to see other than, well, a translucent window frame, which can be activated in the Chromium Aura build via a flag, as well as constrained window dragging. Beaufort notes that “it is definitely work in progress” and there is certainly the indication that Aura will not be ready for a public release anytime soon.


Aura is described by Google as a hardware-accelerated user interface that will enable much richer visuals than Chrome delivers today. The main goal for Google is to depart from Gtk and a Microsoft-dependent user interface as well as Windows-specific elements that are causing headaches in the cross-platform code of Chrome. Aura is designed to work much more seamlessly across all platforms, including Mac and Linux.

Those who keep up with the publicly available developer builds of Chromium recently got several new features in the browser. Most importantly, the chrome://net-internals page got a timeline feature that now paints a graph of incoming and outgoing data traffic; there is a new (and expected) flag for Pointer Lock, which hands control of the mouse pointer over to a web application; and there is a nifty tab overview in chrome://sessions, which displays all open tabs - especially informative if the live tab syncing feature is enabled, since a user can see which tabs are open on all synced devices.

Wolfgang Gruener in Products on November 21

Acer Iconia A501: The Cheapest Way To Go 4G With A Tablet

Review – As Christmas approaches, it appears that more folks are considering a tablet as a gift for a spouse or for their kids, and I am frequently asked which tablet I would recommend. The trend appears to be determined by price, not product: if the budget is $500 or more, the intent clearly is to get an iPad 2; if it is less than $500, the target price appears to be $200 and the question is: Kindle Fire or something else? However, if it is a non-iPad 2 device, you may want to aim a bit higher and get a more enjoyable device. Example: Acer’s Iconia A501 4G, which is the cheapest way to get the full Android experience in a tablet. Here is what you can expect.

 

I am not exactly convinced that tablets are the post-PC devices some believe they might be, and I have yet to come across a compelling product that would convince me to shell out $500 or more for a device I believe is a pure luxury entertainment product that just can’t replace a traditional notebook for typical computing tasks. I can easily see a market for the Kindle Fire or devices such as the Pandigital SuperNova, which sit in the $200 segment – a price much easier to accept than $500. However, if you are looking for such a $200 tablet, you have to realize that you will get crippled hardware on one end or the other, which will limit what you can do with a device that is already limited by its form factor when compared to a regular computer. At $200, don’t expect full access to Android Market and don’t expect to get your hands on Maps apps. If you want to replicate the experience of your Android phone on an Android tablet, you will have to look for a tablet that has all the necessary hardware, including a GPS chip. Among the cheapest tablets at this time is Acer’s Iconia series, which starts at around $330 for 7-inch tablets and $350 for 10-inch versions. In this article, I will be referring to the top-of-the-line Iconia A501, which integrates 4G HSPA+ capability.

The basics: What it is
My tester came with 32 GB of memory and checked in with a suggested retail price of $550. Based on the already outdated Android 3.0.1 (instead of 3.2 or even 4.0), the tablet integrates Nvidia’s soon-to-be-old 1 GHz Tegra 250 (Tegra 2) processor, a 1280×800 pixel, 10.1-inch TN touchscreen panel, dual cameras (5MP/2MP), 802.11n Wi-Fi, a full USB port, Micro-HDMI output, Bluetooth 2.1, an accelerometer, a gyroscope, GPS, a digital compass, memory expansion via microSD, as well as a decent 3260 mAh battery pack.

Pricewise, the 32 GB model compares to a $729 32 GB+3G iPad 2. Directly competing Android 3.1 10-inch tablets with 4G tend to be at least $50 higher in price, while class-leading tablets such as the Samsung Galaxy Tab 10.1 32 GB (Wi-Fi only) cost around $500 (or from $570 with 16 GB and 4G). If you are willing to sign a data contract for 3 GB/$35 per month with AT&T, you can get the Iconia 16 GB for $330, plus a $50 gift card at the time of this writing.

So, for the full Android experience plus 4G, the Iconia is about as cheap as it gets right now.

The basics, continued: What it is not
Since I have come across this one so often, I believe it is worth noting: no tablet – neither the Iconia nor any other tablet I am aware of – is a replacement for your notebook.

You may be able to do some simple computing tasks with your tablet, such as writing emails or surfing the web, but that is about it. Its purpose is to consume content, not to create it, so don’t expect a device that you can use to create or edit, for example, elaborate text documents. Office applications, especially Google Docs, are very basic on tablets and often frustratingly difficult to use and, excuse me for being frank, they simply suck. If the purpose of the device is to create content, you may be much better off with an ultrabook. Keep in mind that tablets work best for all those things a smartphone is too small for: they are great basic computing and entertainment devices and their primary purpose is to run apps that were designed for touch input.

What works
It is difficult to criticize the Iconia given its price tag. Its strong point is, objectively, its overall feature set, which lacks only an SD card slot and a removable battery. Subjectively, its design and material choices are among the best in this class.

The back of the device sports an elegant brushed aluminum surface, the 5MP camera plus LED flash, dual speaker grills on the left and right, as well as a docking port. The aluminum surface wraps around to the front and touches the glass screen on the top and bottom.

The Iconia has been hit with lots of criticism about its weight: at about 1.7 pounds, it is hefty and well more than half the weight of a MacBook Air (2.4 pounds). Rival tablets weigh as little as 1.2 pounds. The weight is due to the material choices and is one of the compromises you have to make: it has some impact on the portability of the device, but I did not mind accepting the weight in exchange for the full feature set and the higher-quality materials.

Usability of the device was without surprises. A highlight was the battery, which allowed me to reach operating times of more than 30 hours on average usage days and more than 5 hours of continuous, heavy usage.

What needs work
Tablets are, by default, devices that require the user to make compromises. To find the tablet you will enjoy, you have to figure out which compromises you are willing to make. The Iconia is no exception. One of those compromises, as mentioned, is its weight.

Also, with lower prices you will typically encounter lower screen quality, which is the case with the Iconia. Its brightness (322 cd/m2) and black level (0.20 cd/m2) are at the bottom of the field among 10-inch Android tablets, while its contrast ratio (1610:1) is at the top. While resolution and image quality in normal viewing are impressive at 1280×800 pixels, the twisted nematic display panel looks good only when viewed head-on – the visibility of the content fades quickly as the viewing angle increases. There are clearly better displays out there. Needless to say, in bright daylight the display shows strong reflections and is nearly unusable outdoors.

A downside is the 5MP camera, which can be considered a snapshot camera for digital viewing at best. The same goes for the video camera.

Also, pay attention to the OS version. Because the device runs Android 3.0.1, it cannot run some of the newer applications, such as Photoshop Touch.

4G Data access? To buy or not to buy?
This is a difficult one. It simply depends on your usage scenario and how often you will use the device away from home and out of range of Wi-Fi. Without doubt, it is extremely convenient to use a tablet with 4G connectivity wherever you are: In this case, Android automatically switched from Wi-Fi to 4G when the device was out of Wi-Fi reach – and switched back when Wi-Fi was available again. HSPA+ was also relatively fast in the area where I live – it usually hit about 5 Mbps down. The problem, however, was simply the amount of data that was consumed.

Turning the device on for the first time swallowed more than 20 MB right off the bat. Half an hour of web browsing and writing emails will easily cause you to consume 50 MB. Sure, you could also be tempted to watch YouTube, download apps or possibly watch a Netflix movie via HSPA+, but then you are looking at hundreds of MB per hour and you had better make sure you have the 3 GB per month plan for $35 (each additional GB is billed at $10). Internationally, by the way, each MB is billed at about $19.97 – roughly $20,450 per GB – so you want to be careful with that. There is no question that AT&T’s data access – and data access in general – is overpriced nationally and internationally. It is plain ridiculous how much carriers charge for a few GB of wireless data transfer.
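To put those rates into perspective, here is a back-of-envelope cost sketch; the plan numbers are the domestic AT&T rates quoted above, while the usage figures are only rough estimates:

```python
import math

# Rough monthly-bill estimate for the domestic plan quoted above:
# $35 for 3 GB, plus $10 for each additional (started) GB.
def monthly_cost(gb_used, base_gb=3, base_price=35, overage_per_gb=10):
    """Return the estimated monthly bill in dollars."""
    if gb_used <= base_gb:
        return base_price
    extra_gb = math.ceil(gb_used - base_gb)  # overage billed per started GB
    return base_price + extra_gb * overage_per_gb

# Half an hour of browsing/email (~50 MB) per day stays within the plan:
print(monthly_cost(30 * 50 / 1024))          # -> 35
# An hour of video per day at ~300 MB/hour is roughly 9 GB a month:
print(monthly_cost(9))                       # -> 95
```

At the quoted international rate of about $20,450 per GB, the same 9 GB would come to roughly $184,000 – which is why you want roaming data switched off.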

The upside of 4G access on such a tablet is that it is available when you want it to be, but it requires you, if you are on a budget, to be aware of when the tablet consumes 4G data and when it does not. If you are not on a subscription plan, there is a handy feature that simply cuts data transmission when your current plan is out of bandwidth; you can then purchase more. If you get a full-featured Android tablet, spring for the non-contract 3G/4G version.

The bottom line
Let’s be realistic. $550 is a lot of money for a toy, which a tablet really is for most of us. With heavy 4G data usage, it can easily cost you another $500 per year. I personally did not mind the weight of the Iconia and actually liked its substantial feel, but it is clearly a device you need to hold in both hands and its weight will cause some fatigue after a while.

All Iconia models are offered at the very bottom of their respective competitive price ranges and are worth a look, even if their hardware and software are a bit outdated and Acer is rumored to be offering Tegra 3 models soon. If you are wondering whether to buy a $200 tablet instead, keep in mind what you expect from your tablet: If you want full access to all or most Android apps, the value is in the platform and the additional expense for a device like the Iconia is a reasonable investment.

 Wolfgang Gruener in Products on November 21

Saturday, November 19, 2011

Microsoft Files Patent For A Data Center With A Spin, Literally


How trivial is the operation of a data center today? Does the data center itself, in its common form, still provide room for patents? Would supplying electricity from a wind turbine connected to the data center, in its simplest form, qualify for a patent? Apparently so.


Microsoft’s patent application for a wind-powered data center claims the rights to the invention of a data center that is not connected to an electrical grid, but to its own wind turbine. The turbine itself appears to be a generic model of a wind turbine that is described as a device that “includes blades mounted to the top of a tower that is at least partially hollow, the blades configured to rotate when the wind blows to generate the power.”

Further claims include descriptions of how servers are installed within racks, a battery system, as well as controllers that determine whether a turbine creates enough, not enough or too much power. In those cases, the system would either draw power from a battery and throttle the servers (if there is not enough power) or divert excess power to the battery, if too much power is generated. If there is any idea that could be called new – if we are willing to go that way – it would be the thought that, since the towers of wind turbines are hollow, they could be used as chimneys and leveraged to dissipate heat from the data center.
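The control scheme described in those claims boils down to a three-way decision per control interval. A minimal sketch; all names, units and the battery model are illustrative assumptions, not taken from the patent text:

```python
# Hypothetical sketch of the three-way power-balancing decision described
# in the patent claims; names and units are illustrative only.
def balance_power(generated_kw, demand_kw, battery):
    """Return the action for one control interval and update battery state."""
    if generated_kw >= demand_kw:
        # Enough (or too much) wind: divert any excess into storage.
        charge = min(generated_kw - demand_kw,
                     battery["capacity_kw"] - battery["level_kw"])
        battery["level_kw"] += charge
        return "divert_excess" if charge > 0 else "steady"
    shortfall = demand_kw - generated_kw
    if battery["level_kw"] >= shortfall:
        # Not enough wind, but stored power can cover the difference.
        battery["level_kw"] -= shortfall
        return "draw_from_battery"
    # Neither wind nor battery suffices: throttle the servers.
    return "throttle_servers"

battery = {"level_kw": 50, "capacity_kw": 200}
print(balance_power(500, 400, battery))  # -> divert_excess
print(balance_power(300, 340, battery))  # -> draw_from_battery
```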


The motivation for such a data center – which may not be such a far-fetched idea in the future, at least if greenhouse gas emissions truly affect climate change in a way that favors this power supply model – is somewhat obvious. In Microsoft’s words: “Computer data centers, that include network-connected computer servers that receive, process, store, and transmit data, utilize an immense amount of power to operate. Conventionally, therefore, computer data centers are connected to the power grid. As the amount of data stored on and transmitted over the Internet increases, however, more and more computer servers are utilized which is causing the amount of available power to become a scare resource and a resultant increase in the amount of carbon emitted to power servers.”

Makes sense to me. But does this idea call for a patent? I am not so sure. If someone else built a data center with an attached windmill, would Microsoft, if granted this patent, in fact demand license fees? Someone ought to stop this nonsense.


The conclusion in the patent could be considered amusing, were it not part of what appears to be a serious patent submission:
“This document describes various techniques for powering computer data centers using wind-powered generators. A data center may include network connected servers that are electrically connected to, and powered by, a wind-powered generator that generates power by converting the energy of wind into electricity used to power the data center. The wind-powered generator may include blades mounted on top of a hollow tower. When the wind blows, the blades rotate to convert the energy of wind into kinetic energy. The kinetic energy is then converted to electricity used to power the data center. Server containers, configured to hold the servers, may be mounted to an outer wall at the bottom of the tower to form a supportive base for the tower. In some embodiments the hollow tower of the wind-powered generator may be used as a chimney to cool the servers.

In some embodiments excess power generated by the wind-powered generator may be redistributed to an alternate source, such as a battery storage device. The excess power may then be drawn from the battery storage device, at a later time, to provide power to the data center when the wind-powered generator generates insufficient power for the data center. In other embodiments one or more of the servers may be selectively turned off or throttled down into a lower performing state when the wind-powered generator is generating insufficient power for the data center.”

My take? It seems as if every simple idea is considered to be a candidate for a patent today. It is somewhat obvious that society is screwing itself with this kind of behavior – the thought that you would need a patent to protect an idea that could be viewed as common sense is simply wrong.

Wolfgang Gruener in Business Science & Research on November 18

Firefox Android Native UI Debuts Sans Electrolysis: Mozilla Has Work To Do


Mozilla released the first native UI builds for Firefox Mobile on its Nightly channel. Nightly users will get the update next Tuesday, but the current browser version is far from being usable on some Android devices.
Mozilla recently amped up the expectations for its native UI version of Firefox for Android. Instead of the old XUL-based UI, Mozilla has been working on a native UI that is constructed from widgets for the location bar and the main content window, which promise a faster app startup, a more responsive UI as well as more efficient memory use.

(c) Mark Finkle

A huge change is Mozilla’s departure from the highly anticipated Electrolysis strategy, short e10s, which would have moved Firefox to a multi-process architecture. According to Mozilla developer Mark Finkle, however, e10s also resulted in substantial memory issues and created performance disadvantages. So Mozilla has changed its goals and is now running Gecko in a separate thread – and not in a separate process.

The first Firefox for Android 11 native UI nightly builds are far from usable browsers; they are, at best, developer software targeted at those who either have to work with early browser versions or are simply interested in what the next browser generation may look like.

We first ran the builds on a Honeycomb (Android 3.0) tablet and saw the browser in the raw. The build does not feature the recently released lean tablet UI and was surprisingly memory hungry. Compared to the default Android browser, which consumed somewhere between 76 and 97 MB of memory with one open tab (google.com), and Firefox 9 Beta (tablet UI), which ran on 71 to 92 MB, the native UI Nightly build asked for 148 to 160 MB of memory. The interface revealed plenty of rendering errors and slightly revised tabs, but benefitted, subjectively, from the asynchronous content rendering, as zooming in particular appeared to be much smoother than in the current beta. I am stressing that it appeared to be smoother, as the nightly build has a high tendency to crash and, on Honeycomb, refused to restart afterwards – an uninstall/reinstall was required after every crash.

The interface looked better on a vertical smartphone screen (Android 2.3), but lacked the slide screens to the right and left. The native UI is simply not ready for prime time yet and I would recommend that Nightly users remain cautious with the installation of this version at this time if they intend to use the browser for productive web browsing.

Ethan McKinney in Products on November 18

Friday, November 18, 2011

With Tablets Potentially Being Free, PC Makers Expected To Withdraw From Tablet Market


It is not much more than a rumor, but we believe there is some credibility that PC makers may not be willing to keep investing in a market that may have huge volumes but not deliver any profits. Will the tablet market remain an iPad market and simply birth additional, new Kindle and Nook markets?

Tablets are, just like smartphones, platform devices and not just a collection of hardware pieces. Given the simple fact that there cannot be an unlimited number of platforms, it appears that major PC vendors may leave the tablet market and focus on other PC opportunities, such as ultrabooks. Digitimes reports that upstream supply chain companies expect tablet PC makers to gradually phase out their tablets since they cannot expect any profits in this segment.

Apple iPad 2
Apple iPad 2

Apple is dominating the segment at this time and is predicted to continue doing so at least through 2013, while Amazon and Barnes&Noble are pushing into this segment with $199 and below tablets. The entire business model could be shifting to platform and services profits with hardware being given away for free. While that prediction is standing on thin ice, there is good reason to believe that tablets will need a supporting platform to survive. If Amazon and Barnes&Noble are, for example, able to leverage their service profits against the retail price of their tablets, it will be tough for other tablet makers (without a platform) to compete. The value is clearly in services and software, not in the hardware.

Interestingly enough, Digitimes also indicated that Apple’s iPad 2 is still seeing strong demand, but that overall sales were actually lower than those of the original iPad, which leads industry suppliers to believe that the interest in tablets could be fading. Business Insider recently quoted a Goldman Sachs report, which indicated that Apple may be coming under pricing pressure to support iPad 2 demand. Analyst Bill Shope mentioned that the iPad ramped much faster than any other Apple product before it (especially the iPod and the iPhone) and that it will need more platform support (in the form of iCloud and Siri) as well as a price drop to sustain its momentum. At the very least, he expects a $399, 8 GB version of the iPad (which may not be such a smart move, as 8 GB for apps may not be enough if there is no flash card expansion slot).

So, could tablets have been just a fad? What about all those predictions that more than 240 million tablets may be sold annually by 2015?

If you have followed our coverage, then it’s no secret to you that we just don’t buy into market forecasts that cover an evolving market four years into the future. Anyone could predict anything at this time. There is, in our opinion, a very high likelihood that tablets are just a temporary hype and possibly transitional devices on the way to a product that is much closer to a notebook than a tablet. The usage scenario of a tablet is casual computing and being a complementary device to your PC, at best. The industry typically refers to tablets as lean-back devices rather than lean-forward devices (notebooks). You consume content on a tablet; you don’t create it. An example would be editing a document on a tablet via Google Docs, which is a pure nightmare as you can’t control the display of the keyboard during certain tasks and the options to edit a document are very limited. I personally have given up trying to replace my notebook with a high-end tablet for content creation purposes.

Earlier today, market researchers from IHS predicted that fewer than 1 million ultrabooks will be sold this year, but more than 136 million by 2015. Sure, ultrabooks have had a more than problematic start, which was due to a lack of innovation and the simple thought that a thin notebook could be sold to the consumer for twice the price of a regular mainstream notebook. But the current crop of ultrabooks is not representative of what an ultrabook can be. There is much more to this segment and if PC vendors are able to exploit the opportunity to innovate, they will more than likely see a growth and profit opportunity.

Wolfgang Gruener in Business on November 17

Thursday, November 17, 2011

GPU-Accelerated Windows For Windows 8?

Microsoft has just been granted a patent that describes a compositing desktop window manager (CDWM) that uses GPU-acceleration as the preferred method to render windows. Could this CDWM debut with Windows 8?

Microsoft’s patent application for a “compositing desktop window manager” was filed in November of last year, but dates back to another patent with the same title that was granted about a year ago (#7,839,419) and filed back in October of 2003 – long before we considered general-purpose GPU computing an immediate opportunity. However, even back then Microsoft envisioned a technology that “draws the window to a buffer memory for future reference, and takes advantage of advanced graphics hardware and visual effects to render windows based on content on which they are drawn.” In fact, some references in the patent indicate that Microsoft may have intended to use this technology for Windows Vista’s graphics-heavy Aero Glass UI.

Today, leveraging GPU acceleration for drawing a desktop surface and windows is a very important trend as software makers are trying to create richer interfaces without the restrictions of a legacy graphical subsystem. The latest revision of the patent application, which was approved as a patent (#8,059,137), enables application software to directly access the CDWM via an API, which connects the application to a subsystem programming interface as well as an interface object manager and theme manager. A legacy subsystem is provided as a fallback option.

The CDWM is tied to the unified compositing engine (UCE), which acts as communication module between the CDWM and a 3D graphics interface, such as OpenGL or Direct3D. The patent further explains the hybrid-display of windows where the main content may or may not be delivered via legacy (non-accelerated) sources, while the window itself will be entirely GPU-accelerated and could include a texture that is applied to a 2D or 3D mesh. Microsoft explains that a rich interface would feature “advanced textures, lighting, and 3D transformations.”

Could Microsoft be using such a technology for Windows 8? The timing suggests that the technology is not explicitly tied to Windows 8 and the patent mentions window ideas that are long gone – such as window shapes that combine different geometric shapes such as rectangles and ovals.

However, the deployment requirements of Windows 8 surely create a business case for GPU-accelerated windows in the new operating system. It could help Microsoft lift the performance of the OS, especially on ARM systems. The need for GPU acceleration has, since the original filing of the patent, expanded to the content of windows as well, and it is likely that Microsoft has adjusted this technology accordingly. So, even if the window frame is less important today and content has priority, GPU acceleration would be a beneficial feature for Windows.

Google recently revealed that it is also working on a GPU-accelerated windows manager called “Aura”.

 Wolfgang Gruener in Business Products on November 17

Electric Cars: Close, But Not There Yet

I’ve been watching electric cars for some time and almost fell in love with a Tesla, but concluded a few months ago that it made little sense. Recently, I had a chance to revisit this topic by living through the experiences of a friend who had bought an electric car. I still don’t think they are ready for prime time.

The other day I was heading out for an outing with a friend who drives an electric car. He drives a Nissan Leaf, which is one of the more popular and more affordable examples in this market, and he was going to meet me at the location in his new car. About the time I was leaving, he called with a change of plans: his car’s battery was nearly dead and he needed a ride. On the ride over and back, I got a better sense of why the current generation of electric cars isn’t for most drivers yet.

2011 Nissan Leaf
2011 Nissan Leaf

Non-Intuitive Range
We’ve come to know that gasoline cars get better mileage when driven at freeway speeds and that this mileage decreases dramatically in city and especially high-traffic driving. The poor mileage is because the engine isn’t running at its optimal speed and is just wasting power when stopping, starting, and idling. But an electric vehicle is nearly the exact opposite. It uses no power at idle (except perhaps to run the air conditioner and internal electric systems) and the motor actually gets less efficient the faster the vehicle goes.

This is what happened to my friend. His long commute is typically bumper-to-bumper traffic; that day there wasn’t any, and he arrived with a nearly dead battery. This implies that the folks who will do best with electrics, at least with respect to range, are those who have short or moderate commutes in heavy traffic or live in cities. That brings up charging.

The Charging Nightmare
Apparently many charging stations have been under-specified and run on anemic breakers. In my friend’s case, this means two cars can charge from the 4-station pole, but when an electric bike plugs in, it blows the breaker and everyone arrives to undercharged batteries. If you live in a city, running an extension cord out to a car parked at the sidewalk is probably not a reasonable option, and parking garages likely don’t have charging stations yet. They don’t seem to have them here, anyway, so finding a place to plug in can be a nightmare. I have yet to find a gas station with a metered plug and I’ve noticed that many of the store-based charging systems either have non-electric cars parked in the related spaces or have damaged charging stations.
So, it is critical to make sure there is a place to reliably – key word being “reliably” – charge your new electric beast.

Hybrids or an Electric Bike May be a Better Choice
In a car, the electric hybrid approach just makes more sense to me. You can drive short distances on electric power alone, but you can use the engine when needed. That way you don’t have to worry about running out of electricity and you can always find a gas station. The Chevy Volt is probably the best combination of price and capability currently on the market in an electric hybrid (it has a range of up to 35 miles on electricity alone). However, I decided to go the electric bike route first and bought two E+ Electric bikes for about $10,000. They have about 20 miles of range and are fine for a nice ride or a quick errand. Even the most expensive and powerful electric bicycle I’ve found, the Optibike, is still under $15,000, but it was a bit too rich for my taste. There are some interesting electric motorcycles by Brammo (also under $15,000): I’ve been tempted, but I have my eye on a Can-Am Spyder Hybrid myself (for some reason I want one painted like the Batcycle).

In any case, if you are thinking about an electric car, you may want to find someone in your area that has one to chat with and start with an electric bicycle or motorcycle instead. That approach could save you a ton of money and pain and you may find, as I did, that the cars just don’t make sense yet.

 Rob Enderle in Business Test Drives on November 17

Chrome Matches Firefox Market Share For The First Time


In the first half of November, Chrome continued to gain market share, while Firefox’s losses accelerated again. Microsoft’s marketing campaign to support Internet Explorer has shown some effect, but is weakening again. To stay relevant, Mozilla will now have to deliver new features such as bookmark migration, silent updates, the Android tablet UI, the home tab app and the new tab page on time.

As expected, Chrome will pass Firefox as the world’s #2 browser this month, according to data provided by StatCounter. As of November 15, Firefox stands at 25.47% market share (-0.92 points from October) and Chrome at 25.45% (+0.46 points from October). Firefox’s overall market share loss appears to be due to substantial declines in Asia, Europe, South America and, more recently, North America and Oceania as well. Firefox remains strong in Africa, where the software recently surpassed IE as the most popular browser. We expect this trend to continue in the second half of this month.


Chrome, however, has already passed Firefox in Asia, where it leads 28.76% to 24.19%. It surpassed both Firefox and IE in South America with a share of 41.48% over 34.04% (IE) and 22.53% (Firefox) for the first half of November. In Europe, Firefox and IE are still in a dead heat, with Firefox holding a slight advantage this month (32.72% versus 32.67%), while Chrome is at 24.18%. North America remains the weakest market for Chrome, where it has only an 18.95% share. Firefox is ahead with 20.72%, while IE has a commanding lead of 48.86%, according to StatCounter.

If the current trend holds up, then Chrome will, in fact, pass Firefox in market share for the first time, at least according to StatCounter data. There isn’t much that Google has to do at this point, it seems; its current strategy of marketing Chrome simply via its website is enough to move users, especially from Firefox, to Chrome. Firefox has very little opportunity to win back users or even gain users from Chrome, as it lacks the very basic tools that could simplify such a move – including Chrome-to-Firefox bookmark import tools, which are now not expected to be available until version 11 of the browser.

If Mozilla is able to deliver the migration tools in version 11, silent updates in version 10, and a new tab page and home tab layout for version 11 – and if tablet usage keeps increasing steadily – we believe that Firefox could become a much more appealing browser to the general user again. We are especially impressed by the tablet UI of the browser as well as Firefox’s unique capability to sync data such as open tabs and bookmarks nearly live across various platforms (desktop, mobile, tablet), a feature Chrome and IE currently lack. Google’s development of a Chrome version for Android has apparently hit some roadblocks and we are not aware of any possible release dates at this time. It will be critical for Mozilla to deliver tablet and smartphone browser features well ahead of its rivals to take advantage of a growing opportunity. Once Firefox offers convenient importing of Chrome bookmarks as well as a compelling new tab page/home tab app that provides a consistent experience across all product form factors, it is more than likely that more people will give Firefox a spin again.

In a best case, we believe that Mozilla could see its market share loss flatten toward the end of Q1 2012 and, if critical features are delivered on time and Boot to Gecko has a promising launch, could possibly grow its market share again in Q3 2012, which would directly impact Chrome’s growth opportunity. Given Mozilla’s difficult competitive situation – tangled up in the middle of the corporate interests of Google and Microsoft – a turnaround is not something that can be achieved immediately. Changing the current pace into a positive trend is rather unlikely at this time. The history of browser market share trends indicates that strategy and significant feature changes usually take about 4 to 6 months to show their impact in the market. Our chart below assumes a best case scenario for Mozilla.


At this time, however, Firefox’s market share losses are accelerating and are clearly at a pace that needs to be addressed aggressively. Over the past six months, Firefox lost 13.04% of its market share (IE: -7.29%; Chrome: +31.46%) and Firefox is, for the first time, losing market share faster than IE (-3.82 points over the past six months versus -3.20 points).
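The absolute and relative figures quoted here line up; a quick sanity check using only StatCounter’s numbers as reported above:

```python
# A 3.82-point drop that equals a 13.04% relative loss implies Firefox
# started the six-month window at about 29.3% share.
point_drop = 3.82                      # percentage points lost in six months
relative_loss = 0.1304                 # 13.04% of its own share
start_share = point_drop / relative_loss
end_share = start_share - point_drop
print(round(start_share, 2), round(end_share, 2))  # -> 29.29 25.47
```

The implied end point of 25.47% matches the mid-November figure StatCounter reports.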


IE’s gains this month may be a fluke due to a recently launched ad campaign. The last time IE gained monthly market share was in June of 2010, according to StatCounter. In recent months, Microsoft has largely abandoned its focus on the overall browser market and is highlighting IE’s impact on Windows 7. For the release of Windows 8, it will be much more critical for the company to have a substantial share of browser users on IE9 due to the transition to HTML5 apps; IE8 and older browsers that do not support HTML5 are largely irrelevant to Microsoft in that respect. Market share on IE8 and older would be important for Microsoft to attract browser users to its Bing search engine and keep them away from Google, as we know that Chrome users are virtually locked in to Google’s search engine and guarantee advertising revenues for Google.

We haven’t heard much about Bing recently with the exception of a Bing-ified version of Firefox, which means that Microsoft could be changing its strategy – or it could simply try to draw attention away from its older browser versions.

Wolfgang Gruener in Business on November 16

Wednesday, November 16, 2011

Photoshop Touch: Adobe Validates Tablets


Adobe today released its Touch Apps to Android Market and gives Android what Linux has been waiting for since 1991: Photoshop.

Photoshop is not necessarily Photoshop. The $10 Photoshop Touch that is currently offered through Android Market will not replace Photoshop CS5.5 for the PC or Mac, which costs at least $200 for an upgrade or $699 for the full version. The tablet version, which is offered only for Android 3.1 devices, caters to the creative casual user and does not (and cannot) offer the same precision editing tools as the desktop Photoshop version. Consider it a playful version of Photoshop and an app that goes beyond the already available Photoshop Express.

Adobe Touch Apps also include Ideas (vector based drawings), Kuler (color themes), Proto (sketching and wireframing app), Debut (presentation) and Collage (combine photos, drawings and text).


Photoshop Touch includes some basic Photoshop tools that make the software the most powerful image editing app available for Android today, including the ability to use layers as well as to extract objects from a background. There is a range of effects available to spruce up your image, in addition to manual adjustment features such as curves, but tablet users have to be realistic about the fact that these are mainly features for the time you are on the road, not for prepping images for publishing or printing. Instead, Photoshop Touch provides the usual suspects of social features, including sharing via Facebook or email.

Photoshop Touch is available only for Android devices, for $10, and there is no such app currently available for the iPhone (there is Photoshop Express on iTunes, however). In fact, this is the first time Adobe has expanded the Photoshop brand to another platform since Photoshop 1.0 was released for the Mac in 1990 and Photoshop 2.5.1 for Windows in 1993.

For Adobe, this is a big deal as the company carefully maintains and expands its most valuable brand; it marks a first major move to earn money with a product that does not run on the desktop but clearly connects with its traditional products. For tablets, it is a major sign of confidence from a big application maker, as Adobe has generally been viewed as a critical software provider for a platform’s success. For Android, it is an even bigger deal as Android tablets now get a creative software package that would traditionally cater to Apple users. Even if it is a casual software package, a Photoshop app that is unique to Android at this time could be a major selling point for Google’s Android OS.

Wolfgang Gruener in Products on November 15

IBM Confirms 100 Petaflop Supercomputer Design


IBM announced BlueGene/Q, its next-generation supercomputer architecture, which will debut in the Sequoia system next year and eventually scale to more than 100 PFlops – more than ten times the performance of today’s fastest supercomputer.

IBM BlueGene/Q
IBM BlueGene/Q

The final version of Sequoia, deployed at Lawrence Livermore National Laboratory (LLNL), is expected to reach about 20 PFlops next year and become one of the world’s fastest supercomputers as well as the world’s most efficient supercomputer, with a performance of 2 GFlops per watt. The Sequoia system will integrate 1,572,864 processing cores (98,304 16-core PowerPC A2 processors) in 96 racks.

However, BlueGene/Q is especially interesting as IBM today confirmed that the architecture is capable of more than 100 PFlops, which was described about two months ago in a patent application submitted to the USPTO. According to that filing, a massive document with 649 pages and 2,263 claims, a 100 PFlops BlueGene/Q system could consist of 1,024 compute node ASICs per rack in 512 racks – a total of 524,288 nodes and 8,388,608 processing cores.

According to IBM, each processor consumes 30 watts of power, which puts Sequoia’s power consumption, including storage and cooling requirements, in the neighborhood of about 6 to 8 MW, while a 100 PFlops system would easily exceed 30 MW. In comparison, today’s fastest supercomputer, Japan’s K Computer, delivers 10 PFlops via 705,024 Sparc64 VIIIfx processors for about 12.7 MW.
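A quick back-of-the-envelope calculation shows how these power figures fit together; the cooling/storage overhead is an assumption implied by the article’s 6 to 8 MW range, not an IBM-published number:

```python
# Back-of-the-envelope power check using the article's figures.
watts_per_chip = 30
sequoia_chips = 98_304

# Processors alone draw roughly 2.95 MW; storage and cooling
# (assumed overhead) push the total into the quoted 6-8 MW range.
cpu_power_mw = sequoia_chips * watts_per_chip / 1e6

# Efficiency comparison with the K Computer (10 PFlops at 12.7 MW):
k_gflops_per_watt = 10e15 / 12.7e6 / 1e9   # roughly 0.79 GFlops/W
sequoia_gflops_per_watt = 2.0              # figure quoted for Sequoia

print(round(cpu_power_mw, 2), round(k_gflops_per_watt, 2))
```

By this rough measure, Sequoia would be about two and a half times as power-efficient as the K Computer.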

Wolfgang Gruener in Business Products on November 15

Tuesday, November 15, 2011

Firefox 10: Can Mozilla Afford To Miss Silent Updates?


Mozilla has released Firefox 9 Beta, which is scheduled to ship as final just before Christmas, as well as Firefox 10 Aurora, the developer version of Firefox. But even with six new versions within one year, Mozilla may not have accomplished what the rapid release process promised: the browser delivered substantial memory improvements this year, but it will still miss some features it so desperately needs to compete with Chrome.


Mozilla announced the availability of Firefox 10 Aurora a bit earlier today, and those who follow Firefox development may have been surprised by the feature set that is currently laid out. The details are mentioned here, and there are plenty of additions in this release, albeit not the kind the average user gets excited about. The explanation is that Firefox 10 focuses on HTML5 enhancements, including the HTML5 Visibility API and 3D Transforms, as well as WebGL anti-aliasing (which is not part of HTML5). Despite those 40 or so additions and modifications, I am not sure it will be enough.

What surprised me quite a bit is that silent updates are not part of the list. Perhaps it was just an accidental omission, but, strangely enough, silent updates have been marked as “at risk” for deployment in the release tracker for Firefox. One critical feature – the removal of the security dialog in Windows – depends on the resolution of a critical bug, and the actual background updates are marked “at risk” as well. As controversial as silent updates are, they have clearly worked very well for Google: they enabled Chrome to transition more than 90% of its user base from one version to the next within a week, while Mozilla and Microsoft rely on users to actively initiate an update or grant permission for it. Common sense suggests that silent updates are more convenient for browser users and would help keep them with Mozilla.

Missing silent updates and pushing them to yet another version is a big risk for Mozilla at a time when Chrome is about to overtake Firefox in market share. Mozilla simply can’t afford such delays; it needs to stay nimble and execute perfectly if it wants to compete effectively.

For the first 13 days of this month, Chrome has passed Firefox market share at 25.47% versus 25.32%, according to StatCounter. Over the past weekend, Chrome climbed to 27.22% market share, while Firefox dropped as low as 25.05%. Over the past weeks, Mozilla has lost its #2 position in Asia to Chrome, it is about to lose #2 to Chrome in North America and Oceania, while Chrome has climbed to become the #1 browser in South America. There isn’t much Mozilla can do to change this situation now, but there needs to be a strategic feature plan and those features will need to arrive in time.

The current problems do not end with Firefox 10 and silent updates. It appears that Firefox 11 will get the much anticipated new tab page, but Mozilla just listed the home tab as well as Chrome settings migration as “at risk”. Firefox 11 is scheduled for release on March 13, 2012 – but by that time, Mozilla may have fallen into the 22% neighborhood and Chrome may be at 28%, if the current trend holds up.
What do you think? Can Mozilla afford to miss those features?
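The 22% and 28% figures above are roughly what a naive linear extrapolation of the quoted StatCounter numbers yields; a minimal sketch, where the monthly deltas are assumptions inferred from the article’s figures rather than measured rates:

```python
# Illustrative linear extrapolation of the StatCounter shares quoted above.
# The monthly deltas are assumptions, not measured data.
firefox_share = 25.05           # mid-November 2011 low
chrome_share = 27.22            # mid-November 2011 high
firefox_monthly_delta = -0.75   # assumed loss per month
chrome_monthly_delta = +0.25    # assumed gain per month
months_until_firefox_11 = 4     # mid-November to the March 13, 2012 release

firefox_march = firefox_share + firefox_monthly_delta * months_until_firefox_11
chrome_march = chrome_share + chrome_monthly_delta * months_until_firefox_11
print(round(firefox_march, 2), round(chrome_march, 2))
```

Under those assumed rates, Firefox lands just above 22% and Chrome just above 28% by mid-March, consistent with the trend described in the article.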

Daniel Bailey in Products on November 14