Saturday, January 9, 2010

Google Takes Search Real-Time


Gradually, over the past decade, Google has shrunk the lag between content appearing on the Web and its being indexed from months to mere minutes. On Monday the search giant upped the ante in time-sensitive search, saying that within a few days it will offer search results--including headlines, blogs, tweets, and feeds from Facebook and MySpace--that are just seconds old.

At the same press event, the company unveiled new search features for mobile devices. These include a prototype visual search technology, which allows snapshots of real objects, like signs and buildings, to be used as search "terms." It also tweaked its geographic search--your GPS-derived position now causes Google to offer different search results based on location. For example: if you start a search with the letters "R" and "E" in Boston, the service will suggest various "Red Sox" search results, while the same two letters typed in San Francisco suggest the retailer REI.
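The location-biased suggestion behavior described above can be sketched as a simple lookup keyed by the user's city; the suggestion tables and queries here are invented for illustration, not Google's actual ranking:

```python
# Hypothetical sketch: suggestions for the same prefix differ by city.
# The tables below are invented for illustration.
SUGGESTIONS_BY_CITY = {
    "Boston": ["red sox", "red sox tickets", "redline mbta"],
    "San Francisco": ["rei", "rei san francisco", "restaurants sf"],
}

def suggest(prefix, city):
    """Return completions for `prefix`, biased by the user's location."""
    prefix = prefix.lower()
    return [q for q in SUGGESTIONS_BY_CITY.get(city, []) if q.startswith(prefix)]

print(suggest("re", "Boston"))         # Red Sox-related completions
print(suggest("re", "San Francisco"))  # REI-related completions
```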

However, Google clearly sees up-to-the-second search results as its most important new offering. The search giant has recently come under unfamiliar pressure from Microsoft's revamped search engine, Bing, which also provides some "real-time" search results.

"This is the first time, ever, that any search engine has integrated the real-time Web into the results page," Amit Singhal, a Google fellow, said yesterday in making the announcement.

"Information is being created at a pace I have never seen before--and in this environment, seconds matter," Singhal added. "I cannot emphasize enough--relevance is the foundation of this product. It is relevance, relevance, relevance. There is so much info being generated out there, getting you relevant information is the key to success of a product like this."

The arrival of Twitter, in particular, has focused the attention of many Internet companies on the value of real-time information on the Web. By tapping into customers' interest in time-sensitive information--from Twitter posts to breaking news stories--Google stands to build its audience and, ultimately, its advertising revenues.

The new feature will be available when a user clicks the "Latest results" tab on Google searches. It will be available immediately in English, but will soon be expanded to other languages, the company says. Searchers will see updates from popular social sites such as Twitter and FriendFeed, and headlines from news sites. Visiting Google Trends and clicking on a "hot topic" will reveal a search results page showing the most popular real-time information.

Other search engines are working to make their results just as fresh. Bing includes some recent results in its search returns, and the newcomer Cuil launched streaming results last month. "It is a good thing to see Google innovate on their search page thanks to competition brought on by other search engines like Bing and Cuil," said Seval Oz Ozveren, VP for business development at Cuil.

The visual search tool, released in Google Labs, lets users take a photo of a landmark or a store sign, for example, and then searches billions of images for matches, and for Web pages providing relevant information. However, this feature will not include face-recognition software until Google devises a system to protect privacy. "We have decided to delay that until we have more safeguards in place," says Vic Gundotra, Google's vice president for engineering.

Dan Weld, a computer scientist and search researcher at the University of Washington, tested the visual search technology and pronounced it "pretty darn cool." He says that it recognized a can of Diet Dr Pepper and found relevant search returns. And, after initially drawing a blank on a bottle of Lipton Iced Tea, it recognized the bottle in a closer shot and delivered good search results.

Weld suggests that the technology works by doing optical-character recognition on the words, rather than by matching the label images themselves, since at one point it caught the letters "API" from a label and gave him search results for "application programming interface". The technology also recognized the Seattle Space Needle and gave him tourist websites. "Not a formal evaluation, but it's pretty neat," he says. "And it seems like it has the potential to be a huge opportunity for them if it takes off."

With the convergence of billions of mobile networked devices, powerful cloud computing resources, and ubiquitous sensors like cameras and GPS chips, "it could be that we are on the cusp of a new computing era," Gundotra added. "Take the camera and connect it to the cloud, it becomes an eye. The microphone connected to the cloud becomes an ear. Search by sight, search by location, search by voice."

Making Money with Social Media


In retrospect, 2009 may be viewed as the year "social media" came of age: Facebook passed 350 million active users, Oprah made Twitter mainstream, and LinkedIn introduced a service to help recruiting agencies search the site for job candidates. But using microblogs, photoblogs, user-generated content, and even traditional blogs to interact with customers takes time and money, and some companies still question whether all that effort is doing them any good. So how does a company not only measure the results of its social media efforts but also effectively manage them?

Early in December, Social Agency, a five-person startup based in Austin, TX, launched a Web-based software package called Spredfast that helps companies manage their social media campaigns. The software not only measures audience size and engagement but also allows coordinated planning and automated posting across multiple social media platforms.

Specifically, the Web-based software counts how many people view a company's Twitter, LinkedIn, Facebook, YouTube, and Flickr updates, as well as posts managed by several popular blogging platforms, such as Movable Type, WordPress, Blogger, Lotus Live, and Drupal. It also measures how the audience is interacting with all this content--for instance, how much they are commenting on posts, clicking on links, or retweeting updates.
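The kind of tally described above can be illustrated with a minimal sketch; the event records and platform names here are invented, and this is not Spredfast's actual data model:

```python
# Minimal sketch of per-platform audience and engagement counting.
# The event stream below is invented for illustration.
from collections import Counter

events = [
    {"platform": "twitter",  "type": "view"},
    {"platform": "twitter",  "type": "retweet"},
    {"platform": "facebook", "type": "view"},
    {"platform": "facebook", "type": "comment"},
    {"platform": "twitter",  "type": "click"},
]

# Views approximate audience size; everything else counts as engagement.
views = Counter(e["platform"] for e in events if e["type"] == "view")
interactions = Counter(e["platform"] for e in events if e["type"] != "view")

print(views)         # audience per platform
print(interactions)  # comments, clicks, retweets per platform
```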

The goal, says Social Agency cofounder Scott McCaskill, is to let companies see "whether all the time put into doing those things is really helping build brand or product awareness, which kinds of content are most successful, what days and even times of day result in the most traffic or new followers/friends."

A free version allows a company to manage a single identity or "voice" across each platform. Paid versions let companies coordinate multiple users and voices, and provide a longer data history. McCaskill says the software has had the most success with units of large companies and marketing agencies.

Media metrics: With Spredfast, companies can evaluate how people read, pass along, or comment on content on social media websites over time.
Credit: Social Agency

Spredfast gives companies a way to plan and manage content deployment. For instance, users can write blog entries, tweets, or Facebook updates ahead of time and then schedule when they will be posted. A store that might offer an online coupon code or one-day sale could, with Spredfast, have Twitter push that code out several times a day to increase the number of site visitors. The software's metrics, McCaskill says, let marketers figure out the best times to post updates. Spredfast also makes it easy for them to test different strategies.
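The scheduling workflow just described can be sketched roughly as follows; the helper function and coupon message are invented for illustration and do not reflect Spredfast's real API:

```python
# Hypothetical sketch of scheduling the same coupon post several times a day.
import datetime

def schedule_posts(message, times):
    """Queue the same message for several posting times, earliest first."""
    return [(t, message) for t in sorted(times)]

sale_day = datetime.date(2009, 12, 2)
times = [datetime.datetime.combine(sale_day, datetime.time(h)) for h in (17, 9, 13)]
queue = schedule_posts("One-day sale! Use code SAVE10", times)

for when, msg in queue:
    print(when.strftime("%H:%M"), msg)  # posts go out morning, midday, evening
```

Comparing traffic across posting times in such a queue is what lets marketers find the best time of day to publish.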

The company launched a year ago as a maker of custom Facebook applications. When Facebook redesigned its home page, says McCaskill, Social Agency's business model was effectively torpedoed. As part of its sales strategy, the company had spent a lot of time helping clients plan their social media strategies. So the founders retooled and used their expertise to start building Spredfast about nine months ago. The software launched in private beta in September, public beta in October, and had its "official" launch on December 2.

Social Agency plans to introduce a feature by the end of January that will help users design a social media campaign based on their objectives. McCaskill says that Spredfast will most likely present users with a list of common marketing goals that they can check off. The software will suggest a template for a campaign based on what's worked best for clients with similar goals.

Upgrading the Laptop's Touch Pad

New software promises to let laptop users accomplish complicated tasks without lifting their fingers from the touch pad. The software, called Scrybe, is made by Synaptics, a Santa Clara, CA, company that already provides touch pads for 70 percent of notebooks on the market and 90 percent of their smaller netbook cousins. The software is currently available to a limited number of beta users.

Command control: The company that makes most laptop touch pads has developed software that lets users accomplish complicated tasks with the touch pad alone, as shown above.
Credit: Synaptics

With Scrybe, users can carry out tasks, such as searching Wikipedia, by tracing one of a number of predetermined shapes on a touch pad. They can also create custom gestures for specific tasks.

Ted Theocheung, head of the Scrybe program and Synaptics's PC and digital home business unit, says the software is built around the idea of "gestural workflows," which accomplish fairly complex tasks, such as conducting online research, shopping, or using multimedia, with the touch pad alone. Theocheung says that gestures can eliminate the need to type and shorten the number of steps needed to complete a task. Some laptops already feature simpler multifinger gestural controls, such as two-finger scrolling.

A user initiates Scrybe by tapping three fingers against the touch pad. This activates a mode in which the user can draw commands with a single finger that set off strings of actions. For example, a "W" opens Wikipedia and searches for a phrase the user highlighted previously. When the user is ready to return to normal pointing, another three-fingered tap exits Scrybe's command mode.
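The mode-switching behavior described above can be sketched as a small state machine; the class, gesture names, and actions here are invented for illustration and are not Synaptics's actual software:

```python
# Hypothetical sketch: a three-finger tap toggles command mode; in command
# mode, a drawn symbol is looked up and dispatched to an action.
class GesturePad:
    def __init__(self, commands):
        self.commands = commands       # symbol -> action name
        self.command_mode = False

    def tap(self, fingers):
        if fingers == 3:               # three-finger tap toggles command mode
            self.command_mode = not self.command_mode

    def draw(self, symbol):
        """In command mode, a drawn symbol fires its mapped action."""
        if self.command_mode and symbol in self.commands:
            return self.commands[symbol]
        return None                    # normal pointing: no command fires

pad = GesturePad({"W": "search_wikipedia"})
pad.tap(3)
print(pad.draw("W"))   # command mode: "W" triggers the Wikipedia search
pad.tap(3)
print(pad.draw("W"))   # back to normal pointing: nothing fires
```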

"I think there's definitely value in being able to maintain focus on the track pad," says Gabriel White, interaction design director at Punchcut, a user-interaction design firm based in San Francisco. White notes that the Synaptics software includes multitouch gestures, but suggests that many aspects of Scrybe's gestural workflows are similar to keyboard shortcuts--useful and appealing to an advanced user, but likely to overwhelm a more casual user.

More complex gestural interactions are possible because of the underlying technology of touch pads, says Theocheung. Newer touch pads use "image sensors" that gather "pixels" of touch data from the pad and use that to build up an image of how the user is in contact with the device. "Some of the Scrybe technology has been in our labs a long time, but we needed these new sensors to make it a reality," Theocheung says.

Those who own older devices can use a simpler version of Scrybe that lacks multitouch but still supports command symbols drawn with a single finger. Synaptics will soon include Scrybe in the software packages it delivers to manufacturers, which then sell the software along with new laptops.

However, White says the success of gestural interfaces may depend on developing a vocabulary that can be transferred from one product to another. "If you give someone a touchscreen phone, they immediately start doing the gestures they have learned from iPhone," he says.

The Future of Human Spaceflight


The International Space Station (ISS) is one of the most complex and expensive engineering projects ever undertaken. When it is completed in 2011, it will have cost nearly $100 billion. And then, just five years later, the space station will be destroyed when NASA deliberately takes it out of orbit and plunges it into Earth's atmosphere.

That, at least, is NASA's current plan. The agency would like to keep the station running, but funding for it is projected only through 2015, much to the consternation of researchers who are just beginning to use it and international partners who have invested billions of dollars in the project. Extending the life of the station would cost $2 billion to $3 billion a year. Even "deorbiting" it--dumping its remains safely into the ocean--will not be cheap, costing at least $2 billion.

The 2015 deadline means that after decades of largely directionless space policy, Congress will be forced to make at least one clear decision: it must allocate funds for either the space station's continued operation or its destruction. And that is just one of a number of urgent issues facing the country's human spaceflight program. The space shuttle is due to be retired by late 2010 or early 2011, leaving NASA without a means of sending astronauts anywhere for several years. And the key elements of NASA's exploration program, the Ares I rocket that will launch astronauts into orbit and the Orion capsule that will ferry them around in space, are several years behind schedule.

In October, the Augustine Committee, a panel chartered by the White House and chaired by former Lockheed Martin CEO Norman Augustine, issued its report on the future of space travel. The committee examined NASA's plans and explored alternatives. Much of the report discussed the merits of different destinations in space and the rocket and spacecraft technologies that could be used to reach those destinations. But embedded in the report is a rationale for why there should be a human spaceflight program at all. "The Committee concluded that the ultimate goal of human exploration is to chart a path for human expansion into the solar system," it states.

Over the years, NASA and space advocates have put forward many reasons to justify sending astronauts into space. They have garnered support by offering something for everybody, especially the military and scientific communities; scientific progress, strategic superiority, and international prestige have been foremost among the promised benefits. On closer inspection, though, these justifications don't hold up or are no longer relevant. For example, robotic missions are increasingly capable of scientific work in space, and they cost far less than human crews. Satellites launched on expendable boosters allowed the United States to achieve strategic dominance in space. And Cold War motives disappeared with the collapse of the Soviet Union.

Consequently, some have concluded that there is no longer any reason for human space exploration. A longtime critic of human spaceflight was the late James Van Allen, who in 1958 made the first major scientific discovery of the space age: the radiation belts around Earth that bear his name. In a 2004 essay, Van Allen wondered whether robotic spacecraft had made human spaceflight "obsolete." "At the end of the day," he wrote, "I ask myself whether the huge national commitment of technical talent to human spaceflight and the ever-present potential for the loss of precious human life are really justifiable."

But for most of the engineers and astronauts involved in the space program, astronauts can never be rendered obsolete by robots, because human spaceflight is an end in itself. They share the committee's belief that the purpose of these manned missions is to allow people to expand into, and ultimately settle, outer space.

For taxpayers who may well consider that prospect a pipe dream or the stuff of science fiction, the question is why their money should be spent to support it. The argument for funding human space exploration becomes similar to the argument for funding fundamental research: that doing so sometimes pays off big, usually in unexpected ways. By definition, high-risk ventures such as space exploration or curiosity-driven science seem unlikely to succeed and have unpredictable outcomes, but just such ventures have led to many inventions and discoveries with vast economic and historic significance.

Those who want a consistent long-term policy must reconcile their agendas, either supporting the rationale of settling space or coming up with an even better unifying purpose of their own. This must happen soon, or NASA's human space program will sputter to a halt. The committee put it bluntly: "The U.S. human spaceflight program appears to be on an unsustainable trajectory."

That has been true for some time. In early 2004, President Bush unveiled his strategy for continuing the U.S. space program. Key milestones included completing the ISS and retiring the space shuttle by 2010, developing what would become known as the Orion and Ares I by 2014, and returning humans to the moon by 2020, with long-term but undefined plans beyond that for human missions to Mars.

But Bush failed to provide a clear, unifying rationale for these plans, and they never received full funding. Under a constrained budget, the projects outlined by Bush will take years longer than originally planned. An example is the Ares V heavy-lift rocket needed for human missions to the moon. The current plan calls for it to be ready in the late 2010s, but the committee found that it could not be completed before the late 2020s--and even then there would be no money to develop the necessary lander spacecraft.

Using the Augustine Committee's rationale, however, we can make a reasonable plan based on the fundamental goal of human expansion into the solar system. With the goal of the space program clarified, money can be better spent and performance can be measured in concrete terms; Congress is far more likely to provide sufficient funding over the long term if it can see along the way that judiciously spent money is yielding tangible results. One of the first, and easiest, decisions to make is to extend the life of the ISS until 2020. If people are going to live and work in space for prolonged periods, we must test technologies and evaluate human performance under those conditions, and the ISS would be the ideal laboratory. Moreover, keeping the station operating will preserve an important international partnership for future missions.

One of the challenges in extending the life of the space station is that once the shuttle is retired, the Russian Soyuz spacecraft will be the only means of transporting crews to and from orbit until Ares I and Orion are ready, theoretically in 2015 (the committee believes that 2017 is more likely). The Augustine report suggests that NASA should get out of the business of shuttling astronauts back and forth and let the commercial sector provide transport to the station. The hope is that companies, serving NASA and other customers (such as space tourists and even other governments), can replace the shuttle sooner and at lower cost than NASA could, freeing up money for exploration.

The report also strongly endorses technology that NASA has largely overlooked to date: in-space refueling. With that capability, we wouldn't have to develop extremely expensive rockets, like the Ares V, that would be large enough to carry all the propellant needed for a trip to the moon. Fuel tanks--and thus the rockets themselves--could be smaller. Commercial operators could transport propellant and even maintain in-orbit fuel depots. The necessary technologies, the committee found, could be demonstrated in space within a few years.

If America's space community can't agree on this approach and thus secure the needed funding, the Augustine Committee concludes, it would be better to stop sending humans into space rather than wasting money and perhaps lives on a program that has no chance of success: "The human spaceflight program ... is at a tipping point where either additional funds must be provided or the exploration program first instituted by President Kennedy must be abandoned, at least for the time being."

Jeff Foust is the editor and publisher of The Space Review.

Bringing Color to E-Readers

One of the hot topics at the Consumer Electronics Show (CES) this week in Las Vegas is color e-readers, with several companies showcasing new products. While E Ink has been a leader in e-reader display technology, the company has yet to produce a color display capable of showing video, and the next generation of devices could threaten E Ink's dominance.

Colorful technology: Interferometric Modulator elements (above) are the key to Qualcomm’s Mirasol color screens. The elements allow for low-power displays capable of displaying video.
Credit: Qualcomm MEMS Technologies

E Ink's monochrome screens are made up of microcapsules full of positively charged white particles and negatively charged black particles. Applying a negative charge causes a pixel containing the particles to appear white, while a positive charge results in a black appearance. Color versions use the same basic technology, but with colored filters added. Unfortunately, these filters tend to reduce the brightness of the display, leading to a washed-out appearance.
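As a toy model of that mechanism (signs follow the description above: a negative charge at the viewing surface attracts the positively charged white particles, so the pixel appears white):

```python
# Toy model of an electrophoretic pixel, for illustration only.
def pixel_color(surface_charge):
    """Apparent color for a given charge at the viewing electrode."""
    return "white" if surface_charge < 0 else "black"

print(pixel_color(-1))  # white particles drawn to the surface
print(pixel_color(+1))  # black particles drawn to the surface
```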

Companies such as Pixel Qi, Qualcomm MEMS Technologies, Liquavista, and Kent Displays all have new ideas about the best way to make a good color screen for an e-reader, and they are eager to get in the game.

This morning at the CES, Pixel Qi demonstrated its new display technology, targeted for use in netbooks, e-readers, and tablets. In high-power mode, the 10.1-inch display acts like a traditional LCD screen: a backlight provides light that is filtered by red, green, and blue subpixels to create desired colors. However, the display also has a low-power mode. In this mode the backlight is turned off, and reflective, mirror-like elements--placed alongside the red, green, and blue subpixels--take over the job of displaying the image, now in black and white. (How these elements are operated and distributed across the screen is being kept secret by Pixel Qi.)

Switching from the backlit mode to the reflective one drops the display's power consumption from 2.5 watts to 0.5 watts. This is for a refresh rate of 60 Hz--fast enough to display video. Pixel Qi claims that using software to put the display into an e-reader mode--suitable for reading text, where the screen might only update ten times a second--could drop the power consumption to as low as 100 milliwatts. The displays are currently in mass production, and a number of device manufacturers are expected to announce products incorporating Pixel Qi's display shortly.
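Some rough arithmetic puts those power figures in perspective; the battery capacity below is an assumption for illustration, not a number from Pixel Qi, and it ignores everything but the display:

```python
# Display-only run-time estimates from the quoted power figures.
battery_wh = 25.0  # assumed ~25 Wh netbook battery (not from the article)

for mode, watts in [("backlit LCD", 2.5), ("reflective", 0.5), ("e-reader", 0.1)]:
    print(f"{mode}: {battery_wh / watts:.0f} hours of display run time")
```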

"This is the year where you're going to see some very interesting designs come to market," says Jim Cathey, vice president of business development for Qualcomm MEMS Technologies. "I don't think they'll even be called e-readers in the near future." With a myriad of features such as Web access, e-mail, and e-reader programs, these products will be known as smart devices, he says.

Qualcomm's Mirasol screens can handle all of those applications and even display video. Much like E Ink screens, Mirasol displays are reflective and require little to no power until the on-screen content needs to change. A little ambient light is also all that's needed to see the screen. These displays are consequently ideal for a task such as reading, when the screen doesn't have to change very often. But the Qualcomm device differs greatly when it comes to other applications, such as video or text messaging, that require frequent changes on screen. In those scenarios, Cathey says, Mirasol displays perform much better than E Ink's because they require less power per screen change. "As the content changes, the user experience changes and so do the requirements," he says.

Mirasol screens, which are expected to appear in e-readers later this year, are composed of Interferometric Modulator (IMOD) elements. Each element is made of two conductive plates. One is a thin film stack on top of a glass substrate, and the other has a reflective membrane. The height of the air gap between the plates determines the color of light that is reflected from the IMOD. When a voltage is applied, the plates are drawn together by electrostatic forces and the element goes black. When the voltage is removed, the plates separate and color is reflected off the IMOD. A single pixel is made up of several IMODs; adjusting the height of each affects the overall color of the pixel. The plates stay in place, using almost no energy, until the color needs to change again. A plate only has to move a few hundred nanometers to change color and can do it in tens of microseconds--fast enough to show video.
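A back-of-the-envelope sketch of how the air gap selects a color: at normal incidence, first-order constructive interference reflects a wavelength of roughly twice the gap. A real IMOD also involves phase shifts in the film stack, so treat this as a simplification for illustration:

```python
# Simplified thin-film interference model for an IMOD-style air gap.
def reflected_wavelength_nm(gap_nm, order=1):
    """First-order approximation: constructive interference at 2 * gap / order."""
    return 2.0 * gap_nm / order

def color_name(wavelength_nm):
    if 450 <= wavelength_nm < 495: return "blue"
    if 495 <= wavelength_nm < 570: return "green"
    if 620 <= wavelength_nm < 750: return "red"
    return "other"

# Moving the plate a few hundred nanometers sweeps the reflected color:
print(color_name(reflected_wavelength_nm(230)))  # 460 nm reflected -> blue
print(color_name(reflected_wavelength_nm(320)))  # 640 nm reflected -> red
```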

The LCD-based screens from Kent Displays feature technology that is very different. "Our material is transparent, so we can put three layers on top of each other," explains Asad Khan, the company's chief technology officer. "One is red, one is green, and one is blue." In contrast, IMOD elements have to be placed side by side. Khan says the layering approach ultimately leads to a brighter display. And unlike Qualcomm, Kent's technology is already on sale. The Fujitsu FLEPia color e-reader, released last spring, features the screens. Unfortunately, the refresh rates aren't yet fast enough for video.

Liquavista announced two color e-reader screens of its own this week. Both the LiquavistaColor and the LiquavistaVivid are readable in sunlight, but the latter will also include a backlight for more vibrant hues. The screens are slated for release in 2010 and 2011, respectively. The displays are based on a technique called electrowetting, in which a voltage is used to modify the surface tension of colored oil on a solid substrate. In the absence of a voltage, the oil forms a film over the substrate and is visible to the viewer. When a voltage is applied, the pixel becomes transparent. By controlling the voltage of each pixel independently, a picture can be displayed. Unlike E Ink's technology, electrowetting pixels can be switched in a few milliseconds, making them suitable for showing video.

With so many video-capable e-reader screens on the horizon, E Ink has decided to focus solely on one application: reading. But its upcoming devices will feature color screens. Sri Peruvemba, the vice president of marketing at E Ink, says the company will have color devices out by the end of next year. Unfortunately, the refresh rates are too slow for video. "We have animation that we can do today, but we can't do full video speed," Peruvemba says.

So while its competitors will likely slice up the market for smart devices with Internet and video capabilities, E Ink plans to go after the education market. The company will make "dedicated" e-readers for computer textbooks, Peruvemba says, adding that the color should add to the experience. But the devices will intentionally omit any distracting applications, such as a phone or Web browser.

"If I give one of these devices to my daughter and I know she's going to make phone calls on it and surf the Internet on it, I'm not going to be motivated to buy it for her," he says.

Google Reveals Its New Phone

The Web giant launches an online store to distribute Android-based cell phones.

Google launched its own cell phone, a device called the Nexus One, at a press conference in Mountain View, CA, on Tuesday. Designed and built by the Taiwanese handheld-device company HTC in partnership with Google, the phone is being sold through a new online store that will sell not only the Nexus One but also future devices based on Android, Google's mobile operating system. Consumers can buy the Nexus One on its own, or with a service plan on T-Mobile's network.


Voice mail: The Nexus One features the latest software for the Android operating system, including voice recognition for every text field and sophisticated 3-D graphics.
Credit: Google

Calling the device a "superphone," Mario Queiroz, a vice president of product management at Google, said the company wanted to create a phone to demonstrate "what's possible on mobile phones through the Android platform."

Stressing that the Nexus One is actually the first in a series, Andy Rubin, Google's vice president of mobile platforms, said that devices sold through Google's online store will always demonstrate "the best possible Google experience."

The Nexus One includes a one-gigahertz processor that's faster than that of most smart phones on the market today (Verizon's Droid, for example, has a 550-megahertz processor, and the iPhone's processor is estimated to be around 600 megahertz). Other hardware specifications include a 3.7-inch display, a five-megapixel camera, light and proximity sensors, and dual microphones that allow for noise cancellation.

"With that hardware, we think we've got half the story," said senior product manager Eric Tseng. "With the Nexus One, it's not just hardware alone." Tseng noted that the Nexus One's processor allows the phone to run multiple applications simultaneously without slowing down, and to support a new 3-D framework that comes with the 2.1 version of Android, which was also announced at the event.

Tseng demonstrated several applications that showcase the 3-D graphics of the Nexus One, including a full-featured version of Google Earth. The phone let him navigate through the popular mapping software in three dimensions, flying over areas and zooming in. "We really wanted to push the 3-D capabilities that you get with these high-end chips to their limits," he said.

Tseng also showed off some sophisticated voice capabilities, building on voice software that Google has offered previously. In Android 2.1, any text field can accept voice input, which will allow users to compose e-mails, text messages, and Twitter and Facebook updates without touching the device. These tasks are handled by Google's servers. Tseng added that the voice software becomes more accurate with each use.

Kevin Burden, head of ABI Research's mobile-devices group, says that exciting software that takes full advantage of the one-gigahertz processor will be very important to the success of the Nexus One. "You have to think the reason Google is [launching its own phone] is that it has certain services in its own lab that need this type of processor." For the Nexus One to take off, Burden says, "it has to be more than just a phone."

Though the Google Earth application looks nice, Burden doesn't believe it is substantially different from what's already available for the iPhone.

Google's online store now offers the phone for $529 without service, or for $179 with a T-Mobile contract. The company says it plans to add more devices and carriers as soon as possible. In particular, Verizon and Vodafone contracts will be available beginning in spring 2010, as will a version of the Nexus One that runs on Verizon's network.

Though T-Mobile is the only current official service plan for the Nexus One, Queiroz said a user could insert the SIM card from any network that uses the Global System for Mobile Communications (GSM), including AT&T. The catch, however, is that the phone doesn't support the frequencies that AT&T uses for its high-speed 3G network, so a user would only be able to use the Nexus One on AT&T's slower EDGE network.
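The 3G-versus-EDGE catch comes down to frequency-band overlap, which can be sketched as a set intersection. The band figures below are the commonly cited ones for these devices and networks, and should be treated as approximate:

```python
# Band-compatibility sketch: a GSM SIM works for voice and 2G data anywhere,
# but 3G requires the phone and carrier to share a 3G band.
NEXUS_ONE_3G_MHZ = {900, 1700, 2100}  # AWS 1700 matches T-Mobile USA's 3G
ATT_3G_MHZ = {850, 1900}              # AT&T's 3G bands (approximate)

def best_data_network(phone_bands, carrier_bands):
    """Return the fastest data network the phone can use on this carrier."""
    return "3G" if phone_bands & carrier_bands else "EDGE (2G)"

print(best_data_network(NEXUS_ONE_3G_MHZ, ATT_3G_MHZ))  # no overlap: EDGE only
```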

In addition to executives from Google and HTC, Sanjay Jha, co-CEO of Motorola, which makes the Droid, appeared at the press conference. Jha said that the Droid will upgrade to the software that's available for the Nexus One.

Monday, November 23, 2009

Internet Explorer 9 with GPU acceleration and HTML 5

The next version of Microsoft's web browser, Internet Explorer 9, will use the GPU to render web pages, reducing the load on the main processor.

If all goes as planned, IE 9 should achieve higher performance by handing images and text off to the GPU for rendering through Direct2D and DirectWrite, i.e., using DirectX APIs. This means that content-rich web sites will render faster and with far less use of the main processor.

IE 9 will also bring support for HTML 5, which is expected to replace HTML 4 in the coming years. A demo of GPU rendering in IE 9 can be seen at the following link, though Silverlight must be installed to view the video: IE 9 GPU Demo