How Google has made search faster in 10 months

Google search has expanded its network and now serves users from seven times as many locations as a year ago, a research team including an Indian-origin scientist has found.

Over the past 10 months, Google search has dramatically increased the number of sites around the world from which it serves client queries, repurposing existing infrastructure to change the physical way that Google processes web searches, according to researchers from University of Southern California (USC). 

From October 2012 to late July 2013, the number of locations serving Google's search infrastructure increased from a little less than 200 to a little more than 1,400, and the number of ISPs grew from just over 100 to more than 850, according to the study. 

Most of this expansion reflects Google repurposing client networks it already relied on for hosting content such as YouTube videos, reusing them to relay - and speed up - user requests and responses for search and ads, researchers said.

"Google already delivered YouTube videos from within these client networks," said Matt Calder, lead author of the study. 

"But they've abruptly expanded the way they use the networks, turning their content-hosting infrastructure into a search infrastructure as well," said Calder. 

Previously, if you submitted a search request to Google, the request went directly to a Google data centre. Now, it first goes to a regional network, which relays it to the Google data centre. While adding a step might seem to make the search take longer, the process actually speeds up searches.

Data connections typically need to "warm up" before reaching their top speed - the persistent connection between the client network and the Google data centre eliminates some of that warm-up lag.
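The warm-up the researchers describe comes largely from TCP slow start, where a fresh connection starts with a small sending window that roughly doubles each round trip. The sketch below is a back-of-the-envelope model, not a figure from the study; the initial window of 10 segments and the 100 KB response size are assumptions for illustration.

```python
def slow_start_round_trips(total_segments, initial_window=10):
    """Round trips a cold TCP connection needs to deliver total_segments,
    with the sending window doubling each round trip (slow start)."""
    sent, window, rtts = 0, initial_window, 0
    while sent < total_segments:
        sent += window
        window *= 2
        rtts += 1
    return rtts

# Example: a ~100 KB response split into ~70 segments of 1460 bytes.
cold = slow_start_round_trips(70)  # fresh connection ramps up over several RTTs
warm = 1                           # an already-warm connection with a large
                                   # window can send the burst in one RTT
print(cold, warm)
```

On this simplified model, the cold connection spends three round trips ramping up where the persistent, pre-warmed front-end-to-data-centre connection spends one, which is the lag the relay design avoids.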

In addition, content is split up into tiny packets to be sent over the internet - and some of the delay that users may experience is due to the occasional loss of some of those packets, researchers said. 

With the client network acting as a middleman, lost packets can be spotted and replaced much more quickly.
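The benefit of spotting losses at the middleman can be seen with simple arithmetic. In the rough model below, recovering a lost packet costs about one round trip to detect the loss and another to receive the retransmission; the 100 ms and 10 ms round-trip times are assumed values for a distant data centre and a nearby client-network front-end, not measurements from the study.

```python
def loss_recovery_ms(rtt_ms):
    """Approximate time to recover a lost packet: ~1 RTT to detect the loss
    (e.g. via duplicate ACKs) plus ~1 RTT to receive the retransmission."""
    return 2 * rtt_ms

direct = loss_recovery_ms(100)   # client <-> distant data centre: ~100 ms RTT
relayed = loss_recovery_ms(10)   # client <-> nearby front-end: ~10 ms RTT
print(direct, relayed)
```

Under these assumed numbers, recovery via the nearby front-end takes 20 ms instead of 200 ms, because the retransmitted packet only has to cross the short hop between the client and the regional network.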

Calder worked with Ramesh Govindan and Ethan Katz-Bassett of USC Viterbi, as well as John Heidemann, Xun Fan, and Zi Hu of USC Viterbi's Information Sciences Institute. 

The team developed a new method of tracking down and mapping servers, identifying when servers are in the same data centre and estimating where that data centre is. The method also identifies the relationships between servers and clients.


Source: Times of India and others