Something about Google


Google began in January 1996 as a research project by Larry Page and Sergey Brin, Ph.D. students at Stanford University.

In search of a dissertation theme, Page had been considering—among other things—exploring the mathematical properties of the World Wide Web, understanding its link structure as a huge graph. His supervisor, Terry Winograd, encouraged him to pursue the idea, which Page later recalled as “the best advice I ever got.” Page then focused on the problem of finding out which web pages link to a given page, reasoning that the number and nature of such backlinks constituted valuable information about that page, much as citations do in academic publishing.


In his research project, nicknamed “BackRub”, Page was soon joined by Brin, who was supported by a National Science Foundation Graduate Fellowship. Brin was already a close friend, whom Page had first met in the summer of 1995—Page was part of a group of potential new students that Brin had volunteered to show around the campus. Both Brin and Page were working on the Stanford Digital Library Project (SDLP). The SDLP’s goal was “to develop the enabling technologies for a single, integrated and universal digital library” and it was funded through the National Science Foundation, among other federal agencies.

Page’s web crawler began exploring the web in March 1996, with Page’s own Stanford home page as its only starting point. To convert the backlink data it gathered for a given web page into a measure of importance, Brin and Page developed the PageRank algorithm. While analyzing BackRub’s output—which, for a given URL, consisted of a list of backlinks ranked by importance—the pair realized that a search engine based on PageRank would produce better results than existing techniques (search engines at the time essentially ranked results by how many times the search term appeared on a page).

Convinced that the pages with the most links to them from other highly relevant Web pages must be the most relevant pages associated with the search, Page and Brin tested their thesis as part of their studies, and laid the foundation for their search engine.
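The core intuition—that a page is important if important pages link to it—can be sketched as a short power iteration. This is an illustrative toy, not Google’s actual implementation; the damping factor, iteration count, and the tiny example graph are all assumptions for the sake of the sketch.

```python
# Minimal sketch of the PageRank idea: a page's score is the probability
# that a "random surfer" lands on it, following a random outgoing link
# with probability `damping` and jumping to a random page otherwise.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with a uniform score
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # a page shares its score equally among the pages it links to
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # dangling page with no outlinks: spread its score evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

# A tiny three-page web: C is linked to by both A and B,
# so it ends up with the highest score.
web = {"A": ["C"], "B": ["C"], "C": []}
scores = pagerank(web)
```

Note that the ranking depends on who links to a page, not on the page’s own content—exactly the backlink-based signal that distinguished BackRub from keyword-counting search engines of the time.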

BackRub was written in Java and Python and ran on several Sun Ultras and Intel Pentiums running Linux, with its primary database kept on a Sun Ultra II with 28 GB of disk. Scott Hassan and Alan Steremberg provided a great deal of very talented implementation help, and Sergey Brin was also deeply involved.

Is It Necessary to Protect Your Confidential Data?


Countries with nuclear aspirations would love to get their hands on Silicon Graphics International’s (SGI) supercomputer technology, says Franz Aman, the company’s chief marketing officer.

There are export controls to block a sale of such information, of course. But, Aman says, product designs, financial information, and communications with customers are all valuable to someone. A determined rogue state could always try to steal designs by hacking into SGI’s network.

Keeping trade secrets from falling into the wrong hands is therefore a big focus for SGI, which also makes servers. The company uses an array of technology to help do the job, but also resists the temptation of tightening the security screws so much that it undermines productivity. “I could build the most secure network in the world and no one would be able to do their work,” says Dominic Martinelli, SGI’s chief information officer. “So you have to strike a balance.”

Many corporate networks simply aren’t secure enough. Thieves routinely infiltrate them on behalf of unscrupulous businesses and foreign governments, or as part of activist campaigns to embarrass a company. Last year, for example, foreign hackers stole 24,000 documents related to a weapons system under development by a U.S. defense contractor, according to the Department of Defense. In another case, an individual traced to China stole confidential information from 29 chemical companies and 19 other firms, according to Symantec (SYMC), the computer security company. Meanwhile, hackers affiliated with the group Anonymous copied sensitive documents from HBGary, a computer security company, and then posted them online.

To get access to corporate networks, thieves use a variety of techniques. Phishing e-mails entice employees to click on links that surreptitiously load malware onto their computers, for instance, opening the door to corporate networks. Then there’s a relatively recent and increasingly common technique known as the Advanced Persistent Threat, a highly sophisticated attack typically aimed at companies and government agencies to obtain high-value information like trade and military secrets. Unlike other hacker attacks, which tend to be single, quick guerrilla strikes, these are long-term offensives that can involve a combination of tactics, including installing malware and probing for software vulnerabilities. To carry out a successful attack, perpetrators must have an uncommon ability to avoid detection. “If they’re good at getting intellectual property, you won’t even know they were ever there,” says Deb Radcliff, executive editor at the SANS Institute, which trains computer security specialists.

Corporate insiders also pose a major risk. Employees have easy access to confidential information and can steal it ostensibly in the course of doing their jobs. Studies show that workers are far more likely to swipe secrets just before leaving to join another company or starting their own firm. Of those insiders caught taking confidential data, 70 percent did so within a month of submitting their resignation, according to a survey of 700 insider theft cases by CERT, the cybersecurity program at Carnegie Mellon University.

Fremont (Calif.)-based SGI, which has about 1,500 employees in offices around the world, has more than 500 patents and a product portfolio that cost a huge amount of time and money to develop. Customers and partners entrust SGI with sensitive information in their dealings. Some employees have government security clearances so they can work on contracts that require secrecy. In keeping with its security emphasis, SGI has secure conference rooms in its offices that are encased in steel so they can’t be bugged from the outside, the company says. Inside the rooms, employees can make secure phone and video conference calls.


Do You Want to Know the History of Computer Virus?


History of Viruses

The term “computer virus” was formally defined by Fred Cohen in 1983, while he performed academic experiments on a Digital Equipment Corporation VAX system. Viruses are classified as one of two types: research or “in the wild.” A research virus is one written for research or study purposes that has received almost no distribution to the public. Viruses that have been seen with any regularity, by contrast, are termed “in the wild.” The first computer viruses were developed in the early 1980s; the first found in the wild were Apple II viruses, such as Elk Cloner, which was reported in 1981 [Den90]. Viruses have since been found on a range of platforms, including the Apple II, Macintosh, Amiga, Atari ST, and IBM PC.

Note that all viruses found in the wild target personal computers. Today, the overwhelming majority of virus strains are IBM PC viruses. As of August 1989, however, the numbers of PC, Atari ST, Amiga, and Macintosh viruses were comparable (21, 22, 18, and 12 respectively [Den90]). Academic studies have shown that viruses are possible on multi-tasking systems, but they have not yet appeared there. This point will be discussed later.
Viruses have “evolved” over the years through the efforts of their authors to make the code more difficult to detect, disassemble, and eradicate. This evolution has been especially apparent in IBM PC viruses, since more distinct viruses are known for the DOS operating system than for any other.

The first IBM PC virus, Brain, appeared in 1986 [Den90]. Brain was a memory-resident boot sector virus. In 1987 it was followed by Alameda (Yale), Cascade, Jerusalem, Lehigh, and Miami (South African Friday the 13th), which expanded the targets to include COM and EXE executables. Cascade was encrypted to deter disassembly and detection. Variable encryption appeared in 1989 with the 1260 virus, and stealth viruses—which employ various techniques to avoid detection, such as Zero Bug, Dark Avenger, and Frodo (also known as 4096 or 4K)—first appeared the same year. In 1990, self-modifying viruses such as Whale were introduced. The year 1991 brought the GP1 virus, which is “network-sensitive” and attempts to steal Novell NetWare passwords. Since their inception, viruses have grown steadily more complex.

Examples from the IBM PC family of viruses indicate that the most commonly detected viruses vary by continent, but Stoned, Brain, Cascade, and members of the Jerusalem family have spread widely and continue to appear. This suggests that highly survivable viruses tend to be benign, to replicate many times before activation, or to be somewhat innovative, using some technique never seen before in a virus.

Personal computer viruses exploit the lack of effective access controls in these systems, modifying files and even the operating system itself—actions that are “legal” within the context of the operating system. While multi-tasking, multi-user operating systems impose more stringent controls, configuration errors and security holes (security bugs) make viruses on those systems more than a theoretical possibility.