Bing Webmaster Tools

Bing Webmaster Tools is a free service that provides the basic tools to help you diagnose and track
your site. Registration is free.

Black hat
Black hat is a label used to describe the use of SEO techniques that are illegal,
unethical, or of questionable propriety.

Bot (also known as Robot, Spider, or Crawler)
A robot, or bot for short, is a software agent that indexes web pages. It is also called
a spider or crawler.

Canonical URLs
Canonical URLs are URLs that have been standardized into a consistent form. For
the search engines, this typically implies making sure all your pages use consistent
URL structures, for example, making sure all your URLs start with www.

Cloaking
Cloaking is a black hat SEO technique that involves presenting the search engine
spider with different content than you show a normal site visitor.

Crawl depth
Crawl depth refers to how deeply a search engine spider has indexed a
website. This is typically an issue for sites with a complex hierarchy of
pages. The deeper the spider indexes the site, the better.

Passing Information from Page to Page

1. Adding information to the URL: You can add certain information to the
end of the URL of the new page, and PHP puts the information into built-in
arrays that you can use in the new page. This method is most appropriate
when you need to pass only a small amount of information.

2. Storing information via cookies: You can store cookies — small
amounts of information containing variable=value pairs — on the
user’s computer. After the cookie is stored, you can get it from any
Web page. However, users can refuse to accept cookies. Therefore, this
method works only in environments where you know for sure that the
user has cookies turned on.

3. Passing information using HTML forms: You can pass information to a
specific program by using a form tag. When the user clicks the submit
button, the information in the form is sent to the next program. This
method is useful when you need to collect information from users.

4. Using PHP session functions: Beginning with PHP 4, PHP functions
are available that set up a user session and store session information
on the server; this information can be accessed from any Web page.
This method is useful when you expect users to view many pages in a
session.
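The PHP session approach can be sketched in a few lines; the variable name favorite_tip is
illustrative, not part of any real application:

```php
<?php
// Minimal sketch of the PHP session approach. session_start() creates or
// resumes the session; values stored in $_SESSION live on the server and
// are available to any later page that also calls session_start().
session_start();

// First page: store a value in the session.
$_SESSION['favorite_tip'] = 'SEO';

// Any later page (after its own session_start()) can read it back:
echo $_SESSION['favorite_tip'];
```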

Add information to the URL

A simple method to move information from one page to the next is to add the
information to the URL. Put the information in the following format:
variable=value
The variable is a variable name, but do not use a dollar sign ($) in
it. The value is the value to be stored in the variable. You can add the
variable=value pair anywhere that you use a URL. You signal the start of
the information with a question mark (?). The following statements are all
valid ways of passing information in the URL:
<form action="page.php?price=4545" method="POST">
<a href="page.php?price=654">page</a>
header("Location: page.php?tips=SEO");
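On the receiving page, PHP places the value into the built-in $_GET array. A minimal sketch,
with the request value simulated so the snippet stands alone (on a real request PHP fills
$_GET from the query string):

```php
<?php
// Receiving page for a link such as page.php?price=654.
// PHP parses the query string into the built-in $_GET array.
$_GET['price'] = '654';  // simulated here; set automatically on a real request

// Always check the key exists and cast to the expected type.
$price = isset($_GET['price']) ? (int) $_GET['price'] : 0;
echo "Price: $price";
```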

Drupal Optimizations

While most optimizations to Drupal are done within other layers of the software stack, there are a few
buttons and levers within Drupal itself that yield significant performance gains.

Page Caching
Sometimes it’s the easy things that are overlooked, which is why they’re worth mentioning again. Drupal
has a built-in way to reduce the load on the database by storing and sending compressed cached pages
requested by anonymous users. By enabling the cache, you are effectively reducing pages to a single
database query rather than the many queries that might have been executed otherwise. Drupal caching
is disabled by default and can be configured at Configuration -> Performance.

Bandwidth Optimization
There is another performance optimization on the Configuration -> Performance page to reduce the
number of requests made to the server. By enabling the “Aggregate and compress CSS files into one”
feature, Drupal takes the CSS files created by modules, compresses them, and rolls them into a single file
inside a css directory in your “File system path.” The “Aggregate JavaScript files into one file” feature
concatenates multiple JavaScript files into one and places that file inside a js directory in your “File
system path.” This reduces the number of HTTP requests per page and the overall size of the
downloaded page.
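On a Drupal 7 site, the same settings can also be forced from sites/default/settings.php via the
$conf override array; a sketch using the standard Drupal 7 variable names:

```php
<?php
// Sketch: forcing the performance settings from sites/default/settings.php
// (Drupal 7). $conf overrides values saved through the admin UI.
$conf['cache'] = 1;            // enable page caching for anonymous users
$conf['page_compression'] = 1; // send cached pages gzip-compressed
$conf['preprocess_css'] = 1;   // aggregate and compress CSS files
$conf['preprocess_js'] = 1;    // aggregate JavaScript files
```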

Dedicated Servers vs. Virtual Servers

Dedicated physical servers are going to outperform virtual servers when it comes to network I/O, disk
I/O, and memory I/O, even in situations where the virtual server supposedly has been allocated more
resources (CPU, disk, and memory) than a dedicated server of similar specs. An important factor to
consider is that in a virtualized server environment, the CPU, disk I/O, memory I/O, and network I/O
have added I/O routing layers between the server OS and the actual hardware. And, therefore, all I/O
operations are subject to task scheduling whims of the host hypervisor as well as the demands of
neighboring virtual machines on the same physical host.

As a real example, a virtual server hosting a database server may have twice as much CPU power as a
cheaper physical dedicated server; however, the virtual server may also have an added 1ms network
latency (a very real example from an actual Xen virtualized environment), even between neighboring
virtual machines. Now, 1ms network latency doesn’t seem like enough latency to care about, until you
consider that a logged-in Drupal page request may involve hundreds of serialized MySQL queries; thus
the total network latency overhead can amount to a full second of your page load time. An added latency
of just one second per page request may also seem affordable; however, also consider the rate of
incoming page requests and whether this one-second delay will cause PHP processes to pile up in heavy
traffic, thus driving up your server load. Adding more and bigger virtual servers to your stack does not
make this I/O latency factor disappear either. The same can be said for disk I/O: virtual disks will always
be slower than local physical disks, no matter how much CPU and memory the virtual server
has been allocated.

Apache Pool Size

When using Apache prefork, you want to size your Apache child process pool to avoid process pool
churning. In other words, when the Apache server starts, you want to immediately prefork a large pool of
Apache processes (as many as your web server memory can support) and have that entire pool of child
processes present and waiting for requests, even if they are idle most of the time, rather than constantly
incurring the performance overhead of killing and re-spawning Apache child processes in response to
the traffic level of the moment.
Here are example Apache prefork settings for a Drupal web server running mod_php.
StartServers 40
MinSpareServers 40
MaxSpareServers 40
MaxClients 80
MaxRequestsPerChild 20000
This tells Apache to start 40 child processes immediately and to keep 40 processes running
even if traffic is low, but to burst up to 80 child processes if traffic is really heavy. (You can raise the 40
and 80 limits according to your own server's dimensions.)
You may look at this and ask, “Well, isn’t that a waste of memory to have big fat idle Apache
processes hanging about?” But remember this: the goal is to have fast page delivery, and there is no prize
for having a lot of free memory.

Moving Directives from .htaccess to httpd.conf

Drupal ships with two .htaccess files: one is at the Drupal root, and the other is automatically generated
after you create your directory to store uploaded files and visit Configuration -> File system to tell Drupal
where the directory is. Any .htaccess files are searched for, read, and parsed on every request. In
contrast, httpd.conf is read only when Apache is started. Apache directives can live in either file. If you
have control of your own server, you should move the contents of the .htaccess files to the main Apache
configuration file (httpd.conf) and disable .htaccess lookups within your web server root by setting
AllowOverride to None:

<Directory />
AllowOverride None
...
</Directory>
This prevents Apache from traversing up the directory tree of every request looking for the
.htaccess file to execute. Apache will then have to do less work for each request, giving it more time to
serve more requests.

Defining a Block

Blocks are defined within modules by using hook_block_info(), and a module can implement multiple
blocks within this single hook. Once a block is defined, it will be shown on the block administration
page. Additionally, a site administrator can manually create custom blocks through the web interface. In
this section, we’ll mostly focus on programmatically creating blocks.

The following properties are defined within the columns of the block table:
bid: This is the unique ID of each block.

module: This column contains the name of the module that defined the block.
The user login block was created by the user module, and so on. Custom blocks
created by the administrator at Structure -> Blocks -> Add Blocks are
considered to have been created by the block module.

delta: Because modules can define multiple blocks within hook_block_info(),
the delta column stores a key for each block that’s unique only for each
implementation of hook_block_info(), and not for all blocks across the board. A
delta should be a string.

theme: Blocks can be defined for multiple themes. Drupal therefore needs to
store the name of the theme for which the block is enabled. Every theme for
which the block is enabled will have its own row in the database. Configuration
options are not shared across themes.

status: This tracks whether the block is enabled. A value of 1 means that it’s
enabled, while 0 means it’s disabled. When a block doesn’t have a region
associated with it, Drupal sets the status flag to 0.

weight: The weight of the block determines its position relative to other blocks
within a region.

region: This is the name of the region in which the block will appear, for
example, footer.
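A minimal sketch of hook_block_info() for a hypothetical module named mymodule; the array
keys ('recent_tips', 'site_news') become the blocks' delta values, and the 'info' labels appear
on the block administration page. The t() stub exists only so the sketch runs outside Drupal:

```php
<?php
// Stand-in for Drupal's t() so this sketch runs outside a Drupal bootstrap.
if (!function_exists('t')) {
  function t($string) { return $string; }
}

// Hypothetical module "mymodule" defining two blocks (Drupal 7 API).
function mymodule_block_info() {
  $blocks['recent_tips'] = array(
    'info' => t('Recent SEO tips'),  // label shown on the block admin page
  );
  $blocks['site_news'] = array(
    'info' => t('Site news'),
  );
  return $blocks;
}
```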

XML-RPC

A remote procedure call is when one program asks another program to execute a function. XML-RPC is a
standard for remote procedure calls where the call is encoded with XML and sent over HTTP. The
XML-RPC protocol was created by Dave Winer of UserLand Software in collaboration with Microsoft (see
www.xmlrpc.com/spec). It’s specifically targeted at distributed web-based systems talking to each other,
as when one Drupal site asks another Drupal site for some information.
There are two players in an XML-RPC exchange. The site from which the request originates is
known as the client. The site that receives the request is the server.

If your site will be acting only as a server, there’s nothing to worry about because incoming XML-RPC
requests use the standard web port (usually port 80). The file xmlrpc.php in your Drupal installation
contains the code that’s run for an incoming XML-RPC request. It’s known as the XML-RPC endpoint.


XML-RPC Clients
The client is the computer that will be sending the request. It sends a standard HTTP POST request to
the server. The body of this request is composed of XML and contains a single tag named <methodCall>.
Two tags, <methodName> and <params>, are nested inside the <methodCall> tag.
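For example, a client calling a hypothetical method demo.sayHello with one string parameter
would POST a body like the following (the method name and value are illustrative):

```xml
<?xml version="1.0"?>
<methodCall>
  <methodName>demo.sayHello</methodName>
  <params>
    <param>
      <value><string>Hello from the client</string></value>
    </param>
  </params>
</methodCall>
```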

Drupal Bootstrap Process

Drupal bootstraps itself on every request by going through a series of bootstrap phases. These phases are
defined in bootstrap.inc and proceed in the following order.

Configuration: Sets global variables used throughout the bootstrap process.
Database: Initializes the database system and registers autoload functions.
Variables: Loads system variables and all enabled bootstrap modules.
Session: Initializes session handling.
Page Header: Invokes hook_boot(), initializes the locking system, and sends the default HTTP
headers.
Language: Initializes all the defined language types.
Full: The final phase: Drupal is fully loaded by now. This phase validates and fixes the input
data.

Processing a Request
The callback function does whatever work is required to process and accumulate data needed to fulfill
the request. For example, if a request for content such as http://example.com/?q=node/3 is received, the
URL is mapped to the function node_page_view() in node.module. Further processing will retrieve the
data for that node from the database and put it into a data structure. Then, it’s time for theming.

Requiring Cookies

If the browser doesn’t accept cookies, a session cannot be established, because the PHP directive
session.use_only_cookies has been set to 1 and the alternative (passing the PHPSESSID in the query
string of the URL) has been disabled by setting session.use_trans_sid to 0. This is a best practice, as
recommended by Zend (see http://php.net/session.configuration):

URL-based session management has additional security risks compared to cookie-based
session management. Users may send a URL that contains an active session ID
to their friends by e-mail, or users may save a URL that contains a session ID to their
bookmarks and always access your site with the same session ID, for example.


When PHPSESSID appears in the query string of a site, it’s typically a sign that the hosting provider
has locked down PHP and doesn’t allow the ini_set() function to set PHP directives at runtime.
Alternatives are to move the settings into the .htaccess file (if the host is running PHP as an Apache
module) or into a local php.ini file (if the host is running PHP as a CGI executable).
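As a sketch, the same two directives from above would look like this in each location:

```
# In .htaccess (when PHP runs as an Apache module):
php_value session.use_only_cookies 1
php_value session.use_trans_sid 0

; In a local php.ini (when PHP runs as a CGI executable):
session.use_only_cookies = 1
session.use_trans_sid = 0
```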