What is ITIL (IT Infrastructure Library)?

While the ability to distribute technology afforded organizations more flexibility, the side effect was inconsistent application of processes for technology delivery and support. The UK's Office of Government Commerce recognized that utilizing consistent practices for all aspects of a service lifecycle could assist in driving organizational effectiveness and efficiency as well as predictable service levels, and thus ITIL was born. ITIL guidance has since been a successful mechanism to drive consistency, efficiency and excellence into the business of managing IT services.

Since ITIL is an approach to IT “service” management, the concept of a service must be discussed. A service is something that provides value to customers. Services that customers can directly utilize or consume are known as “business” services. An example of a business service that has common applicability across industries would be Payroll. Payroll is an IT service that is used to consolidate information, calculate compensation and
generate paychecks on a regular periodic basis. Payroll may rely on other “business” services such as “Time Tracking” or “Benefits Administration” for information necessary to calculate the correct compensation for an employee during a given time period. In order for Payroll to run, it is supported by a number of technology or “infrastructure” services. An infrastructure service does its work in the background, such that the business
does not directly interact with it, but technology services are necessary as part of the overall value chain of the business service. “Server Administration”, “Database Administration”, “Storage Administration” are all examples of technology services required for the successful delivery of the Payroll business service.

See Figure 1.
IT has traditionally been focused on the “infrastructure” services and managing the technology silos. IT Service  Management guidance in ITIL suggests a more holistic approach to managing services from end-to-end. Managing the entire business service along with its underlying components cohesively assures that
we are considering every aspect of a service (and not just the individual technology silos) – to assure that we are delivering the required functionality (or utility – accurate paychecks for all employees) and service levels (or warranty – delivered within a certain timeframe, properly secured, available when necessary) to the business customer.

Figure 1: End-to-end service

ITIL is typically used in conjunction with one or more other good practices to manage information technology, such as:
• COBIT (a framework for IT Governance and Controls)
• Six Sigma (a quality methodology)
• TOGAF (a framework for IT architecture)
• ISO 27000 (a standard for IT security)

The Service Lifecycle
ITIL is organized around a Service Lifecycle, which includes: Service Strategy, Service Design, Service Transition, Service Operation and Continual Service Improvement. The lifecycle starts with Service Strategy – understanding who the IT customers are, the service offerings that are required to meet the customers’ needs, the IT capabilities and resources that are required to develop these offerings and the requirements
for executing successfully. Driven through strategy and throughout the course of delivery and support of the service, IT must always try to assure that cost of delivery is consistent with the value delivered to the customer.
Service Design assures that new and changed services are designed effectively to meet customer expectations. The technology and architecture required to meet customer needs cost effectively are an integral part of Service Design. Additionally, the processes required to manage services are also part of the design phase.
Service management systems and tools that are necessary to adequately monitor and support new or modified services must be considered as well as mechanisms for measuring service levels, technology and process efficiency and effectiveness. Through the Service Transition phase of the lifecycle the design is built, tested and moved into production to assure that the business customer can achieve the desired value. This phase
addresses managing changes, controlling the assets and configuration items (underlying components – hardware, software, etc) associated with new and changed systems,

service validation and testing and transition planning to assure that users, support personnel and the production environment have been prepared for the release to production.

Once transitioned, Service Operation then delivers the service on an ongoing basis, overseeing the daily overall health of the service. This includes managing disruptions to service through rapid restoration of incidents, determining the root cause of problems and detecting trends associated with recurring issues, handling
daily routine end user requests and managing service access. Enveloping the Service Lifecycle is Continual Service Improvement (CSI). CSI offers a mechanism for IT to measure and improve the service levels, the technology and the efficiency and effectiveness of processes used in the overall management of services.

Why would an organization be interested in ITIL?

Although today’s technologies allow us to provide robust capabilities and afford significant flexibility, they are very complex. The global reach available to companies via the internet provides tremendous business opportunity while presenting additional challenges regarding the confidentiality, integrity and availability of our services and our data. Additionally, IT organizations need to continue to be able to meet or exceed
service expectations while working as efficiently as possible. Consistent repeatable processes are the key to efficiency, effectiveness and the ability to improve services. These consistent, repeatable processes are outlined in the ITIL framework.

What are the benefits of ITIL?

The main benefits of ITIL include:

• Alignment with business needs. ITIL becomes an asset to the business when IT can proactively recommend solutions in response to one or more business needs. The IT Strategy Group recommended in Service Strategy and the implementation of Service Portfolio Management give IT the opportunity to understand the business’ current and future needs and develop service offerings that can address them.

• Negotiated achievable service levels. Business and IT become true partners when they can agree upon realistic service levels that deliver the necessary value at an acceptable cost.

• Predictable, consistent processes. Customer expectations can be set and are easier to meet through the use of predictable processes that are consistently used. As well, good practice processes are foundational and can assist in laying the groundwork to meet regulatory compliance requirements.

• Efficiency in service delivery. Well-defined processes with clearly documented accountability for each activity, as recommended through the use of a RACI matrix, can significantly increase the efficiency of processes. In conjunction with the evaluation of efficiency metrics that indicate the time required to perform each activity, service delivery tasks can be optimized.

• Measurable, improvable services and processes. The adage that you can’t manage what you cannot measure rings true here. Consistent, repeatable processes can be measured and therefore can be better tuned for accurate delivery and overall effectiveness. For example, presume that a critical success factor for incident management is to reduce the time to restore service. When predictable, consistent processes are used, key performance indicators such as Mean Time To Restore Service can be captured to determine whether this KPI is trending in a positive or negative direction so that the appropriate adjustments can be made. Additionally, under ITIL guidelines, services are designed to be measurable. With the proper metrics and monitoring in place, IT organizations can monitor SLAs and make improvements as necessary.

• A common language – terms are defined.

Which companies use ITIL?
Literally thousands of companies worldwide, of all industries and sizes, have adopted ITIL. These include:
• Large technology companies such as Microsoft, HP, Fujitsu and IBM
• Retailers such as Target, Walmart and Staples
• Financial services organizations such as Citi, Bank of America and Barclays Bank
• Entertainment entities such as Sony and Disney
• Manufacturers such as Boeing, Toyota and Bombardier
• Life Sciences companies

Reference : http://www.itil-officialsite.com/WhatisITIL.aspx

Inversion of Control Containers and the Dependency Injection pattern

I really like this article, as until now I never understood all of these terms.
Many thanks to Martin Fowler for his good article.

————————————————————————————————————————————————————–

In the Java community there’s been a rush of lightweight containers that help to assemble components from different projects into a cohesive application. Underlying these containers is a common pattern to how they perform the wiring, a concept they refer to under the very generic name of “Inversion of Control”. In this article I dig into how this pattern works, under the more specific name of “Dependency Injection”, and contrast it with the Service Locator alternative. The choice between them is less important than the principle of separating configuration from use.

One of the entertaining things about the enterprise Java world is the huge amount of activity in building alternatives to the mainstream J2EE technologies, much of it happening in open source. A lot of this is a reaction to the heavyweight complexity in the mainstream J2EE world, but much of it is also exploring alternatives and coming up with creative ideas. A common issue to deal with is how to wire together different elements: how do you fit together this web controller architecture with that database interface backing when they were built by different teams with little knowledge of each other? A number of frameworks have taken a stab at this problem, and several are branching out to provide a general capability to assemble components from different layers. These are often referred to as lightweight containers; examples include PicoContainer and Spring.

Underlying these containers are a number of interesting design principles, things that go beyond both these specific containers and indeed the Java platform. Here I want to start exploring some of these principles. The examples I use are in Java, but like most of my writing the principles are equally applicable to other OO environments, particularly .NET.


Components and Services

The topic of wiring elements together drags me almost immediately into the knotty terminology problems that surround the terms service and component. You find long and contradictory articles on the definition of these things with ease. For my purposes here are my current uses of these overloaded terms.

I use component to mean a glob of software that’s intended to be used, without change, by an application that is out of the control of the writers of the component. By ‘without change’ I mean that the using application doesn’t change the source code of the components, although they may alter the component’s behavior by extending it in ways allowed by the component writers.

A service is similar to a component in that it’s used by foreign applications. The main difference is that I expect a component to be used locally (think jar file, assembly, dll, or a source import). A service will be used remotely through some remote interface, either synchronous or asynchronous (eg web service, messaging system, RPC, or socket.)

I mostly use service in this article, but much of the same logic can be applied to local components too. Indeed often you need some kind of local component framework to easily access a remote service. But writing “component or service” is tiring to read and write, and services are much more fashionable at the moment.

A Naive Example

To help make all of this more concrete I’ll use a running example to talk about all of this. Like all of my examples it’s one of those super-simple examples; small enough to be unreal, but hopefully enough for you to visualize what’s going on without falling into the bog of a real example.

In this example I’m writing a component that provides a list of movies directed by a particular director. This stunningly useful function is implemented by a single method.

class MovieLister...
    public Movie[] moviesDirectedBy(String arg) {
        List allMovies = finder.findAll();
        for (Iterator it = allMovies.iterator(); it.hasNext();) {
            Movie movie = (Movie) it.next();
            if (!movie.getDirector().equals(arg)) it.remove();
        }
        return (Movie[]) allMovies.toArray(new Movie[allMovies.size()]);
    }

The implementation of this function is naive in the extreme, it asks a finder object (which we’ll get to in a moment) to return every film it knows about. Then it just hunts through this list to return those directed by a particular director. This particular piece of naivety I’m not going to fix, since it’s just the scaffolding for the real point of this article.

The real point of this article is this finder object, or particularly how we connect the lister object with a particular finder object. The reason why this is interesting is that I want my wonderful moviesDirectedBy method to be completely independent of how all the movies are being stored. So all the method does is refer to a finder, and all that finder does is know how to respond to the findAll method. I can bring this out by defining an interface for the finder.

public interface MovieFinder {
    List findAll();
}

Now all of this is very well decoupled, but at some point I have to come up with a concrete class to actually come up with the movies. In this case I put the code for this in the constructor of my lister class.

class MovieLister...
  private MovieFinder finder;
  public MovieLister() {
    finder = new ColonDelimitedMovieFinder("movies1.txt");
  }

The name of the implementation class comes from the fact that I’m getting my list from a colon delimited text file. I’ll spare you the details, after all the point is just that there’s some implementation.
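
For the curious, here is a minimal sketch of what such an implementation might look like. This is my own illustration rather than the details spared above; it assumes one "title:director" pair per line and a Movie constructor taking both fields (the article never shows the Movie class).

import java.io.*;
import java.util.*;

class ColonDelimitedMovieFinder implements MovieFinder {
    private String filename;

    public ColonDelimitedMovieFinder(String filename) {
        this.filename = filename;
    }
    // Reads each "title:director" line of the file into a Movie object.
    public List findAll() {
        List movies = new ArrayList();
        try {
            BufferedReader in = new BufferedReader(new FileReader(filename));
            String line;
            while ((line = in.readLine()) != null) {
                String[] fields = line.split(":");
                movies.add(new Movie(fields[0], fields[1]));
            }
            in.close();
        } catch (IOException e) {
            throw new RuntimeException("cannot read " + filename, e);
        }
        return movies;
    }
}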

Now if I’m using this class for just myself, this is all fine and dandy. But what happens when my friends are overwhelmed by a desire for this wonderful functionality and would like a copy of my program? If they also store their movie listings in a colon delimited text file called “movies1.txt” then everything is wonderful. If they have a different name for their movies file, then it’s easy to put the name of the file in a properties file. But what if they have a completely different form of storing their movie listing: a SQL database, an XML file, a web service, or just another format of text file? In this case we need a different class to grab that data. Now because I’ve defined a MovieFinder interface, this won’t alter my moviesDirectedBy method. But I still need to have some way to get an instance of the right finder implementation into place.

Figure 1: The dependencies using a simple creation in the lister class

Figure 1 shows the dependencies for this situation. The MovieLister class is dependent on both the MovieFinder interface and upon the implementation. We would prefer it if it were only dependent on the interface, but then how do we make an instance to work with?

In my book P of EAA, we described this situation as a Plugin. The implementation class for the finder isn’t linked into the program at compile time, since I don’t know what my friends are going to use. Instead we want my lister to work with any implementation, and for that implementation to be plugged in at some later point, out of my hands. The problem is how can I make that link so that my lister class is ignorant of the implementation class, but can still talk to an instance to do its work.

Expanding this into a real system, we might have dozens of such services and components. In each case we can abstract our use of these components by talking to them through an interface (and using an adapter if the component isn’t designed with an interface in mind). But if we wish to deploy this system in different ways, we need to use plugins to handle the interaction with these services so we can use different implementations in different deployments.

So the core problem is how do we assemble these plugins into an application? This is one of the main problems that this new breed of lightweight containers face, and universally they all do it using Inversion of Control.


Inversion of Control

When these containers talk about how they are so useful because they implement “Inversion of Control” I end up very puzzled. Inversion of control is a common characteristic of frameworks, so saying that these lightweight containers are special because they use inversion of control is like saying my car is special because it has wheels.

The question, is what aspect of control are they inverting? When I first ran into inversion of control, it was in the main control of a user interface. Early user interfaces were controlled by the application program. You would have a sequence of commands like “Enter name”, “enter address”; your program would drive the prompts and pick up a response to each one. With graphical (or even screen based) UIs the UI framework would contain this main loop and your program instead provided event handlers for the various fields on the screen. The main control of the program was inverted, moved away from you to the framework.

For this new breed of containers the inversion is about how they lookup a plugin implementation. In my naive example the lister looked up the finder implementation by directly instantiating it. This stops the finder from being a plugin. The approach that these containers use is to ensure that any user of a plugin follows some convention that allows a separate assembler module to inject the implementation into the lister.

As a result I think we need a more specific name for this pattern. Inversion of Control is too generic a term, and thus people find it confusing. As a result with a lot of discussion with various IoC advocates we settled on the name Dependency Injection.

I’m going to start by talking about the various forms of dependency injection, but I’ll point out now that that’s not the only way of removing the dependency from the application class to the plugin implementation. The other pattern you can use to do this is Service Locator, and I’ll discuss that after I’m done with explaining Dependency Injection.


Forms of Dependency Injection

The basic idea of the Dependency Injection is to have a separate object, an assembler, that populates a field in the lister class with an appropriate implementation for the finder interface, resulting in a dependency diagram along the lines of Figure 2

Figure 2: The dependencies for a Dependency Injector

There are three main styles of dependency injection. The names I’m using for them are Constructor Injection, Setter Injection, and Interface Injection. If you read about this stuff in the current discussions about Inversion of Control you’ll hear these referred to as type 1 IoC (interface injection), type 2 IoC (setter injection) and type 3 IoC (constructor injection). I find numeric names rather hard to remember, which is why I’ve used the names I have here.

Constructor Injection with PicoContainer

I’ll start with showing how this injection is done using a lightweight container called PicoContainer. I’m starting here primarily because several of my colleagues at ThoughtWorks are very active in the development of PicoContainer (yes, it’s a sort of corporate nepotism.)

PicoContainer uses a constructor to decide how to inject a finder implementation into the lister class. For this to work, the movie lister class needs to declare a constructor that includes everything it needs injected.

class MovieLister...
    public MovieLister(MovieFinder finder) {
        this.finder = finder;       
    }

The finder itself will also be managed by the pico container, and as such will have the filename of the text file injected into it by the container.

class ColonMovieFinder...
    public ColonMovieFinder(String filename) {
        this.filename = filename;
    }

The pico container then needs to be told which implementation class to associate with each interface, and which string to inject into the finder.

    private MutablePicoContainer configureContainer() {
        MutablePicoContainer pico = new DefaultPicoContainer();
        Parameter[] finderParams =  {new ConstantParameter("movies1.txt")};
        pico.registerComponentImplementation(MovieFinder.class, ColonMovieFinder.class, finderParams);
        pico.registerComponentImplementation(MovieLister.class);
        return pico;
    }

This configuration code is typically set up in a different class. For our example, each friend who uses my lister might write the appropriate configuration code in some setup class of their own. Of course it’s common to hold this kind of configuration information in separate config files. You can write a class to read a config file and set up the container appropriately. Although PicoContainer doesn’t contain this functionality itself, there is a closely related project called NanoContainer that provides the appropriate wrappers to allow you to have XML configuration files. Such a nano container will parse the XML and then configure an underlying pico container. The philosophy of the project is to separate the config file format from the underlying mechanism.

To use the container you write code something like this.

    public void testWithPico() {
        MutablePicoContainer pico = configureContainer();
        MovieLister lister = (MovieLister) pico.getComponentInstance(MovieLister.class);
        Movie[] movies = lister.moviesDirectedBy("Sergio Leone");
        assertEquals("Once Upon a Time in the West", movies[0].getTitle());
    }

Although in this example I’ve used constructor injection, PicoContainer also supports setter injection, although its developers do prefer constructor injection.

Setter Injection with Spring

The Spring framework is a wide ranging framework for enterprise Java development. It includes abstraction layers for transactions, persistence frameworks, web application development and JDBC. Like PicoContainer it supports both constructor and setter injection, but its developers tend to prefer setter injection – which makes it an appropriate choice for this example.

To get my movie lister to accept the injection I define a setting method for that service.

class MovieLister...
    private MovieFinder finder;
    public void setFinder(MovieFinder finder) {
        this.finder = finder;
    }

Similarly I define a setter for the filename.

class ColonMovieFinder...
    public void setFilename(String filename) {
        this.filename = filename;
    }

The third step is to set up the configuration for the files. Spring supports configuration through XML files and also through code, but XML is the expected way to do it.

    <beans>
        <!-- Each bean definition needs a class attribute naming the
             implementation class; the "spring" package prefix is illustrative. -->
        <bean id="MovieLister" class="spring.MovieLister">
            <property name="finder">
                <ref local="MovieFinder"/>
            </property>
        </bean>
        <bean id="MovieFinder" class="spring.ColonMovieFinder">
            <property name="filename">
                <value>movies1.txt</value>
            </property>
        </bean>
    </beans>

The test then looks like this.

    public void testWithSpring() throws Exception {
        ApplicationContext ctx = new FileSystemXmlApplicationContext("spring.xml");
        MovieLister lister = (MovieLister) ctx.getBean("MovieLister");
        Movie[] movies = lister.moviesDirectedBy("Sergio Leone");
        assertEquals("Once Upon a Time in the West", movies[0].getTitle());
    }

Interface Injection

The third injection technique is to define and use interfaces for the injection. Avalon is an example of a framework that uses this technique in places. I’ll talk a bit more about that later, but in this case I’m going to use it with some simple sample code.

With this technique I begin by defining an interface that I’ll use to perform the injection through. Here’s the interface for injecting a movie finder into an object.

public interface InjectFinder {
    void injectFinder(MovieFinder finder);
}

This interface would be defined by whoever provides the MovieFinder interface. It needs to be implemented by any class that wants to use a finder, such as the lister.

class MovieLister implements InjectFinder...
    public void injectFinder(MovieFinder finder) {
        this.finder = finder;
    }

I use a similar approach to inject the filename into the finder implementation.

public interface InjectFinderFilename {
    void injectFilename(String filename);
}

class ColonMovieFinder implements MovieFinder, InjectFinderFilename...
    public void injectFilename(String filename) {
        this.filename = filename;
    }

Then, as usual, I need some configuration code to wire up the implementations. For simplicity’s sake I’ll do it in code.

class Tester...
  private Container container;

  private void configureContainer() {
    container = new Container();
    registerComponents();
    registerInjectors();
    container.start();
  }

This configuration has two stages. Registering components through lookup keys is pretty similar to the other examples.

class Tester...
  private void registerComponents() {
    container.registerComponent("MovieLister", MovieLister.class);
    container.registerComponent("MovieFinder", ColonMovieFinder.class);
  }

A new step is to register the injectors that will inject the dependent components. Each injection interface needs some code to inject the dependent object. Here I do this by registering injector objects with the container. Each injector object implements the injector interface.

class Tester...
  private void registerInjectors() {
    container.registerInjector(InjectFinder.class, container.lookup("MovieFinder"));
    container.registerInjector(InjectFinderFilename.class, new FinderFilenameInjector());
  }

public interface Injector {
  public void inject(Object target);
}

When the dependent is a class written for this container, it makes sense for the component to implement the injector interface itself, as I do here with the movie finder. For generic classes, such as the string, I use an inner class within the configuration code.

class ColonMovieFinder implements Injector...
  public void inject(Object target) {
    ((InjectFinder) target).injectFinder(this);
  }

class Tester...
  public static class FinderFilenameInjector implements Injector {
    public void inject(Object target) {
      ((InjectFinderFilename)target).injectFilename("movies1.txt");
    }
  }

The tests then use the container.

class IfaceTester...
    public void testIface() {
      configureContainer();
      MovieLister lister = (MovieLister)container.lookup("MovieLister");
      Movie[] movies = lister.moviesDirectedBy("Sergio Leone");
      assertEquals("Once Upon a Time in the West", movies[0].getTitle());
    }

The container uses the declared injection interfaces to figure out the dependencies and the injectors to inject the correct dependents. (The specific container implementation I did here isn’t important to the technique, and I won’t show it because you’d only laugh.)
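
To make the mechanics concrete, here is a deliberately naive sketch of what such a container might look like. It is my own illustration, not the withheld implementation; it assumes only the registration API used in the examples above (registerComponent, registerInjector, start and lookup) and omits error handling.

import java.util.*;

class Container {
    private Map components = new HashMap(); // lookup key -> component instance
    private Map injectors = new HashMap();  // injection interface -> injector

    public void registerComponent(String key, Class impl) {
        try {
            components.put(key, impl.newInstance());
        } catch (Exception e) {
            throw new RuntimeException("cannot instantiate " + impl, e);
        }
    }
    public void registerInjector(Class iface, Object injector) {
        injectors.put(iface, injector);
    }
    public Object lookup(String key) {
        return components.get(key);
    }
    // Walk every component; when a component implements one of the registered
    // injection interfaces, ask the matching injector to inject into it.
    public void start() {
        for (Iterator comps = components.values().iterator(); comps.hasNext();) {
            Object component = comps.next();
            for (Iterator ifaces = injectors.keySet().iterator(); ifaces.hasNext();) {
                Class iface = (Class) ifaces.next();
                if (iface.isInstance(component)) {
                    ((Injector) injectors.get(iface)).inject(component);
                }
            }
        }
    }
}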


Using a Service Locator

The key benefit of a Dependency Injector is that it removes the dependency that the MovieLister class has on the concrete MovieFinder implementation. This allows me to give listers to friends and for them to plug in a suitable implementation for their own environment. Injection isn’t the only way to break this dependency, another is to use a service locator.

The basic idea behind a service locator is to have an object that knows how to get hold of all of the services that an application might need. So a service locator for this application would have a method that returns a movie finder when one is needed. Of course this just shifts the burden a tad, we still have to get the locator into the lister, resulting in the dependencies of Figure 3

Figure 3: The dependencies for a Service Locator

In this case I’ll use the ServiceLocator as a singleton Registry. The lister can then use that to get the finder when it’s instantiated.

class MovieLister...
    MovieFinder finder = ServiceLocator.movieFinder();

class ServiceLocator...
    public static MovieFinder movieFinder() {
        return soleInstance.movieFinder;
    }
    private static ServiceLocator soleInstance;
    private MovieFinder movieFinder;

As with the injection approach, we have to configure the service locator. Here I’m doing it in code, but it’s not hard to use a mechanism that would read the appropriate data from a configuration file.

class Tester...
    private void configure() {
        ServiceLocator.load(new ServiceLocator(new ColonMovieFinder("movies1.txt")));
    }

class ServiceLocator...
    public static void load(ServiceLocator arg) {
        soleInstance = arg;
    }

    public ServiceLocator(MovieFinder movieFinder) {
        this.movieFinder = movieFinder;
    }

Here’s the test code.

class Tester...
    public void testSimple() {
        configure();
        MovieLister lister = new MovieLister();
        Movie[] movies = lister.moviesDirectedBy("Sergio Leone");
        assertEquals("Once Upon a Time in the West", movies[0].getTitle());
    }

I’ve often heard the complaint that these kinds of service locators are a bad thing because they aren’t testable because you can’t substitute implementations for them. Certainly you can design them badly to get into this kind of trouble, but you don’t have to. In this case the service locator instance is just a simple data holder. I can easily create the locator with test implementations of my services.
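
As a quick sketch of what I mean (my own example, assuming a Movie constructor that takes a title and a director, which the article never shows):

class Tester...
    public void testWithStubbedFinder() {
        // An anonymous stub finder standing in for the real implementation;
        // uses java.util.List and ArrayList.
        MovieFinder stub = new MovieFinder() {
            public List findAll() {
                List movies = new ArrayList();
                movies.add(new Movie("Once Upon a Time in the West", "Sergio Leone"));
                return movies;
            }
        };
        ServiceLocator.load(new ServiceLocator(stub));
        MovieLister lister = new MovieLister();
        assertEquals("Once Upon a Time in the West",
                     lister.moviesDirectedBy("Sergio Leone")[0].getTitle());
    }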

For a more sophisticated locator I can subclass service locator and pass that subclass into the registry’s class variable. I can change the static methods to call a method on the instance rather than accessing instance variables directly. I can provide thread-specific locators by using thread-specific storage. All of this can be done without changing clients of service locator.

A way to think of this is that service locator is a registry not a singleton. A singleton provides a simple way of implementing a registry, but that implementation decision is easily changed.

Using a Segregated Interface for the Locator

One of the issues with the simple approach above is that the MovieLister is dependent on the full service locator class, even though it only uses one service. We can reduce this by using a segregated interface. That way, instead of using the full service locator interface, the lister can declare just the bit of interface it needs.

In this situation the provider of the lister would also provide a locator interface which it needs to get hold of the finder.

public interface MovieFinderLocator {
    public MovieFinder movieFinder();
}

The locator then needs to implement this interface to provide access to a finder.

class MovieLister...
    MovieFinderLocator locator = ServiceLocator.locator();
    MovieFinder finder = locator.movieFinder();

class ServiceLocator implements MovieFinderLocator...
    public static ServiceLocator locator() {
        return soleInstance;
    }
    public MovieFinder movieFinder() {
        return movieFinder;
    }
    private static ServiceLocator soleInstance;
    private MovieFinder movieFinder;

You’ll notice that since we want to use an interface, we can’t just access the services through static methods any more. We have to use the class to get a locator instance and then use that to get what we need.

A Dynamic Service Locator

The above example was static, in that the service locator class has methods for each of the services that you need. This isn’t the only way of doing it, you can also make a dynamic service locator that allows you to stash any service you need into it and make your choices at runtime.

In this case, the service locator uses a map instead of fields for each of the services, and provides generic methods to get and load services.

class ServiceLocator...
    private static ServiceLocator soleInstance;
    public static void load(ServiceLocator arg) {
        soleInstance = arg;
    }
    private Map services = new HashMap();
    public static Object getService(String key){
        return soleInstance.services.get(key);
    }
    public void loadService (String key, Object service) {
        services.put(key, service);
    }

Configuring involves loading a service with an appropriate key.

class Tester...
    private void configure() {
        ServiceLocator locator = new ServiceLocator();
        locator.loadService("MovieFinder", new ColonMovieFinder("movies1.txt"));
        ServiceLocator.load(locator);
    }

I use the service by using the same key string.

class MovieLister...
    MovieFinder finder = (MovieFinder) ServiceLocator.getService("MovieFinder");

On the whole I dislike this approach. Although it’s certainly flexible, it’s not very explicit. The only way I can find out how to reach a service is through textual keys. I prefer explicit methods because it’s easier to find where they are by looking at the interface definitions.

Using both a locator and injection with Avalon

Dependency injection and a service locator aren’t necessarily mutually exclusive concepts. A good example of using both together is the Avalon framework. Avalon uses a service locator, but uses injection to tell components where to find the locator.

Berin Loritsch sent me this simple version of my running example using Avalon.

public class MyMovieLister implements MovieLister, Serviceable {
    private MovieFinder finder;

    public void service(ServiceManager manager) throws ServiceException {
        finder = (MovieFinder) manager.lookup("finder");
    }
}

The service method is an example of interface injection, allowing the container to inject a service manager into MyMovieLister. The service manager is an example of a service locator. In this example the lister doesn’t store the manager in a field, instead it immediately uses it to lookup the finder, which it does store.


Deciding which option to use

So far I’ve concentrated on explaining how I see these patterns and their variations. Now I can start talking about their pros and cons to help figure out which ones to use and when.

Service Locator vs Dependency Injection

The fundamental choice is between Service Locator and Dependency Injection. The first point is that both implementations provide the fundamental decoupling that’s missing in the naive example – in both cases application code is independent of the concrete implementation of the service interface. The important difference between the two patterns is about how that implementation is provided to the application class. With service locator the application class asks for it explicitly by a message to the locator. With injection there is no explicit request, the service appears in the application class – hence the inversion of control.

Inversion of control is a common feature of frameworks, but it’s something that comes at a price. It tends to be hard to understand and leads to problems when you are trying to debug. So on the whole I prefer to avoid it unless I need it. This isn’t to say it’s a bad thing, just that I think it needs to justify itself over the more straightforward alternative.

The key difference is that with a Service Locator every user of a service has a dependency to the locator. The locator can hide dependencies to other implementations, but you do need to see the locator. So the decision between locator and injector depends on whether that dependency is a problem.

Using dependency injection can help make it easier to see what the component dependencies are. With dependency injector you can just look at the injection mechanism, such as the constructor, and see the dependencies. With the service locator you have to search the source code for calls to the locator. Modern IDEs with a find references feature make this easier, but it’s still not as easy as looking at the constructor or setting methods.

A lot of this depends on the nature of the user of the service. If you are building an application with various classes that use a service, then a dependency from the application classes to the locator isn’t a big deal. In my example of giving a Movie Lister to my friends, then using a service locator works quite well. All they need to do is to configure the locator to hook in the right service implementations, either through some configuration code or through a configuration file. In this kind of scenario I don’t see the injector’s inversion as providing anything compelling.

The difference comes if the lister is a component that I’m providing to an application that other people are writing. In this case I don’t know much about the APIs of the service locators that my customers are going to use. Each customer might have their own incompatible service locators. I can get around some of this by using the segregated interface. Each customer can write an adapter that matches my interface to their locator, but in any case I still need to see the first locator to lookup my specific interface. And once the adapter appears then the simplicity of the direct connection to a locator is beginning to slip.

Since with an injector you don’t have a dependency from a component to the injector, the component cannot obtain further services from the injector once it’s been configured.

A common reason people give for preferring dependency injection is that it makes testing easier. The point here is that to do testing, you need to easily replace real service implementations with stubs or mocks. However there is really no difference here between dependency injection and service locator: both are very amenable to stubbing. I suspect this observation comes from projects where people don’t make the effort to ensure that their service locator can be easily substituted. This is where continual testing helps, if you can’t easily stub services for testing, then this implies a serious problem with your design.

Of course the testing problem is exacerbated by component environments that are very intrusive, such as Java’s EJB framework. My view is that these kinds of frameworks should minimize their impact upon application code, and particularly should not do things that slow down the edit-execute cycle. Using plugins to substitute heavyweight components does a lot to help this process, which is vital for practices such as Test Driven Development.

So the primary issue is for people who are writing code that expects to be used in applications outside of the control of the writer. In these cases even a minimal assumption about a Service Locator is a problem.

Constructor versus Setter Injection

For service combination, you always have to have some convention in order to wire things together. The advantage of injection is primarily that it requires very simple conventions – at least for the constructor and setter injections. You don’t have to do anything odd in your component and it’s fairly straightforward for an injector to get everything configured.

Interface injection is more invasive since you have to write a lot of interfaces to get things all sorted out. For a small set of interfaces required by the container, such as in Avalon’s approach, this isn’t too bad. But it’s a lot of work for assembling components and dependencies, which is why the current crop of lightweight containers go with setter and constructor injection.

The choice between setter and constructor injection is interesting as it mirrors a more general issue with object-oriented programming – should you fill fields in a constructor or with setters.

My long running default with objects is as much as possible, to create valid objects at construction time. This advice goes back to Kent Beck’s Smalltalk Best Practice Patterns: Constructor Method and Constructor Parameter Method. Constructors with parameters give you a clear statement of what it means to create a valid object in an obvious place. If there’s more than one way to do it, create multiple constructors that show the different combinations.

Another advantage with constructor initialization is that it allows you to clearly hide any fields that are immutable by simply not providing a setter. I think this is important – if something shouldn’t change then the lack of a setter communicates this very well. If you use setters for initialization, then this can become a pain. (Indeed in these situations I prefer to avoid the usual setting convention, I’d prefer a method like initFoo, to stress that it’s something you should only do at birth.)

But with any situation there are exceptions. If you have a lot of constructor parameters things can look messy, particularly in languages without keyword parameters. It’s true that a long constructor is often a sign of an over-busy object that should be split, but there are cases when that’s what you need.

If you have multiple ways to construct a valid object, it can be hard to show this through constructors, since constructors can only vary on the number and type of parameters. This is when Factory Methods come into play, these can use a combination of private constructors and setters to implement their work. The problem with classic Factory Methods for components assembly is that they are usually seen as static methods, and you can’t have those on interfaces. You can make a factory class, but then that just becomes another service instance. A factory service is often a good tactic, but you still have to instantiate the factory using one of the techniques here.
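
As a sketch of the shape this takes on the running example (my own illustration, not code from the article):

class MovieLister...
    // A classic Factory Method: a private constructor plus a setter, wrapped
    // in a static creation method that names what a valid lister needs.
    private MovieLister() {}

    public static MovieLister withFinder(MovieFinder finder) {
        MovieLister lister = new MovieLister();
        lister.setFinder(finder);
        return lister;
    }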

Constructors also suffer if you have simple parameters such as strings. With setter injection you can give each setter a name to indicate what the string is supposed to do. With constructors you are just relying on the position, which is harder to follow.
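
A tiny illustration of the point, using a hypothetical finder that reads from two files:

// With a constructor, the call site relies on parameter position alone:
MovieFinder finder = new TwoFileMovieFinder("movies1.txt", "movies2.txt");

// With setters, each string is named at the point it is supplied:
TwoFileMovieFinder finder2 = new TwoFileMovieFinder();
finder2.setMainFilename("movies1.txt");
finder2.setBackupFilename("movies2.txt");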

If you have multiple constructors and inheritance, then things can get particularly awkward. In order to initialize everything you have to provide constructors to forward to each superclass constructor, while also adding your own arguments. This can lead to an even bigger explosion of constructors.

Despite the disadvantages my preference is to start with constructor injection, but be ready to switch to setter injection as soon as the problems I’ve outlined above start to become a problem.

This issue has led to a lot of debate between the various teams who provide dependency injectors as part of their frameworks. However it seems that most people who build these frameworks have realized that it’s important to support both mechanisms, even if there’s a preference for one of them.

Code or configuration files

A separate but often conflated issue is whether to use configuration files or code on an API to wire up services. For most applications that are likely to be deployed in many places, a separate configuration file usually makes most sense. Almost all the time this will be an XML file, and this makes sense. However there are cases where it’s easier to use program code to do the assembly. One case is where you have a simple application that’s not got a lot of deployment variation. In this case a bit of code can be clearer than a separate XML file.

A contrasting case is where the assembly is quite complex, involving conditional steps. Once you start getting close to programming language then XML starts breaking down and it’s better to use a real language that has all the syntax to write a clear program. You then write a builder class that does the assembly. If you have distinct builder scenarios you can provide several builder classes and use a simple configuration file to select between them.
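
For instance, a builder for the running example might look something like this (a sketch; the class name and scenario are mine):

class TextFileAssemblyBuilder {
    // Encapsulates one deployment scenario: movies kept in a delimited text file.
    public MovieLister build() {
        ColonMovieFinder finder = new ColonMovieFinder();
        finder.setFilename("movies1.txt");
        MovieLister lister = new MovieLister();
        lister.setFinder(finder);
        return lister;
    }
}

A simple configuration file then needs to do no more than name which builder class to instantiate for a given deployment.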

I often think that people are over-eager to define configuration files. Often a programming language makes a straightforward and powerful configuration mechanism. Modern languages can easily compile small assemblers that can be used to assemble plugins for larger systems. If compilation is a pain, then there are scripting languages that can work well also.

It’s often said that configuration files shouldn’t use a programming language because they need to be edited by non-programmers. But how often is this the case? Do people really expect non-programmers to alter the transaction isolation levels of a complex server-side application? Non-language configuration files work well only to the extent they are simple. If they become complex then it’s time to think about using a proper programming language.

One thing we’re seeing in the Java world at the moment is a cacophony of configuration files, where every component has its own configuration files which are different to everyone else’s. If you use a dozen of these components, you can easily end up with a dozen configuration files to keep in sync.

My advice here is to always provide a way to do all configuration easily with a programmatic interface, and then treat a separate configuration file as an optional feature. You can easily build configuration file handling to use the programmatic interface. If you are writing a component you then leave it up to your user whether to use the programmatic interface, your configuration file format, or to write their own custom configuration file format and tie it into the programmatic interface.

Separating Configuration from Use

The important issue in all of this is to ensure that the configuration of services is separated from their use. Indeed this is a fundamental design principle that sits with the separation of interfaces from implementation. It’s something we see within an object-oriented program when conditional logic decides which class to instantiate, and then future evaluations of that conditional are done through polymorphism rather than through duplicated conditional code.

If this separation is useful within a single code base, it’s especially vital when you’re using foreign elements such as components and services. The first question is whether you wish to defer the choice of implementation class to particular deployments. If so you need to use some implementation of plugin. Once you are using plugins then it’s essential that the assembly of the plugins is done separately from the rest of the application so that you can substitute different configurations easily for different deployments. How you achieve this is secondary. This configuration mechanism can either configure a service locator, or use injection to configure objects directly.


Some further issues

In this article, I’ve concentrated on the basic issues of service configuration using Dependency Injection and Service Locator. There are some more topics that play into this which also deserve attention, but I haven’t had time yet to dig into. In particular there is the issue of life-cycle behavior. Some components have distinct life-cycle events: stop and starts for instance. Another issue is the growing interest in using aspect oriented ideas with these containers. Although I haven’t considered this material in the article at the moment, I do hope to write more about this either by extending this article or by writing another.

You can find out a lot more about these ideas by looking at the web sites devoted to the lightweight containers. Surfing from the PicoContainer and Spring web sites will lead you into much more discussion of these issues and a start on some of the further issues.


Concluding Thoughts

The current rush of lightweight containers all have a common underlying pattern to how they do service assembly – the dependency injector pattern. Dependency Injection is a useful alternative to Service Locator. When building application classes the two are roughly equivalent, but I think Service Locator has a slight edge due to its more straightforward behavior. However if you are building classes to be used in multiple applications then Dependency Injection is a better choice.

If you use Dependency Injection there are a number of styles to choose between. I would suggest you follow constructor injection unless you run into one of the specific problems with that approach, in which case switch to setter injection. If you are choosing to build or obtain a container, look for one that supports both constructor and setter injection.

The choice between Service Locator and Dependency Injection is less important than the principle of separating service configuration from the use of services within an application.

Reference : http://martinfowler.com/articles/injection.html

Best Practices for Speeding Up Your Web Site

The Exceptional Performance team has identified a number of best practices for making web pages fast. The list includes 35 best practices divided into 7 categories.

The categories are:

  • Content
  • Server
  • Cookie
  • CSS
  • JavaScript
  • Images
  • Mobile

Minimize HTTP Requests

tag: content

80% of the end-user response time is spent on the front-end. Most of this time is tied up in downloading all the components in the page: images, stylesheets, scripts, Flash, etc. Reducing the number of components in turn reduces the number of HTTP requests required to render the page. This is the key to faster pages.

One way to reduce the number of components in the page is to simplify the page’s design. But is there a way to build pages with richer content while also achieving fast response times? Here are some techniques for reducing the number of HTTP requests, while still supporting rich page designs.

Combined files are a way to reduce the number of HTTP requests by combining all scripts into a single script, and similarly combining all CSS into a single stylesheet. Combining files is more challenging when the scripts and stylesheets vary from page to page, but making this part of your release process improves response times.

CSS Sprites are the preferred method for reducing the number of image requests. Combine your background images into a single image and use the CSS background-image and background-position properties to display the desired image segment.
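
A rough sketch of the technique (the sprite file and class names are illustrative, assuming 16x16 icons laid side by side in one combined image):

      .icon {
          background-image: url('sprites.png');  /* one combined image */
          background-repeat: no-repeat;
          width: 16px;
          height: 16px;
      }
      .icon-home   { background-position: 0 0; }      /* first segment */
      .icon-search { background-position: -16px 0; }  /* second segment */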

Image maps combine multiple images into a single image. The overall size is about the same, but reducing the number of HTTP requests speeds up the page. Image maps only work if the images are contiguous in the page, such as a navigation bar. Defining the coordinates of image maps can be tedious and error prone. Using image maps for navigation is also not accessible, so it’s not recommended.

Inline images use the data: URL scheme to embed the image data in the actual page. This can increase the size of your HTML document. Combining inline images into your (cached) stylesheets is a way to reduce HTTP requests and avoid increasing the size of your pages. Inline images are not yet supported across all major browsers.
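
One hedged sketch of the scheme, as a rule in a cached stylesheet (the base64 payload is truncated here; a real one would be the encoded bytes of the image):

      .star {
          background-image: url("data:image/png;base64,iVBORw0KGgoAAAANSUhEUg...");
      }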

Reducing the number of HTTP requests in your page is the place to start. This is the most important guideline for improving performance for first time visitors. As described in Tenni Theurer’s blog post Browser Cache Usage – Exposed!, 40-60% of daily visitors to your site come in with an empty cache. Making your page fast for these first time visitors is key to a better user experience.


Use a Content Delivery Network

tag: server

The user’s proximity to your web server has an impact on response times. Deploying your content across multiple, geographically dispersed servers will make your pages load faster from the user’s perspective. But where should you start?

As a first step to implementing geographically dispersed content, don’t attempt to redesign your web application to work in a distributed architecture. Depending on the application, changing the architecture could include daunting tasks such as synchronizing session state and replicating database transactions across server locations. Attempts to reduce the distance between users and your content could be delayed by, or never pass, this application architecture step.

Remember that 80-90% of the end-user response time is spent downloading all the components in the page: images, stylesheets, scripts, Flash, etc. This is the Performance Golden Rule. Rather than starting with the difficult task of redesigning your application architecture, it’s better to first disperse your static content. This not only achieves a bigger reduction in response times, but it’s easier thanks to content delivery networks.

A content delivery network (CDN) is a collection of web servers distributed across multiple locations to deliver content more efficiently to users. The server selected for delivering content to a specific user is typically based on a measure of network proximity. For example, the server with the fewest network hops or the server with the quickest response time is chosen.

Some large Internet companies own their own CDN, but it’s cost-effective to use a CDN service provider, such as Akamai Technologies, EdgeCast, or Level 3. For start-up companies and private web sites, the cost of a CDN service can be prohibitive, but as your target audience grows larger and becomes more global, a CDN is necessary to achieve fast response times. At Yahoo!, properties that moved static content off their application web servers to a CDN (both 3rd party as mentioned above as well as Yahoo’s own CDN) improved end-user response times by 20% or more. Switching to a CDN is a relatively easy code change that will dramatically improve the speed of your web site.


Add an Expires or a Cache-Control Header

tag: server

There are two aspects to this rule:

  • For static components: implement a “Never expire” policy by setting a far future Expires header
  • For dynamic components: use an appropriate Cache-Control header to help the browser with conditional requests

Web page designs are getting richer and richer, which means more scripts, stylesheets, images, and Flash in the page. A first-time visitor to your page may have to make several HTTP requests, but by using the Expires header you make those components cacheable. This avoids unnecessary HTTP requests on subsequent page views. Expires headers are most often used with images, but they should be used on all components including scripts, stylesheets, and Flash components.

Browsers (and proxies) use a cache to reduce the number and size of HTTP requests, making web pages load faster. A web server uses the Expires header in the HTTP response to tell the client how long a component can be cached. This is a far future Expires header, telling the browser that this response won’t be stale until April 15, 2010.

      Expires: Thu, 15 Apr 2010 20:00:00 GMT

If your server is Apache, use the ExpiresDefault directive to set an expiration date relative to the current date. This example of the ExpiresDefault directive sets the Expires date 10 years out from the time of the request.

      ExpiresDefault "access plus 10 years"

Keep in mind, if you use a far future Expires header you have to change the component’s filename whenever the component changes. At Yahoo! we often make this step part of the build process: a version number is embedded in the component’s filename, for example, yahoo_2.0.6.js.

Using a far future Expires header affects page views only after a user has already visited your site. It has no effect on the number of HTTP requests when a user visits your site for the first time and the browser’s cache is empty. Therefore the impact of this performance improvement depends on how often users hit your pages with a primed cache. (A “primed cache” already contains all of the components in the page.) We measured this at Yahoo! and found the number of page views with a primed cache is 75-85%. By using a far future Expires header, you increase the number of components that are cached by the browser and re-used on subsequent page views without sending a single byte over the user’s Internet connection.
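
For the dynamic components mentioned at the start of this rule, Cache-Control is the counterpart to Expires. Two illustrative values (not from the original rule text): no-cache forces the browser to revalidate the component on every request, while a short private max-age permits brief reuse.

      Cache-Control: no-cache
      Cache-Control: private, max-age=60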


Gzip Components

tag: server

The time it takes to transfer an HTTP request and response across the network can be significantly reduced by decisions made by front-end engineers. It’s true that the end-user’s bandwidth speed, Internet service provider, proximity to peering exchange points, etc. are beyond the control of the development team. But there are other variables that affect response times. Compression reduces response times by reducing the size of the HTTP response.

Starting with HTTP/1.1, web clients indicate support for compression with the Accept-Encoding header in the HTTP request.

      Accept-Encoding: gzip, deflate

If the web server sees this header in the request, it may compress the response using one of the methods listed by the client. The web server notifies the web client of this via the Content-Encoding header in the response.

      Content-Encoding: gzip

Gzip is the most popular and effective compression method at this time. It was developed by the GNU project and standardized by RFC 1952. The only other compression format you’re likely to see is deflate, but it’s less effective and less popular.

Gzipping generally reduces the response size by about 70%. Approximately 90% of today’s Internet traffic travels through browsers that claim to support gzip. If you use Apache, the module configuring gzip depends on your version: Apache 1.3 uses mod_gzip while Apache 2.x uses mod_deflate.
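
As a minimal Apache 2.x sketch (assuming mod_deflate is loaded; the MIME types listed are illustrative), compression can be enabled for text responses with a single directive:

      AddOutputFilterByType DEFLATE text/html text/css application/x-javascript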

There are known issues with browsers and proxies that may cause a mismatch in what the browser expects and what it receives with regard to compressed content. Fortunately, these edge cases are dwindling as the use of older browsers drops off. The Apache modules help out by adding appropriate Vary response headers automatically.

Servers choose what to gzip based on file type, but are typically too limited in what they decide to compress. Most web sites gzip their HTML documents. It’s also worthwhile to gzip your scripts and stylesheets, but many web sites miss this opportunity. In fact, it’s worthwhile to compress any text response including XML and JSON. Image and PDF files should not be gzipped because they are already compressed. Trying to gzip them not only wastes CPU but can potentially increase file sizes.

Gzipping as many file types as possible is an easy way to reduce page weight and accelerate the user experience.

top | discuss this rule

Put Stylesheets at the Top

tag: css

While researching performance at Yahoo!, we discovered that moving stylesheets to the document HEAD makes pages appear to be loading faster. This is because putting stylesheets in the HEAD allows the page to render progressively.

Front-end engineers that care about performance want a page to load progressively; that is, we want the browser to display whatever content it has as soon as possible. This is especially important for pages with a lot of content and for users on slower Internet connections. The importance of giving users visual feedback, such as progress indicators, has been well researched and documented. In our case the HTML page is the progress indicator! When the browser loads the page progressively the header, the navigation bar, the logo at the top, etc. all serve as visual feedback for the user who is waiting for the page. This improves the overall user experience.

The problem with putting stylesheets near the bottom of the document is that it prohibits progressive rendering in many browsers, including Internet Explorer. These browsers block rendering to avoid having to redraw elements of the page if their styles change. The user is stuck viewing a blank white page.

The HTML specification clearly states that stylesheets are to be included in the HEAD of the page: “Unlike A, [LINK] may only appear in the HEAD section of a document, although it may appear any number of times.” Neither of the alternatives, the blank white screen or the flash of unstyled content, is worth the risk. The optimal solution is to follow the HTML specification and load your stylesheets in the document HEAD.

top | discuss this rule

Put Scripts at the Bottom

tag: javascript

The problem caused by scripts is that they block parallel downloads. The HTTP/1.1 specification suggests that browsers download no more than two components in parallel per hostname. If you serve your images from multiple hostnames, you can get more than two downloads to occur in parallel. While a script is downloading, however, the browser won’t start any other downloads, even on different hostnames.

In some situations it’s not easy to move scripts to the bottom. If, for example, the script uses document.write to insert part of the page’s content, it can’t be moved lower in the page. There might also be scoping issues. In many cases, there are ways to work around these situations.

An alternative suggestion that often comes up is to use deferred scripts. The DEFER attribute indicates that the script does not contain document.write, and is a clue to browsers that they can continue rendering. Unfortunately, Firefox doesn’t support the DEFER attribute. In Internet Explorer, the script may be deferred, but not as much as desired. If a script can be deferred, it can also be moved to the bottom of the page. That will make your web pages load faster.
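
For reference, a deferred script looks like this (the filename is hypothetical):

      <script defer type="text/javascript" src="menu.js"></script>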

top | discuss this rule

Avoid CSS Expressions

tag: css

CSS expressions are a powerful (and dangerous) way to set CSS properties dynamically. They were supported in Internet Explorer starting with version 5, but were deprecated starting with IE8. As an example, the background color could be set to alternate every hour using CSS expressions:

      background-color: expression( (new Date()).getHours()%2 ? "#B8D4FF" : "#F08A00" );

As shown here, the expression method accepts a JavaScript expression. The CSS property is set to the result of evaluating the JavaScript expression. The expression method is ignored by other browsers, so it is useful for setting properties in Internet Explorer needed to create a consistent experience across browsers.

The problem with expressions is that they are evaluated more frequently than most people expect. Not only are they evaluated when the page is rendered and resized, but also when the page is scrolled and even when the user moves the mouse over the page. Adding a counter to the CSS expression allows us to keep track of when and how often a CSS expression is evaluated. Moving the mouse around the page can easily generate more than 10,000 evaluations.

One way to reduce the number of times your CSS expression is evaluated is to use one-time expressions, where the first time the expression is evaluated it sets the style property to an explicit value, which replaces the CSS expression. If the style property must be set dynamically throughout the life of the page, using event handlers instead of CSS expressions is an alternative approach. If you must use CSS expressions, remember that they may be evaluated thousands of times and could affect the performance of your page.
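
A sketch of the one-time expression technique (altBgcolor is a hypothetical helper defined in a script block): the first evaluation writes an explicit value to the element’s inline style, which takes precedence over the rule containing the expression, so the expression never runs again.

      p { background-color: expression( altBgcolor(this) ); }

      function altBgcolor(elem) {
          elem.style.backgroundColor = (new Date()).getHours() % 2 ? "#B8D4FF" : "#F08A00";
      }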

top | discuss this rule

Make JavaScript and CSS External

tag: javascript, css

Many of these performance rules deal with how external components are managed. However, before these considerations arise you should ask a more basic question: Should JavaScript and CSS be contained in external files, or inlined in the page itself?

Using external files in the real world generally produces faster pages because the JavaScript and CSS files are cached by the browser. JavaScript and CSS that are inlined in HTML documents get downloaded every time the HTML document is requested. Inlining reduces the number of HTTP requests that are needed, but increases the size of the HTML document. On the other hand, if the JavaScript and CSS are in external files cached by the browser, the size of the HTML document is reduced without increasing the number of HTTP requests.

The key factor, then, is the frequency with which external JavaScript and CSS components are cached relative to the number of HTML documents requested. This factor, although difficult to quantify, can be gauged using various metrics. If users on your site have multiple page views per session and many of your pages re-use the same scripts and stylesheets, there is a greater potential benefit from cached external files.

Many web sites fall in the middle of these metrics. For these sites, the best solution generally is to deploy the JavaScript and CSS as external files. The only exception where inlining is preferable is with home pages, such as Yahoo!’s front page and My Yahoo!. Home pages that have few (perhaps only one) page view per session may find that inlining JavaScript and CSS results in faster end-user response times.

For front pages that are typically the first of many page views, there are techniques that leverage the reduction of HTTP requests that inlining provides, as well as the caching benefits achieved through using external files. One such technique is to inline JavaScript and CSS in the front page, but dynamically download the external files after the page has finished loading. Subsequent pages would reference the external files that should already be in the browser’s cache.
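
A minimal sketch of the post-onload download (filenames are hypothetical; note that appending the elements also applies them, which is harmless when they duplicate what was already inlined):

      window.onload = function () {
          // fetch the external files the next pages will use, so they are
          // already in the browser's cache when the user navigates there
          var js = document.createElement("script");
          js.src = "menu.js";
          document.getElementsByTagName("head")[0].appendChild(js);

          var css = document.createElement("link");
          css.rel = "stylesheet";
          css.type = "text/css";
          css.href = "styles.css";
          document.getElementsByTagName("head")[0].appendChild(css);
      };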

top | discuss this rule

Reduce DNS Lookups

tag: content

The Domain Name System (DNS) maps hostnames to IP addresses, just as phonebooks map people’s names to their phone numbers. When you type http://www.yahoo.com into your browser, a DNS resolver contacted by the browser returns that server’s IP address. DNS has a cost. It typically takes 20-120 milliseconds for DNS to look up the IP address for a given hostname. The browser can’t download anything from this hostname until the DNS lookup is completed.

DNS lookups are cached for better performance. This caching can occur on a special caching server, maintained by the user’s ISP or local area network, but there is also caching that occurs on the individual user’s computer. The DNS information remains in the operating system’s DNS cache (the “DNS Client service” on Microsoft Windows). Most browsers have their own caches, separate from the operating system’s cache. As long as the browser keeps a DNS record in its own cache, it doesn’t bother the operating system with a request for the record.

Internet Explorer caches DNS lookups for 30 minutes by default, as specified by the DnsCacheTimeout registry setting. Firefox caches DNS lookups for 1 minute, controlled by the network.dnsCacheExpiration configuration setting. (Fasterfox changes this to 1 hour.)

When the client’s DNS cache is empty (for both the browser and the operating system), the number of DNS lookups is equal to the number of unique hostnames in the web page. This includes the hostnames used in the page’s URL, images, script files, stylesheets, Flash objects, etc. Reducing the number of unique hostnames reduces the number of DNS lookups.

Reducing the number of unique hostnames has the potential to reduce the amount of parallel downloading that takes place in the page. Avoiding DNS lookups cuts response times, but reducing parallel downloads may increase response times. My guideline is to split these components across at least two but no more than four hostnames. This results in a good compromise between reducing DNS lookups and allowing a high degree of parallel downloads.

top | discuss this rule

Minify JavaScript and CSS

tag: javascript, css

Minification is the practice of removing unnecessary characters from code to reduce its size, thereby improving load times. When code is minified all comments are removed, as well as unneeded white space characters (space, newline, and tab). In the case of JavaScript, this improves response time performance because the size of the downloaded file is reduced. Two popular tools for minifying JavaScript code are JSMin and YUI Compressor. The YUI Compressor can also minify CSS.
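
For example, the YUI Compressor runs from the command line (the jar version and filenames here are illustrative):

      java -jar yuicompressor-2.4.2.jar menu.js -o menu-min.js
      java -jar yuicompressor-2.4.2.jar styles.css -o styles-min.css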

Obfuscation is an alternative optimization that can be applied to source code. It’s more complex than minification and thus more likely to generate bugs as a result of the obfuscation step itself. In a survey of ten top U.S. web sites, minification achieved a 21% size reduction versus 25% for obfuscation. Although obfuscation has a higher size reduction, minifying JavaScript is less risky.

In addition to minifying external scripts and styles, inlined <script> and <style> blocks can and should also be minified. Even if you gzip your scripts and styles, minifying them will still reduce the size by 5% or more. As the use and size of JavaScript and CSS increases, so will the savings gained by minifying your code.

top | discuss this rule

Avoid Redirects

tag: content

Redirects are accomplished using the 301 and 302 status codes. Here’s an example of the HTTP headers in a 301 response:

      HTTP/1.1 301 Moved Permanently
      Location: http://example.com/newuri
      Content-Type: text/html

The browser automatically takes the user to the URL specified in the Location field. All the information necessary for a redirect is in the headers. The body of the response is typically empty. Despite their names, neither a 301 nor a 302 response is cached in practice unless additional headers, such as Expires or Cache-Control, indicate it should be. The meta refresh tag and JavaScript are other ways to direct users to a different URL, but if you must do a redirect, the preferred technique is to use the standard 3xx HTTP status codes, primarily to ensure the back button works correctly.

The main thing to remember is that redirects slow down the user experience. Inserting a redirect between the user and the HTML document delays everything in the page since nothing in the page can be rendered and no components can start being downloaded until the HTML document has arrived.

One of the most wasteful redirects happens frequently and web developers are generally not aware of it. It occurs when a trailing slash (/) is missing from a URL that should otherwise have one. For example, going to http://astrology.yahoo.com/astrology results in a 301 response containing a redirect to http://astrology.yahoo.com/astrology/ (notice the added trailing slash). This is fixed in Apache by using Alias or mod_rewrite, or the DirectorySlash directive if you’re using Apache handlers.
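
One hedged mod_rewrite sketch (server-config context, using the path from the example above) rewrites the slashless URL internally so the browser never sees the redirect round trip; note that relative URLs in the page must then be written with the trailing-slash form in mind:

      RewriteEngine On
      RewriteRule ^/astrology$ /astrology/ [PT]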

Connecting an old web site to a new one is another common use for redirects. Others include connecting different parts of a website and directing the user based on certain conditions (type of browser, type of user account, etc.). Using a redirect to connect two web sites is simple and requires little additional coding. Although using redirects in these situations reduces the complexity for developers, it degrades the user experience. Alternatives for this use of redirects include using Alias and mod_rewrite if the two code paths are hosted on the same server. If a domain name change is the cause of using redirects, an alternative is to create a CNAME (a DNS record that creates an alias pointing from one domain name to another) in combination with Alias or mod_rewrite.

top | discuss this rule

Remove Duplicate Scripts

tag: javascript

It hurts performance to include the same JavaScript file twice in one page. This isn’t as unusual as you might think. A review of the ten top U.S. web sites shows that two of them contain a duplicated script. Two main factors increase the odds of a script being duplicated in a single web page: team size and number of scripts. When it does happen, duplicate scripts hurt performance by creating unnecessary HTTP requests and wasted JavaScript execution.

Unnecessary HTTP requests happen in Internet Explorer, but not in Firefox. In Internet Explorer, if an external script is included twice and is not cacheable, it generates two HTTP requests during page loading. Even if the script is cacheable, extra HTTP requests occur when the user reloads the page.

In addition to generating wasteful HTTP requests, time is wasted evaluating the script multiple times. This redundant JavaScript execution happens in both Firefox and Internet Explorer, regardless of whether the script is cacheable.

One way to avoid accidentally including the same script twice is to implement a script management module in your templating system. The typical way to include a script is to use the SCRIPT tag in your HTML page.

      <script type="text/javascript" src="menu_1.0.17.js"></script>

An alternative in PHP would be to create a function called insertScript.

      <?php insertScript("menu.js") ?>

In addition to preventing the same script from being inserted multiple times, this function could handle other issues with scripts, such as dependency checking and adding version numbers to script filenames to support far future Expires headers.
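
A minimal PHP sketch of such a function (all names are illustrative; the versioning and dependency logic is left as comments):

      <?php
      function insertScript($file) {
          static $inserted = array();
          if (isset($inserted[$file])) {
              return; // already on the page: skip the duplicate request
          }
          $inserted[$file] = true;
          // dependency checking and filename versioning (e.g. menu.js ->
          // menu_1.0.17.js, to support far future Expires headers) go here
          echo '<script type="text/javascript" src="' .
               htmlspecialchars($file) . '"></script>';
      }
      ?>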

top | discuss this rule

Configure ETags

tag: server

Entity tags (ETags) are a mechanism that web servers and browsers use to determine whether the component in the browser’s cache matches the one on the origin server. (An “entity” is another word for a “component”: images, scripts, stylesheets, etc.) ETags were added to provide a mechanism for validating entities that is more flexible than the last-modified date. An ETag is a string that uniquely identifies a specific version of a component. The only format constraint is that the string be quoted. The origin server specifies the component’s ETag using the ETag response header.

      HTTP/1.1 200 OK
      Last-Modified: Tue, 12 Dec 2006 03:03:59 GMT
      ETag: "10c24bc-4ab-457e1c1f"
      Content-Length: 12195

Later, if the browser has to validate a component, it uses the If-None-Match header to pass the ETag back to the origin server. If the ETags match, a 304 status code is returned, reducing the response by 12195 bytes in this example.

      GET /i/yahoo.gif HTTP/1.1
      Host: us.yimg.com
      If-Modified-Since: Tue, 12 Dec 2006 03:03:59 GMT
      If-None-Match: "10c24bc-4ab-457e1c1f"
      HTTP/1.1 304 Not Modified

The problem with ETags is that they typically are constructed using attributes that make them unique to a specific server hosting a site. ETags won’t match when a browser gets the original component from one server and later tries to validate that component on a different server, a situation that is all too common on Web sites that use a cluster of servers to handle requests. By default, both Apache and IIS embed data in the ETag that dramatically reduces the odds of the validity test succeeding on web sites with multiple servers.

The ETag format for Apache 1.3 and 2.x is inode-size-timestamp. Although a given file may reside in the same directory across multiple servers, and have the same file size, permissions, timestamp, etc., its inode is different from one server to the next.

IIS 5.0 and 6.0 have a similar issue with ETags. The format for ETags on IIS is Filetimestamp:ChangeNumber. A ChangeNumber is a counter used to track configuration changes to IIS. It’s unlikely that the ChangeNumber is the same across all IIS servers behind a web site.

The end result is ETags generated by Apache and IIS for the exact same component won’t match from one server to another. If the ETags don’t match, the user doesn’t receive the small, fast 304 response that ETags were designed for; instead, they’ll get a normal 200 response along with all the data for the component. If you host your web site on just one server, this isn’t a problem. But if you have multiple servers hosting your web site, and you’re using Apache or IIS with the default ETag configuration, your users are getting slower pages, your servers have a higher load, you’re consuming greater bandwidth, and proxies aren’t caching your content efficiently. Even if your components have a far future Expires header, a conditional GET request is still made whenever the user hits Reload or Refresh.

If you’re not taking advantage of the flexible validation model that ETags provide, it’s better to just remove the ETag altogether. The Last-Modified header validates based on the component’s timestamp. And removing the ETag reduces the size of the HTTP headers in both the response and subsequent requests. This Microsoft Support article describes how to remove ETags. In Apache, this is done by simply adding the following line to your Apache configuration file:

      FileETag none

top | discuss this rule

Make Ajax Cacheable

tag: content

One of the cited benefits of Ajax is that it provides instantaneous feedback to the user because it requests information asynchronously from the backend web server. However, using Ajax is no guarantee that the user won’t be twiddling his thumbs waiting for those asynchronous JavaScript and XML responses to return. In many applications, whether or not the user is kept waiting depends on how Ajax is used. For example, in a web-based email client the user will be kept waiting for the results of an Ajax request to find all the email messages that match their search criteria. It’s important to remember that “asynchronous” does not imply “instantaneous”.

To improve performance, it’s important to optimize these Ajax responses. The most important way to improve the performance of Ajax is to make the responses cacheable, as discussed in Add an Expires or a Cache-Control Header. Several of the other rules also apply to Ajax responses: Gzip Components, Reduce DNS Lookups, Minify JavaScript and CSS, Avoid Redirects, and Configure ETags.

Let’s look at an example. A Web 2.0 email client might use Ajax to download the user’s address book for autocompletion. If the user hasn’t modified her address book since the last time she used the email web app, the previous address book response could be read from cache if that Ajax response was made cacheable with a future Expires or Cache-Control header. The browser must be informed when to use a previously cached address book response versus requesting a new one. This could be done by adding a timestamp to the address book Ajax URL indicating the last time the user modified her address book, for example, &t=1190241612. If the address book hasn’t been modified since the last download, the timestamp will be the same and the address book will be read from the browser’s cache eliminating an extra HTTP roundtrip. If the user has modified her address book, the timestamp ensures the new URL doesn’t match the cached response, and the browser will request the updated address book entries.
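
A sketch of the request side (userId, lastModified, and handleAddressBook are hypothetical; the ActiveX fallback needed for older IE is omitted):

      var url = "/addressbook?user=" + userId + "&t=" + lastModified;
      var req = new XMLHttpRequest();
      req.open("GET", url, true); // GET, so the cached response can be reused
      req.onreadystatechange = function () {
          if (req.readyState === 4 && req.status === 200) {
              handleAddressBook(req.responseText);
          }
      };
      req.send(null);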

Even though your Ajax responses are created dynamically, and might only be applicable to a single user, they can still be cached. Doing so will make your Web 2.0 apps faster.

top | discuss this rule

Flush the Buffer Early

tag: server

When users request a page, it can take anywhere from 200 to 500ms for the backend server to stitch together the HTML page. During this time, the browser is idle as it waits for the data to arrive. In PHP you have the function flush(). It allows you to send your partially ready HTML response to the browser so that the browser can start fetching components while your backend is busy with the rest of the HTML page. The benefit is mainly seen on busy backends or light frontends.

A good place to consider flushing is right after the HEAD because the HTML for the head is usually easier to produce and it allows you to include any CSS and JavaScript files for the browser to start fetching in parallel while the backend is still processing.

Example:

      ... <!-- css, js -->
    </head>
    <?php flush(); ?>
    <body>
      ... <!-- content -->

Yahoo! search pioneered research and real user testing to prove the benefits of using this technique.

top

Use GET for AJAX Requests

tag: server

The Yahoo! Mail team found that when using XMLHttpRequest, POST is implemented in the browsers as a two-step process: sending the headers first, then sending data. So it’s best to use GET, which only takes one TCP packet to send (unless you have a lot of cookies). The maximum URL length in IE is 2K, so if you send more than 2K data you might not be able to use GET.

An interesting side effect is that POST without actually posting any data behaves like GET. Based on the HTTP specs, GET is meant for retrieving information, so it makes sense (semantically) to use GET when you’re only requesting data, as opposed to sending data to be stored server-side.

top

Post-load Components

tag: content

You can take a closer look at your page and ask yourself: “What’s absolutely required in order to render the page initially?”. The rest of the content and components can wait.

JavaScript is an ideal candidate for splitting before and after the onload event. For example, if you have JavaScript code and libraries that do drag and drop and animations, those can wait, because dragging elements on the page comes after the initial rendering. Other places to look for candidates for post-loading include hidden content (content that appears after a user action) and images below the fold.

Tools to help you out in your effort: YUI Image Loader allows you to delay images below the fold and the YUI Get utility is an easy way to include JS and CSS on the fly. For an example in the wild take a look at Yahoo! Home Page with Firebug’s Net Panel turned on.

It’s good when the performance goals are in line with other web development best practices. In this case, the idea of progressive enhancement tells us that JavaScript, when supported, can improve the user experience, but you have to make sure the page works even without JavaScript. So after you’ve made sure the page works fine, you can enhance it with some post-loaded scripts that give you more bells and whistles such as drag and drop and animations.

top

Preload Components

tag: content

Preload may look like the opposite of post-load, but it actually has a different goal. By preloading components you can take advantage of the time the browser is idle and request components (like images, styles and scripts) you’ll need in the future. This way when the user visits the next page, you could have most of the components already in the cache and your page will load much faster for the user.

There are actually several types of preloading:

  • Unconditional preload – as soon as onload fires, you go ahead and fetch some extra components. Check google.com for an example of how a sprite image is requested onload. This sprite image is not needed on the google.com homepage, but it is needed on the consecutive search result page.
  • Conditional preload – based on a user action you make an educated guess where the user is headed next and preload accordingly. On search.yahoo.com you can see how some extra components are requested after you start typing in the input box.
  • Anticipated preload – preload in advance before launching a redesign. It often happens after a redesign that you hear: “The new site is cool, but it’s slower than before”. Part of the problem could be that the users were visiting your old site with a full cache, but the new one is always an empty cache experience. You can mitigate this side effect by preloading some components before you even launch the redesign. Your old site can use the time the browser is idle to request images and scripts that will be used by the new site.
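
A minimal sketch of unconditional preload (the image URL is hypothetical):

      window.onload = function () {
          // the current page never shows this image; requesting it now
          // primes the cache for the next page
          var img = new Image();
          img.src = "http://static.example.org/next-page-sprite.png";
      };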

top

Reduce the Number of DOM Elements

tag: content

A complex page means more bytes to download, and it also means slower DOM access in JavaScript. It makes a difference whether you loop through 500 or 5000 DOM elements on the page when, for example, you want to add an event handler.

A high number of DOM elements can be a symptom that there’s something that should be improved with the markup of the page without necessarily removing content. Are you using nested tables for layout purposes? Are you throwing in more <div>s only to fix layout issues? Maybe there’s a better and more semantically correct way to do your markup.

The YUI CSS utilities are a great help with layouts: grids.css can help you with the overall layout, while fonts.css and reset.css can help you strip away the browser’s default formatting. This is a chance to start fresh and think about your markup, for example use <div>s only when it makes sense semantically, and not because it renders a new line.

The number of DOM elements is easy to test; just type the following into Firebug’s console:

      document.getElementsByTagName('*').length

And how many DOM elements are too many? Check other similar pages that have good markup. For example the Yahoo! Home Page is a pretty busy page and still under 700 elements (HTML tags).

top

Split Components Across Domains

tag: content

Splitting components allows you to maximize parallel downloads. Make sure you’re not using more than 2-4 domains because of the DNS lookup penalty. For example, you can host your HTML and dynamic content on www.example.org and split static components between static1.example.org and static2.example.org.

For more information check “Maximizing Parallel Downloads in the Carpool Lane” by Tenni Theurer and Patty Chi.

top

Minimize the Number of iframes

tag: content

Iframes allow an HTML document to be inserted in the parent document. It’s important to understand how iframes work so they can be used effectively.

<iframe> pros:

  • Helps with slow third-party content like badges and ads
  • Security sandbox
  • Download scripts in parallel

<iframe> cons:

  • Costly even if blank
  • Blocks page onload
  • Non-semantic

top

No 404s

tag: content

HTTP requests are expensive, so making an HTTP request and getting a useless response (i.e. 404 Not Found) is totally unnecessary and will slow down the user experience without any benefit.

Some sites have helpful 404s (“Did you mean X?”), which is great for the user experience but also wastes server resources (database calls, etc.). Particularly bad is when the link to an external JavaScript is wrong and the result is a 404. First, this download will block parallel downloads. Next, the browser may try to parse the 404 response body as if it were JavaScript code, trying to find something usable in it.

top

Reduce Cookie Size

tag: cookie

HTTP cookies are used for a variety of reasons such as authentication and personalization. Information about cookies is exchanged in the HTTP headers between web servers and browsers. It’s important to keep the size of cookies as low as possible to minimize the impact on the user’s response time.

For more information check “When the Cookie Crumbles” by Tenni Theurer and Patty Chi. The take-home of this research:

  • Eliminate unnecessary cookies
  • Keep cookie sizes as low as possible to minimize the impact on the user response time
  • Be mindful of setting cookies at the appropriate domain level so other sub-domains are not affected
  • Set an Expires date appropriately. An earlier Expires date or none removes the cookie sooner, improving the user response time

top

Use Cookie-free Domains for Components

tag: cookie

When the browser makes a request for a static image and sends cookies together with the request, the server doesn’t have any use for those cookies. So they only create network traffic for no good reason. You should make sure static components are requested with cookie-free requests. Create a subdomain and host all your static components there.

If your domain is www.example.org, you can host your static components on static.example.org. However, if you’ve already set cookies on the top-level domain example.org as opposed to www.example.org, then all the requests to static.example.org will include those cookies. In this case, you can buy a whole new domain, host your static components there, and keep this domain cookie-free. Yahoo! uses yimg.com, YouTube uses ytimg.com, Amazon uses images-amazon.com and so on.

Another benefit of hosting static components on a cookie-free domain is that some proxies might refuse to cache the components that are requested with cookies. On a related note, if you wonder if you should use example.org or http://www.example.org for your home page, consider the cookie impact. Omitting www leaves you no choice but to write cookies to *.example.org, so for performance reasons it’s best to use the www subdomain and write the cookies to that subdomain.

top

Minimize DOM Access

tag: javascript

Accessing DOM elements with JavaScript is slow, so in order to have a more responsive page, you should:

  • Cache references to accessed elements
  • Update nodes “offline” and then add them to the tree
  • Avoid fixing layout with JavaScript
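
A short sketch illustrating the first two points (the element id and item count are hypothetical):

      var list = document.getElementById("results"); // cached reference
      var frag = document.createDocumentFragment();  // "offline" container
      for (var i = 0; i < 100; i++) {
          var item = document.createElement("li");
          item.appendChild(document.createTextNode("item " + i));
          frag.appendChild(item);
      }
      list.appendChild(frag); // touch the live DOM once, not 100 times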

For more information check the YUI theatre’s “High Performance Ajax Applications” by Julien Lecomte.

top

Develop Smart Event Handlers

tag: javascript

Sometimes pages feel less responsive because of too many event handlers attached to different elements of the DOM tree, which are then executed too often. That’s why using event delegation is a good approach. If you have 10 buttons inside a div, attach only one event handler to the div wrapper, instead of one handler for each button. Events bubble up so you’ll be able to catch the event and figure out which button it originated from.
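
A sketch of this delegation pattern (the element id and callback are hypothetical):

      document.getElementById("toolbar").onclick = function (e) {
          e = e || window.event;                 // IE event model
          var target = e.target || e.srcElement; // IE uses srcElement
          if (target.nodeName === "BUTTON") {
              handleButtonClick(target.id);      // hypothetical callback
          }
      };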

You also don’t need to wait for the onload event in order to start doing something with the DOM tree. Often all you need is the element you want to access to be available in the tree. You don’t have to wait for all images to be downloaded. DOMContentLoaded is the event you might consider using instead of onload, but until it’s available in all browsers, you can use the YUI Event utility, which has an onAvailable method.

For more information check the YUI theatre’s “High Performance Ajax Applications” by Julien Lecomte.

top

Choose <link> over @import

tag: css

One of the previous best practices states that CSS should be at the top in order to allow for progressive rendering.

In IE @import behaves the same as using <link> at the bottom of the page, so it’s best not to use it.

top

Avoid Filters

tag: css

The IE-proprietary AlphaImageLoader filter aims to fix a problem with semi-transparent true color PNGs in IE versions < 7. The problem with this filter is that it blocks rendering and freezes the browser while the image is being downloaded. It also increases memory consumption and is applied per element, not per image, so the problem is multiplied.

The best approach is to avoid AlphaImageLoader completely and use gracefully degrading PNG8 instead, which is fine in IE. If you absolutely need AlphaImageLoader, use the underscore hack (_filter) so as not to penalize your IE7+ users.
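
A sketch of the hack (the selector and image names are hypothetical): property names prefixed with an underscore are read only by IE6 and earlier, so the filter never reaches IE7+.

      .logo {
          background: url(logo.png) no-repeat; /* IE7+ and other browsers */
          _background: none; /* IE6: hide the normal image... */
          _filter: progid:DXImageTransform.Microsoft.AlphaImageLoader(src='logo.png', sizingMethod='crop'); /* ...and use the filter */
      }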

top

Optimize Images

tag: images

After a designer is done with creating the images for your web page, there are still some things you can try before you FTP those images to your web server.

  • You can check the GIFs and see if they are using a palette size corresponding to the number of colors in the image. Using imagemagick it’s easy to check:
    identify -verbose image.gif
    When you see an image using 4 colors and 256 color “slots” in the palette, there is room for improvement.
  • Try converting GIFs to PNGs and see if there is a saving. More often than not, there is. Developers often hesitate to use PNGs due to the limited support in browsers, but this is now a thing of the past. The only real problem is alpha-transparency in true color PNGs, but then again, GIFs are not true color and don’t support variable transparency either. So anything a GIF can do, a palette PNG (PNG8) can do too (except for animations). This simple imagemagick command results in totally safe-to-use PNGs:
    convert image.gif image.png
    “All we are saying is: Give PiNG a Chance!”
  • Run pngcrush (or any other PNG optimizer tool) on all your PNGs. Example:
    pngcrush image.png -rem alla -reduce -brute result.png
  • Run jpegtran on all your JPEGs. This tool does lossless JPEG operations such as rotation and can also be used to optimize and remove comments and other useless information (such as EXIF information) from your images.
    jpegtran -copy none -optimize -perfect src.jpg dest.jpg

top

Optimize CSS Sprites

tag: images

  • Arranging the images in the sprite horizontally as opposed to vertically usually results in a smaller file size.
  • Combining similar colors in a sprite helps you keep the color count low, ideally under 256 colors so as to fit in a PNG8.
  • “Be mobile-friendly” and don’t leave big gaps between the images in a sprite. This doesn’t affect the file size as much, but it requires less memory for the user agent to decompress the image into a pixel map: a 100×100 image is 10 thousand pixels, whereas a 1000×1000 image is 1 million pixels.

top

Don’t Scale Images in HTML

tag: images

Don’t use a bigger image than you need just because you can set the width and height in HTML. If you need

      <img width="100" height="100" src="mycat.jpg" alt="My Cat" />

then your image (mycat.jpg) should be 100x100px rather than a scaled down 500x500px image.

top

Make favicon.ico Small and Cacheable

tag: images

The favicon.ico is an image that stays in the root of your server. It’s a necessary evil because, even if you don’t care about it, the browser will still request it, so it’s better not to respond with a 404 Not Found. Also, since it’s on the same server, cookies are sent every time it’s requested. This image also interferes with the download sequence: for example, in IE, when you request extra components in the onload, the favicon will be downloaded before these extra components.

So to mitigate the drawbacks of having a favicon.ico make sure:

  • It’s small, preferably under 1K.
  • Set the Expires header to a date you feel comfortable with (since you cannot rename the favicon if you decide to change it). You can probably safely set the Expires header a few months in the future. You can check the last modified date of your current favicon.ico to make an informed decision.

ImageMagick can help you create small favicons.
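
For example (assuming ImageMagick and a hypothetical source image):

      convert logo.png -resize 16x16 favicon.ico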

top

Keep Components under 25K

tag: mobile

This restriction is related to the fact that iPhone won’t cache components bigger than 25K. Note that this is the uncompressed size. This is where minification is important because gzip alone may not be sufficient.

For more information check “Performance Research, Part 5: iPhone Cacheability – Making it Stick” by Wayne Shea and Tenni Theurer.

top

Pack Components into a Multipart Document

tag: mobile

Packing components into a multipart document is like an email with attachments: it helps you fetch several components with one HTTP request (remember: HTTP requests are expensive). When you use this technique, first check if the user agent supports it (iPhone does not).

top

Avoid Empty Image src

tag: server

An image with an empty string src attribute occurs more often than one would expect. It appears in two forms:

  1. straight HTML

    <img src="">

  2. JavaScript

    var img = new Image();
    img.src = "";

Both forms cause the same effect: the browser makes another request to your server.

  • Internet Explorer makes a request to the directory in which the page is located.
  • Safari and Chrome make a request to the actual page itself.
  • Firefox 3 and earlier versions behave the same as Safari and Chrome, but version 3.5 addressed this issue [bug 444931] and no longer sends a request.
  • Opera does not do anything when an empty image src is encountered.

Why is this behavior bad?

  1. Cripple your servers by sending a large amount of unexpected traffic, especially for pages that get millions of page views per day.
  2. Waste server computing cycles generating a page that will never be viewed.
  3. Possibly corrupt user data. If you are tracking state in the request, either by cookies or in another way, you have the possibility of destroying data. Even though the image request does not return an image, all of the headers are read and accepted by the browser, including all cookies. While the rest of the response is thrown away, the damage may already be done.

The root cause of this behavior is the way that URI resolution is performed in browsers. This behavior is defined in RFC 3986 – Uniform Resource Identifiers. When an empty string is encountered as a URI, it is considered a relative URI and is resolved according to the algorithm defined in section 5.2. This specific example, an empty string, is listed in section 5.4. Firefox, Safari, and Chrome are all resolving an empty string correctly per the specification, while Internet Explorer is resolving it incorrectly, apparently in line with an earlier version of the specification, RFC 2396 – Uniform Resource Identifiers (this was obsoleted by RFC 3986). So technically, the browsers are doing what they are supposed to do to resolve relative URIs. The problem is that in this context, the empty string is clearly unintentional.

HTML5 adds to the description of the <img> tag’s src attribute to instruct browsers not to make an additional request, in section 4.8.2:

The src attribute must be present, and must contain a valid URL referencing a non-interactive, optionally animated, image resource that is neither paged nor scripted. If the base URI of the element is the same as the document’s address, then the src attribute’s value must not be the empty string.

Hopefully, browsers will not have this problem in the future. Unfortunately, there is no such clause for <script src=""> and <link href="">. Maybe there is still time to make that adjustment to ensure browsers don’t accidentally implement this behavior.

This rule was inspired by Yahoo!’s JavaScript guru Nicholas C. Zakas. For more information check out his article “Empty image src can destroy your site”.

top