Consensus Roadmap for Defeating Distributed Denial of Service Attacks


Contents
  1. Introduction
  2. Key Trends and Factors
  3. Immediate Steps to Reduce Risk and Dampen the Effects of Attacks
  4. Longer Term Efforts to Provide Adequate Safeguards
  5. A Living Document

Introduction

The distributed denial of service attacks during the week of February 7, 2000 highlighted security weaknesses in hosts and software used in the Internet that put electronic commerce at risk. These attacks also illuminated several recent trends and served as a warning for the kinds of high-impact attacks that we may see in the near future. This document outlines key trends and other factors that have exacerbated these Internet security problems, summarizes near-term activities that can be taken to help reduce the threat, and suggests research and development directions that will be required to manage the emerging risks and keep them within more tolerable bounds. For the problems described, activities are listed for user organizations, Internet service providers, network manufacturers, and system software providers.

Key Trends and Factors

The recent attacks against e-commerce sites demonstrate the opportunities that attackers now have because of several Internet trends and related factors:

  • Attack technology is developing in an open-source environment and is evolving rapidly. Technology producers, system administrators, and users are improving their ability to react to emerging problems, but they are behind, and significant damage to systems and infrastructure can occur before effective defenses can be implemented. As long as defensive strategies are reactionary, this situation will worsen. Currently, there are tens of thousands – perhaps even millions – of systems with weak security connected to the Internet. Attackers are compromising these machines (and will continue to do so) and building attack networks. Attack technology takes advantage of the power of the Internet to exploit its own weaknesses and overcome defenses.
  • Increasingly complex software is being written by programmers who have no training in writing secure code and are working in organizations that sacrifice the safety of their clients for speed to market. This complex software is then being deployed in security-critical environments and applications, to the detriment of all users.
  • User demand for new software features instead of safety, coupled with industry response to that demand, has resulted in software that is increasingly supportive of subversion, computer viruses, data theft, and other malicious acts.
  • Because of the scope and variety of the Internet, changing any particular piece of technology usually cannot eliminate newly emerging problems; broad community action is required. While point solutions can help dampen the effects of attacks, robust solutions will come only with concentrated effort over several years.
  • The explosion in use of the Internet is straining our scarce technical talent. The average level of system administrator technical competence has decreased dramatically in the last 5 years as non-technical people are pressed into service as system administrators. Additionally, there has been little organized support of higher education programs that can train and produce new scientists and educators with meaningful experience and expertise in this emerging discipline.
  • The evolution of attack technology and the deployment of attack tools transcend geography and national boundaries. Solutions must be international in scope.
  • The difficulty of criminal investigation of cybercrime coupled with the complexity of international law mean that successful apprehension and prosecution of computer crime is unlikely, and thus little deterrent value is realized.
  • The number of directly connected homes, schools, libraries and other venues without trained system administration and security staff is rapidly increasing. These “always-on, rarely-protected” systems allow attackers to continue to add new systems to their arsenal of captured weapons.
Immediate Steps to Reduce Risk and Dampen the Effects of Attacks

There are several steps that can be taken immediately by user organizations, Internet service providers, network manufacturers, and system software providers to reduce risk and decrease the impact of attacks. We hope that major users, including governments around the world, will lead the user community by setting examples – taking the necessary steps to protect their computers. And we hope that industry and government will cooperate to educate the community of users – about threats and potential courses of action – through public information campaigns and technical education programs.

In all of these recommendations, there may be instances where some steps are not feasible, but these will be rare and requests for waivers within organizations should be granted only on the basis of substantive proof validated by independent security experts.

Problem 1: Spoofing

Attackers often hide the identity of machines used to carry out an attack by falsifying the source address of the network communication. This makes it more difficult to identify the sources of attack traffic and sometimes shifts attention onto innocent third parties. Limiting the ability of an attacker to spoof IP source addresses will not stop attacks, but will dramatically shorten the time needed to trace an attack back to its origins.

Solutions:

  • User organizations and Internet service providers can ensure that traffic exiting an organization’s site, or entering an ISP’s network from a site, carries a source address consistent with the set of addresses for that site. Although this would still allow addresses to be spoofed within a site, it would allow tracing of attack traffic to the site from which it emanated, substantially assisting in the process of locating and isolating attack traffic sources. Specifically, user organizations should ensure that all packets leaving their sites carry source addresses within the address range of those sites. They should also ensure that no traffic from the “unroutable addresses” listed in RFC 1918 is sent from their sites. This activity is often called egress filtering; a configuration sketch follows this list. User organizations should take the lead in stopping this traffic because they have the capacity on their routers to handle the load. ISPs can provide backup to pick up spoofed traffic that is not caught by user filters. ISPs may also be able to stop spoofing by accepting traffic (and passing it along) only if it comes from authorized sources. This activity is often called ingress filtering.
  • Dial-up users are the source of some attacks. Stopping spoofing by these users is also an important step. ISPs, universities, libraries and others that serve dial-up users should ensure that proper filters are in place to prevent dial-up connections from using spoofed addresses. Network equipment vendors should ensure that no-IP-spoofing is a user setting, and the default setting, on their dial-up equipment.
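As a concrete illustration, here is a minimal sketch of egress filtering in Cisco IOS-style syntax. The site prefix 192.0.2.0/24, the access-list number, and the interface name are placeholders, not values from this roadmap; consult your vendor's documentation for the equivalent on your own equipment.

! Permit only packets sourced from the site's own address block to
! leave; log anything else (spoofed) that is dropped.
access-list 110 permit ip 192.0.2.0 0.0.0.255 any
access-list 110 deny   ip any any log
!
! Applied outbound on the interface facing the ISP.
interface Serial0/0
 ip access-group 110 out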
Problem 2: Broadcast Amplification

In a common attack, the malicious user generates packets with a source address of the site he wishes to attack (Site A) (using spoofing as described in Problem 1) and then sends a series of network packets to an organization with many computers (Site B), using an address that broadcasts the packets to every machine at Site B. Unless precautions have been taken, every machine at Site B will respond to the packets and send data to the organization (Site A) that was the target of the attack. The target will be flooded, and people at Site A may blame the people at Site B. Attacks of this type are often referred to as Smurf attacks. In addition, the echo and chargen services can be used to create oscillation attacks similar in effect to Smurf.

Solutions:

  • Unless an organization is aware of a legitimate need to support broadcast or multicast traffic within its environment, the forwarding of directed broadcasts should be turned off. Even when broadcast applications are legitimate, an organization should block certain types of traffic sent to “broadcast” addresses (e.g., ICMP Echo Reply messages) so that its systems cannot be used to effect these Smurf attacks. Network hardware vendors should ensure that routers can turn off the forwarding of IP directed broadcast packets as described in RFC 2644 and that this is the default configuration of every router. (A configuration sketch follows this list.)
  • Users should turn off echo and chargen services unless they have a specific need for those services. (This is good advice, in general, for all network services – they should be disabled unless known to be needed.)
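For illustration, here is a minimal Cisco IOS-style sketch covering both recommendations; the interface name is a placeholder, and on routers compliant with RFC 2644 the directed-broadcast setting shown is already the default.

! Refuse to forward directed broadcasts onto the attached subnet
! (the behavior Smurf amplification depends on).
interface FastEthernet0/0
 no ip directed-broadcast
!
! Disable the TCP/UDP "small servers", which include echo and chargen.
no service tcp-small-servers
no service udp-small-servers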
Problem 3: Lack of Appropriate Response to Attacks

Many organizations do not respond to complaints of attacks originating from their sites or to attacks against their sites, or respond in a haphazard manner. This makes containment and eradication of attacks difficult. Further, many organizations fail to share information about attacks, giving the attacker community the advantage of better intelligence sharing.

Solutions:

  • User organizations should establish incident response policies and teams with clearly defined responsibilities and procedures. ISPs should establish methods of responding quickly and staffing to support those methods when their systems are found to have been used for attacks on other organizations.
  • User organizations should encourage system administrators to participate in industry-wide early warning systems, where their corporate identities can be protected (if necessary), to counter rapid dissemination of information among the attack community.
  • Attacks and system flaws should be reported to appropriate authorities (e.g., vendors, response teams) so that the information can be applied to defenses for other users.
Problem 4: Unprotected Computers

Many computers are vulnerable to take-over for distributed denial of service attacks because of inadequate implementation of well-known “best practices.” When those computers are used in attacks, the carelessness of their owners is instantly converted to major costs, headaches, and embarrassment for the owners of computers being attacked. Furthermore, once a computer has been compromised, the data may be copied, altered or destroyed, programs changed, and the system disabled.

Solutions:

  • User organizations should check their systems periodically to determine whether they have had malicious software installed, including DDOS Trojan Horse programs. If such software is found, the system should be restored to a known good state.
  • User organizations should reduce the vulnerability of their systems by installing firewalls with rule sets that tightly limit transmission across the site’s periphery (e.g. deny traffic, both incoming and outgoing, unless given specific instructions to allow it).
  • All machines, routers, and other Internet-accessible equipment should be periodically checked to verify that all recommended security patches have been installed.
  • The security community should maintain and publicize current “Top 20” lists of the most often exploited vulnerabilities and the most common attacks to help system administrators set priorities.
  • Users should turn off services that are not required and limit access to vulnerable management services (e.g., RPC-based services).
  • Users and vendors should cooperate to create “system-hardening” scripts that can be used by less sophisticated users to close known holes and tighten settings to make their systems more secure. Users should employ these tools when they are available.
  • System software vendors should ship systems where security defaults are set to the highest level of security rather than the lowest. These “secure out-of-the-box” configurations will greatly aid novice users and system administrators. They will furthermore save critically scarce time for even the most experienced security professionals.
  • System administrators should deploy “best practice” tools including firewalls (as described above), intrusion detection systems, virus detection software, and software to detect unauthorized changes to files. This will reduce the risk that systems are compromised and used as a base for launching attacks, and it will increase confidence in the correct functioning of the systems. Software that detects unauthorized changes may also be helpful in restoring compromised systems to normal function (a minimal sketch of such a check follows this list).
  • System and network administrators should be given time and support for training and enhancement of their skills. System administrators and auditors should be periodically certified to verify that their security knowledge and skills are current.
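To illustrate the file-change detection recommendation above, here is a minimal sketch in C# (the language used by later sections of this compilation). It records SHA-256 hashes of files and reports any that change or disappear; it is a toy baseline checker, not a substitute for a production integrity tool, and the baseline.txt name and command-line usage are assumptions of the sketch.

using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

class IntegrityCheck
{
    // Hash every file under the given directory tree.
    static Dictionary<string, string> Snapshot(string root)
    {
        var hashes = new Dictionary<string, string>();
        using (var sha = SHA256.Create())
        {
            foreach (var path in Directory.GetFiles(root, "*", SearchOption.AllDirectories))
            {
                using (var stream = File.OpenRead(path))
                    hashes[path] = BitConverter.ToString(sha.ComputeHash(stream));
            }
        }
        return hashes;
    }

    // Usage: IntegrityCheck <directory>
    static void Main(string[] args)
    {
        const string baselineFile = "baseline.txt";
        var current = Snapshot(args[0]);

        if (!File.Exists(baselineFile))
        {
            // First run: record the trusted baseline, one "path|hash" per line.
            var lines = new List<string>();
            foreach (var entry in current)
                lines.Add(entry.Key + "|" + entry.Value);
            File.WriteAllLines(baselineFile, lines.ToArray());
            Console.WriteLine("Baseline recorded for " + args[0]);
            return;
        }

        // Later runs: report files that changed or disappeared.
        foreach (var line in File.ReadAllLines(baselineFile))
        {
            var parts = line.Split('|');
            string currentHash;
            if (!current.TryGetValue(parts[0], out currentHash))
                Console.WriteLine("MISSING: " + parts[0]);
            else if (currentHash != parts[1])
                Console.WriteLine("CHANGED: " + parts[0]);
        }
    }
}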
Longer Term Efforts to Provide Adequate Safeguards

The steps listed above are needed now to allow us to begin to move away from the extremely vulnerable state we are in. While these steps will help, they will not adequately reduce the risk given the trends listed above. These trends hint at new security requirements that will only be met if information technology and community attitudes about the Internet are changed in fundamental ways. In addition, research is needed in the areas of policy and law to enable us to deal with aspects of the problem that technology improvements will not be able to address by themselves. The following are some of the items that should be considered:

  • Establish load and traffic volume monitoring at ISPs to provide early warning of attacks.
  • Accelerate the adoption of the IPsec components of Internet Protocol version 6 and the Secure Domain Name System (DNSSEC).
  • Increase the emphasis on security in the research and development of Internet II.
  • Support the development of tools that automatically generate router access control lists for firewall and router policy.
  • Encourage the development of software and hardware that is engineered for safety with possibly vulnerable settings and services turned off, and encourage vendors to automate security updating for their clients.
  • Sponsor research in network protocols and infrastructure to implement real-time flow analysis and flow control.
  • Encourage wider adoption of routers and switches that can perform sophisticated filtering with minimal performance degradation.
  • Sponsor continuing topological studies of the Internet to understand the nature of “choke points.”
  • Test deployment and continue research in anomaly-based, and other forms of intrusion detection.
  • Support community-wide consensus of uniform security policies to protect systems and to outline security responsibilities of network operators, Internet service providers, and Internet users.
  • Encourage development and deployment of a secure communications infrastructure that can be used by network operators and Internet service providers to enable real-time collaboration when dealing with attacks.
  • Sponsor research and development leading to safer operating systems that are also easier to maintain and manage.
  • Sponsor research into survivable systems that are better able to resist, recognize, and recover from attacks while still providing critical functionality.
  • Sponsor research into better forensic tools and methods to trace and apprehend malicious users without forcing the adoption of privacy-invading monitoring.
  • Provide meaningful infrastructure support for centers of excellence in information security education and research to produce a new generation of leaders in the field.
  • Consider changes in government procurement policy to emphasize security and safety rather than simply cost when acquiring information systems, and to hold managers accountable for poor security.
A Living Document

This Roadmap is a living document and will be updated periodically as new or altered threats require changes. Furthermore, it is a consensus document – a product of the joint thinking of some of the best minds in security – and it will continue to improve as you share your experiences in implementing its prescriptions.

 

Reference : http://www.sans.org/dosstep/roadmap.php

Securing e-Commerce Web Sites

Introduction

Securing web sites, and web servers in particular, has been the focus of many security articles and conferences over the past few years. A web site’s security level is heavily influenced by the security measures used by, and on, the web server, so it is tempting to conclude that web server security alone is the key to a secure web site. Database security is also a well-known subject in web site security, but it is mostly documented as a standalone issue, one a reader is likely to encounter only when interested in DBA chores.

Building a web site involves more than one OS and more than one kind of software. The security of the web site is therefore achieved from the synergy of all these factors and not from the web server alone.

When I set out to write this paper, little did I know that public information regarding the “fortification” of complex web sites would be hard to come by. Only a few sites publicize the internal workings of their systems, and fewer still their security makeup and configuration. All this said, the question I will try to answer in this paper is, “How do I put all these ingredients together in order to build a secure e-Commerce web site?”
Assumptions

When building a web site we must survey the risks facing it from all different aspects. Not all web sites face the same threats; many web sites are just another collection of HTML pages in the vast cyberspace of the Internet. But web sites that conduct business, hold information considered valuable to a malicious hacker, or express a political view are at higher risk than others. E-commerce web sites often hold valuable information (credit card numbers or other private, personal data) and conduct business, and are thus placed in a high-risk position.

Having recognized that a web site is in the high-risk zone, we must consider the different types of security hazards:

  • Denial of Service (including distributed).
  • Defacement (the replacement of content on a web site, indicating it has been hacked).
  • Data theft.
  • Fraud (data manipulation or actual theft).

While any of these attacks might cause revenue loss, the method of defense against each is different. Since there is no single security solution that can provide the full defensive spectrum an e-commerce web site requires, choosing the right line of defense has become extremely difficult.

Security comes with a price tag. At first this might seem obvious, since products such as firewalls and anti-virus software have known pricing. However, the costs of ongoing security, software security updates, new web-site technologies, etc., cannot be calculated during initial installation planning. Eventually the web site owner will have to decide what level of security to provide, weighing the current risks against the costs involved.

Web Sites Under Attack

Web site attacks vary significantly from site to site and from hacker to hacker, and their focus has shifted over the years from network-level attacks to web server hacking from within the HTTP protocol itself. DoS and DDoS attacks have become a hacker sport and can be seen in different forms, ranging from network-based DoS such as PING flooding to full-connection HTTP requests.

DoS and DDoS
When a hacker wishes to “down” a web site, all that is needed is a computing base that can produce a larger amount of CPU-demanding activity (for example, IP floods) than the web site is capable of handling. This is true even for a fully clustered web site connected via a T1 line, not only for web sites with more limited resources. The attacker needs only to generate traffic that exceeds the line’s capacity, and the web site will effectively no longer be available to the Internet.

Generating a large amount of traffic does not require a large connection on the attacker’s side. The attacker may choose to use “bots” [1] or amplifiers [2] as the attack base. Most information regarding DoS and DDoS shows the use of network-level exploits and various methods of IP-based flooding. The SANS paper on the subject, “Consensus Roadmap for Defeating Distributed Denial of Service Attacks” (http://www.sans.org/ddos_roadmap.htm), reflects these methods and the possible defenses.

Recently a new method of DDoS has been developed: using bots to open full connections to the web site and request an object on it. Using full connections conceals the identity and origin of the attack, since the bots can be hard to trace back to their owner. For all intents and purposes, these connections cannot be differentiated from ordinary requests of web browsers.

Currently there are no known defenses against DoS attacks implementing full connections (CDN [3] is a partial and extremely expensive method that is not feasible for most web sites). This is because no publicly available web server or security product can reliably determine whether a connection originates from a “bot” or from a legitimate user.

Defending your web site against the more “ordinary” DoS and DDoS attacks (namely network-level attacks) is a well-documented art, and consists mainly of ISP cooperation with the web site owner. Most methods of defense include rate limiting of various forms and the blocking of unwanted network traffic (such as fragment blocking, UDP blocking, etc.).
[1] Bots are computers connected to the Internet that the attacker was able to take over, fully or partially, using various means. These computers then act as “robots” controlled by the attacker and can be used to initiate different types of attacks (based on the level of control gained by the malicious user). One method of taking control over PCs and turning them into bots is by spreading a dedicated virus.

[2] Amplifiers are computers on the Internet that have a larger Internet connection or greater computing capabilities and are used to amplify the attack generated by the attacker.

[3] CDN – web Content Delivery Network service, provided by companies such as Akamai, Adero and Eplication.

Most of the blocks need to be performed at the ISP level, or the attacker will be able to
saturate the line connecting the web site, effectively denying service to the web site.
Web Server Based Attacks
Many of the network-based attacks that create a denial of service are hard to achieve, or hold little “glory” for the attacker. One must also consider the fact that data theft cannot be achieved via DoS attacks. Therefore, web server attacks have become extremely popular in the past few years. Web server attacks bypass the firewall because they connect to the web site with legal network requests (i.e., TCP port 80), and they are hard to trace if the web site does not employ strict log-file procedures.

Web-based attacks vary from web server to web server. For example, gaining control over a console on a remote MS-IIS server can be achieved using different variants of the Unicode attack, while a console on a Linux Apache server can be controlled using a Perl test-cgi attack. Other attacks and vulnerabilities through which a remote attacker can gain access to a web server while bypassing the firewall are listed in various web resources, such as http://www.securityfocus.com, the Bugtraq mailing lists and more.
Known Web Configurations

There is no single way to install a web site that will hold all the security answers. The ways to install and configure the different web and network components vary greatly as web sites become more complex.
A few known configurations that address the security issues are:
Configuration 1 – Basic Disjointed
A straightforward configuration, which includes the web server as a multi-homed server with one interface connected to the world and a second interface dedicated to database communications. All communications to and from the web site are controlled by the firewall, while internal communications are not monitored or filtered.

Figure 1 – Basic Disjointed

Pros:
1. Simplicity and streamlining of communications.
2. Easy troubleshooting on all levels.
3. Scalability (when no n-tier [4] architecture is needed).
4. Low-cost implementation and minimal hardware.

Cons:
1. Management of the DB server requires an out-of-band [5] communication method or web server routing.
2. Web content is distributed manually or via local scripts and applications.
Security considerations:

1. This basic configuration provides network-level security (via the firewall) and DB protection (via disjointed networks).

2. The load balancer (if external hardware is used) can serve as a second-level network-filtering device for extra security.

3. The use of two network cards provides low-level protection against poorly configured firewall devices (for example, firewalking will not reveal the DB server).

This configuration provides no means of application- or OS-level protection. The entire security architecture is based upon the filtering devices (firewall and load balancer). If the OS hardening process is not redone frequently, on a per-patch basis, the web site will be vulnerable to application- and OS-level hacking.

In the event that the web server is hacked, the database server will be fully exposed to the hacker via the web server. This is true even if the second NIC on the web server uses a different protocol. It is recommended that a basic method of filtering be used to prevent the misuse of networking protocols.

The Compaq DISA [6] and Microsoft DNA [7] web site designs are similar and basically follow this configuration. Both Compaq and Microsoft rely on the OS hardening process to provide the application-level security, and on the programmers’ ability to produce secure code.
Configuration 2 – Filtered Disjointed (figure 2)
In this configuration, the addition of the filtering firewall, via the second “DMZ” on the main firewall, provides an added level of security [8]. Any hacking of the web servers will provide only minimal access to the database servers. Obviously the web servers can access the database server with an appropriate ODBC connector or similar means, so this configuration could still give a hacker (should he be able to “own” the web server machine) limited direct data access capabilities.

[4] The n-tier configuration is shown in Configuration 2 and is driven by the need to process business logic on a separate server.

[5] The use of out-of-band communications means that the connection to the server is made via a different route than all other communication to and from the web site.

[6] Found on the Compaq web site at http://www.compaq.com/solutions/internet/disa.html

[7] Found on the Microsoft web site at http://www.microsoft.com/technet/treeview/default.asp?url=/technet/itsolutions/ecommerce/maintain/operate/ecomsec.asp

[8] This configuration can also be achieved with a second firewall for improved performance. The firewall would be placed between the DB and the IIS servers (as suggested in the MS paper). It is not necessary to place the DB server in the corporate network.
Application business logic for the web site is placed on a separate server to allow for easier scalability. This server may also be used for web management; software such as MS Site Server or MS Application Server provides content distribution, web statistics, etc.
Figure 2 – Filtered Disjointed
Pros:
1. Relatively easy installation and routing configuration.
2. Easy troubleshooting for connectivity and system-level events.
3. Minimal hardware.

Cons:
1. The development environment must be similar to the production web site, to allow developers to adapt application connectivity with internal servers to the filtering device used.
2. The use of one firewall as a filtering device might degrade the site’s performance. Should extra firewalls be applied, cost and ease of installation will no longer be advantages of this configuration.
Security considerations:

1. This configuration provides network-level security (via the firewall) and DB protection (via disjointed networks). It also provides low-level application protection, since core data processing is shifted from the front-end web servers to back-office application servers that have no direct communications with the site’s users.

2. If MS SQL is used, TCP 1433 should be used instead of named pipes. This will provide a higher level of filtering.

3. When implementing the web content distribution mechanism it is recommended not to use Windows shares; FTP or MS Site Server replication is preferred.

The “Filtered Disjointed” configuration provides the administrator with the tools to filter all network-based activity on the secure side of the firewall. The main idea behind this configuration is to eliminate the ability of one server to communicate directly with the other servers. Application connectivity is allowed only to provide the site’s functionality (web servers will be allowed to communicate with MS SQL Server using TCP 1433, and no other protocol will be allowed; a filtering sketch follows). Although there is a performance penalty due to the extra network segments and filtering, should one of the web servers be compromised, all network transactions can be logged, leaving an audit trail.
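As an illustration of such a rule set, here is a minimal Cisco IOS-style sketch for the internal filtering device; the addresses (web server 192.0.2.10, database server 10.0.0.5) are placeholders, and a commercial firewall would express the same policy in its own rule syntax.

! Allow only MS SQL traffic (TCP 1433) from the web server to the
! DB server; log and drop everything else on the internal segment.
access-list 120 permit tcp host 192.0.2.10 host 10.0.0.5 eq 1433
access-list 120 deny   ip any any log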

Configuration 3 – Application Protection (figure 3)
In the effort to protect the web site from application-level hacking, we need to use a “higher level” filter. The filter would be used to examine the HTTP protocol, and if possible the HTTP GET, HEAD, POST, and PUT commands and their parameters. These parameters should comply with RFC 2616 (http://www.faqs.org/rfcs/rfc2616.html) and with the restrictions of the site administrator. Such a filter can be found in some of the commercial proxy servers or in dedicated filtering products [9]. This approach opposes the Microsoft e-commerce strategy, shown earlier in Configuration 1, that all application-level security should be driven from the DNA design and proper code writing.
Figure 3 – Application Protection
Pros:
1. High level of assurance that Internet traffic enters the various applications in the correct form and manner.
2. The use of proxy servers could improve performance, if the proxies implement a caching mechanism.

Cons:
1. Extremely hard to troubleshoot and configure.
2. High cost of hardware and initial installation.
3. The use of filtering devices at the application level could cause functionality issues. This is because the connection terminates at the proxy level, and connection stickiness, session information and other client information might be misinterpreted before they reach the web servers.
4. It is imperative that the development of the application is done with full awareness of the system configuration. Not all existing web sites can use this configuration without application adjustments.

[9] A commercial filter, which acts as an application-level proxy, can be found at http://www.sanctuminc.com
Security considerations:
1. This configuration provides a high level of security, at both the network and application levels.
2. Application filtering might require the use of out-of-band management tools, since not all proxy servers can act as routers for other, non-HTTP protocols.

The “Application Protection” configuration provides the administrator with multi-layer security protection. It can be used in versatile situations, and it has proven itself in protecting web sites from new hazards such as Nimda and Code Red (at the time of the worms’ release, unpatched web sites using the “Application Protection” configuration were not harmed). This protection, however, does not scale easily to mega-sized e-commerce sites.

Monitoring tasks should be carefully planned. When monitoring a web site that has only one component answering HTTP requests in the client path, the monitoring termination point is clear. In a configuration with many different components that receive HTTP requests, it is imperative to monitor them separately and to ensure that they are all up.
Summary
The job of building an e-commerce web site never stops. The web site, like the technology itself, constantly evolves. Security risks change as the site positions itself on the net and as the platforms used by the site become obsolete.

The different web site configurations and approaches shown in this document demonstrate that the network-level protection so many web sites have become accustomed to might not be enough. The use of advanced configurations and filtering mechanisms is currently the only way to keep up with the increasing risks of conducting business on the Internet.

Companies such as Check Point, long identified as a packet-filtering firewall software manufacturer, have developed their software to provide application filtering capabilities with the use of “Secure Servers” [10]. This shows us that market leaders have identified the need for application-level filtering.

Resources
  • “Web site security and Internet threats in the wild” – http://www.w3.org/Security/faq/
  • A description of the DISA model at Compaq’s web site. This is the theory behind Compaq’s recommended web site installation and the company’s statement on securing web sites. – http://www.compaq.com/solutions/internet/disa.html

Source : http://www.sans.org/reading_room/whitepapers/webservers/securing-e-commerce-web-sites_303

ASP.NET Web API

The last few years have seen the rise of Web APIs – services exposed over plain HTTP rather than through a more formal service contract (like SOAP or WS*).  Exposing services this way can make it easier to integrate functionality with a broad variety of device and client platforms, as well as create richer HTML experiences using JavaScript from within the browser.  Most large sites on the web now expose Web APIs (some examples: Facebook, Twitter, LinkedIn, Netflix, etc), and the usage of them is going to accelerate even more in the years ahead as connected devices proliferate and users demand richer user experiences.

Our new ASP.NET Web API support enables you to easily create powerful Web APIs that can be accessed from a broad range of clients (ranging from browsers using JavaScript, to native apps on any mobile/client platform).  It provides the following support:

  • Modern HTTP programming model: Directly access and manipulate HTTP requests and responses in your Web APIs using a clean, strongly typed HTTP object model.  In addition to supporting this HTTP programming model on the server, we also support the same programming model on the client with the new HttpClient API that can be used to call Web APIs from any .NET application.
  • Content negotiation: Web API has built-in support for content negotiation – which enables the client and server to work together to determine the right format for data being returned from an API.  We provide default support for JSON, XML and Form URL-encoded formats, and you can extend this support by adding your own formatters, or even replace the default content negotiation strategy with one of your own.
  • Query composition: Web API enables you to easily support querying via the OData URL conventions.  When you return a type of IQueryable<T> from your Web API, the framework will automatically provide OData query support over it – making it easy to implement paging and sorting.
  • Model binding and validation: Model binders provide an easy way to extract data from various parts of an HTTP request and convert those message parts into .NET objects which can be used by Web API actions.  Web API supports the same model binding and validation infrastructure that ASP.NET MVC supports today.
  • Routes: Web APIs support the full set of routing capabilities supported within ASP.NET MVC and ASP.NET today, including route parameters and constraints. Web API also provides smart conventions by default, enabling you to easily create classes that implement Web APIs without having to apply attributes to your classes or methods.  Web API configuration is accomplished solely through code – leaving your config files clean.
  • Filters: Web API enables you to easily use and create filters (for example: [Authorize]) that enable you to encapsulate and apply cross-cutting behavior.
  • Improved testability: Rather than setting HTTP details in static context objects, Web API actions can now work with instances of HttpRequestMessage and HttpResponseMessage – two new HTTP objects that (among other things) make testing much easier. As an example, you can unit test your Web APIs without having to use a Mocking framework.
  • IoC Support: Web API supports the service locator pattern implemented by ASP.NET MVC, which enables you to resolve dependencies for many different facilities.  You can easily integrate this with an IoC container or dependency injection framework to enable clean resolution of dependencies.
  • Flexible Hosting: Web APIs can be hosted within any type of ASP.NET application (including both ASP.NET MVC and ASP.NET Web Forms based applications).  We’ve also designed the Web API support so that you can also optionally host/expose them within your own process if you don’t want to use ASP.NET/IIS to do so.  This gives you maximum flexibility in how and where you use it.
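To make the programming model concrete, here is a minimal sketch of a Web API controller; the Product type, the sample data, and the api/products URLs are illustrative assumptions that rely on the default routing conventions rather than anything specific from this post:

using System.Collections.Generic;
using System.Net;
using System.Web.Http;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ProductsController : ApiController
{
    private static readonly List<Product> Products = new List<Product>
    {
        new Product { Id = 1, Name = "Widget" }
    };

    // By convention, handles GET api/products; the response is
    // serialized as JSON, XML, etc. via content negotiation.
    public IEnumerable<Product> GetAllProducts()
    {
        return Products;
    }

    // By convention, handles GET api/products/1.
    public Product GetProduct(int id)
    {
        var product = Products.Find(p => p.Id == id);
        if (product == null)
            throw new HttpResponseException(HttpStatusCode.NotFound);
        return product;
    }
}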

Reference : http://weblogs.asp.net/scottgu/archive/2012/02/23/asp-net-web-api-part-1.aspx

What’s New in ASP.NET MVC 4?

 

Installation Notes

ASP.NET MVC 4 for Visual Studio 2010 can be installed from the ASP.NET MVC 4 home page using the Web Platform Installer.

We recommend uninstalling any previously installed previews of ASP.NET MVC 4 prior to installing ASP.NET MVC 4. You can upgrade the ASP.NET MVC 4 Beta and Release Candidate to ASP.NET MVC 4 without uninstalling.

This release is not compatible with any preview releases of .NET Framework 4.5. You must separately upgrade any installed preview releases of .NET Framework 4.5 to the final version prior to installing ASP.NET MVC 4.

ASP.NET MVC 4 can be installed and run side-by-side with ASP.NET MVC 3.

Documentation

Documentation for ASP.NET MVC is available on the MSDN website at the following URL:

http://go.microsoft.com/fwlink/?LinkID=243043

Tutorials and other information about ASP.NET MVC are available on the MVC 4 page of the ASP.NET website (http://www.asp.net/mvc/mvc4).

Support

ASP.NET MVC 4 is fully supported. If you have questions about working with this release you can also post them to the ASP.NET MVC forum (http://forums.asp.net/1146.aspx), where members of the ASP.NET community are frequently able to provide informal support.

Software Requirements

The ASP.NET MVC 4 components for Visual Studio require PowerShell 2.0 and either Visual Studio 2010 with Service Pack 1 or Visual Web Developer Express 2010 with Service Pack 1.

New Features in ASP.NET MVC 4

This section describes features that have been introduced in the ASP.NET MVC 4 release.

ASP.NET Web API

ASP.NET MVC 4 includes ASP.NET Web API, a new framework for creating HTTP services that can reach a broad range of clients including browsers and mobile devices. ASP.NET Web API is also an ideal platform for building RESTful services.

ASP.NET Web API includes support for the following features:

  • Modern HTTP programming model: Directly access and manipulate HTTP requests and responses in your Web APIs using a new, strongly typed HTTP object model. The same programming model and HTTP pipeline is symmetrically available on the client through the new HttpClient type.
  • Full support for routes: ASP.NET Web API supports the full set of route capabilities of ASP.NET Routing, including route parameters and constraints. Additionally, use simple conventions to map actions to HTTP methods.
  • Content negotiation: The client and server can work together to determine the right format for data being returned from a web API. ASP.NET Web API provides default support for XML, JSON, and Form URL-encoded formats and you can extend this support by adding your own formatters, or even replace the default content negotiation strategy.
  • Model binding and validation: Model binders provide an easy way to extract data from various parts of an HTTP request and convert those message parts into .NET objects which can be used by the Web API actions. Validation is also performed on action parameters based on data annotations.
  • Filters: ASP.NET Web API supports filters including well-known filters such as the [Authorize] attribute. You can author and plug in your own filters for actions, authorization and exception handling.
  • Query composition: Use the [Queryable] filter attribute on an action that returns IQueryable<T> to enable support for querying your web API via the OData query conventions.
  • Improved testability: Rather than setting HTTP details in static context objects, web API actions work with instances of HttpRequestMessage and HttpResponseMessage. Create a unit test project along with your Web API project to get started quickly writing unit tests for your Web API functionality.
  • Code-based configuration: ASP.NET Web API configuration is accomplished solely through code, leaving your config files clean. Use the provided service locator pattern to configure extensibility points.
  • Improved support for Inversion of Control (IoC) containers: ASP.NET Web API provides great support for IoC containers through an improved dependency resolver abstraction.
  • Self-host: Web APIs can be hosted in your own process in addition to IIS while still using the full power of routes and other features of Web API.
  • Create custom help and test pages: You now can easily build custom help and test pages for your web APIs by using the new IApiExplorer service to get a complete runtime description of your web APIs.
  • Monitoring and diagnostics: ASP.NET Web API now provides lightweight tracing infrastructure that makes it easy to integrate with existing logging solutions such as System.Diagnostics, ETW and third-party logging frameworks. You can enable tracing by providing an ITraceWriter implementation and adding it to your web API configuration.
  • Link generation: Use the ASP.NET Web API UrlHelper to generate links to related resources in the same application.
  • Web API project template: Select the new Web API project from the New MVC 4 Project wizard to quickly get up and running with ASP.NET Web API.
  • Scaffolding: Use the Add Controller dialog to quickly scaffold a web API controller based on an Entity Framework based model type.

For more details on ASP.NET Web API please visit http://www.asp.net/web-api.

Enhancements to Default Project Templates

The template that is used to create new ASP.NET MVC 4 projects has been updated to create a more modern-looking website:

In addition to cosmetic improvements, there’s improved functionality in the new template. The template employs a technique called adaptive rendering to look good in both desktop browsers and mobile browsers without any customization.

To see adaptive rendering in action, you can use a mobile emulator or just try resizing the desktop browser window to be smaller. When the browser window gets small enough, the layout of the page will change.

Mobile Project Template

If you’re starting a new project and want to create a site specifically for mobile and tablet browsers, you can use the new Mobile Application project template. This is based on jQuery Mobile, an open-source library for building touch-optimized UI:

This template contains the same application structure as the Internet Application template (and the controller code is virtually identical), but it’s styled using jQuery Mobile to look good and behave well on touch-based mobile devices. To learn more about how to structure and style mobile UI, see the jQuery Mobile project website.

If you already have a desktop-oriented site that you want to add mobile-optimized views to, or if you want to create a single site that serves differently styled views to desktop and mobile browsers, you can use the new Display Modes feature. (See the next section.)

Display Modes

The new Display Modes feature lets an application select views depending on the browser that’s making the request. For example, if a desktop browser requests the Home page, the application might use the Views\Home\Index.cshtml template. If a mobile browser requests the Home page, the application might return the Views\Home\Index.mobile.cshtml template.

Layouts and partials can also be overridden for particular browser types. For example:

  • If your Views\Shared folder contains both the _Layout.cshtml and _Layout.mobile.cshtml templates, by default the application will use _Layout.mobile.cshtml during requests from mobile browsers and _Layout.cshtml during other requests.
  • If a folder contains both _MyPartial.cshtml and _MyPartial.mobile.cshtml, the instruction @Html.Partial("_MyPartial") will render _MyPartial.mobile.cshtml during requests from mobile browsers, and _MyPartial.cshtml during other requests.

If you want to create more specific views, layouts, or partial views for other devices, you can register a new DefaultDisplayMode instance to specify which name to search for when a request satisfies particular conditions. For example, you could add the following code to the Application_Start method in the Global.asax file to register the string “iPhone” as a display mode that applies when the Apple iPhone browser makes a request:

DisplayModeProvider.Instance.Modes.Insert(0,
    new DefaultDisplayMode("iPhone")
    {
        // Applies when the (possibly overridden) User-Agent contains "iPhone"
        ContextCondition = context => context.GetOverriddenUserAgent()
            .IndexOf("iPhone", StringComparison.OrdinalIgnoreCase) >= 0
    });

After this code runs, when an Apple iPhone browser makes a request, your application will use the Views\Shared\_Layout.iPhone.cshtml layout (if it exists). For more information on Display Modes, see ASP.NET MVC 4 Mobile Features. Applications using DisplayModeProvider should install the Fixed DisplayModes NuGet package. The ASP.NET Fall 2012 Update includes the Fixed DisplayModes NuGet package in the new project templates. See ASP.NET MVC 4 Mobile Caching Bug Fixed for details on the fix.

jQuery Mobile and Mobile Features

For information on building Mobile applications with ASP.NET MVC 4 using jQuery Mobile, see the tutorial ASP.NET MVC 4 Mobile Features.

Task Support for Asynchronous Controllers

You can now write asynchronous action methods as single methods that return an object of type Task or Task<ActionResult>, as in the sketch below.

For more information see Using Asynchronous Methods in ASP.NET MVC 4.
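A minimal sketch of such a single-method asynchronous action; the controller name, the backend URL, and the use of HttpClient are illustrative assumptions, not part of the release notes:

    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Web.Mvc;

    public class NewsController : Controller
    {
        // One async method replaces the old AsyncController
        // Begin/End pair; the request thread is released while
        // the await is outstanding.
        public async Task<ActionResult> Headlines()
        {
            using (var client = new HttpClient())
            {
                // Hypothetical backend endpoint.
                string json = await client.GetStringAsync("http://example.com/api/headlines");
                return Content(json, "application/json");
            }
        }
    }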

Azure SDK

ASP.NET MVC 4 supports the 1.6 and newer releases of the Windows Azure SDK.

Database Migrations

ASP.NET MVC 4 projects now include Entity Framework 5. One of the great features in Entity Framework 5 is support for database migrations. This feature enables you to easily evolve your database schema using a code-focused migration while preserving the data in the database (a sketch of a generated migration follows). For more information on database migrations, see Adding a New Field to the Movie Model and Table in the Introduction to ASP.NET MVC 4 tutorial.
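For a flavor of what a code-focused migration looks like, here is a minimal sketch modeled on the Movie tutorial referenced above; such a class is normally generated by the Add-Migration command, and the Movies table and Rating column are assumptions borrowed from that tutorial:

    using System.Data.Entity.Migrations;

    public partial class AddRatingToMovie : DbMigration
    {
        // Applies the schema change; existing rows are preserved.
        public override void Up()
        {
            AddColumn("dbo.Movies", "Rating", c => c.String());
        }

        // Reverses the change if the migration is rolled back.
        public override void Down()
        {
            DropColumn("dbo.Movies", "Rating");
        }
    }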

Empty Project Template

The MVC Empty project template is now truly empty so that you can start from a completely clean slate. The earlier version of the Empty project template has been renamed to Basic.

Add Controller to any project folder

You can now right click and select Add Controller from any folder in your MVC project. This gives you more flexibility to organize your controllers however you want, including keeping your MVC and Web API controllers in separate folders.

Bundling and Minification

The bundling and minification framework enables you to reduce the number of HTTP requests that a Web page needs to make by combining individual files into a single, bundled file for scripts and CSS. It can then reduce the overall size of those requests by minifying the contents of the bundle. Minifying can include activities ranging from eliminating whitespace and shortening variable names to collapsing CSS selectors based on their semantics. Bundles are declared and configured in code and are easily referenced in views via helper methods which can generate either a single link to the bundle or, when debugging, multiple links to the individual contents of the bundle. A minimal sketch follows; for more information see Bundling and Minification.
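A minimal sketch of declaring bundles in code; the bundle names and file paths follow the default template conventions and are assumptions rather than requirements:

    using System.Web.Optimization;

    public class BundleConfig
    {
        // Typically called from Application_Start:
        //   BundleConfig.RegisterBundles(BundleTable.Bundles);
        public static void RegisterBundles(BundleCollection bundles)
        {
            // Served combined and minified in release builds, and as
            // individual <script> tags while debugging.
            bundles.Add(new ScriptBundle("~/bundles/jquery").Include(
                "~/Scripts/jquery-{version}.js"));

            bundles.Add(new StyleBundle("~/Content/css").Include(
                "~/Content/site.css"));
        }
    }

Views then reference the bundles with helpers such as @Scripts.Render("~/bundles/jquery") and @Styles.Render("~/Content/css").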

Enabling Logins from Facebook and Other Sites Using OAuth and OpenID

The ASP.NET MVC 4 Internet Application project template now includes support for OAuth and OpenID login using the DotNetOpenAuth library. For information on configuring an OAuth or OpenID provider, see OAuth/OpenID Support for WebForms, MVC and WebPages and the OAuth and OpenID feature documentation in ASP.NET Web Pages. A registration sketch follows.
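A minimal sketch of registering login providers with the OAuthWebSecurity helper used by the template; the Facebook application ID and secret are placeholders you obtain from the provider:

    using Microsoft.Web.WebPages.OAuth;

    public static class AuthConfig
    {
        // Typically called once from Application_Start.
        public static void RegisterAuth()
        {
            // Placeholder credentials -- replace with your own.
            OAuthWebSecurity.RegisterFacebookClient(
                appId: "your-app-id",
                appSecret: "your-app-secret");

            // OpenID providers such as Google require no keys.
            OAuthWebSecurity.RegisterGoogleClient();
        }
    }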

Upgrading an ASP.NET MVC 3 Project to ASP.NET MVC 4

ASP.NET MVC 4 can be installed side by side with ASP.NET MVC 3 on the same computer, which gives you flexibility in choosing when to upgrade an ASP.NET MVC 3 application to ASP.NET MVC 4.

The simplest way to upgrade is to create a new ASP.NET MVC 4 project and copy all the views, controllers, code, and content files from the existing MVC 3 project to the new project, and then to update the assembly references in the new project to match any non-MVC template included assemblies you are using. If you have made changes to the Web.config file in the MVC 3 project, you must also merge those changes into the Web.config file in the MVC 4 project.

To manually upgrade an existing ASP.NET MVC 3 application to version 4, do the following:

  1. In all Web.config files in the project (there is one in the root of the project, one in the Views folder, and one in the Views folder for each area in your project), replace every instance of the following text (note: System.Web.WebPages, Version=1.0.0.0 is not found in projects created with Visual Studio 2012):
    System.Web.Mvc, Version=3.0.0.0
    System.Web.WebPages, Version=1.0.0.0
    System.Web.Helpers, Version=1.0.0.0
    System.Web.WebPages.Razor, Version=1.0.0.0

    with the following corresponding text:

    System.Web.Mvc, Version=4.0.0.0
    System.Web.WebPages, Version=2.0.0.0
    System.Web.Helpers, Version=2.0.0.0
    System.Web.WebPages.Razor, Version=2.0.0.0
  2. In the root Web.config file, update the webPages:Version element to "2.0.0.0" and add a new PreserveLoginUrl key that has the value "true":
    <appSettings>
      <add key="webpages:Version" value="2.0.0.0" />
      <add key="PreserveLoginUrl" value="true" />
    </appSettings>
  3. In Solution Explorer, right-click References and select Manage NuGet Packages. In the left pane, select Online\NuGet official package source, then update the following:
      • ASP.NET MVC 4
      • (Optional) jQuery, jQuery Validation and jQuery UI
      • (Optional) Entity Framework
      • (Optional) Modernizr
  4. In Solution Explorer, right-click the project name and then select Unload Project. Then right-click the name again and select Edit ProjectName.csproj.
  5. Locate the ProjectTypeGuids element and replace {E53F8FEA-EAE0-44A6-8774-FFD645390401} with {E3E379DF-F4C6-4180-9B81-6769533ABE47}.
  6. Save the changes, close the project (.csproj) file you were editing, right-click the project, and then select Reload Project.
  7. If the project references any third-party libraries that are compiled using previous versions of ASP.NET MVC, open the root Web.config file and add the following three bindingRedirect elements under the configuration section:
      <configuration>
        <!--... elements deleted for clarity ...-->
       
        <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <dependentAssembly>
              <assemblyIdentity name="System.Web.Helpers" 
                   publicKeyToken="31bf3856ad364e35" />
              <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0"/>
            </dependentAssembly>
            <dependentAssembly>
              <assemblyIdentity name="System.Web.Mvc" 
                   publicKeyToken="31bf3856ad364e35" />
              <bindingRedirect oldVersion="1.0.0.0-3.0.0.0" newVersion="4.0.0.0"/>
            </dependentAssembly>
            <dependentAssembly>
              <assemblyIdentity name="System.Web.WebPages" 
                   publicKeyToken="31bf3856ad364e35" />
              <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0"/>
            </dependentAssembly>
          </assemblyBinding>
        </runtime>
      </configuration>

Changes from ASP.NET MVC 4 Release Candidate

The major changes from the ASP.NET MVC 4 Release Candidate in this release are summarized below:

    • Per controller configuration: ASP.NET Web API controllers can be attributed with a custom attribute that implements IControllerConfiguration to setup their own formatters, action selector and parameter binders. The HttpControllerConfigurationAttribute has been removed.
    • Per route message handlers: You can now specify the final message handler in the request chain for a given route. This enables support for ride-along frameworks to use routing to dispatch to their own (non-IHttpController) endpoints.
    • Progress notifications: The ProgressMessageHandler generates progress notification for both request entities being uploaded and response entities being downloaded. Using this handler it is possible to keep track of how far you are uploading a request body or downloading a response body.
    • Push content: The PushStreamContent class enables scenarios where a data producer wants to write directly to the request or response(either synchronously or asynchronously) using a stream. When the PushStreamContent is ready to accept data it calls out to an action delegate with the output stream. The developer can then write to the stream for as long as necessary and close the stream when writing has completed. The PushStreamContent detects the closing of the stream and completes the underlying asynchronous Task for writing out the content.
    • Creating error responses: Use the HttpError type to consistently represent error information from such as validation errors and exceptions while still honoring the IncludeErrorDetailPolicy. Use the new CreateErrorResponse extension methods to easily create error responses with HttpError as content. The HttpError content is fully content negotiated.
    • MediaRangeMapping removed: Media type ranges are now handled by the default content negotiator.
    • Default parameter binding for simple type parameters is now [FromUri]: In previous releases of ASP.NET Web API the default parameter binding for simple type parameters used model binding. The default parameter binding for simple type parameters is now [FromUri].
    • Action selection honors required parameters: Action selection in ASP.NET Web API will now only select an action if all required parameters that come from the URI are provided. A parameter can be specified as optional by providing a default value for the argument in the action method signature.
    • Customize HTTP parameter bindings: Use the ParameterBindingAttribute to customize the parameter binding for a specific action parameter or use the ParameterBindingRules on theHttpConfiguration to customize parameter bindings more broadly.
    • MediaTypeFormatter improvements: Formatters now have access to the full HttpContentinstance.
    • Host buffering policy selection: Implement and configure the IHostBufferPolicySelector service in ASP.NET Web API to enable hosts to determine the policy for when buffering is to be used.
    • Access client certificates in a host agnostic manner: Use the GetClientCertificate extension method to get the supplied client certificate from the request message.
    • Content negotiation extensibility: Customize content negotiation by deriving from theDefaultContentNegotiator and overriding any aspect of content negotiation that you would like.
    • Support for returning 406 Not Acceptable responses: You can now return 406 Not Acceptable responses in ASP.NET Web API when a suitable formatter is not found by creating aDefaultContentNegotiator with the excludeMatchOnTypeOnly parameter set to true.
    • Read form data as NameValueCollection or JToken: You can read form data in the URI query string or in the request body as a NameValueCollection using the ParseQueryString and ReadAsFormDataAsync extension methods respectively. Similarly, you can read form data in the URI query string or in the request body as a JToken using the TryReadQueryAsJson and ReadAsAsync extension methods respectively.
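      For example, inside an async action method (NameValueCollection is in System.Collections.Specialized, JToken in Newtonsoft.Json.Linq):
      // Query string as a NameValueCollection:
      NameValueCollection query = Request.RequestUri.ParseQueryString();

      // URL-encoded request body as a NameValueCollection:
      NameValueCollection form = await Request.Content.ReadAsFormDataAsync();

      // Request body as a JToken:
      JToken json = await Request.Content.ReadAsAsync<JToken>();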
    • Multipart improvements: It is now possible to write a MultipartStreamProvider that is completely tailored to the type of MIME multipart data it can read and that presents the result in the most convenient way to the caller. You can also hook a post-processing step on the MultipartStreamProvider, which allows the implementation to do whatever post processing it wants on the MIME multipart body parts. For example, the MultipartFormDataStreamProvider implementation reads the HTML form data parts and adds them to a NameValueCollection so they are easy for the caller to access.
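      For example, using the built-in MultipartFormDataStreamProvider inside an async action method (the root path is an assumption; any writable folder works):
      string rootPath = "C:\\temp\\uploads";   // hypothetical writable folder
      var provider = new MultipartFormDataStreamProvider(rootPath);
      await Request.Content.ReadAsMultipartAsync(provider);

      // Form fields land in provider.FormData (a NameValueCollection);
      // uploaded files are described by provider.FileData.
      string name = provider.FormData["name"];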
    • Link generation improvements: The UrlHelper no longer depends on HttpControllerContext. You can now access the UrlHelper from any context where the HttpRequestMessage is available.
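      For example, in any context that has the HttpRequestMessage (shown here as request, assuming the default "DefaultApi" route):
      var urlHelper = new System.Web.Http.Routing.UrlHelper(request);
      string link = urlHelper.Link("DefaultApi", new { controller = "products", id = 42 });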
    • Message handler execution order change: Message handlers are now executed in the order that they are configured instead of in reverse order.
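      For example (HandlerA and HandlerB are hypothetical DelegatingHandlers):
      // Requests now flow HandlerA -> HandlerB -> controller;
      // responses flow back HandlerB -> HandlerA.
      config.MessageHandlers.Add(new HandlerA());
      config.MessageHandlers.Add(new HandlerB());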
    • Helper for wiring up message handlers: The new HttpClientFactory can wire up DelegatingHandlers and create an HttpClient with the desired pipeline ready to go. It also provides functionality for wiring up alternative inner handlers (the default is HttpClientHandler) and for doing the wiring up when using HttpMessageInvoker or another DelegatingHandler instead of HttpClient as the top-level invoker.
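      For example (again with hypothetical handlers):
      // Pipeline: HandlerA -> HandlerB -> HttpClientHandler (the default inner handler).
      HttpClient client = HttpClientFactory.Create(new HandlerA(), new HandlerB());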
    • Support for CDNs in ASP.NET Web Optimization: ASP.NET Web Optimization now provides support for CDN alternate paths enabling you to specify for each bundle an additional URL which points to that same resource on a content delivery network. Supporting CDNs enables you to get your script and style bundles geographically closer to the end consumers of your Web applications.
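      For example (the CDN URL shown is only an illustration):
      bundles.UseCdn = true;
      bundles.Add(new ScriptBundle("~/bundles/jquery",
          "http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.7.1.min.js")
          .Include("~/Scripts/jquery-{version}.js"));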
    • ASP.NET Web API routes and configuration moved to a WebApiConfig.Register static method that can be reused in test code: ASP.NET Web API routes were previously added in RouteConfig.RegisterRoutes along with the standard MVC routes. The default ASP.NET Web API routes and configuration are now handled in a separate WebApiConfig.Register method to facilitate testing.
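      The generated configuration now looks roughly like this:
      public static class WebApiConfig
      {
          public static void Register(HttpConfiguration config)
          {
              config.Routes.MapHttpRoute(
                  name: "DefaultApi",
                  routeTemplate: "api/{controller}/{id}",
                  defaults: new { id = RouteParameter.Optional });
          }
      }

      // Global.asax.cs calls this once at startup, and test code can call it too:
      WebApiConfig.Register(GlobalConfiguration.Configuration);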

    Known Issues and Breaking Changes

    • The RC and RTM versions of ASP.NET MVC 4 incorrectly returned cached desktop views when mobile views should have been returned.
    • Breaking changes in the Razor View Engine. The following types were removed from System.Web.Mvc.Razor:
      • ModelSpan
      • MvcVBRazorCodeGenerator
      • MvcCSharpRazorCodeGenerator
      • MvcVBRazorCodeParser

      The following methods were also removed:

      • MvcCSharpRazorCodeParser.ParseInheritsStatement(System.Web.Razor.Parser.CodeBlockInfo)
      • MvcWebPageRazorHost.DecorateCodeGenerator(System.Web.Razor.Generator.RazorCodeGenerator)
      • MvcVBRazorCodeParser.ParseInheritsStatement(System.Web.Razor.Parser.CodeBlockInfo)
    • When WebMatrix.WebData.dll is included in the /bin directory of an ASP.NET MVC 4 app, it takes over the URL for forms authentication. Adding the WebMatrix.WebData.dll assembly to your application (for example, by selecting “ASP.NET Web Pages with Razor Syntax” when using the Add Deployable Dependencies dialog) will override the authentication login redirect to /account/logon rather than /account/login as expected by the default ASP.NET MVC Account Controller. To prevent this behavior and use the URL already specified in the authentication section of web.config, you can add an appSetting called PreserveLoginUrl and set it to true:
      <appSettings>
          <add key="PreserveLoginUrl" value="true"/>
      </appSettings>
    • The NuGet package manager fails to install ASP.NET MVC 4 when Visual Studio 2010 and Visual Web Developer 2010 are installed side by side. To run Visual Studio 2010 and Visual Web Developer 2010 side by side with ASP.NET MVC 4, you must install ASP.NET MVC 4 after both versions of Visual Studio have been installed.
    • Uninstalling ASP.NET MVC 4 fails if prerequisites have already been uninstalled. To cleanly uninstall ASP.NET MVC 4, you must uninstall ASP.NET MVC 4 prior to uninstalling Visual Studio.
    • Installing ASP.NET MVC 4 breaks ASP.NET MVC 3 RTM applications. ASP.NET MVC 3 applications that were created with the RTM release (not with the ASP.NET MVC 3 Tools Update release) require the following changes in order to work side-by-side with ASP.NET MVC 4. Building the project without making these updates results in compilation errors.

      Required updates:
      1. In the root Web.config file, add a new <appSettings> entry with the key webpages:Version and the value 1.0.0.0.
      <appSettings>
          <add key="webpages:Version" value="1.0.0.0"/>
          <add key="ClientValidationEnabled" value="true"/>
          <add key="UnobtrusiveJavaScriptEnabled" value="true"/>
      </appSettings>
      2. In Solution Explorer, right-click the project name and then select Unload Project. Then right-click the name again and select Edit ProjectName.csproj.
      3. Locate the following assembly references:
        <Reference Include="System.Web.WebPages"/> 
        <Reference Include="System.Web.Helpers" />

        Replace them with the following:

        <Reference Include="System.Web.WebPages, Version=1.0.0.0,
        Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL" />
        <Reference Include="System.Web.Helpers, Version=1.0.0.0,
        Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL" />
      4. Save the changes, close the project (.csproj) file you were editing, and then right-click the project and select Reload.
    • Changing an ASP.NET MVC 4 project to target 4.0 from 4.5 does not update the EntityFramework assembly reference: If you change an ASP.NET MVC 4 project to target 4.0 after targeting 4.5, the reference to the EntityFramework assembly will still point to the 4.5 version. To fix this issue, uninstall and reinstall the EntityFramework NuGet package.
    • 403 Forbidden when running an ASP.NET MVC 4 application on Azure after changing to target 4.0 from 4.5: If you change an ASP.NET MVC 4 project to target 4.0 after targeting 4.5 and then deploy to Azure, you may see a 403 Forbidden error at runtime. To work around this issue, add the following to your web.config: <modules runAllManagedModulesForAllRequests="true" />
    • Visual Studio 2012 crashes when you type a ‘\’ in a string literal in a Razor file. To work around the issue, enter the closing quote of the string literal first.
    • Browsing to “Account/Manage” in the Internet template results in a runtime error for the CHS, TRK, and CHT languages. To fix the issue, modify the page to separate out @User.Identity.Name by putting it as the only content within the <strong> tag.
    • Google and LinkedIn providers are not supported within Azure Web Sites. Use alternative authentication providers when deploying to Azure Web Sites.
    • When using UriPathExtensionMapping with IIS 8 Express/IIS, you may receive 404 Not Found errors when you try to use the extension. The static file handler interferes with requests to Web APIs that use UriPathExtensionMappings. Set runAllManagedModulesForAllRequests="true" in web.config to work around the issue.
    • Controller.Execute method is no longer called. All MVC controllers are now always executed asynchronously.

Reference: http://www.asp.net/whitepapers/mvc4-release-notes