Top 5 Tools for Network Security Monitoring

Security data can be found on virtually all systems in a corporate network, but not all systems provide equally valuable security context. While monitoring everything would be ideal, it is impractical for most organizations due to resource constraints. So which data sources should you prioritize to make the most of your monitoring efforts?

When it comes to security monitoring, context is key. The more relevant security context you have, the more likely you are to detect real security incidents while weeding out false positives (alerts triggered by non-threats). In deciding which devices and systems to monitor for security data, the first priority is to give yourself as much useful context as possible.

Based on a decade of monitoring experience, SecureWorks believes the top five sources of security context are:

Number One: Network-based Intrusion Detection and Prevention Systems (NIDS/NIPS)

NIDS and NIPS devices use signatures to detect security events on your network. Performing full packet inspection of network traffic at the perimeter or across key network segments, most NIDS/NIPS devices provide detailed alerts that help to detect:

  • Known vulnerability exploit attempts
  • Known Trojan activity
  • Anomalous behavior (depending on the IDS/IPS)
  • Port and Host scans
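
To make the signature idea concrete, here is a minimal Python sketch of signature-based payload inspection. The byte patterns and alert text are illustrative assumptions, not real NIDS rules; production systems such as Snort use far richer rule languages plus stateful and protocol-aware inspection.

    # A toy signature matcher: each signature is a byte pattern with a label.
    SIGNATURES = {
        b"/etc/passwd":  "possible directory traversal / file disclosure attempt",
        b"' OR '1'='1":  "possible SQL injection probe",
        b"\x90" * 16:    "long NOP sled, possible buffer overflow exploit",
    }

    def inspect_payload(payload: bytes, src: str, dst: str) -> list[str]:
        """Return alert messages for any signature found in a packet payload."""
        alerts = []
        for pattern, description in SIGNATURES.items():
            if pattern in payload:
                alerts.append(f"ALERT {src} -> {dst}: {description}")
        return alerts

    # Example: check one captured payload (however it was obtained).
    print(inspect_payload(b"GET /../../etc/passwd HTTP/1.0", "203.0.113.7", "192.0.2.10"))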

Number Two: Firewalls

Serving as the network’s gatekeeper, firewalls allow and log incoming and outgoing network connections based on your policies. Some firewalls also have basic NIDS/NIPS signatures to detect security events. Monitoring firewall logs and alerts helps to detect:

  • New and unknown threats, such as custom Trojan activity
  • Port and Host scans
  • Worm outbreaks
  • Minor anomalous behavior
  • Almost any activity denied by firewall policy
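
As a rough illustration of what monitoring firewall logs can mean in practice, the following Python sketch counts distinct denied destination ports per source address to flag likely port scans. The iptables-style SRC=/DPT= log fields and the threshold are assumptions; adapt the parsing to whatever your firewall actually logs.

    # Many distinct denied ports from one source in a short window is a classic port-scan sign.
    import re
    from collections import defaultdict

    DENY_LINE = re.compile(r"SRC=(?P<src>\S+).*?DPT=(?P<dpt>\d+)")   # assumed log fields

    def flag_port_scans(log_lines, threshold=20):
        """Return {source_ip: distinct_denied_ports} for likely scanners."""
        ports_by_src = defaultdict(set)
        for line in log_lines:
            match = DENY_LINE.search(line)
            if match:
                ports_by_src[match.group("src")].add(int(match.group("dpt")))
        return {src: len(ports) for src, ports in ports_by_src.items()
                if len(ports) >= threshold}

    # Example with two synthetic deny-log lines and a low threshold:
    sample = ["... SRC=198.51.100.9 DST=192.0.2.10 PROTO=TCP DPT=22 ...",
              "... SRC=198.51.100.9 DST=192.0.2.10 PROTO=TCP DPT=23 ..."]
    print(flag_port_scans(sample, threshold=2))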

Number Three: Host-based Intrusion Detection and Prevention Systems (HIDS/HIPS)

Like NIDS/NIPS, host-based intrusion detection and prevention systems utilize signatures to detect security events. But instead of inspecting network traffic, HIDS/HIPS agents are installed on servers to directly alert on security activity. Monitoring HIDS/HIPS alerts helps to detect:

  • Known vulnerability exploit attempts
  • Console exploit attempts
  • Exploit attempts performed over encrypted channels
  • Password grinding (manual or automated attempts to guess passwords)
  • Anomalous behavior by users or applications
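
One host-based technique mentioned later in this document is file integrity monitoring, the idea behind tools like Tripwire. The sketch below is a minimal, generic illustration of that approach rather than any vendor's implementation; the watched paths are placeholders.

    # Hash a baseline of critical files, then re-hash later and report changes.
    import hashlib
    import os

    WATCHED = ["/etc/passwd", "/etc/ssh/sshd_config"]    # placeholder paths

    def snapshot(paths):
        """Return {path: sha256 hex digest} for every readable file in paths."""
        result = {}
        for path in paths:
            if os.path.isfile(path):
                with open(path, "rb") as fh:
                    result[path] = hashlib.sha256(fh.read()).hexdigest()
        return result

    def compare(baseline, current):
        """Yield an alert for every file that changed, appeared or disappeared."""
        for path in set(baseline) | set(current):
            if baseline.get(path) != current.get(path):
                yield f"ALERT: integrity change detected on {path}"

    # Typical use: store snapshot(WATCHED) once, then re-run it periodically
    # and feed both snapshots to compare().
    baseline = snapshot(WATCHED)
    print(list(compare(baseline, snapshot(WATCHED))))     # empty if nothing changed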

Number Four: Network Devices with Access Control Lists (ACLs)

Network devices that can use ACLs, such as routers and VPN servers, have the ability to control network traffic based on permitted networks and hosts. Monitoring logs from devices with ACLs helps to detect:

  • New and unknown threats, such as custom Trojan activity
  • Port and Host scans
  • Minor anomalous behavior
  • Almost anything denied by the ACLs

Number Five: Server and Application Logs

Many types of servers and applications log events such as login attempts and user activity. Depending on the extent of logging capabilities, monitoring server and application logs can help to detect:

  • Known and unknown exploit attempts
  • Password Grinding
  • Anomalous behavior by users or applications
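
For example, password grinding usually shows up in server or application logs as bursts of failed logins. The following sketch is a toy sliding-window counter for such failures; the record_failure() helper, its thresholds, and the assumption that you feed it already-parsed log events are all illustrative.

    # Count failed logins per (source, account) inside a sliding time window.
    from collections import defaultdict, deque

    WINDOW_SECONDS = 300      # 5-minute window (arbitrary)
    THRESHOLD = 10            # failures that trigger an alert (arbitrary)

    failures = defaultdict(deque)     # (src_ip, user) -> timestamps of failures

    def record_failure(src_ip, user, ts):
        """Register one failed login; return an alert string if the rate is high."""
        window = failures[(src_ip, user)]
        window.append(ts)
        while window and ts - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) >= THRESHOLD:
            return f"ALERT: {len(window)} failed logins for {user} from {src_ip}"
        return None

    # Example: ten rapid failures from one source trip the alert.
    for second in range(10):
        alert = record_failure("203.0.113.50", "admin", 1000 + second)
    print(alert)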

It is important to understand that the incremental value of a data source will vary from situation to situation. A source’s purpose, its location in your network and the quality of the data it provides are a few of the many variables that must be considered when planning your security monitoring strategy.

Keep in mind that there are many other security technologies, network devices and log sources throughout your IT environment that may also provide beneficial context for your security monitoring efforts. For example, Unified Threat Management (UTM) devices, which combine firewall, NIDS/NIPS and other capabilities in a single device, can be monitored to detect events similar to those from standalone firewalls and NIDS/NIPS devices.

By monitoring the assets that provide the highest-value security context, you can optimize your security monitoring efforts. Doing so will provide faster, more accurate detection of threats while making the most of your security resources. For additional information on monitoring security events and other security topics, please visit the SecureWorks website.

 

Featured Gartner Research:

What Organizations are Spending on IT Security

According to research and advisory firm Gartner Inc., “Many CIOs and chief information security officers (CISOs) are uncertain about what is a ‘normal’ level of security spending in terms of a percentage of the overall IT budget – especially during economic uncertainty.” This research note will help IT managers understand how organizations are investing in their information security and compare their spending with that of their peers.

View the complimentary Gartner report made available to you by SecureWorks.

 

Security 101: Web Application Firewalls

What is a Web Application Firewall?
A web application firewall (WAF) is a tool designed to protect externally-facing web applications used for online banking, Internet retail sales, discussion boards and many other functions from application layer attacks such as cross-site scripting (XSS), cross-site request forgery (XSRF) and SQL injection. Because web application attacks exploit flaws in application logic that is often developed internally, each attack is unique to its target application. This makes it difficult to detect and prevent application layer attacks using existing defenses such as network firewalls and NIDS/NIPS.

How do WAFs Work?
WAFs utilize a set of rules or policies to control communications to and from a web application. These rules are designed to block common application layer attacks. Architecturally, a WAF is deployed in front of an application to intercept communications and enforce policies before they reach the application.
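
As a rough sketch of the rule idea, the Python snippet below applies a few regular-expression rules to request parameters and blocks matching requests. The patterns are deliberately simplistic assumptions; a real WAF normalizes input, tracks sessions and is tuned to the specific application, as the next section notes.

    # Apply regular-expression rules to request parameters; block on any match.
    import re

    RULES = [
        (re.compile(r"<\s*script", re.I),                     "cross-site scripting attempt"),
        (re.compile(r"(\bunion\b.+\bselect\b)|('--)", re.I),  "SQL injection attempt"),
        (re.compile(r"\.\./"),                                "path traversal attempt"),
    ]

    def check_request(params: dict) -> tuple[bool, str]:
        """Return (allowed, reason); the request is blocked if any rule matches."""
        for value in params.values():
            for pattern, description in RULES:
                if pattern.search(value):
                    return False, f"blocked: {description}"
        return True, "allowed"

    # Example: a form field carrying a script tag is rejected.
    print(check_request({"comment": "<script>alert(1)</script>"}))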

What are the Risks of Deploying a WAF?

Depending on the importance of the web application to your business, the risk of experiencing false positives that interrupt legitimate communications can be a concern. To provide sound protection with minimal false positives, WAF rules and policies must be tailored to the application(s) the WAF is defending. In many cases, this requires significant up-front customization based on in-depth knowledge of the application in question. This effort must also be maintained to address modifications to the application over time.

What are the Benefits of Deploying a WAF?

A WAF can be beneficial in terms of both security and compliance. Applications are a prime target for today’s hackers. Also, the Payment Card Industry (PCI) Data Security Standard requires companies who process, store or transmit payment card data to protect their externally-facing web applications from known attacks (Requirement 6.6). If managed properly and used in conjunction with regular application code reviews, vulnerability testing and remediation, WAFs can be a solid option for protecting against web application attacks and satisfying related compliance requirements.

 

Reference: http://www.secureworks.com/resources/newsletter/2008-07/

NIDS (Network Intrusion Detection System) and NIPS (Network Intrusion Prevention System)

NIDS and NIPS (Behavior based, signature based, anomaly based, heuristic)

An intrusion detection system (IDS) is software that runs on a server or network device to monitor and track network activity. By using an IDS, a network administrator can configure the system to monitor network activity for suspicious behavior that can indicate unauthorized access attempts. IDSs can be configured to evaluate system logs, look at suspicious network activity, and disconnect sessions that appear to violate security settings.

IDSs are often sold alongside firewalls. Firewalls by themselves will prevent many common attacks, but they don’t usually have the intelligence or the reporting capabilities to monitor the entire network. An IDS, in conjunction with a firewall, allows both a preventive posture (the firewall) and a reactive posture (the IDS).

In response to an event, the IDS can react by disabling systems, shutting down ports, ending sessions, using deception (redirecting the attacker to a honeypot), and even potentially shutting down your network. A network-based IDS that takes active steps to halt or prevent an intrusion is called a network intrusion prevention system (NIPS). When operating in this mode, it is considered an active system.

Passive detection systems log the event and rely on notifications to alert administrators of an intrusion. Shunning or ignoring an attack is an example of a passive response, where an invalid attack can be safely ignored. A disadvantage of passive systems is the lag between intrusion detection and any remediation steps taken by the administrator.

An intrusion prevention system (IPS), like an IDS, follows the same process of gathering and identifying data and behavior, with the added ability to block (prevent) the activity.

A network-based IDS examines network patterns, such as an unusual number of requests destined for a particular server or service, for example an FTP server. Network IDS sensors should be located as close to the network perimeter as possible, e.g., on the firewall, a network tap, a SPAN port, or a hub, to monitor external traffic. Host IDS systems, on the other hand, are placed on individual hosts, where they can more efficiently monitor internally generated events.

Using both network and host IDS enhances the security of the environment.

Snort is an example of a network intrusion detection and prevention system. It conducts traffic analysis and packet logging on IP networks. Snort uses a flexible rule-based language to describe traffic that it should collect or pass, and a modular detection engine.

Network based intrusion detection attempts to identify unauthorized, illicit, and anomalous behavior based solely on network traffic. Using the captured data, the Network IDS processes and flags any suspicious traffic. Unlike an intrusion prevention system, an intrusion detection system does not actively block network traffic. The role of a network IDS is passive, only gathering, identifying, logging and alerting.

Host based intrusion detection system (HIDS) attempts to identify unauthorized, illicit, and anomalous behavior on a specific device. HIDS generally involves an agent installed on each system, monitoring and alerting on local OS and application activity. The installed agent uses a combination of signatures, rules, and heuristics to identify unauthorized activity. The role of a host IDS is passive, only gathering, identifying, logging, and alerting. Tripwire is an example of a HIDS.

There are no fully mature open standards for intrusion detection at present. The Internet Engineering Task Force (IETF), the body that develops new Internet standards, has a working group developing a common format for IDS alerts.

The following types of monitoring methodologies can be used to detect intrusions and malicious behavior: signature, anomaly, heuristic and rule-based monitoring.

A signature based IDS will monitor packets on the network and compare them against a database of signatures or attributes from known malicious threats. This is similar to the way most antivirus software detects malware. The issue is that there will be a lag between a new threat being discovered in the wild and the signature for detecting that threat being applied to your IDS.

A network IDS signature is a pattern that we want to look for in traffic. Signatures range from very simple – checking the value of a header field – to highly complex signatures that may actually track the state of a connection or perform extensive protocol analysis.
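
As a concrete example of the "simple" end of that range, the sketch below parses the fixed IPv4 and TCP headers from raw packet bytes and flags a header-field value of interest. The "suspicious" ports and the TTL rule are illustrative assumptions only.

    # Parse the fixed IPv4 + TCP headers from raw bytes and apply header rules.
    import struct

    SUSPICIOUS_PORTS = {4444, 31337}     # illustrative "known backdoor" ports

    def check_headers(packet: bytes):
        """Return alerts raised by simple header-field signatures."""
        ihl = (packet[0] & 0x0F) * 4                     # IPv4 header length in bytes
        ttl, proto = packet[8], packet[9]
        src = ".".join(str(b) for b in packet[12:16])
        dst = ".".join(str(b) for b in packet[16:20])
        alerts = []
        if proto == 6:                                   # TCP
            sport, dport = struct.unpack("!HH", packet[ihl:ihl + 4])
            if dport in SUSPICIOUS_PORTS:
                alerts.append(f"ALERT {src}:{sport} -> {dst}:{dport}: suspicious destination port")
        if ttl == 1:
            alerts.append(f"ALERT {src} -> {dst}: TTL of 1, possible scan or traceroute probe")
        return alerts

    # Example: a hand-built 20-byte IP header plus the two TCP port fields.
    fake = (bytes([0x45, 0, 0, 40, 0, 0, 0, 0, 64, 6]) + bytes(2)
            + bytes([10, 0, 0, 5, 192, 0, 2, 7]) + struct.pack("!HH", 51515, 4444))
    print(check_headers(fake))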

An anomaly-based IDS examines ongoing traffic, activity, transactions, or behavior for anomalies (things outside the norm) on networks or systems that may indicate attack. An IDS which is anomaly based will monitor network traffic and compare it against an established baseline. The baseline will identify what is “normal” for that network, what sort of bandwidth is generally used, what protocols are used, what ports and devices generally connect to each other, and alert the administrator when traffic is detected which is anomalous to the baseline.
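
A toy version of that baseline idea: model normal requests-per-minute as a mean and standard deviation learned from history, then flag intervals that deviate by several standard deviations. Real anomaly engines baseline many more features (protocols, ports, peer relationships), so treat this only as a sketch.

    # Learn a requests-per-minute baseline, then flag large deviations from it.
    from statistics import mean, stdev

    def build_baseline(history):
        """history: requests-per-minute counts observed during normal operation."""
        return mean(history), stdev(history)

    def is_anomalous(count, baseline, sigma=3.0):
        mu, sd = baseline
        return abs(count - mu) > sigma * sd

    # Example: traffic that normally sits near 100 req/min suddenly hits 450.
    baseline = build_baseline([95, 102, 99, 110, 97, 101, 104, 93])
    print(is_anomalous(450, baseline))    # True  -> raise an alert
    print(is_anomalous(105, baseline))    # False -> within the baseline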

Heuristic-based security monitoring uses an initial database of known attack types but dynamically alters its signatures based on the learned behavior of network traffic. A heuristic system uses algorithms to analyze the traffic passing through the network. Heuristic systems require more fine-tuning to prevent false positives in your network.

A behavior-based system looks for variations in behavior such as unusually high traffic, policy violations, and so on. By looking for deviations in behavior, it is able to recognize potential threats and respond quickly.

Similar to firewall access control rules, a rule-based security monitoring system relies on the administrator to create rules and to determine the actions taken when those rules are transgressed, as in the small sketch below.
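
A minimal sketch of that rule-based idea, with rules expressed as (condition, action) pairs evaluated against parsed events; the event fields and thresholds are made up for illustration.

    # Rules are (condition, action) pairs evaluated against each parsed event.
    RULES = [
        (lambda e: e["type"] == "login_failure" and e["count"] > 5,
         lambda e: print(f"LOCKOUT: too many failures for {e['user']}")),
        (lambda e: e["type"] == "policy_violation",
         lambda e: print(f"NOTIFY: policy violation by {e['user']}: {e['detail']}")),
    ]

    def evaluate(event):
        """Run the action of every rule whose condition matches this event."""
        for condition, action in RULES:
            if condition(event):
                action(event)

    evaluate({"type": "login_failure", "user": "jsmith", "count": 7})
    evaluate({"type": "policy_violation", "user": "akim", "detail": "USB storage mounted"})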

References:
  • http://netsecurity.about.com/cs/hackertools/a/aa030504.htm
  • http://www.sans.org/security-resources/idfaq/
  • CompTIA Security+ Study Guide: Exam SY0-301, Fifth Edition, by Emmett Dulaney
  • Mike Meyers’ CompTIA Security+ Certification Passport, Second Edition, by T. J. Samuelle

Source: http://neokobo.blogspot.com/2012/01/118-nids-and-nips.html

Consensus Roadmap for Defeating Distributed Denial of Service Attacks

 

Contents
  1. Introduction
  2. Key Trends and Factors
  3. Immediate Steps To Reduce Risk and Dampen The Effects of Attacks
  4. Longer Term Efforts to Provide Adequate Safeguards
  5. A Living Document

Introduction

The distributed denial of service attacks during the week of February 7 highlighted security weaknesses in hosts and software used in the Internet that put electronic commerce at risk. These attacks also illuminated several recent trends and served as a warning for the kinds of high-impact attacks that we may see in the near future. This document outlines key trends and other factors that have exacerbated these Internet security problems, summarizes near-term activities that can be taken to help reduce the threat, and suggests research and development directions that will be required to manage the emerging risks and keep them within more tolerable bounds. For the problems described, activities are listed for user organizations, Internet service providers, network manufacturers, and system software providers.

Key Trends and Factors

The recent attacks against e-commerce sites demonstrate the opportunities that attackers now have because of several Internet trends and related factors:

  • Attack technology is developing in an open-source environment and is evolving rapidly. Technology producers, system administrators, and users are improving their ability to react to emerging problems, but they are behind, and significant damage to systems and infrastructure can occur before effective defenses can be implemented. As long as defensive strategies are reactionary, this situation will worsen. Currently, there are tens of thousands – perhaps even millions – of systems with weak security connected to the Internet. Attackers are compromising these machines (and will continue to do so) and building attack networks. Attack technology takes advantage of the power of the Internet to exploit its own weaknesses and overcome defenses.
  • Increasingly complex software is being written by programmers who have no training in writing secure code and are working in organizations that sacrifice the safety of their clients for speed to market. This complex software is then being deployed in security-critical environments and applications, to the detriment of all users.
  • User demand for new software features instead of safety, coupled with industry response to that demand, has resulted in software that is increasingly supportive of subversion, computer viruses, data theft, and other malicious acts.
  • Because of the scope and variety of the Internet, changing any particular piece of technology usually cannot eliminate newly emerging problems; broad community action is required. While point solutions can help dampen the effects of attacks, robust solutions will come only with concentrated effort over several years.
  • The explosion in use of the Internet is straining our scarce technical talent. The average level of system administrator technical competence has decreased dramatically in the last 5 years as non-technical people are pressed into service as system administrators. Additionally, there has been little organized support of higher education programs that can train and produce new scientists and educators with meaningful experience and expertise in this emerging discipline.
  • The evolution of attack technology and the deployment of attack tools transcend geography and national boundaries. Solutions must be international in scope.
  • The difficulty of criminal investigation of cybercrime, coupled with the complexity of international law, means that successful apprehension and prosecution of computer criminals is unlikely, and thus little deterrent value is realized.
  • The number of directly connected homes, schools, libraries and other venues without trained system administration and security staff is rapidly increasing. These “always-on, rarely-protected” systems allow attackers to continue to add new systems to their arsenal of captured weapons.

Immediate Steps to Reduce Risk and Dampen the Effects of Attacks

There are several steps that can be taken immediately by user organizations, Internet service providers, network manufacturers, and system software providers to reduce risk and decrease the impact of attacks. We hope that major users, including governments around the world, will lead the user community by setting examples – taking the necessary steps to protect their computers. And we hope that industry and government will cooperate to educate the community of users – about threats and potential courses of action – through public information campaigns and technical education programs.

In all of these recommendations, there may be instances where some steps are not feasible, but these will be rare and requests for waivers within organizations should be granted only on the basis of substantive proof validated by independent security experts.

Problem 1: Spoofing

Attackers often hide the identity of machines used to carry out an attack by falsifying the source address of the network communication. This makes it more difficult to identify the sources of attack traffic and sometimes shifts attention onto innocent third parties. Limiting the ability of an attacker to spoof IP source addresses will not stop attacks, but it will dramatically shorten the time needed to trace an attack back to its origins.

Solutions:

  • User organizations and Internet service providers can ensure that traffic exiting an organization’s site, or entering an ISP’s network from a site, carries a source address consistent with the set of addresses for that site. Although this would still allow addresses to be spoofed within a site, it would allow attack traffic to be traced to the site from which it emanated, substantially assisting in the process of locating and isolating attack traffic sources. Specifically, user organizations should ensure that all packets leaving their sites carry source addresses within the address range of those sites, and that no traffic from the “unroutable addresses” listed in RFC 1918 is sent from their sites. This activity is often called egress filtering. User organizations should take the lead in stopping this traffic because they have the capacity on their routers to handle the load. ISPs can provide backup to pick up spoofed traffic that is not caught by user filters. ISPs may also be able to stop spoofing by accepting traffic (and passing it along) only if it comes from authorized sources. This activity is often called ingress filtering. (A short illustration of the source-address check appears after this list.)
  • Dial-up users are the source of some attacks. Stopping spoofing by these users is also an important step. ISPs, universities, libraries and others that serve dial-up users should ensure that proper filters are in place to prevent dial-up connections from using spoofed addresses. Network equipment vendors should ensure that no-IP-spoofing is a user setting, and the default setting, on their dial-up equipment.
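
A toy illustration of the egress-filtering decision described in the first bullet: an outbound packet passes only if its source address belongs to the site's own prefix and is not in an RFC 1918 range. The site prefix used here is an assumption; in practice this check is enforced in router or firewall filters rather than in application code.

    # An outbound packet may leave only if its source is inside the site prefix
    # and is not an RFC 1918 private address.
    import ipaddress

    SITE_PREFIX = ipaddress.ip_network("192.0.2.0/24")       # this site's assigned range (assumed)
    RFC1918 = [ipaddress.ip_network(net) for net in
               ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

    def egress_allowed(src: str) -> bool:
        """Return True if an outbound packet with this source address should pass."""
        addr = ipaddress.ip_address(src)
        if any(addr in net for net in RFC1918):
            return False                     # unroutable/private source: drop and log
        return addr in SITE_PREFIX           # off-site (spoofed) sources are dropped

    print(egress_allowed("192.0.2.45"))      # True  - legitimate site address
    print(egress_allowed("10.1.2.3"))        # False - RFC 1918 source
    print(egress_allowed("203.0.113.9"))     # False - spoofed, not from this site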

Problem 2: Broadcast Amplification

In a common attack, the malicious user generates packets with a source address of the site he wishes to attack (site A) (using spoofing as described in problem 1) and then sends a series of network packets to an organization with lots of computers (Site B), using an address that broadcasts the packets to every machine at site B. Unless precautions have been taken, every machine at Site B will respond to the packets and send data to the organization (Site A) that was the target of the attack. The target will be flooded and people at Site A may blame the people at Site B. Attacks of this type often are referred to as Smurf attacks. In addition, the echo and chargen services can be used to create oscillation attacks similar in effect to Smurf.

Solutions:

  • Unless an organization is aware of a legitimate need to support broadcast or multicast traffic within its environment, the forwarding of directed broadcasts should be turned off. Even when broadcast applications are legitimate, an organization should block certain types of traffic sent to broadcast addresses (e.g., ICMP Echo Reply messages) so that its systems cannot be used to effect Smurf attacks. Network hardware vendors should ensure that routers can turn off the forwarding of IP directed broadcast packets as described in RFC 2644 and that this is the default configuration of every router.
  • Users should turn off echo and chargen services unless they have a specific need for those services. (This is good advice, in general, for all network services – they should be disabled unless known to be needed.)

Problem 3: Lack of Appropriate Response To Attacks

Many organizations do not respond to complaints of attacks originating from their sites or to attacks against their sites, or respond in a haphazard manner. This makes containment and eradication of attacks difficult. Further, many organizations fail to share information about attacks, giving the attacker community the advantage of better intelligence sharing.

Solutions:

  • User organizations should establish incident response policies and teams with clearly defined responsibilities and procedures. ISPs should establish methods of responding quickly and staffing to support those methods when their systems are found to have been used for attacks on other organizations.
  • User organizations should encourage system administrators to participate in industry-wide early warning systems, where their corporate identities can be protected (if necessary), to counter rapid dissemination of information among the attack community.
  • Attacks and system flaws should be reported to appropriate authorities (e.g., vendors, response teams) so that the information can be applied to defenses for other users.

Problem 4: Unprotected Computers

Many computers are vulnerable to take-over for distributed denial of service attacks because of inadequate implementation of well-known “best practices.” When those computers are used in attacks, the carelessness of their owners is instantly converted to major costs, headaches, and embarrassment for the owners of computers being attacked. Furthermore, once a computer has been compromised, the data may be copied, altered or destroyed, programs changed, and the system disabled.

Solutions:

  • User organizations should check their systems periodically to determine whether they have had malicious software installed, including DDOS Trojan Horse programs. If such software is found, the system should be restored to a known good state.
  • User organizations should reduce the vulnerability of their systems by installing firewalls with rule sets that tightly limit transmission across the site’s periphery (e.g. deny traffic, both incoming and outgoing, unless given specific instructions to allow it).
  • All machines, routers, and other Internet-accessible equipment should be periodically checked to verify that all recommended security patches have been installed.
  • The security community should maintain and publicize current “Top 20 Exploited Vulnerabilities” and “Top 20 Attacks” lists of the most-often-exploited vulnerabilities to help system administrators set priorities.
  • Users should turn off services that are not required and limit access to vulnerable management services (e.g., RPC-based services).
  • Users and vendors should cooperate to create “system-hardening” scripts that can be used by less sophisticated users to close known holes and tighten settings to make their systems more secure. Users should employ these tools when they are available.
  • System software vendors should ship systems with security defaults set to the highest level of security rather than the lowest. These “secure out-of-the-box” configurations will greatly aid novice users and system administrators, and will save critically scarce time for even the most experienced security professionals.
  • System administrators should deploy “best practice” tools including firewalls (as described above), intrusion detection systems, virus detection software, and software to detect unauthorized changes to files. This will reduce the risk that systems are compromised and used as a base for launching attacks. It will increase confidence in the correct functioning of the systems. Use of software to detect unauthorized changes may also be helpful in restoring compromised systems to normal function.
  • System and network administrators should be given time and support for training and enhancement of their skills. System administrators and auditors should be periodically certified to verify that their security knowledge and skills are current.

Longer Term Efforts to Provide Adequate Safeguards

The steps listed above are needed now to allow us to begin to move away from the extremely vulnerable state we are in. While these steps will help, they will not adequately reduce the risk given the trends listed above. These trends hint at new security requirements that will only be met if information technology and community attitudes about the Internet are changed in fundamental ways. In addition, research is needed in the areas of policy and law to enable us to deal with aspects of the problem that technology improvements will not be able to address by themselves. The following are some of the items that should be considered:

  • Establish load and traffic volume monitoring at ISPs to provide early warning of attacks.
  • Accelerate the adoption of the IPsec components of Internet Protocol Version 6 and Secure Domain Name System.
  • Increase the emphasis on security in the research and development of Internet II.
  • Support the development of tools that automatically generate router access control lists for firewall and router policy.
  • Encourage the development of software and hardware that is engineered for safety with possibly vulnerable settings and services turned off, and encourage vendors to automate security updating for their clients.
  • Sponsor research in network protocols and infrastructure to implement real-time flow analysis and flow control.
  • Encourage wider adoption of routers and switches that can perform sophisticated filtering with minimal performance degradation.
  • Sponsor continuing topological studies of the Internet to understand the nature of “choke points.”
  • Test deployment and continue research in anomaly-based, and other forms of intrusion detection.
  • Support community-wide consensus of uniform security policies to protect systems and to outline security responsibilities of network operators, Internet service providers, and Internet users.
  • Encourage development and deployment of a secure communications infrastructure that can be used by network operators and Internet service providers to enable real-time collaboration when dealing with attacks.
  • Sponsor research and development leading to safer operating systems that are also easier to maintain and manage.
  • Sponsor research into survivable systems that are better able to resist, recognize, and recover from attacks while still providing critical functionality.
  • Sponsor research into better forensic tools and methods to trace and apprehend malicious users without forcing the adoption of privacy-invading monitoring.
  • Provide meaningful infrastructure support for centers of excellence in information security education and research to produce a new generation of leaders in the field.
  • Consider changes in government procurement policy to emphasize security and safety rather than simply cost when acquiring information systems, and to hold managers accountable for poor security.

A Living Document

This Roadmap is a living document and will be updated periodically when new or altered threats require changes to the document. Furthermore it is a consensus document – a product of the joint thinking of some of the best minds in security – and it will continue to improve if you share your experiences in implementing the prescriptions.

 

Reference : http://www.sans.org/dosstep/roadmap.php

Securing e-Commerce Web Sites

Introduction

Securing web sites, and web servers in particular, has been the focus of many security articles and conferences over the past few years. A web site’s security level is heavily influenced by the security measures used by, and on, the web server, so it may seem that the key to a secure web site is simply the security of the web server. Anyone interested in DBA chores may also have stumbled over a web site’s database security issues; database security is a well-known subject in web site security, but it is mostly documented as a standalone issue.

Building a web site involves more than one operating system and more than one kind of software. The security of the web site is therefore achieved through the synergy of all these factors, not from the web server alone.

When I set out to write this paper, little did I know that public information regarding the “fortification” of complex web sites would be hard to come by. Only a few sites publicize the internal workings of their systems, and fewer still their security make-up and configuration.

All this said, the question I will try to answer in this paper is: “How do I put all these ingredients together in order to build a secure e-commerce web site?”

Assumptions

When building a web site, we must survey the risks facing it from all angles. Not all web sites face the same threats; many are just another collection of HTML pages in the vast cyberspace of the Internet. But web sites that conduct business, hold information a malicious hacker would consider valuable, or take a political stance are at higher risk than others. E-commerce web sites often hold valuable information (credit card numbers or other private, personal data) and conduct business, which places them squarely in the high-risk category.

Having recognized that a web site is in the high-risk zone, we must consider the different types of security hazards:

  • Denial of service (including distributed denial of service).
  • Defacement (the replacement of content on a web site, indicating it has been hacked).
  • Data theft.
  • Fraud (data manipulation or outright theft).

While any of these attacks might cause revenue loss, the method of defense against each is different. Since no single security solution provides the full defensive spectrum an e-commerce web site requires, choosing the right line of defense has become extremely difficult.

Security comes with a price tag. At first this seems obvious, since products such as firewalls and anti-virus software have known prices. However, the costs of ongoing security, software security updates, new web-site technologies and so on cannot be calculated during initial installation planning. Eventually the web site owner must decide what level of security to provide, weighing the current risks against the costs involved.

Web Sites Under Attack

Web site attacks vary significantly from site to site and from hacker to hacker, and their focus has shifted over the years from network-level attacks to attacks on the web server from within the HTTP protocol itself. DoS and DDoS attacks have become something of a hacker sport and appear in different forms, ranging from network-based DoS such as ping flooding to full-connection HTTP requests.

DoS and DDoS

When a hacker wishes to “down” a web site, all that is needed is a computing base that can produce a larger volume of CPU-demanding activity (for example, IP floods) than the web site is capable of handling. This is true even for a fully clustered web site connected via a T1 line, not only for web sites with more limited resources. The attacker only needs to generate traffic that exceeds the line’s capacity, and the web site will effectively no longer be available to the Internet.

Generating a large amount of traffic does not require a large connection on the attacker’s side. The attacker may choose to use “bots” [1] or amplifiers [2] as the attack base. Most information regarding DoS and DDoS describes network-level exploits and various methods of IP-based flooding. The SANS paper on the subject, “Consensus Roadmap for Defeating Distributed Denial of Service Attacks” (http://www.sans.org/ddos_roadmap.htm), reflects these methods and the possible defenses.

Recently, a new method of DDoS has been developed: using bots to open full connections to the web site and request an object from it. Full connections conceal the true origin of the attack, since the bots can be hard to trace back to their owner, and these connections cannot, for all intents and purposes, be differentiated from ordinary web browser requests.

Currently there are no reliable defenses against DoS attacks that use full connections (CDN [3] is a partial and extremely expensive mitigation that is not feasible for most web sites), because no publicly available web server or security product can fully guarantee that a connection originates from a bot rather than a legitimate client.

Defending your web site against the more “ordinary” DoS and DDoS attacks (namely, network-level attacks) is a well-documented art, and consists mainly of ISP cooperation with the web site owner. Most methods of defense include rate limiting of various forms and the blocking of unwanted network traffic (such as fragment blocking, UDP blocking and so on). Most of the blocks need to be performed at the ISP level, or the attacker will be able to saturate the line connecting the web site, effectively denying service to it. A toy sketch of one common rate-limiting technique follows.

[1] Bots are computers connected to the Internet that the attacker has taken over, fully or partially, by various means. These computers then act as “robots” controlled by the attacker and can be used to launch different types of attacks (depending on the level of control gained by the malicious user). One method of taking control of PCs and turning them into bots is spreading a dedicated virus.
[2] Amplifiers are computers on the Internet with larger connections or computing capabilities that are used to amplify the attack generated by the attacker.
[3] CDN – web content delivery network service, provided by companies such as Akamai, Adero and Eplication.
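
The sketch below shows one common rate-limiting technique, a per-source token bucket, purely as an illustration of the idea; in practice such limits are enforced on routers or by the ISP, and the rate and burst values here are arbitrary assumptions.

    # A per-source token bucket: at most `rate` requests per second, with bursts
    # of up to `burst` requests, for each client address.
    import time

    class TokenBucket:
        def __init__(self, rate, burst):
            self.rate, self.burst = rate, burst
            self.tokens, self.last = burst, time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False                     # over the limit: drop or delay the request

    buckets = {}                             # one bucket per source IP

    def handle_request(src_ip):
        bucket = buckets.setdefault(src_ip, TokenBucket(rate=5, burst=10))
        return bucket.allow()

    # Example: the eleventh back-to-back request from one source is rejected.
    print([handle_request("198.51.100.7") for _ in range(11)])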

Web Server Based Attacks

Many of the network-based attacks that create a denial of service are hard to achieve, or hold little “glory” for the attacker. One must also consider that data theft cannot be accomplished via DoS attacks. Web server attacks have therefore become extremely popular in the past few years. They bypass the firewall because they reach the web site over legitimate network requests (i.e., TCP port 80), and they are hard to trace if the web site does not employ strict log file procedures.

Web-based attacks vary from web server to web server. For example, gaining control of a console on a remote MS IIS server can be achieved using different variants of the Unicode attack, while an Apache server console on Linux can be controlled using a Perl test-cgi attack. Other attacks and vulnerabilities through which a remote attacker can gain access to a web server while bypassing the firewall are listed in various web resources, such as http://www.securityfocus.com, the Bugtraq mailing list and others.

Known Web Configuration

There is no single way to install a web site that answers all the security questions, and the ways to install and configure the various web and network components vary greatly as web sites become more complex. A few known configurations that address the security issues are described below.

Configuration 1 – Basic Disjointed

A straightforward configuration in which the web server is multi-homed, with one interface connected to the world and a second interface dedicated to database communications. All communications to and from the web site are controlled by the firewall, while internal communications are not monitored or filtered.

Figure 1 – Basic Disjointed

Pros:
1. Simplicity and streamlining of communications.
2. Easy troubleshooting on all levels.
3. Scalability (when no n-tier [4] architecture is needed).
4. Low-cost implementation and minimal hardware.

Cons:
1. Management of the DB server requires an out-of-band [5] communication method or routing through the web server.
2. Web content is distributed manually or via local scripts and applications.

Security considerations:

1. This basic configuration provides network-level security (via the firewall) and DB protection (via disjointed networks).
2. The load balancer (if external hardware is used) can serve as a second-level network-filtering device for extra security.
3. The use of two network cards provides low-level protection against poorly configured firewall devices (for example, firewalking will not reveal the DB server).

This configuration provides no application- or OS-level protection. The entire security architecture rests on the filtering devices (firewall and load balancer). If the OS hardening process is not repeated frequently, on a per-patch basis, the web site will be vulnerable to application- and OS-level hacking.

If the web server is hacked, the database server will be fully exposed to the hacker via the web server. This is true even if the second NIC on the web server uses a different protocol. A basic method of filtering is recommended to prevent the misuse of networking protocols.

The Compaq DISA [6] and Microsoft DNA [7] web site designs are similar and essentially follow this configuration. Both Compaq and Microsoft rely on the OS hardening process to provide application-level security, and on the programmers’ ability to produce secure code.

Configuration 2 – Filtered Disjointed (figure 2)

In this configuration, the addition of a filtering firewall, via a second “DMZ” interface on the main firewall, provides an added level of security [8]. Hacking the web servers will yield only minimal access to the database servers. Obviously the web servers can still reach the database server through an appropriate ODBC connector or similar means, so this configuration could still give a hacker who manages to “own” the web server machine limited direct data access.

[4] The n-tier configuration is shown in configuration 2 and is driven by the need to process business logic on a separate server.
[5] Out-of-band communications means that the connection to the server takes a different route than all other communication to and from the web site.
[6] Found on the Compaq web site at http://www.compaq.com/solutions/internet/disa.html
[7] Found on the Microsoft web site at http://www.microsoft.com/technet/treeview/default.asp?url=/technet/itsolutions/ecommerce/maintain/operate/ecomsec.asp
[8] This configuration can also be achieved with a second firewall for improved performance. The firewall would be placed between the DB and the IIS servers (as suggested in the MS paper). It is not necessary to place the DB server in the corporate network.

Application business logic for the web site runs on a separate server to allow for easier scalability. This server may also be used for web management; software such as MS Site Server or MS Application Server provides content distribution, web statistics and so on.

Figure 2 – Filtered Disjointed

Pros:
1. Relatively easy installation and routing configuration.
2. Easy troubleshooting of connectivity and system-level events.
3. Minimal hardware.

Cons:
1. The development environment must be similar to the production web site, so that developers can adjust application connectivity with internal servers to the filtering device used.
2. Using a single firewall as the filtering device may degrade the site’s performance; if extra firewalls are added, cost and ease of installation are no longer advantages of this configuration.

Security considerations:

1. This configuration provides network-level security (via the firewall) and DB protection (via disjointed networks). It also provides low-level application protection, since core data processing is shifted from the front-end web servers to back-office application servers that have no direct communications with the site’s users.
2. If MS SQL is used, TCP 1433 should be used instead of named pipes. This allows a higher level of filtering.
3. When implementing the web content distribution mechanism, it is recommended not to use Windows shares; FTP or MS Site Server replication is preferred.

The “Filtered Disjointed” configuration gives the administrator the tools to filter all network-based activity on the secure side of the firewall. The main idea behind this configuration is to eliminate the ability of one server to communicate directly with the others. Application connectivity is allowed only where it provides site functionality (web servers are allowed to communicate with MS SQL Server over TCP 1433), and no other protocol is permitted. Although there is a performance penalty due to the extra network segments and filtering, if one of the web servers is compromised all network transactions can be logged, leaving an audit trail.

Configuration 3 – Application Protection (figure 3)

To protect the web site from application-level hacking, we need a “higher-level” filter that examines the HTTP protocol and, if possible, the HTTP GET, HEAD, POST and PUT commands and their parameters. These should comply with RFC 2616 (http://www.faqs.org/rfcs/rfc2616.html) and with the restrictions set by the site administrator. Such a filter can be found in some commercial proxy servers or in dedicated filtering products [9]. This approach opposes the Microsoft e-commerce strategy shown earlier in configuration 1, which holds that all application-level security should be driven by the DNA design and proper code writing.

Figure 3 – Application Protection

Pros:
1. High level of assurance that Internet traffic enters the various applications in the correct form and manner.
2. The use of proxy servers could improve performance, if the proxies implement a caching mechanism.

Cons:
1. Extremely hard to troubleshoot and configure.
2. High cost of hardware and initial installation.
3. Filtering devices at the application level can cause functionality issues, because the connection terminates at the proxy and connection stickiness, session information and other client information may be misinterpreted before they reach the web servers.
4. It is imperative that the application be developed with full awareness of the system configuration. Not all existing web sites can use this configuration without application adjustments.

[9] A commercial filter that acts as an application-level proxy can be found at http://www.sanctuminc.com

Security considerations:

1. This configuration provides a high level of security at both the network and application levels.
2. Application filtering may require out-of-band management tools, since not all proxy servers can act as routers for other, non-HTTP protocols.

The “Application Protection” configuration gives the administrator multi-layer security protection. It can be used in a variety of situations and has proven itself in protecting web sites from new hazards such as Nimda and Code Red (at the time those worms were released, unpatched web sites using the “Application Protection” configuration were not harmed). This protection, however, does not scale easily to mega-sized e-commerce sites.

Monitoring tasks should be carefully planned. When only one component in the client path answers HTTP requests, the monitoring termination point is clear. In a configuration with many different components that receive HTTP requests, it is imperative to monitor them separately and to ensure that they are all up.

Summary

The job of building an e-commerce web site never stops. The web site, like the technology itself, constantly evolves. Security risks change as the site positions itself on the net and as the platforms it uses become obsolete.

The different web site configurations and approaches shown in this document demonstrate that the network-level protection so many web sites have become accustomed to may not be enough. The use of advanced configurations and filtering mechanisms is currently the only way to keep up with the increasing risks of conducting business on the Internet.

Companies such as Check Point, long identified as packet-filtering firewall software manufacturers, have extended their software to provide application filtering capabilities with the use of “Secure-Servers” [10]. This shows that market leaders have identified the need for application-level filtering.

Resources
  • “Web site security and Internet threats in the wild” – http://www.w3.org/Security/faq/
  • A description of the DISA model on Compaq’s web site – the theory behind Compaq’s recommended web site installation and the company’s statement on securing web sites: http://www.compaq.com/solutions/internet/disa.html

Source: http://www.sans.org/reading_room/whitepapers/webservers/securing-e-commerce-web-sites_303

Why ‘Do Not Track’ doesn’t change much about web privacy

Cookies were originally used to make logging into websites easier and make the day-to-day browsing experience more convenient for users. These days, only a fraction of the cookies stored inside your browser’s cache are used for logons or your convenience. The vast majority are dropped by ad servers when they place ads on your favorite websites to track your usage history.

If you think this sounds like an invasion of privacy, you’re not alone. The makers of all the leading browsers agree and offer Do Not Track settings as a way to give users more control over the information that is collected about them.

How it works: When you set your browser to ‘Do Not Track’, a DNT: 1 header is sent with every HTTP request, telling the website before it even loads that you don’t want to be tracked by third parties. This should prevent the storage of third-party cookies and only allow cookies from the website you actually visited to be saved. The header clearly states that you’re opting out of analysis and, thus, behavioral ads. But DNT is not an ad-blocking mechanism: once enabled (and if a website supports it), it’s not going to turn the web into an ad-free zone.
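
On the wire, Do Not Track is nothing more than an extra request header. The short Python sketch below adds the DNT: 1 header to an ordinary HTTP request using the standard library; whether anything changes is entirely up to the receiving site, which is exactly the weakness discussed below.

    # Add the DNT: 1 header to an ordinary request with the standard library.
    import urllib.request

    request = urllib.request.Request("http://example.com/", headers={"DNT": "1"})
    with urllib.request.urlopen(request) as response:
        print(response.status, response.headers.get("Content-Type"))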

 

Trace your typical browsing routine with Firefox Collusion

To get a sense of how many ad networks really collect your data, I suggest you try the free Firefox Collusion add-on. Once enabled, it gives you a visual representation of how many ad networks have placed cookies on your machine. Go ahead and visit 10-15 of your favorite websites and see what happens.

 

What’s wrong with DNT?

As I see it, there are four problems with DNT currently.

1. Once you enable DNT, you’ll see the real problem with it: You’re going to have the same old browsing experience you always had. Browsers can send the DNT=1 header until the cows come home, but if websites don’t accept it, there’s little to prevent the ad servers from dropping cookies.

The FTC urged ad companies to set up DNT and — to everyone’s surprise — the DAA (Digital Advertising Alliance) followed. But currently, Twitter is one of the few websites that actively respect DNT. Most websites see DNT as what it is: a voluntary setting.

2. Users may find the web a more annoying place with DNT enabled. For example, I saw car rental ads on the tech websites I visit regularly just because I browsed for rental cars a few days ago — that’s creepy and unwanted, yes, but at least it’s relevant. With DNT enabled, I still get ads; they’re just less targeted.

3. As I previously mentioned, Microsoft decided to enable DNT by default when the user opts for the “Express Settings” in the Windows 8 setup wizard. This move led Apache (which is used by some 65% of websites around the world) to ignore the DNT header sent by IE10.

4. And last but not least: while the intent of Do Not Track is pretty clear (cookies from a website the user actively opens are OK, third-party cookies are not), the definition of what exactly counts as a third-party cookie is open to interpretation. Is a Microsoft ad cookie on a Microsoft website a third-party cookie? Or is it first party? I don’t have the answers. Neither does the W3C committee or any of its partners.

Setting Do Not Track in your browser

Don’t take my word for how well Do Not Track works (or doesn’t, as the case may be). Try it yourself. Here’s how to enable it in IE, Firefox, Chrome, and Safari:

Internet Explorer

IE9 supported Do Not Track, but with IE10 Microsoft has taken it a step further and made it a default setting: during the express setup of Windows 8 (which includes IE10), the “Always send Do Not Track header” option is enabled by default. This caused quite a stir with the Tracking Protection Working Group of the W3C. However, Microsoft stuck to the plan and is shipping Windows 8 RTM with the header enabled.

Firefox

Mozilla implemented DNT early, with release 4, and still allows the user to opt in using the “Tell web sites I do not want to be tracked” setting.

Chrome

Google only recently added DNT to Chromium build 23. The setting can be found under the “Privacy” section of the browser’s settings. Google Chrome itself, however, has yet to add Do Not Track to its developer build. I’m not exactly surprised that Google is a bit hesitant, as online ad revenue is big business for the search giant.

Safari

Apple added DNT with Safari 5. You’ll find this both on Windows and on the Mac in the settings menu under the “Privacy” tab. Just check the “Ask websites not to track me” box and you’re done.

The bottom line

Follow the DNT discussion, enable it if you like, but if you really don’t like being tracked, use tools such as Ad Blocker, DoNotTrackPlus, Ghostery, and NoScript and clean out your local cache regularly.

Reference: http://www.itworld.com/security/299821/do-not-track-great-idea-or-futile-privacy-attempt?page=0,0