Changes of Accounting Policy

Firms are at liberty to change the accounting policy applied to their financial results. Changes to accounting policy fall into three main categories:

1. Changes of Accounting Principle
2. Changes in Accounting Estimates
3. Changes in the Reporting Entities

Changes of Accounting Principle
In preparing their financial accounts, firms may select from several alternative accounting principles. For example, in depreciating its fixed assets, a firm may elect for accelerated depreciation (which front-loads the depreciation charge) or straight line depreciation (which allocates the depreciation charge evenly across all periods). Typically there is some guidance on which principle to apply, but no hard rules and so the firm should select the principle which it deems most appropriate.

If the firm subsequently decides that the original principle is inferior to an alternative principle it may change the principle applied. Such a change should be justified and explained in the notes to the accounts, and also needs to be noted in the auditor’s report.

Although a firm can elect to change an accounting principle, most changes to accounting principle are driven by new regulatory requirements which mandate the change.

Changes in Accounting Estimates
Firms are required to make estimates for a wide variety of items in their financial accounts. For example, depreciation requires an estimate of an asset's useful life, bad debt provisions require estimates of delinquency rates, and stock (inventory) requires an estimate of its current market value.

All estimates need to be examined for each reporting period to ensure they remain valid. If it is determined that an estimate is no longer accurate, it will need to be adjusted. The frequency of changes in an estimate varies by the nature of the estimate: useful asset lives are typically relatively stable, whereas provisions for bad debts will change regularly with the prevailing economic conditions.

Changes in estimates can be a method for firms to manage (or smooth) their earnings. The provision for bad debts is a common item used for managing earnings. A firm may make an overly negative estimate of bad debts during a healthy period, which reduces earnings and creates an 'Allowance for Doubtful Debts' balance on the Balance Sheet. This allowance can later be reversed in a period when the firm needs to improve earnings; in such a circumstance the allowance is reduced and the reversal flows through the Income Statement under 'Allowance for Doubtful Accounts' or a similar heading.

Some firms require extensive estimates in arriving at their revenue number. In such a circumstance the firm's revenue (and the estimates used in arriving at it) should be closely scrutinised. One example of this is Zynga. Zynga apportions the revenue from the purchase of virtual goods over the expected period those goods will be used. This estimate is contingent on a lot of data internal to Zynga, such as granular data on how long players are expected to play a particular game, and only very high level data is disclosed in Zynga's accounts.
When Zynga reduced its estimate of a virtual good's lifespan, it essentially accelerated the recognition of its revenue and boosted its current period revenue.

Financial Impact of Changes in Principle and Changes in Estimates
A change in principle or estimate will likely impact the financial results of not only the current period but also previous periods. There are two methods for handling the impact of the change – cumulative-effect (catch-up) or restatement. The restatement method requires the firm to restate its previous periods using the new accounting principle or estimate. More commonly used is the cumulative-effect method, whereby the firm rolls the cumulative net impact of the change for all prior periods into the current period. This amount appears as a separate item on the Income Statement, typically as the final item just before Net Income.
Also of interest to an analyst is the impact of the change for the current period. This is not broken out separately in the Income Statement but needs to be disclosed in the notes to the accounts.
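To illustrate, suppose a hypothetical firm switches from accelerated to straight line depreciation and the new method would have reduced the total depreciation charged over the previous three years by $3 million. Under the cumulative-effect method the full $3 million adjustment appears as a single item in the current period's Income Statement, whereas under the restatement method each prior period's statements are re-presented with the depreciation they would have shown under the new method.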

Change In Reporting Entity
The reporting entity can be materially changed when a merger or divestiture occurs. Google is a good example of both. When Google's purchase of Motorola was approved, it restated its current and prior period accounts to include Motorola. Google subsequently divested Motorola Home, keeping only Motorola Mobility, and this divestiture necessitated a further restatement of the prior periods' accounts.
Note that a change in the reporting entity requires a restatement and the cumulative-effect method available for changes in accounting principle or estimate is not available.

Protecting Against SQL Injection In .NET Applications

Despite being so well understood, SQL Injection remains one of the most common vulnerabilities in web applications.

What is SQL Injection

Any SQL which is dynamically created has the potential for having malicious SQL injected into it. For example, the below code receives a querystring and adds it to a SQL select string which will then be executed against the database.

//Querystring for example could be ?userid=1
var inputStr = Request.QueryString["userid"];

var sqlQuery = "select createDate from users where userid = '" + inputStr + "'";
//sqlQuery is: select createDate from users where userid = '1'

All the attacker has to do is append SQL to the querystring for it to execute. Thus passing 1'; delete from users;-- will cause the SQL statement to become select createDate from users where userid = '1'; delete from users;--' and two statements will be executed by the database. The first statement returns the createDate for user 1, and the malicious second statement deletes all records from the users table. Note that anything appended after the -- characters (including the trailing quote added by the application code) is commented out and will not be executed.

There are a multitude of variations on these basic scenarios. As well as executing operations on the database, attackers can retrieve sensitive data by having the site display it, which does not require execute permissions on the database. Take for example a search box on a site which places the search term in the querystring, which is then used to form the SQL query as below:

var searchTerm = Request.QueryString["searchterm"];
var sqlQuery = "select * from articles where articles.title like '%" + searchTerm + "%'";

The sql is then executed against the database and the search results are output to the webpage.
A malicious user can inject sql into the search term and return a listing of table names in the database using the below ‘search’ term:

' union select name from sysobjects where type = 'u' --

Once armed with the table name listing, the attacker can target a table to return the column names using the below term:

' union select column_name from information_schema.columns where table_name = 'users' --

Then it is just a straightforward select statement to return the sensitive data:

' union select socialsecuritynumber from users --

There are a large number of sources of SQL injection. As noted above, querystrings and form values are common sources of injection, but cookies and Http headers are also potential sources. Http header values in particular are commonly stored in the database; for example, logging the referring url is common practice on many sites:

var referringUrl = Request.ServerVariables["HTTP_REFERER"];
var insertReferralSql = "insert into trafficreferrals values ('" + referringUrl + "')";

The Http request can easily be tampered with using a tool such as Fiddler and malicious code entered in the Referer field of the header. Note that in this particular instance the .NET framework provides the Request.UrlReferrer property, which will pass back a valid Uri and should be free of injected code.
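A minimal sketch of the safer pattern (this assumes an open SqlConnection named conn and reuses the trafficreferrals table from the example above):

//Request.UrlReferrer parses the Referer header into a Uri, or returns null if it is absent or malformed
Uri referrer = Request.UrlReferrer;
var referringUrl = referrer != null ? referrer.AbsoluteUri : string.Empty;

//Even so, the value is user-controlled, so pass it as a parameter rather than concatenating it
var command = new SqlCommand("insert into trafficreferrals values (@url)", conn);
command.Parameters.Add(new SqlParameter("@url", referringUrl));
command.ExecuteNonQuery();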

Problematic Fix – Manually Cleaning Strings

Before looking at robust defenses against SQL injection it is worth looking at ad-hoc protections which are often ineffective and should not be used.

Writing custom code to ‘clean’ the input string of dangerous characters is effective in theory but needs to be done with extreme rigour and many solutions can very easily be bypassed.

For example, the potentially hazardous ' and -- character sequences can be removed as below:

var inputStr = Request.QueryString["userid"]; 
var cleanStr = inputStr.Replace("--", "") 
                       .Replace("'", "");

This may also be performed on SQL Server using the TSQL REPLACE function:

...
set @input = REPLACE(@input, '--', '')
select @input
set @input = REPLACE(@input, '''', '')
select @input
...

In this example cleanStr may be considered safe from the ' and -- sequences. However, this is not the case. Consider the below input string:

; delete from orders -'-

Note the -'- sequence. In this case the ' character is removed after the -- sequence has been tested for, and the removal itself creates a -- sequence, leaving the below string:

; delete from orders --

The ' and -- characters are not always necessary for attacks. Consider the first example we looked at:

var inputStr = Request.QueryString["userid"]; 
var sqlQuery = "select createDate from users where userid = " + inputStr;

In this case the below SQL could be injected without the ' and -- characters:

19; delete from users;

If the database has SET QUOTED_IDENTIFIER OFF then the attacker could simply replace the ' character with ".

Defenses Against SQL Injection

Ensure Correct DataTypes Are Input
In many of the above scenarios an integer was expected and the attacker passed in an SQL string instead.
In the application we can test that the input is in the correct format:

int orderId = int.Parse(Request.QueryString["OrderId"]);
//The below code can be problematic when users have different date formats
//Consider also DateTime.TryParse
DateTime orderDate = DateTime.Parse(Request.QueryString["orderDate"]);

This ensures that strings are not input when integers, dates and other datatypes are expected. This step is sometimes seen as unnecessary overhead when parameterised queries are used, but it does provide additional protection.
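A minimal sketch using TryParse, which avoids throwing a format exception on malformed input (the querystring names are carried over from the example above):

int orderId;
if (!int.TryParse(Request.QueryString["OrderId"], out orderId))
{
    //Not a valid integer - reject the request rather than building any SQL from it
    throw new HttpException(400, "Invalid OrderId");
}

DateTime orderDate;
if (!DateTime.TryParse(Request.QueryString["orderDate"], out orderDate))
{
    throw new HttpException(400, "Invalid orderDate");
}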

Parameterised Queries
Parameterised queries are a primary defense against SQL injection. The below stored procedure uses a parameter on the where clause which will prevent malicious code being injected.

CREATE PROCEDURE GetUserJoinDate 
    @UserName nvarchar(50) 
AS 
    SELECT JoinDate
    FROM Users
    WHERE UserName = @UserName 
GO
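For completeness, a rough sketch of calling this stored procedure from .NET, passing the user-supplied value as a parameter (this assumes an open SqlConnection named conn, a userName variable holding the input, and the System.Data / System.Data.SqlClient namespaces):

using (var command = new SqlCommand("GetUserJoinDate", conn))
{
    command.CommandType = CommandType.StoredProcedure;
    command.Parameters.Add(new SqlParameter("@UserName", userName));

    //The value is sent as data, never concatenated into the SQL text
    var joinDate = command.ExecuteScalar();
}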

Parameters are not always effective, however, and building dynamic SQL using string concatenation can reintroduce vulnerabilities. The below stored procedure performs the same operation using dynamic SQL.

CREATE PROCEDURE GetUserJoinDate(@UserName nvarchar(45))  

 AS
  DECLARE @sql nvarchar(255)
  SET @sql = 'SELECT JoinDate FROM Users WHERE UserName = ''' + @UserName + ''''
  EXECUTE sp_executesql @sql
GO

In this case the below SQL passed in as the UserName parameter will execute malicious SQL on the database:

'; delete from users --

This is a very simplistic sample and an unusual use of dynamic SQL; dynamic SQL is more commonly used in scenarios where the developer may believe it is the only option. For example, partial SQL is often passed in to build clauses, such as sorting, where parameters cannot normally be used:

CREATE PROCEDURE GetUsers 
    @Sort nvarchar(50) 
AS
  DECLARE @sql nvarchar(255)
  SET @sql = 'SELECT UserName FROM Users   ' + @Sort 
  EXECUTE sp_executesql @sql
GO

This allows partial sql to be passed in for sorting:

exec GetUsers @Sort = 'order by UserName ASC'

This stored procedure could be purged of dynamic SQL and written as below:

CREATE PROCEDURE GetUsers 
    @Sort int = 1 
AS
   SELECT UserName FROM Users
   ORDER BY CASE
     WHEN @Sort = 1 THEN ( Rank() OVER (ORDER BY UserName ASC) )
     WHEN @Sort = 2 THEN ( Rank() OVER (ORDER BY UserName DESC) )
     WHEN @Sort = 3 THEN ( Rank() OVER (ORDER BY CreateDate ASC) )
   END
GO

There are numerous scenarios such as this where dynamic SQL can be removed. If dynamic SQL is absolutely necessary then string concatenation should be avoided and parameters placed within the SQL, which will ensure that the parameter is properly escaped:

CREATE PROCEDURE GetUserJoinDate 
  @UserName nvarchar(45)
AS
  DECLARE @sql nvarchar(255)
  SET @sql = N'SELECT JoinDate FROM Users WHERE UserName=@UserName'
  EXECUTE sp_executesql @sql, N'@UserName nvarchar(45)', @UserName = @UserName
GO

These examples have focused on stored procedures on the database, but this applies equally to SQL created in the application :

//Vulnerable
SqlConnection conn = new SqlConnection(connectionString);
SqlCommand command = new SqlCommand("SELECT JoinDate FROM Users " +
                                    "WHERE UserName='" + userName + "'", conn);

//Secure
SqlConnection conn = new SqlConnection(connectionString);
SqlCommand command = new SqlCommand("SELECT JoinDate FROM Users WHERE UserName=@UserName", conn);
command.Parameters.Add(new SqlParameter("UserName", userName));

Permissions

The database is only vulnerable to SQL injection to the extent of the user's permissions on the SQL Server database. It is thus essential to audit and limit the permissions on the database. A detailed discussion of permissions is beyond the scope of this article, but a good starting point for auditing SQL Server permissions is Auditing SQL Server Permissions.

IIS Global Filtering
As an additional defense against querystring injection, IIS can filter requests for dangerous keywords. This can be done globally using applicationHost.config (located at %systemroot%\system32\inetsrv\config\applicationHost.config). Under the request filtering section, add your rules as below:

 <filteringRules> 
    <filteringRule name="SQLInjection" scanQueryString="true"> 
        <appliesTo> 
            <add fileExtension=".asp" /> 
            <add fileExtension=".aspx" /> 
        </appliesTo> 
        <denyStrings> 
            <add string="--" /> 
            <add string=";" /> 
            <add string="/*" /> 
            <add string="@" /> 
            <add string="char" /> 
            <add string="alter" /> 
            <add string="begin" /> 
            <add string="cast" /> 
            <add string="create" /> 
            <add string="cursor" /> 
            <add string="declare" /> 
            <add string="delete" /> 
            <add string="drop" /> 
            <add string="end" /> 
            <add string="exec" /> 
            <add string="fetch" /> 
            <add string="insert" /> 
            <add string="kill" /> 
            <add string="open" /> 
            <add string="select" /> 
            <add string="sys" /> 
            <add string="table" /> 
            <add string="update" /> 
        </denyStrings> 
    </filteringRule> 
</filteringRules>

/* Courtesy Wade Hilmo */

This will deny requests with these keywords in the querystring.

Leave ValidateRequest On
ValidateRequest is built in to ASP.NET and throws an error whenever a page is submitted with potentially malicious content. However, ValidateRequest is very aggressive, and out of frustration developers frequently resort to turning it off across the entire application. Instead, it can be selectively disabled for individual controls and pages in ASP.NET, and for controllers or actions in ASP.NET MVC (using [ValidateInput(false)]). This gives more granular control over where there are potential exposures to SQL injection.
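As an illustrative sketch (the controller, action and model names here are hypothetical), request validation can be switched off for a single MVC action rather than the whole site:

public class ArticlesController : Controller
{
    //Request validation remains on for every other action in the application;
    //only this action accepts markup-like input (e.g. a rich text editor field).
    [HttpPost]
    [ValidateInput(false)]
    public ActionResult Edit(ArticleModel model)
    {
        //Encode or sanitise the submitted content before storing or rendering it
        return View(model);
    }
}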

Consider Using an ORM
Object Relational Mappers such as Entity Framework or NHibernate take control of SQL generation and execution and generally do an excellent job of protecting against SQL injection.
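For example, a LINQ to Entities query such as the sketch below is translated by EF into parameterised SQL, so the search term is passed as a parameter value rather than concatenated into the statement (MyDbContext, Articles and searchTerm are assumed names):

using (var db = new MyDbContext())
{
    //EF generates SQL along the lines of: SELECT ... FROM Articles WHERE Title LIKE @p0
    var results = db.Articles
                    .Where(a => a.Title.Contains(searchTerm))
                    .ToList();
}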

Groupon’s Ominous Balance Sheet

In yet another rebuke of Groupon's (GRPN) accounting methods, Anthony Catanach drew attention to the intangible assets sitting on Groupon's balance sheet. Whilst he noted the danger of Groupon's high intangible assets, the issue is serious enough to warrant a more in-depth analysis.

Intangible assets encompass a wide variety of non-physical assets such as patents and goodwill; however, unlike physical assets, such as cash or equipment, they exist only on paper and so are difficult to value. The valuation of intangibles therefore relies heavily on the estimates and judgment of the company – and Groupon has a less than stellar record in its accounting and reporting.

At first glance, the picture is not too awful: Groupon's intangible assets account for 15% of its total assets.

As shown above, Groupon's proportion of intangible assets appears roughly in line with its peers. Google (GOOG) and eBay (EBAY) both have higher percentages of intangibles (at 20% and 26%, respectively) whilst LinkedIn (LNKD) and Zynga (ZNGA) are both lower at just over 10%.

Things begin to get scary when reviewing what the intangible assets comprise. Google has a pile of valuable patents (many from its Motorola acquisition) and eBay has some prime acquisitions such as GSI Commerce. Groupon's intangible assets, by contrast, are far less compelling.

The largest chunk of Groupon's intangibles is Goodwill, which accounts for $207 million of the $310 million total. Goodwill is created when another company is purchased (technically it is the excess of the amount paid for the company above its net asset value). However, Goodwill can only continue to be carried on the balance sheet so long as its value is not substantially impaired; if the current value is substantially lower, then it should be written off against profits. This is exactly what happened to HP (HPQ) with the write-down of its Autonomy purchase. The companies Groupon purchased were primarily in the "daily deals" space, and we need only look at the share price of Groupon and the travails of LivingSocial to gauge how well daily deals businesses are faring. It is therefore a very reasonable conclusion that the goodwill on Groupon's balance sheet could be ripe for a write-down.
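For context, goodwill arises as a residual: if an acquirer pays $100 million for a business whose identifiable net assets are valued at $60 million, the remaining $40 million is booked as goodwill, and that figure only holds up so long as the acquired business continues to perform.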

Next up are deferred tax assets of $85 million. These are strange accounting curiosities, the validity of which is an ongoing debate. But one thing is for sure: a deferred tax asset is only useful if a company will generate substantial profits (in which circumstance the deferred tax asset can offset a future tax charge). Groupon, having failed to generate any profit to date, is a pretty poor candidate for having what amounts to a profits tax credit sitting on its balance sheet.

Finally, "other intangibles" of $43 million. For a tech company this would normally include patents and developed technologies; however, Groupon, with zero research and development expense, is not a technology company. As such, there are some curious items in here: "subscriber relationships" of $21 million and "developed technology" of $11 million. Given the rapid decline of the daily deals business, the subscriber relationships are unlikely to have significant value (let alone the value Groupon ascribes to them); the developed technology is equally dubious, given Groupon's zero R&D.

We have seen what substantial write-downs can do to the profitability of a company like Microsoft (MSFT) or the share price of HP. What a major write-down would do to a struggling company like Groupon, which has yet to turn a profit, is rather ominous.

EBITDA – A Misleading Earnings Measure

This article was originally published on CoreEarnings.com .

What is EBITDA?

EBITDA (Earnings Before Interest, Tax, Depreciation and Amortization) is a commonly used earnings metric in financial analysis. The central motivation for using EBITDA is that it shows a firm's earnings from its core business activities and is closer to a cash-based metric.

The first thing to note about EBITDA is that it is a 'pro-forma' measure in that it is not an earnings measure sanctioned under generally accepted accounting principles (GAAP). Therefore EBITDA does not appear on an Income Statement and is instead calculated by the firm (or analysts) and reported separately.

The US accounting standards board (FASB) has a strict definition for calculating EBITDA, which is adding back only tax, interest, depreciation and amortization to Net Income as reported on the Income Statement. However, most firms choose to report 'Adjusted EBITDA' or 'Sustainable Earnings' in their earnings releases. These measures are non-standard and vary from firm to firm, making comparisons difficult. Typically, in addition to the standard EBITDA adjustments, these measures will also add back items deemed 'non-recurring', such as asset writedowns.
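As a simple illustration with made-up numbers, a firm reporting Net Income of $10 million after deducting $2 million of interest, $3 million of tax, $4 million of depreciation and $1 million of amortization would have EBITDA of 10 + 2 + 3 + 4 + 1 = $20 million; an 'Adjusted EBITDA' figure might then also add back, say, a $5 million asset writedown to arrive at $25 million.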

Myth: EBITDA is Cash Earnings

EBITDA is often used interchangeably with cash flow; however, it is definitely not a purely cash measure. Whilst some non-cash items such as depreciation are excluded, the earnings element in EBITDA is still based on the accruals concept, and so revenue can be recognised even when the cash has not yet been received. If cash earnings are required for analysis, there is already an item in the standard GAAP accounts – Cash Flow From Operating Activities – that shows the cash amount earned from a firm's core business. EBITDA is thus a strange hybrid of an accruals-based and a cash-based earnings measure.

It is also difficult to find a convincing rationale for excluding many 'non-cash' items such as depreciation. The motivation for removing depreciation and amortization is that they are non-cash expenses. However, this is very misleading: the asset being depreciated was originally purchased with cash, and its cost was not expensed upfront on the basis that the asset will be used over several years, so only a proportion of the cost is included as an expense each year. In the same way, asset impairments and writedowns are based on previous cash payments.

Myth: EBITDA is Sustainable

A common argument for using measures such as EBITDA is that they exclude one-off charges, such as asset writedowns, which do not recur. This logic has some merit if a company seldom has such charges, but the reality is that 'one-off' charges are frequent occurrences for some companies (such as Hewlett-Packard) and are indicative of a core issue with the business (in HP's case, the inability to generate organic growth and a poor acquisition policy). Excluding these charges would therefore give a misleading impression of the company.

Why the Popularity of EBITDA?

The main proponents of EBITDA and especially its derivatives such as Adjusted EBITDA are the reporting companies themselves. Using EBITDA measures allows firms to ignore a wide variety of costs and expenses and so flatter the business.

Groupon – A Tech Company With Zero Research and Development

This article was originally published on CoreEarnings.com .

In its offering prospectus Groupon billed itself as a 'local e-commerce' company which brings 'the brick and mortar world of local commerce onto the internet', which clearly defines it as an online tech business.

It is therefore nothing less than astonishing to see no Research & Development expense on the company's Income Statement. Other companies in the internet/technology space maintain their R&D expense at a relatively fixed percentage of Revenue. The below chart shows the 2012 Q4 R&D / Revenue ratio for several major internet/tech companies:

Most companies have an R&D / Revenue ratio of between 0.1 and 0.2. Apple is the outlier, with only 2% of Revenue spent on R&D; this is partly because its huge Revenue allows it to spend a large absolute amount on R&D while keeping the percentage low, and partly because it essentially outsources much of its R&D to suppliers.

In addition to being a large expense for a tech company, R&D is also kept at a very consistent percentage of Revenue over time, which is a clue to just how important an expense R&D is. The below chart of R&D/Revenue over time for some of Groupon's 'peers' shows how consistent the ratio is kept.

It is possible this is simply an issue of attribution and Groupon's R&D-type expenses are included under another heading such as 'Selling, general and administrative'; however, Groupon's filings make no mention of research-type activities being included in other expense headings.

Capital Expenditure

The first place to look for missing expenses is the Cash Flow Statement to see if it has been capitalized instead of expensed. Capital Expenditure is similar to R&D except that Cap Ex leads directly to the creation of a valuable asset (which can be software) whereas R&D is more general research which has not yet led to the creation of an asset. In terms of accounting, R&D is charged directly against Revenue to arrive at a firm’s profit, whereas Cap Ex is not charged against Revenue and instead creates an asset on the Balance Sheet. 

Thus, it can be advantageous for a company to classify R&D as Cap Ex, although this is considered at a minimum an aggressive accounting practice. Groupon's accounting policy is to treat all of its software development as Cap Ex, which is a sound policy consistent with the accounting policies of some other internet companies such as Amazon. Nevertheless, with all of the software development being capitalized, it remains a concern that not enough is being expensed on the Income Statement.

Using Cash Flow In Financial Analysis

This article was originally published on CoreEarnings.com 

Financial reporting is primarily focused on providing a detailed view of a firm's earnings. The accruals concept is applied to both revenues and expenses so that only income and expenses which are earned or incurred in the period are reflected in the accounts, irrespective of payments. For example, a software firm which pre-sold licenses for software which has not yet been delivered, and collected payment, would not recognise the sale in its accounts until the software was fully delivered.

The accruals concept is intended to provide a truer picture of a firm's financial performance; however, it has the drawback that earnings diverge from the cash generated by the business. The core concept is that earnings will eventually show up as cash flow.

The issue with earnings measures such as Net Income is that they include items such as depreciation and provisions for bad debts which require estimates, and so afford the business a degree of latitude in 'managing' its earnings. Cash Flow is a harder measure which must be backed up by movements in the firm's cash balances, which are straightforward to verify. Thus, to aid the understanding of financial performance, a Statement of Cash Flows is included in financial reports.

The most important item in the Cash Flow Statement is Cash Flow From Operating Activities, which shows the cash generated by the firm's core business activities. It is arrived at by starting with Operating Income and adjusting for all non-cash items. In financial analysis, capital expenditure is sometimes deducted from Cash Flow From Operating Activities to arrive at Free Cash Flow. The rationale for deducting capital expenditure is that it is a necessary recurring expense for the firm to generate its income.
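For example, a firm reporting Cash Flow From Operating Activities of $50 million and capital expenditure of $15 million would have Free Cash Flow of $35 million.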

There are two other balances in the Cash Flow Statement. Cash Flow From Financing Activities is the cash movement associated with the financing of the business, so the issuance or repayment of loans as well as equity issuance are shown in this balance. Cash Flow From Investing Activities relates to investments the firm may make in securities as well as investments in fixed assets.

 

Use of the Cash Flow Statement

For a business or financial analyst, the primary use of the Cash Flow Statement is in determining the quality of a firm's earnings. As noted above, a firm's earnings should eventually be reflected in its cash flow. Thus Free Cash Flow (or Cash Flow From Operating Activities) should trend with Operating Income; note from the below chart that Apple's Free Cash Flow closely tracks its Operating Income:


A prolonged divergence in these series could signal aggressive accounting to artificially boost Income.

Cash Flow Statement Limitations

A common misconception is that cash flow is immune from manipulation since the firm must verify the actual movement in its cash balance against bank statements. However, this is only true for the total movement in cash; the Cash Flow From Operating Activities balance can easily be manipulated by misclassifying items which should ordinarily be expensed. For example, WorldCom misclassified operating expenses as capital expenditure, which had the effect of removing the expense from both the Income Statement and Cash Flow From Operating Activities; instead the amount showed up in Cash Flow From Investing Activities, which is less scrutinised by financial analysts.

Finally, note that Cash Flow is not the same as EBITDA (Earnings Before Interest, Tax, Depreciation and Amortization), although financial reports sometimes use the two interchangeably. EBITDA still uses the accruals method to arrive at the earnings figure, which is then adjusted for non-cash items such as depreciation and amortization. As such, EBITDA is a hybrid between a regular accrual-based earnings measure and a cash-based measure of earnings.

Primer on Selecting Data Using Entity Framework

This question about selecting data using Entity Framework on StackOverflow got me thinking that a lot of LINQ to Entities code samples show very little consistency in how data is selected. Find(), Single(), SingleOrDefault(), First(), and FirstOrDefault() are often used interchangeably.

The first issue to address is using Where() in the data selection statement. Where() should only be used when several entities are required. There is no advantage in using Where() as below:

var db = new MyDBContext();
var employee = db.Employees.Where(x => x.ID == 1).Single();

This can be written as:

var employee = db.Employees.Single(x => x.ID == 1);

Note that there is a fundamental difference between using Where() in isolation and appending a standard LINQ operator such as Single() or ToList(). Where() and OrderBy() return IQueryable and IOrderedQueryable objects respectively, which represent queries to be executed against the database but have not yet requested or received any data.
Appending Single(), First(), ToList() or similar will subsequently execute the query:

var emps = db.Employees.Where(x => x.Salary > 100000); // emps is an IQueryable, no database interaction yet
//.....
var highSalaryEmps = emps.ToList(); //database query executed and the results returned as highSalaryEmps

This can have both performance and data integrity implications. In the above example, data is only read from the database when ToList() is called, intervening update/add operations will be reflected in the highSalaryEmps object which may not be the intended outcome.

In terms of performance, the primary issue to be aware of is using filtering operations after executing a large database query. For example:

var emps = db.Employees.ToList();
var highestSalaryEmp = emps.OrderByDescending(x => x.Salary).First();

This will first return all employees from the database and then perform the ordering and selection in memory. It is far better to just query the database using these criteria:

var highestSalaryEmp = db.Employees.OrderByDescending(x => x.Salary).First();

Find()

Find() is set apart from other element operations in LINQ to Entities in that it will first query the in-memory entities that EF is tracking and only hit the database in the event that no match is found. Thus it can have superior performance and should be used where possible – especially in lengthy operations when the entity is likely to already be in memory. Find() is unfortunately not very flexible and can only be used to look up an entity by its primary key:

var emp = db.Employees.Find(empId);

Note that you can use the Local property on a DbSet to access the in-memory data if you require more advanced logic :

var emp = db.Employees.Local.SingleOrDefault(x => x.Name == "Jude")
        ?? db.Employees.SingleOrDefault(x => x.Name == "Jude");

Note the use of the null-coalescing operator (??) above, which tests if the first statement evaluates to null and, if so, proceeds to execute the second statement. Thus EF will first test if there is a match in memory and only hit the database if no match is found.

Single or First

A lot of EF code samples use First() when it is clearly not optimal. First() is often used interchangeably with Single(), although they are quite different. Single() mandates that one and only one entity can be returned; if two matching records are found in the database an error will be thrown. First() is only concerned with returning the first record and performs no check to determine whether there are multiple matching records.
Thus Single() performs a data integrity test. In many scenarios there should only be one matching record in the database, and Single() will ensure this – although it will necessitate additional error handling logic, since Single() will throw an error if more than one match is found. Under the covers, Single() does this by using 'SELECT TOP (2)' for its queries; EF then inspects the returned records and throws an error if two records are returned. First(), by contrast, simply queries using 'SELECT TOP (1)'.
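A brief sketch of the difference, reusing the employee example from earlier (the ID value is illustrative):

//Single() throws an InvalidOperationException if zero or more than one employee matches -
//a useful data integrity check, but one that needs error handling where duplicates are possible
try
{
    var emp = db.Employees.Single(x => x.ID == 1);
}
catch (InvalidOperationException)
{
    //zero or multiple matches - handle or log accordingly
}

//First() simply returns the first match (SELECT TOP (1)) and performs no duplicate check
var firstEmp = db.Employees.First(x => x.ID == 1);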

As a side note, using .Take(1) instead of .First() makes no difference to the SQL used to query the database, but Take() returns an IQueryable and not an entity as First() does, so the issues discussed above will be relevant.

OrDefault()

SingleOrDefault() or FirstOrDefault() should only be used when it is not known whether the entity has already been persisted to the database. SingleOrDefault() will query the database for the entity and return null if no entity is found (note that an error will still be thrown if more than one match is found). A typical pattern is querying and returning an entity if it exists, or creating and adding it if there is no matching entity. In such a scenario the null-coalescing operator (??) introduced above can be very useful.
Take a scenario where users can create Tags for Articles: if the Tag exists it should be returned as an entity, but it needs to be created if it does not yet exist:

var tag = db.Tags.SingleOrDefault(x => x.TagName == "EntityFramework") ??
          new Tag { TagName = "EntityFramework", CreateDate = DateTime.UtcNow };

article.Tag = tag;
db.SaveChanges(); //Note the new tag will not be persisted to the database until SaveChanges is called.

Implementing PRG (POST-Redirect-GET) In ASP.NET MVC

Site visitors are conditioned to believe that hitting the browser's refresh button will simply refresh the page. In reality, the browser re-issues the last Http request. That's fine if it was just a GET request, which typically only loads a page, but if the request was a POST request, which typically updates values on the server, it could produce undesirable results such as a duplicate order.

Fortunately there is a very simple pattern to protect against a re-POST due to page refresh – the PRG (POST-Redirect-GET) pattern. This is very easily implemented in an MVC web framework such as ASP.NET MVC. The controller which handles the POST Request performs the necessary processing and then redirects to another controller using a GET Request which then renders the final view to be returned to the site user.

A simple implementation is shown below. Here the Order action accepts the POST request from the user's browser. If the ItemOrder object has no validation errors, some processing is performed and then the ASP.NET MVC RedirectToAction method is called, which redirects to the OrderSuccess action with the order id via a GET request. The OrderSuccess action returns the OrderSuccess view (i.e. page) to the user. Now, if the user refreshes the browser, a GET request for the OrderSuccess page is issued and no duplicate POST is made.

[HttpPost]
public ActionResult Order(ItemOrder itemOrder) {
    if (ModelState.IsValid) {
        //do processing
        return RedirectToAction("OrderSuccess", new { id = itemOrder.Id });
    }
    return View(itemOrder);  //if there is an error in the model the page is returned
}

public ViewResult OrderSuccess(int id) {

    return View(id); 
}

Preventing Parameter Tampering in ASP.NET MVC

'Never trust data submitted from a web page' is a core security concept for web development. When using an MVC framework, this statement takes on added relevance: MVC frameworks rely heavily on binding querystrings, route values and form values to in-code objects.

Take for example the scenario where an authenticated user needs to update their profile. A common pattern in MVC would be to first load the user in a controller and pass the user object to a View, which renders the html to be sent to the browser:

[Authorize]
public ActionResult UserDetails(int userId)
{
   var user = repository.GetUser(userId); 
   return View(user);
}

Another controller will then receive the Http Post from the update page and process the update:

[HttpPost]
[Authorize]
public ActionResult UserDetails(User user)
{
   repository.UpdateUser(user);
   return View(user);
}

This works seamlessly, and developers are often comforted by the [Authorize] attribute which mandates that the user must first have logged in to the service and been authenticated to access the update page.

The vulnerability here is that when the page is sent to the user, the html will contain the userid (typically in a hidden field); this forms part of the Http Post sent back to the server and is used as the primary key to determine which record to update. Using a tool such as Fiddler it is trivial to change this userid to any value and hence update any user record (attackers will often change a userid to 1, which is usually an admin account).

Accepting Data From a Trusted Source

The first defense against parameter tampering is to only accept data from a trusted source where possible. In the above scenario, this could mean only accepting the data from the ASP.NET Membership provider:

[HttpPost]
[Authorize]
public ActionResult UserDetails(User user)
{
    //Test if the submitted user.Name is the same as the currently logged in user
    if (user.Name == User.Identity.Name)
    {
        repository.UpdateUser(user);
        return View(user);
    }
    else
    {
        //error - the posted user does not match the logged in user
        return new HttpStatusCodeResult(403);
    }
}

Encrypting and Decrypting Key Values

A more generalised solution is to pass down an encrypted value to the webpage and then decrypt and test this against the key value on the Http Post. Take for example an orderId which is used to update an order's details (such as the address the order is to be sent to).

First, in the controller we can create an encrypted value of the key orderId field using the MachineKey API:

public ActionResult OrderDetails(int orderId)
{
   var order = repository.GetOrder(orderId); 

   var orderIdBytes = Encoding.UTF8.GetBytes(orderId.ToString());
   var encryptedOrderId = MachineKey.Encode(orderIdBytes, MachineKeyProtection.All);
   ViewBag.OrderIdEncrypt = encryptedOrderId;

   return View(order);
}

In the Razor view, simply add the ViewBag.OrderIdEncrypt in a hidden field within the form:

...
@Html.Hidden("OrderIdEncrypt", ViewBag.OrderIdEncrypt)
...

Now when the form is posted back there will be an OrderIdEncrypt value on the form. This can then be decrypted with the MachineKey API and compared to the submitted value of OrderId:

[HttpPost]
[Authorize]
public ActionResult OrderDetails(Order order, string orderIdEncrypt)
{
   var decryptedOrderIdBytes = MachineKey.Decode(orderIdEncrypt, MachineKeyProtection.All);
   var unEncryptedOrderId = Encoding.UTF8.GetString(decryptedOrderIdBytes);

   if (unEncryptedOrderId == order.Id.ToString())
   {
      repository.UpdateOrder(order);
      return View(order);
   }
   else
   {
      //test failed - send error message
      return new HttpStatusCodeResult(400);
   }
}

This solution could of course be cleaned up a lot. One enhancement would be to have a computed property on the Order model which returns an encrypted value of the orderId, and a helper method, IsOrderIdValid, which compares an encrypted orderId value to a plain text orderId. This would tidy the solution and minimise the amount of logic in the controller.
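A rough sketch of what that refactoring might look like (the EncryptedId property and IsOrderIdValid method are hypothetical names, not part of the original code; Encoding and MachineKey are used as in the controllers above):

public class Order
{
    public int Id { get; set; }

    //Computed property exposing an encrypted copy of the key for use in hidden fields
    public string EncryptedId
    {
        get
        {
            var idBytes = Encoding.UTF8.GetBytes(Id.ToString());
            return MachineKey.Encode(idBytes, MachineKeyProtection.All);
        }
    }

    //Compares an encrypted id posted back from the form against a plain text id
    public static bool IsOrderIdValid(string encryptedId, int plainId)
    {
        var decryptedBytes = MachineKey.Decode(encryptedId, MachineKeyProtection.All);
        return Encoding.UTF8.GetString(decryptedBytes) == plainId.ToString();
    }
}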

In addition to tampering with existing values in a form, MVC frameworks are also vulnerable to the addition of extra parameters which are then bound to objects. This is commonly referred to as a Mass Assignment vulnerability and will be dealt with in my next article.

Security – Simulating And Protecting Against A DoS Attack

On a recent project, I created a web service which parsed a set of financial statements into name/value pairs from an XBRL document. The complexity of the XBRL specification means that parsing an XBRL document takes approximately 90 seconds on a mid-spec server. To avoid users having to wait 90 seconds for the data, the parsed data was saved to a database. However, since the service covers over 5,000 companies it would need to store approximately 100,000 sets of financial statements, which places a strain on database resources as each set of statements contains several thousand data points. The obvious solution is to store only the most commonly requested financial statements in the database and load the others on demand, with a message to users that the financial statements are being loaded. However, this innocuous design decision creates a major security vulnerability.

Slow Loading Pages = Security Vulnerability
One of the hardest attacks to defend against is a Denial of Service (DoS) attack which is simply a repeated set of requests for a server resource such as a web page. The repeated requests generate an excess of load on the server which is then rendered incapable of serving other legitimate site users and hence the web site is taken down.
The difficulty in launching a DoS attack is that most web pages involve very little CPU processing and hence the flood of requests necessary to bring the site down would be extremely large and hard to generate. Attackers therefore will target slow loading pages which may be consuming a lot of server resources.

Simulating a DoS Attack
To test a web application's vulnerability to DoS attacks we can use a tool to issue multiple simultaneous requests. For this demo I will use LOIC (Low Orbit Ion Cannon), with the source code available at https://github.com/NewEraCracker/LOIC.
LOIC comes with a simple Windows application front-end, although you could just use the source code in the XXPFlooder.cs file to programmatically launch the attack (note the simplicity of the code for running DoS attacks – the XXPFlooder.cs file is a mere 89 lines).
Using the LOIC Windows app, I configured it to make Http requests to the /loadaccounts?id=1002 page on my development server:

This quickly overwhelmed the development server, bringing the CPU utilization close to 100%:

Solution
DoS attacks are one of the most challenging security issues as there are numerous different ways of launching a DoS attack. Typically the defense will be at the network or server level, but in the case of a resource intensive page there are several potential application level optimisations. Firstly, the resource intensive page can be configured as a separate application with a dedicated application pool. An attack will therefore be isolated to the specific page (or area of the website) and leave the rest of the site unaffected.
Next, the page itself can be protected by using a short term in-memory cache. After executing the resource intensive operation, the resulting object can be placed in memory for a short period and subsequent requests will be served from the in-memory cache. A basic outline of the code is below:

//Uses MemoryCache from System.Runtime.Caching
ObjectCache cache = MemoryCache.Default;
FinancialStatements financialStatements;

//First test if there is an object in the cache
if (cache["FinancialStatements" + id] == null)
{
    //If there is no object in the cache, create it and then load it into the cache
    financialStatements = LoadFinancialStatements(id); //This is the resource intensive operation
    CacheItemPolicy policy = new CacheItemPolicy();
    policy.AbsoluteExpiration = DateTimeOffset.Now + TimeSpan.FromMinutes(1);

    cache.Add(new CacheItem("FinancialStatements" + id, financialStatements), policy);
}
else
{
    //If the object is already in memory, simply load it from the
    //cache, which is a quick low impact operation.
    financialStatements = (FinancialStatements)cache["FinancialStatements" + id];
}

Thus the first request will run the resource intensive operation, but subsequent requests made within 60 seconds will simply be served from memory.

Now running the LOIC attack again results in the below CPU resource utilisation:

Note that CPU now only reaches a manageable 31% and only one of the cores is under significant load.