.NET news » Search results
Search results for query "data access" (34):
.NET Role-Based Security in a Production Environment
Explore the Data Access Options in Visual Studio 2008
In Visual Studio 2008, running on the .NET Framework 3.5, developers can do more than create DataReaders and DataSets; Microsoft has also added LINQ to SQL, Entity Framework, and ADO.NET Data Services, which leverages the first two. These new options, of course, mean that you have new syntaxes to learn. LINQ, which is built into Visual Basic and C#, has one implementation for LINQ to SQL and another for LINQ to Entities. In Entity Framework, you have the option to use LINQ to Entities or make queries in two other ways with Entity SQL…
Windows Live Delegated APIs
Profiling Database Activity in the Entity Framework
SOA Tips: Address Scalability Bottlenecks with Distributed Caching
Finally! Entity Framework working in fully disconnected N-tier web app
The Baker’s Dozen Doubleheader: 26 Productivity Tips for Optimizing SQL Server Queries (Part 2 of 2)
In part two of this series on optimizing SQL Server queries, I’m going to continue with some T-SQL scenarios that pit one approach against another. I’ll also look at what SQL developers can do to optimize certain data access scenarios, comparing temporary tables with table variables and stored procedures with views.
Sorting a Grid of Data in ASP.NET MVC
Last week's article, Displaying a Grid of Data in ASP.NET MVC, showed, step-by-step, how to display a grid of data in an ASP.NET MVC application. Last week's article started with creating a new ASP.NET MVC application in Visual Studio, then added the Northwind database to the project and showed how to use Microsoft's Linq-to-SQL tool to access data from the database. The article then looked at creating a Controller and View for displaying a list of product information (the Model).
This article builds on the demo application created in Displaying a Grid of Data in ASP.NET MVC, enhancing the grid to include bi-directional sorting. If you come from an ASP.NET WebForms background, you know that the GridView control makes implementing sorting as easy as ticking a checkbox. Implementing sorting in ASP.NET MVC involves a bit more work, but not significantly more, and with ASP.NET MVC we have more control over the grid and sorting interface's layout and markup, as well as the mechanism through which sorting is implemented. With the GridView control, sorting is handled through form postbacks, with the sorting parameters - what column to sort by and whether to sort in ascending or descending order - submitted as hidden form fields. In this article we'll use querystring parameters to indicate the sorting parameters, which means a particular sort order can be indexed by search engines, bookmarked, emailed to a colleague, and so on - things that are not possible with the GridView's built-in sorting capabilities.
Like with its predecessor, this article offers step-by-step instructions and includes a complete, working demo available for download at the end of the article. Read on to learn more!
Displaying a Paged Grid of Data in ASP.NET MVC
This article demonstrates how to display a paged grid of data in an ASP.NET MVC application and builds upon the work done in two earlier articles: Displaying a Grid of Data in ASP.NET MVC and Sorting a Grid of Data in ASP.NET MVC. Displaying a Grid of Data in ASP.NET MVC started with creating a new ASP.NET MVC application in Visual Studio, then added the Northwind database to the project and showed how to use Microsoft's Linq-to-SQL tool to access data from the database. The article then looked at creating a Controller and View for displaying a list of product information (the Model).
Sorting a Grid of Data in ASP.NET MVC enhanced the application by adding a view-specific Model (ProductGridModel) that provided the View with
the sorted collection of products to display along with sort-related information, such as the name of the database column the products were sorted by and whether the
products were sorted in ascending or descending order. The Sorting a Grid of Data in ASP.NET MVC article also walked through creating a partial view to
render the grid's header row so that each column header was a link that, when clicked, sorted the grid by that column.
In this article we enhance the view-specific Model (ProductGridModel) to include paging-related information: the current page being viewed, how many records to show per page, and how many total records are being paged through. Next, we create an action in the Controller that efficiently retrieves the appropriate subset of records to display, and then complete the exercise by building a View that displays that subset and includes a paging interface that allows the user to step to the next or previous page or jump to a particular page number. To render the numeric paging interface, we create and use a partial view.
Like with its predecessors, this article offers step-by-step instructions and includes a complete, working demo available for download at the end of the article. Read on to learn more!
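The paging arithmetic the article describes (current page, page size, total record count) is language-agnostic; here is a minimal sketch in Python, with hypothetical names, of computing the record subset and total page count:

```python
import math

def page_slice(records, page, page_size):
    """Return the records for 1-based `page`, plus the total page count."""
    total_pages = max(1, math.ceil(len(records) / page_size))
    page = min(max(page, 1), total_pages)          # clamp out-of-range requests
    start = (page - 1) * page_size
    return records[start:start + page_size], total_pages

products = [f"Product {i}" for i in range(1, 78)]  # 77 demo records
subset, pages = page_slice(products, page=3, page_size=10)
# subset holds "Product 21" .. "Product 30"; pages == 8
```

In a real data access layer the slice would be pushed into the query itself (e.g. LINQ's Skip/Take) rather than applied to an in-memory list, so only the requested page crosses the wire.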
Performance and Design Guidelines for Data Access Layers
Many of the problems you will face are actually data access layer problems, sometimes thinly disguised, sometimes in your face; it’s one of the broad patterns that you see in computer science. As the cliché says, it keeps rearing its ugly head.
Despite this, the same sorts of mistakes tend to be made in the design of such systems, so I’d like to offer a bit of hard-won advice on how to approach a data access problem. Mostly this is going to be in the form of patterns/anti-patterns, but nonetheless I hope it will be useful.
As always, in the interest of not writing a book, this advice is only approximately correct.
The main thing that you should remember is that access to the data will take two general shapes. In database parlance you might say some of the work will be OLTP-ish (online transaction processing) and some of the work will be OLAP-ish (online analytical processing). Put simply, there’s how you update pieces of your data and how you read chunks of it. And they have different needs.
At present it seems to me that people feel a strong temptation to put an OO interface around the data and expose that to customers. This can be ok as part of the solution if you avoid some pitfalls, so I suggest you follow this advice:
1. Consider the unit of work carefully
There are likely to be several typical types of updates. Make sure that you fetch enough data so that the typical cases do one batch of reads for the necessary data, modify the data locally, and then write that data back in a batch. If you read too much data you incur needless transfer costs; if you read too little you make too many round trips to do the whole job.
You may have noticed that I began with a model where you fetch some data, change it locally, and write it back. This is a fairly obvious thing to do given that you are going to want to do the write-back in probably a single transaction but it’s important to do this even if you aren’t working in a transacted system. Consider an alternative: if you were to provide some kind of proxy to the data to each client and then RPC each property change back to the server you are in a world of hurt. Now the number of round trips is very high and furthermore it’s impossible to write correct code because two people could be changing the very same object at the same time in partial/uncoordinated ways.
This may seem like a silly thing to do, but if the authoritative store isn’t a database it’s all too common for people to forget that the database rules exist for a reason and that they probably apply to any kind of store at all. Even if you’re using (e.g.) the registry or some other repository, you still want to think about unit-of-work and make it so that each normal kind of update is a single operation.
Whatever you do, don’t create an API where each field read or write is remoted individually. Besides the performance disaster this creates, it’s impossible to understand what will happen if several people are doing something like Provider.SomeValue += 1;
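The unit-of-work shape described above can be sketched language-agnostically. This Python illustration (all names hypothetical) models one batched read, local modification, and one batched write, where each store method stands in for a remote round trip:

```python
class FakeStore:
    """Stand-in for a remote store; each method call models one round trip."""
    def __init__(self):
        self.rows = {1: {"name": "Widget", "price": 10, "stock": 5}}
        self.round_trips = 0

    def read_batch(self, ids):
        self.round_trips += 1
        return {i: dict(self.rows[i]) for i in ids}   # hand back local copies

    def write_batch(self, changes):
        self.round_trips += 1
        for i, row in changes.items():
            self.rows[i].update(row)

store = FakeStore()
rows = store.read_batch([1])     # one round trip fetches all needed data
rows[1]["price"] = 12            # modify locally: no remote calls here
rows[1]["stock"] -= 1
store.write_batch(rows)          # one round trip writes it all back
# store.round_trips == 2, no matter how many fields changed
```

Contrast this with the per-property-RPC anti-pattern, where the same edit would cost one round trip per field and leave concurrent writers stepping on each other mid-update.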
2. Consider your locking strategy
Implicit in the discussion above is some notion of accepting or rejecting writes because the data has changed out from under you. This is a normal situation and making it clear that it can and does happen and should be handled makes everyone’s life simpler. This is another reason why an API like Provider.SomeValue = 1 to do the writes is a disaster. How does it report failure? And if it failed, how much failed?
You can choose an optimistic locking strategy or something else but you’ll need one. A sure sign that you have it right is that the failure mode is obvious, and the recovery is equally obvious.
I once had a conversation with Jim Gray where I told him how ironic it was to me that the only reason transactions could ever succeed at all in a hot system was that they had the option of failing. Delicious irony that.
Remember, even data from a proxy isn’t really live. It’s an illusion. The moment you say “var x = provider.X;” your ‘x’ is already potentially stale by the time it’s assigned. Potentially stale data is the norm; it’s just a question of how stale and how you recover. That means some kind of isolation and locking choice is mandatory.
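One common way to make the failure mode obvious is optimistic concurrency with a version number; here is a minimal Python sketch (all names hypothetical) in which a stale write fails loudly and the recovery is an equally obvious re-read and retry:

```python
class StaleWriteError(Exception):
    """Raised when the data changed out from under the writer."""

class VersionedStore:
    def __init__(self, value):
        self.value, self.version = value, 0

    def read(self):
        return self.value, self.version            # snapshot plus its version

    def write(self, new_value, expected_version):
        if expected_version != self.version:       # someone else wrote first
            raise StaleWriteError("re-read and retry")
        self.value, self.version = new_value, self.version + 1

store = VersionedStore(100)
v1, ver1 = store.read()
v2, ver2 = store.read()                            # two concurrent readers
store.write(v1 + 1, ver1)                          # first writer wins
try:
    store.write(v2 + 1, ver2)                      # second writer is stale
except StaleWriteError:
    v3, ver3 = store.read()                        # obvious recovery: re-read
    store.write(v3 + 1, ver3)
# store.value == 102: both increments applied, neither silently lost
```

Note how this also answers the “how does Provider.SomeValue = 1 report failure?” question: the write either applies in full or raises, and the caller knows exactly which.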
3. Don’t forget the queries
Even if you did everything else completely correctly, you’ve still only built half the system if all you can do is read and modify entities. The OLAP part of your system will want to do bulk reads like “find me all the photos for this week”. When doing these types of accesses it is vital to take advantage of their read-only aspect. Do not create transactable objects; just bring back the necessary data in a raw form. Simple arrays of primitives are the best choice; they have the smallest overheads. Do not require multiple round trips to get commonly required data, or the fixed cost of the round trip will end up dwarfing the actual work you’re trying to do.
These queries are supposed to represent a snapshot in time according to whatever isolation model your data has (which comes back to the requirements created by your use cases and your unit of work). If you force people to use your object interface to read raw data you will suffer horrible performance and you will likely have logically inconsistent views of your data. Don’t do that.
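A sketch of this read-side advice, again in Python with hypothetical names: a bulk query that returns plain tuples in one call instead of hydrating transactable objects one at a time:

```python
from datetime import date, timedelta

# Hypothetical photo table flattened to raw rows; in a real system this
# would be a single SQL query, not an in-memory list.
PHOTOS = [
    (1, "beach.jpg", date(2024, 6, 3)),
    (2, "hike.jpg",  date(2024, 6, 5)),
    (3, "city.jpg",  date(2024, 6, 12)),
]

def photos_for_week(rows, week_start):
    """One 'round trip': return raw (id, name) tuples; no objects, no locks."""
    week_end = week_start + timedelta(days=7)
    return [(pid, name) for pid, name, taken in rows
            if week_start <= taken < week_end]

result = photos_for_week(PHOTOS, date(2024, 6, 1))
# result == [(1, "beach.jpg"), (2, "hike.jpg")]
```

The caller gets a consistent snapshot of exactly the columns it needs in one call; nothing here can be partially updated, so nothing needs to be transactable.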
One of the reasons that systems like Linq to SQL were as successful as they were (from various perspectives I think) is that they obeyed these general guidelines:
- you can get a small amount of data or a large amount of data
- you can get objects or just data
- you can write back data chunks in units of your choice
- the failure mode for reads/writes is clear, easy to deal with, and in-your-face (yes, reads can fail, too)

Other data layers, while less general no doubt, would do well to follow the same set of rules.

