.NET news » Search results
Search results for query "form" (46):
Using ASP.NET, Membership, and jQuery to Determine Username Availability
Chances are, at some point you've tried creating a new user account on a website and were told that the username you selected was already taken. This is especially common on very large websites with millions of members, but can happen on smaller websites with common usernames, such as people's names or popular words or phrases in the lexicon of the online community that frequents the website. If the user registration process is short and sweet, most users won't balk when they are told their desired username has already been taken - they'll just try a new one. But if the user registration process is long, involving several questions and scrolling, it can be frustrating to complete the registration process only to be told you need to return to the top of the page to try a different username.
Many websites use Ajax techniques to check whether a visitor's desired username is available as soon as they enter it (rather than waiting for them to submit the form). This article shows how to implement such a feature in an ASP.NET website using Membership and jQuery, and includes a downloadable demo that implements this behavior in an ASP.NET WebForms application that uses the CreateUserWizard control to register new users. The concepts, however, apply equally to ad-hoc user registration pages and to ASP.NET MVC.
Read on to learn more!
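The article's demo wires this up with jQuery calling back to a Membership-backed service. As a framework-free sketch of just the client-side decision logic (every name here is invented for illustration; the real check would be an Ajax call to the server, not an in-memory list):

```javascript
// Minimal sketch of the availability check. In the real article the
// lookup is an Ajax request to a server endpoint that consults
// Membership.GetUser(username); here we simulate it with a plain list
// so the flow can run anywhere.
function makeAvailabilityChecker(takenNames) {
  return function isAvailable(username) {
    var normalized = username.trim().toLowerCase();
    if (normalized.length === 0) return false; // nothing entered yet
    return takenNames.indexOf(normalized) === -1;
  };
}

// Simulated set of already-registered usernames.
var isAvailable = makeAvailabilityChecker(['scott', 'admin', 'jisun']);

console.log(isAvailable('Scott'));   // false - already taken
console.log(isAvailable('newuser')); // true  - free to register
```

In the browser you would run the check from the username textbox's keyup handler and update a status label next to the field, rather than logging to the console.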
Implementing the Store Locator Application Using ASP.NET MVC (Part 2)
Last week's article, Implementing the Store Locator Application Using ASP.NET MVC (Part 1), started
a two-part article series that walked through converting my ASP.NET store locator application from
WebForms to ASP.NET MVC. Last week's article stepped through the first tasks in porting the store locator application to ASP.NET MVC, including: creating the new
project; copying over stylesheets, the database, scripts, and other shared content from the WebForms application; building the HomeController; and coding
the Index and StoreLocator actions and views.
Recall that the StoreLocator action and view prompt the user to enter an address for which to find nearby stores. On form submission, the action interfaces
with the Google Maps API's geocoding service to determine if the entered address corresponds to known latitude and
longitude coordinates. If so, the user is redirected to the StoreLocatorResults action (which we create in this article) that displays the nearby stores in
both a grid and as markers on a map. Unlike the StoreLocator action created in Part 1, the StoreLocatorResults action uses a more intricate
model and a strongly-typed view.
Focusing and Selecting the Text in ASP.NET TextBox Controls
When a browser displays the HTML sent from a web server it parses the received markup into a Document Object Model, or DOM, which models the markup as a hierarchical structure.
Each element in the markup - the <form> element, <div> elements, <p> elements, <input>
elements, and so on - is represented as a node in the DOM and can be programmatically accessed from client-side script. What's more, the nodes that make up the DOM
have functions that can be called to perform certain behaviors; which functions are available depends on what type of element the node represents.
One function common to nearly all node types is focus, which gives keyboard focus to the corresponding element. The focus function is commonly
used in data entry forms, search pages, and login screens to place the user's keyboard cursor in a particular textbox when the web page loads, so that the user can start
typing a search query or username without first having to click the textbox with the mouse. Another useful function is select, which is available for
<input> and <textarea> elements and selects the contents of the textbox.
This article shows how to call an HTML element's focus and select functions. We'll look at calling these functions directly from client-side
script as well as how to call these functions from server-side code.
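A small client-side sketch of the idea (the element id txtSearch is an assumption; in an ASP.NET page you would use the control's ClientID):

```javascript
// Give a textbox keyboard focus and select its contents. Written as a
// plain function so the behavior is easy to see and exercise.
function focusAndSelect(textBox) {
  // Guard against the element not existing or lacking focus().
  if (!textBox || typeof textBox.focus !== 'function') return false;
  textBox.focus();                 // give the textbox keyboard focus
  if (typeof textBox.select === 'function') {
    textBox.select();              // highlight its current contents
  }
  return true;
}

// In the browser you would call this when the page loads:
//   window.onload = function () {
//     focusAndSelect(document.getElementById('txtSearch'));
//   };
```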
Managing View State in ASP.NET 4 Using the New ViewStateMode Property
The ASP.NET Web Forms model strives to encapsulate the lower-level complexities involved in building a web application. Features like server-side event handlers, the page lifecycle, and view state effectively blur the line between the client and the server, simplify state management, and free the developer from worrying about HTTP, requests and responses, and similar matters. While these facets of the Web Forms model allow for rapid application development and make ASP.NET more accessible to developers with a desktop application background, their behavior can impact your website's performance.
View state is perhaps the most important - yet most misunderstood - feature of the Web Forms model. In a nutshell, view state is a technique that automatically persists
programmatic changes to the Web controls on a page. By default, this state is serialized into a base-64 encoded string and included as a hidden <input>
field in the Web Form. On postback, this state information is returned to the server as part of the POST request, at which point the server can deserialize it and
reapply the persisted state to the controls in the control hierarchy. (If that last paragraph made perfect sense, great! If not, consider reading
my article, Understanding ASP.NET View State, and Dave Reed's
article, ViewStateMode in ASP.NET 4, before continuing.)
One potential issue with view state is that it can greatly bloat the size of your web pages. Each new version of ASP.NET seems to include new techniques for
managing view state's footprint. ASP.NET 4 adds a new property to all Web controls, ViewStateMode,
which allows developers to disable view state for a page by default and then selectively enable it for specific controls. This article reviews existing view
state-related properties and then delves into the new ViewStateMode property.
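To make the round trip concrete: ASP.NET serializes view state with its own binary formatter (not JSON), but the sketch below mimics the idea of persisting control state as a base-64 string in a hidden field and reapplying it on postback. All names are invented for illustration; the Buffer API is Node-specific.

```javascript
// Illustrative only: serialize some "control state" into a base-64
// string (what travels in the __VIEWSTATE hidden field), then
// deserialize it on "postback".
function serializeViewState(state) {
  var json = JSON.stringify(state);
  return Buffer.from(json, 'utf8').toString('base64'); // hidden field value
}

function deserializeViewState(hiddenFieldValue) {
  var json = Buffer.from(hiddenFieldValue, 'base64').toString('utf8');
  return JSON.parse(json); // reapplied to the control hierarchy on postback
}

var state = { lblMessage: { Text: 'Hello, world!' } };
var hidden = serializeViewState(state);      // travels to the browser
var restored = deserializeViewState(hidden); // recovered on postback
console.log(restored.lblMessage.Text);       // "Hello, world!"
```

The sketch also makes the bloat problem visible: every property you persist this way grows the hidden field, which is exactly what ViewStateMode lets you opt out of per control.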
Building Interactive User Interfaces with Microsoft ASP.NET AJAX: Rebinding Client-Side Events After a Partial Page Postback
The UpdatePanel is the workhorse of the ASP.NET Ajax library. It is responsible for defining regions of a web page that trigger partial page postbacks (as opposed to full page postbacks). Such partial page postbacks transfer less information between the client and server and have their user interfaces updated seamlessly, thereby leading to a more interactive user experience. (For more information on UpdatePanels, refer to Using the UpdatePanel.) One side-effect of a partial page postback is that the HTML elements within the UpdatePanel are replaced with the markup returned on postback. This behavior is not noticeable and is not an issue unless you have client-side event handlers wired up to the elements within the UpdatePanel. Such client-side event handlers are lost after a partial page postback.
Consider a very simple UpdatePanel that contains just a TextBox and a Button. Furthermore, assume we have JavaScript on the page that creates an event handler for the
TextBox's focus and blur events, which "highlights" the TextBox when it receives focus and removes the highlight when it loses focus. Initially, this
script works as expected - clicking on the TextBox will "highlight" it. However, things break down once the Button is clicked. When the Button is clicked the UpdatePanel
triggers a partial page postback and submits an asynchronous HTTP request back to the server. The requested ASP.NET page then goes through its life-cycle again, but this time
only the markup in the UpdatePanel (and the hidden form fields on the page) are returned to the browser. The UpdatePanel then overwrites its existing markup with the
markup just returned from the server. Unfortunately, this overwriting obliterates the focus and blur client-side event handlers, meaning that
selecting the TextBox no longer highlights it.
In short, if there are client-side event handlers attached to HTML elements within an UpdatePanel it is imperative that they be rebound after a partial page postback. This article looks at three different ways to accomplish this.
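One of the approaches the article covers is moving the wire-up logic into a function and re-running it after every partial page postback (in an ASP.NET AJAX page, the framework calls a global pageLoad() function on the initial load and after each asynchronous postback). Here is that shape with stand-in objects instead of real DOM elements, so the flow is visible outside a browser:

```javascript
// Bind the highlight handlers to a (stand-in) textbox element.
function bindHighlightHandlers(textBox) {
  textBox.onfocus = function () { textBox.highlighted = true; };
  textBox.onblur  = function () { textBox.highlighted = false; };
}

// Initial page load - in ASP.NET AJAX, pageLoad() would call this.
var txt = { highlighted: false };
bindHighlightHandlers(txt);
txt.onfocus();                 // user clicks in: highlighted becomes true

// Partial postback: the UpdatePanel replaces its markup, so the element
// is brand new and the handlers on the old one are gone...
txt = { highlighted: false };
// ...which is why pageLoad() must bind the handlers again.
bindHighlightHandlers(txt);
txt.onfocus();                 // highlighting works again
```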
Performance and Design Guidelines for Data Access Layers
Many of the problems you will face are actually data access layer problems, sometimes thinly disguised, sometimes in your face; building a data access layer is one of the broad patterns you see in computer science - as the cliché says, it keeps rearing its ugly head.
Despite this, the same sorts of mistakes tend to be made in the design of such systems so I’d like to offer a bit of hard-won advice on how to approach a data access problem. Mostly this is going to be in the form of patterns/anti-patterns but nonetheless I hope it will be useful.
As always, in the interest of not writing a book, this advice is only approximately correct.
The main thing that you should remember is that access to the data will take two general shapes. In database parlance you might say some of the work will be OLTP-ish (online transaction processing) and some will be OLAP-ish (online analytical processing). Put simply, there's how you update pieces of your data and how you read chunks of it - and they have different needs.
At present it seems to me that people feel a strong temptation to put an OO interface around the data and expose that to customers. This can be ok as part of the solution if you avoid some pitfalls, so I suggest you follow this advice:
1. Consider the unit of work carefully
There are likely to be several typical types of updates. Make sure that you fetch enough data so that the typical cases do one batch of reads for the necessary data, modify the data locally, and then write that data back in a batch. If you read too much data, you incur needless transfer costs; if you read too little, you make too many round trips to do the whole job.
You may have noticed that I began with a model where you fetch some data, change it locally, and write it back. This is a fairly obvious thing to do given that you are going to want to do the write-back in probably a single transaction but it’s important to do this even if you aren’t working in a transacted system. Consider an alternative: if you were to provide some kind of proxy to the data to each client and then RPC each property change back to the server you are in a world of hurt. Now the number of round trips is very high and furthermore it’s impossible to write correct code because two people could be changing the very same object at the same time in partial/uncoordinated ways.
This may seem like a silly thing to do, but if the authoritative store isn't a database it's all too common for people to forget that the database rules exist for a reason and that they probably apply to any kind of store at all. Even if you're using (e.g.) the registry or some other repository, you still want to think about the unit of work and make it so that each normal kind of update is a single operation.
Whatever you do, don't create an API where each field read or write is remoted to the server. Besides the performance disaster this creates, it's impossible to understand what will happen if several people are doing something like Provider.SomeValue += 1; at the same time.
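The batching advice can be sketched as follows. The store and its API are invented for illustration; the point is simply that the typical update costs one read round trip plus one write round trip, no matter how many records or fields are touched:

```javascript
// A toy data store that counts round trips, to contrast batched
// access with per-field remoting.
function makeStore(records) {
  var roundTrips = 0;
  return {
    readBatch: function (ids) {          // one round trip, many records
      roundTrips++;
      return ids.map(function (id) {
        return Object.assign({ id: id }, records[id]);
      });
    },
    writeBatch: function (updated) {     // one round trip writes them all
      roundTrips++;
      updated.forEach(function (r) { records[r.id] = r; });
    },
    trips: function () { return roundTrips; }
  };
}

var store = makeStore({ 1: { qty: 5 }, 2: { qty: 8 } });
var items = store.readBatch([1, 2]);         // fetch enough data up front
items.forEach(function (r) { r.qty += 1; }); // modify locally - no trips
store.writeBatch(items);                     // write back in one batch
console.log(store.trips());                  // 2 round trips total
```

A per-property RPC design would instead pay one round trip per field read and another per field write, and still couldn't coordinate two concurrent writers.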
2. Consider your locking strategy
Implicit in the discussion above is some notion of accepting or rejecting writes because the data has changed out from under you. This is a normal situation and making it clear that it can and does happen and should be handled makes everyone’s life simpler. This is another reason why an API like Provider.SomeValue = 1 to do the writes is a disaster. How does it report failure? And if it failed, how much failed?
You can choose an optimistic locking strategy or something else but you’ll need one. A sure sign that you have it right is that the failure mode is obvious, and the recovery is equally obvious.
I once had a conversation with Jim Gray where I told him how ironic it was to me that the only reason transactions could ever succeed at all in a hot system was that they had the option of failing. Delicious irony that.
Remember, even data from a proxy isn’t really live. It’s an illusion. The moment you say “var x = provider.X;” your ‘x’ is already potentially stale by the time it’s assigned. Potentially stale data is the norm, it’s just a question of how stale and how do you recover. That means some kind of isolation and locking choice is mandatory.
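A minimal optimistic-locking sketch, with all names invented: every record carries a version, a write succeeds only if the version the writer read is still current, and the failure mode is explicit - the caller sees the rejection and recovers by re-reading.

```javascript
// Versioned store: tryWrite() rejects writes based on stale reads.
function makeVersionedStore(initialValue) {
  var value = initialValue, version = 1;
  return {
    read: function () { return { value: value, version: version }; },
    tryWrite: function (newValue, expectedVersion) {
      if (expectedVersion !== version) {
        return false;          // data changed under you: obvious failure,
      }                        // obvious recovery (re-read and retry)
      value = newValue;
      version++;
      return true;
    }
  };
}

var store = makeVersionedStore(10);
var a = store.read();                  // two clients read the same snapshot
var b = store.read();
store.tryWrite(a.value + 1, a.version);          // first write wins
var ok = store.tryWrite(b.value + 5, b.version); // stale version
console.log(ok);                       // false - b must re-read and retry
```

Contrast this with a bare Provider.SomeValue = 1 setter, which would have silently let the second writer clobber the first with no way to report that anything went wrong.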
3. Don’t forget the queries
Even if you did everything else completely correctly, you've still only built half the system if all you can do is read and modify entities. The OLAP part of your system will want to do bulk reads like "find me all the photos for this week". When doing these types of accesses it is vital to take advantage of their read-only nature. Do not create transactable objects; just bring back the necessary data in a raw form. Simple arrays of primitives are the best choice; they have the smallest overheads. Do not require multiple round trips to get commonly required data, or the fixed cost of the round trip will end up dwarfing the actual work you're trying to do.
These queries are supposed to represent a snapshot in time according to whatever isolation model your data has (which comes back to the requirements created by your use cases and your unit of work). If you force people to use your object interface to read raw data you will suffer horrible performance and you will likely have logically inconsistent views of your data. Don’t do that.
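As a sketch of the bulk-read side (data and names invented): a query path that projects a single snapshot down to plain values, instead of instantiating a tracked, transactable object per row.

```javascript
// A snapshot of raw photo data, as it might come back from one
// bulk read.
var photos = [
  { id: 1, week: 12, bytes: 52400 },
  { id: 2, week: 12, bytes: 71800 },
  { id: 3, week: 13, bytes: 61000 }
];

// "Find me all the photos for this week" as one raw projection:
// the caller gets a simple array of primitives, not entity wrappers.
function photoIdsForWeek(week) {
  return photos
    .filter(function (p) { return p.week === week; })
    .map(function (p) { return p.id; });
}

console.log(photoIdsForWeek(12)); // [ 1, 2 ]
```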
One of the reasons that systems like Linq to SQL were as successful as they were (from various perspectives I think) is that they obeyed these general guidelines:
- you can get a small amount of data or a large amount of data
- you can get objects or just data
- you can write back data chunks in units of your choice
- the failure mode for reads/writes is clear, easy to deal with, and in-your-face (yes, reads can fail, too)

Other data layers, while less general no doubt, would do well to follow the same set of rules.

