XAML Playground
about XAML and other Amenities

Silverlight 3.0 RTW: A new HTTP stack ready for REST

2009-07-10T17:45:00+01:00 by Andrea Boschin

The paradigm behind REST services is fascinating because it lets you manipulate resources over the network relying only on URIs and HTTP methods. For those of you who are not aware of what REST means, I suggest thinking of the web itself, because it is the most RESTful thing you know. When you point your browser to a "resource" on the Internet, for example a product in an e-commerce website, you write a URL in the browser that uniquely identifies the product. You can now imagine using HTTP methods to operate on this resource: the GET verb retrieves information about it, PUT updates the resource, POST creates it and finally DELETE removes the resource from the web. In a few words these are REST operations, and it is very cool to be able to use this paradigm to access resources in a database. ADO.NET Data Services was born to implement this paradigm in the .NET Framework 3.5 SP1, and many libraries have been built to support different technologies: Windows Forms, WPF, ASP.NET. Obviously Silverlight 2.0 also comes with an implementation of a Data Services client to operate with REST resources, but it is not as RESTful as you may believe the first time you use it.

The problem with Silverlight 2.0, due to limitations of its HTTP stack coming from the use of the browser API, is that it cannot handle all the required HTTP methods but only the GET and POST verbs. To work around this limitation the Data Services library simply uses POST to send PUT and DELETE operations. This workaround violates the REST paradigm and implies you are not able to use third-party REST resources.

What I've briefly explained in the previous paragraphs is the main reason we really need a new HTTP stack: to overcome these limitations and to be able to implement a full REST interface. This is not the only benefit we gain from the addition of new verbs: think for example of a new way to upload files to the server using the PUT method.

The latest release of Silverlight 3.0 has new features going in the direction I've just explained. In the next paragraphs I will explain how to use them by exploring the brand new HTTP stack, called ClientHttp.

BrowserHttpStack vs ClientHttpStack

First of all we need to know that the new HTTP stack is completely separate from the old one. This is an important point, because it means the old stack remains unaltered and our applications relying on it continue to work without any change. Everything we already know keeps working on top of the old stack and does not use the new one.

The ClientHttp stack, like the BrowserHttp stack (these are the official names of the two actors), is created through a static class called WebRequestCreator. This class can create instances of HttpWebRequest for both the browser and the client stack. Here is a sample showing the usage of the class.

// create an instance of the Client stack
HttpWebRequest rq = (HttpWebRequest)WebRequestCreator.ClientHttp.Create(uri);

// ...or...

// create an instance of the Browser stack
HttpWebRequest rq = (HttpWebRequest)WebRequestCreator.BrowserHttp.Create(uri);

The HttpWebRequest created by this call is exactly the same one we have been using since Silverlight 2.0, so it can easily replace the browser stack in every existing application with a few lines of code. We can immediately test the new stack by assigning the Method property the "PUT" value:

rq.Method = "PUT";

As we expect, while the old stack throws a "method not supported" exception, the new stack accepts this assignment. Not all HTTP methods are allowed, though: using TRACE, TRACK or CONNECT raises an error, probably for security reasons. We will go deeper into some of the available methods in the next paragraphs.
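Just to make the difference concrete, here is a minimal sketch of mine, assuming a valid absolute uri, that shows both behaviours side by side (the exception type caught for the browser stack is my assumption based on the "method not supported" error mentioned above, so treat it as illustrative):

// the browser stack rejects verbs other than GET and POST
HttpWebRequest browserRq = (HttpWebRequest)WebRequestCreator.BrowserHttp.Create(uri);
try
{
    browserRq.Method = "PUT"; // throws: the verb is not supported here
}
catch (NotSupportedException)
{
    // handle or log the unsupported verb
}

// the client stack accepts PUT (and DELETE, HEAD, ...)
HttpWebRequest clientRq = (HttpWebRequest)WebRequestCreator.ClientHttp.Create(uri);
clientRq.Method = "PUT";      // accepted
// clientRq.Method = "TRACE"; // would still raise an error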

But this is only the most evident difference. Looking at the properties of the HttpWebRequest we will find a CookieContainer, which lets us manage the cookies of the request. The old stack relies on cookies issued by the browser and shares them with the browser itself. In the new stack we have full control over the cookies: we can access cookies sent by the server or create our own cookies and send them. The drawback is that we cannot share cookies between the client stack and the browser stack; we can only rely on the client stack's automatic management of server cookies to support forms authentication scenarios.

The next sample shows how to use the CookieContainer to send our own cookies:

HttpWebRequest rq = (HttpWebRequest)WebRequestCreator.ClientHttp.Create(uri);
rq.Method = "GET";
rq.CookieContainer = new CookieContainer();
rq.CookieContainer.Add(
    uri, new Cookie("Author", "Andrea Boschin"));
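Since we have full control over the container, we can also inspect the cookies the server issued once the response has completed. This is just a minimal sketch, assuming the request above has already received its response; which cookies are actually present obviously depends on the server:

// cookies issued by the server for this uri end up in the same container
CookieCollection serverCookies = rq.CookieContainer.GetCookies(uri);

foreach (Cookie cookie in serverCookies)
{
    // e.g. inspect a forms authentication cookie
    string name = cookie.Name;
    string value = cookie.Value;
}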

Another great piece of news, which many people will really appreciate, is that the new stack is capable of retrieving status codes from the HTTP response. One of the most common problems with Silverlight 2.0, especially the first time someone tries to use the HttpWebRequest, is the inability to get errors from the server. This is because the browser does not forward all status codes to the plugin, but only 404 (Not Found) and 200 (OK). The forums are filled with disoriented people not understanding why they get a 404 from the server while they are calling an existing URI, and the most common suggestion is to use Fiddler to look for the error in the traced call.

To read status codes from the response we have to catch the WebException and then get the HttpWebResponse from the exception. This may seem a bit strange but it is really simple. Here is the code:

rq.BeginGetResponse(
    r =>
    {
        try
        {
            WebResponse rs = rq.EndGetResponse(r);
            // do something with response...
        }
        catch (WebException ex)
        {
            HttpWebResponse rs = (HttpWebResponse)ex.Response;

            // note: this callback runs on a background thread, so in a real
            // application marshal the MessageBox call to the UI thread
            // (see the Dispatcher usage later in this article)
            MessageBox.Show(
                string.Format("Webserver returned status code {0}: '{1}'",
                    (int)rs.StatusCode,
                    rs.StatusDescription));
        }
    }, rq);

These features are really useful, but they are not used by the existing classes: WCF and the WebClient can only use the classic HTTP stack. To get access to the ClientHttp stack we have to use the HttpWebRequest directly and deal with problems like thread synchronization that the WebClient solves for us. Still, the new stack is very important, so now it is time for some samples about the usage of the HTTP methods.

GET, POST, PUT & DELETE

In the sample solution attached at the end of this article I've enclosed a sort of file system browser application. It uses the new HTTP stack to make calls to the server and retrieve XML containing the response. This kind of application was possible also with Silverlight 2.0, but to explain the usage of the HTTP methods I've used them to mimic some file system operations: GET lists the content of a folder, PUT uploads a file or creates a directory, POST is used to send credentials and finally DELETE removes files and directories.

Doing an HTTP call is very similar to using the previous stack. We need to create an instance of the stack, set the method we want to use and then create a request as the server expects it. The following sample makes a call to get the directory content. I've split the operation into two methods, one doing the raw HTTP call and the other using this low-level method to create the request and parse the result. This is the low-level DoGet() method:

private static void DoGet(Uri uri, Action<Stream> success, Action<Exception> fail)
{
    try
    {
        HttpWebRequest rq = (HttpWebRequest)WebRequestCreator.ClientHttp.Create(uri);
        rq.Method = "GET";
        rq.CookieContainer = new CookieContainer();
        rq.BeginGetResponse(
            new AsyncCallback(
                r =>
                {
                    try
                    {
                        WebResponse rs = rq.EndGetResponse(r);
                        success(rs.GetResponseStream());
                    }
                    catch (WebException ex)
                    {
                        fail(CreateFailFromWebException(ex));
                    }
                }), null);
    }
    catch (Exception ex)
    {
        fail(ex);
    }
}

The code in this listing shows the usual way to place an HTTP call. First of all, the DoGet method requires that I specify two callback methods to be called when the request succeeds or fails. This pattern helps to simplify the code by avoiding the handling of multiple event handlers; using lambda expressions the code becomes very compact.

After the creation of the client stack I initialize the Method property with the GET verb, then I create an instance of a CookieContainer to allow the request to handle cookies. Finally, calling BeginGetResponse places the call to the specified URI. The callback runs on a background thread; in my case the body of the inner lambda expression contains the code executed on that thread. It calls the EndGetResponse method (which completes the call to the web server) and finally asks for the Stream containing the body of the response.

As I explained in the previous paragraph, I have to catch the WebException to read the HTTP status codes and translate it into a call to the fail() callback. You have to be aware that the two try-catch blocks are executed on different threads, so the external block is unable to catch exceptions coming from the body of the lambda expression. This is the reason for having two separate blocks calling the fail method. Now look at the following code:

public static void GetDirectory(string path, Action<FileSystemItem[]> success, Action<Exception> fail)
{
    Uri uri = new Uri(
        DataSource.ServiceUri +
        "?action=directory" +
        "&path=" + HttpUtility.UrlEncode(path));

    DataSource.DoGet(uri,
        r =>
        {
            try
            {
                XDocument document = XDocument.Load(r);
                IEnumerable<FileSystemItem> directories = DataSource.DeserializeItems(document);
                Deployment.Current.Dispatcher.BeginInvoke(() => success(directories.ToArray()));
            }
            catch (Exception ex)
            {
                Deployment.Current.Dispatcher.BeginInvoke(() => fail(ex));
            }
        },
        e => Deployment.Current.Dispatcher.BeginInvoke(() => fail(e)));
}

This is the high-level method to get the directory content. It uses the same asynchronous pattern but maps the calls to the callback methods onto the UI thread using the Dispatcher. Since the HttpWebRequest makes its calls on a different thread, we need to marshal the call back to the UI thread to avoid cross-thread exceptions when we try to reach UI components.
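Before moving to PUT, note that the DELETE call the sample uses to remove files and directories follows exactly the same pattern as DoGet; only the verb changes. The attached solution contains the real implementation, so the following is just a minimal sketch of mine (the method name is an illustrative assumption and the error handling is simplified):

private static void DoDelete(Uri uri, Action<Stream> success, Action<Exception> fail)
{
    try
    {
        // same pattern as DoGet, only the verb changes
        HttpWebRequest rq = (HttpWebRequest)WebRequestCreator.ClientHttp.Create(uri);
        rq.Method = "DELETE";
        rq.CookieContainer = new CookieContainer();
        rq.BeginGetResponse(
            r =>
            {
                try
                {
                    WebResponse rs = rq.EndGetResponse(r);
                    success(rs.GetResponseStream());
                }
                catch (WebException ex)
                {
                    fail(ex);
                }
            }, null);
    }
    catch (Exception ex)
    {
        fail(ex);
    }
}

A higher-level wrapper would then build the uri (for example with an action and path in the query string, as GetDirectory does) and marshal the callbacks to the UI thread with the Dispatcher, exactly as shown above.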

Using PUT to upload a file

Showing all the methods one by one would take too long, but there is one last method I want to show: the PUT method we can use to upload files to a web server in an alternative (and much simpler) way compared to the POST method. While a POST upload requires encoding the resource and forging a more complex request, with PUT we simply stream the full binary content into the request body. Here is the client code:

private static void DoPut(Uri uri, FileStream file, Action<Stream> success, Action<Exception> fail)
{
    try
    {
        HttpWebRequest rq = (HttpWebRequest)WebRequestCreator.ClientHttp.Create(uri);
        rq.Method = "PUT";
        rq.CookieContainer = DataSource.Cookies;
        rq.BeginGetRequestStream(
            new AsyncCallback(
                result => DataSource.DoPutUpload(result, file, success, fail)), rq);
    }
    catch (Exception ex)
    {
        fail(ex);
    }
}

private static void DoPutUpload(IAsyncResult result, FileStream file, Action<Stream> success, Action<Exception> fail)
{
    HttpWebRequest rq = (HttpWebRequest)result.AsyncState;

    using (Stream stream = rq.EndGetRequestStream(result))
    {
        int read = 0;
        byte[] buffer = new byte[1024];

        while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
            stream.Write(buffer, 0, read);
    }

    rq.BeginGetResponse(
        r =>
        {
            try
            {
                WebResponse rs = rq.EndGetResponse(r);
                success(rs.GetResponseStream());
            }
            catch (WebException ex)
            {
                fail(CreateFailFromWebException(ex));
            }
        }, rq);
}

The first method (DoPut) is very close to the previous methods I described. The core of the upload is DoPutUpload: it opens the request stream with EndGetRequestStream, writes the entire content of the file to it and then calls BeginGetResponse; at this point the file is sent over the network to the web server. On the server I use an HTTP handler: when the PUT is received, it reads the file from the InputStream and writes it to the file system.

private void ProcessFileRequest(string path)
{
    this.EvaluateIsValidUser();

    if (File.Exists(path))
        throw new Exception("File already exists");

    using (FileStream stream = File.OpenWrite(path))
    {
        byte[] data = new byte[1024];
        int read = 0;

        while ((read = this.Context.Request.InputStream.Read(data, 0, 1024)) > 0)
            stream.Write(data, 0, read);
    }

    this.WriteOk();
}

The last piece of code runs on the server. It receives the path of the file to write and creates it in the file system. The final WriteOk() method shows that I can write something into the response to notify the client that it has successfully uploaded the file.
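For completeness, the remaining verb, POST, is used by the sample to send credentials to the server. The actual implementation is in the attached solution; the following is only a minimal sketch of mine that follows the same BeginGetRequestStream pattern used for PUT, and the field names and the form-encoded format are assumptions, not necessarily what the sample's handler expects:

private static void DoPost(Uri uri, string userName, string password,
    Action<Stream> success, Action<Exception> fail)
{
    try
    {
        HttpWebRequest rq = (HttpWebRequest)WebRequestCreator.ClientHttp.Create(uri);
        rq.Method = "POST";
        rq.ContentType = "application/x-www-form-urlencoded";
        // a shared container (like DataSource.Cookies in DoPut) would let the
        // authentication cookie issued by the server flow into later calls
        rq.CookieContainer = new CookieContainer();

        rq.BeginGetRequestStream(
            result =>
            {
                // write the form-encoded credentials into the request body
                using (Stream body = rq.EndGetRequestStream(result))
                using (StreamWriter writer = new StreamWriter(body))
                {
                    writer.Write(
                        "username=" + HttpUtility.UrlEncode(userName) +
                        "&password=" + HttpUtility.UrlEncode(password));
                }

                rq.BeginGetResponse(
                    r =>
                    {
                        try
                        {
                            WebResponse rs = rq.EndGetResponse(r);
                            success(rs.GetResponseStream());
                        }
                        catch (WebException ex)
                        {
                            fail(ex);
                        }
                    }, rq);
            }, rq);
    }
    catch (Exception ex)
    {
        fail(ex);
    }
}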

Some final words

The main benefit coming with the introduction of the new stack is the ability to access REST resources on the Internet. This is the reason behind the creation of a stack that someone could consider a duplicate of something that already exists. But the presence of new methods and the capability to better handle server status codes and cookies give Silverlight 3.0 a new resource for building great applications.

The code attached to this article shows a full implementation of all the HTTP methods I've talked about. You only have to change the server path where the file system has to be explored; you can achieve this by changing the BaseFolder property in the HttpRequestManager class.

Download: Elite.Silverlight3.ClientHttpStack.zip (2,1 MB)

Video: ClientHTTPStack.wmv (10,1 MB)

Categories:   Networking | News

Comments (2) -

July 23. 2009 06:42

I would rename this as "Almost ready for REST".  The idea was to give us access to PUT and DELETE, but you still cannot set the Credentials property, nor can you set "Authorization" in the header.  Which means the new ClientHttp stack only works with anonymous access.  If you expose your PUTs and DELETEs to anonymous users, then everything works fine.  I'm hoping they will put out a patch to address this.  Otherwise I'll have to revisit it again next year for SL4.  For more info, see this article:  http://mark.mymonster.nl/2009/07/11/silverlight-3-did-we-get-support-for-credentials/

John

July 29. 2009 20:04

Couldn't you just use a custom cookie to achieve some sort of basic authorization? That's all that is really going on with the browser network stack. The browser authenticates (like for Windows Auth) and passes the requests with each call. You could theoretically handle it that way. Or just store encrypted authentication information and parse it on the WCF server side. Either way it could be easier, but it should still be doable. I can also imagine, now that the door is open for it, people could create extensions to build on top of this to add a lot more capability to it.

Corey
