MassTransit approach for handling errors when consuming messages

MassTransit

What’s MassTransit?

It’s an awesome, open-source framework for delivering messages. From their docos:

MassTransit is a free, open source, lightweight message bus for creating distributed applications using the .NET framework. MassTransit provides an extensive set of features on top of existing message transports, resulting in a developer friendly way to asynchronously connect services using message-based conversation patterns. Message-based communication is a reliable and scalable way to implement a service oriented architecture.

Why error-handling is so important here?

Let’s face it, things are going to fail, and bad things happen. One would hope that it does not happen often, but it will happen :). So what should happen when a message fails to be consumed properly? Should it be retried, should it be ignored, etc.? For most systems, a retry is required.

MassTransit 3.0 Approaches

The design of MassTransit’s approach to handling errors seems to be influenced by the NServiceBus way of handling errors. In NServiceBus, they coined the terms First Level Retry and Second Level Retry. Similarly, MassTransit offers basically two main out-of-the-box approaches to retries:

First Level Retry (Global Retry Policy)

This is the global setting that you set when you create your transport channels, and it should be part of your global bus settings. There are a number of extension methods to help you set the retry policies based on the underlying transport you are using (RabbitMQ, Azure Service Bus, etc). In my case, I am using Azure Service Bus, so here is how we set up our global level retries:

MassTransit.Bus.Factory.CreateUsingAzureServiceBus(cfg =>
{
	// needs this to enable MassTransit to reschedule the delivery of messages 
	cfg.UseServiceBusMessageScheduler();
	// retry policy parameters (see the two points below):
	// a = max number of retries, b = min wait (sec), c = max wait (sec), d = interval increase (sec)
	var a = 3;
	var b = 5;
	var c = 30;
	var d = 10;
	var retryPolicy = new ExponentialRetryPolicy(new AllPolicyExceptionFilter(), a, TimeSpan.FromSeconds(b), TimeSpan.FromSeconds(c), TimeSpan.FromSeconds(d));
	cfg.UseRetry(retryPolicy);
				
	IServiceBusHost host = cfg.Host(SendEndpointUri, h =>
	{
		// Configuring the SAS here
	});

	cfg.ReceiveEndpoint(host, ec => { ec.LoadFrom(ctx);});
});

In this scenario, MassTransit will retry for the specified number of retries if handling a message fails. This means that when MassTransit calls the Consume() method on the EventConsumer, the method does not complete properly (i.e. it throws an exception). Thus, if you are using this approach, you need to make sure that you bubble up your exceptions when consuming events, to allow MassTransit to redeliver the message again, as in the sketch below.
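A minimal sketch of such a consumer, which simply lets the exception bubble up so the global retry policy can do its job (SomeImportantEvent and SomeActionHere() are placeholders):

public class EventConsumer : IConsumer<SomeImportantEvent>
{
	public async Task Consume(ConsumeContext<SomeImportantEvent> context)
	{
		try
		{
			await SomeActionHere();
		}
		catch (Exception)
		{
			// log the failure here if needed, then rethrow so that
			// the global retry policy kicks in and redelivers the message
			throw;
		}
	}
}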

The good thing about this is that you do not need to check how many times a message has been retried or add waiting time between these retries. This is already done for us by the MassTransit retry configuration extension. If you look at the code above where we configure the retry policy, we tell MassTransit the following:

1. If a message fails to be consumed, then retry it up to a maximum of a times.
2. Between retries, wait a period of time between b and c seconds, increasing by an interval of d seconds.

This is very useful as it gives our system (event consumers) time to recover from transient failures before the message is redelivered. However, be warned, there are major implications to handling messages this way because your messages might end up being processed out of order, which could be critical for some systems. If that is the case, this approach might not be the best fit for your system.

Second Level Retry: (Explicit use of Context.Redeliver())

MassTransit provides another approach for redelivering (failed-to-consume) messages. This is more explicit and leaves it to the consumer to decide whether it wants to redeliver the message or not. This can be done like this:

public class TestEventsConsumer : IConsumer<SomeImportantEvent>
{
	public async Task Consume(ConsumeContext<SomeImportantEvent> context)
	{
		try
		{
			await someActionHere();
		}
		catch (Exception exception)
		{
			await context.Redeliver(TimeSpan.FromSeconds(5));

			// some more logging and stuff here
		}			
	}
}

As you can see, in this approach we are calling Redeliver() directly, so we are making the decision when a message should be retried. The good thing about this approach is that we decide what is retried and when. This could be important as you might not be able to find a one-size-fits-all solution, which would prevent you from using the global approach above.

On the other hand, the Redeliver() method does not check for any maximum number of retries. Therefore, if you do it like I did above, you will end up with a faulty message that just keeps being redelivered over and over again, until its time-to-live expires. This might be desirable behaviour for some systems (not in my case 🙂 ) but it is not really the norm in most cases that I have seen. Therefore, you would want to have a maximum number of retries, up to which you call the Redeliver() method. We can modify the code above to do that like this:

public class TestEventsConsumer : IConsumer<SomeImportantEvent>
{
	const int MAX_NO_OF_SECOND_LEVEL_RETRY = 5;
	const int NO_OF_SECONDS_TO_WAIT_BEFORE_REDELIVERING = 5;
	public async Task Consume(ConsumeContext<SomeImportantEvent> context)
	{
		try
		{
			await someActionHere();
		}
		catch (Exception exception)
		{
			int n;
			var redeliverCount = context.Headers.Get("MT-Redelivery-Count", "-1");
			if (int.TryParse(redeliverCount, out n) && n <= MAX_NO_OF_SECOND_LEVEL_RETRY)
				await context.Redeliver(TimeSpan.FromSeconds(NO_OF_SECONDS_TO_WAIT_BEFORE_REDELIVERING));

			// some more logging and stuff here
		}
	}
}

Note, if you are using RabbitMQ, then you might need to deploy and start the QuartzService service to enable MassTransit to schedule the redelivery of messages. More details can be found here. I hope you find this useful and I would love to hear your comments, if you have something to say 🙂

Inter-App Communications in Xamarin.iOS and Why OpenUrl Freezes for 10+ sec

Inter-App Communications

Mobile apps can communicate using multiple channels, but the most popular, recommended, and widely used one is scheme URIs. If you have not used scheme URIs before, you should consider adding them to your app: it takes less than a minute to add support, and it provides a great way to get users into your app.
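For example, from Xamarin.iOS, launching another app through its scheme URI boils down to something like this (my-cool-app:// and the query are placeholders):

var uri = NSUrl.FromString ("my-cool-app://lookup?id=123");   // hypothetical scheme and query
if (UIApplication.SharedApplication.CanOpenUrl (uri))
{
	// launches the app that registered the my-cool-app scheme
	UIApplication.SharedApplication.OpenUrl (uri);
}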

Setting the Stage

One scenario I had was App A launching App B to query some data; App B would in turn look up the request and return the data to App A. This is a common practice and can be seen in the diagram below.

Scheme URI in iOS (diagram)

The Investigation

The problem here was that App B was freezing for up to 10+ sec before returning the result to App A. At first, I thought that this might be because the app initialisation or the data lookup was taking a long time, so I added diagnostic trace statements like the ones below to time the operations and see where the time was spent.

public class AppDelegate
{
	...
	
	public override bool FinishedLaunching(UIApplication app, NSDictionary options)
	{
		Console.WriteLine("FinishedLaunching started: " + DateTime.Now.ToString("F"));

		...
		
		Console.WriteLine("FinishedLaunching complete: " + DateTime.Now.ToString("F"));
	}

	public override bool OpenUrl(UIApplication application, NSUrl url, string sourceApplication, NSObject annotation)
	{
		Console.WriteLine("OpenUrl started: " + DateTime.Now.ToString("F"));

		...
		

		Console.WriteLine("OpenUrl complete: " + DateTime.Now.ToString("F"));
	}
}

I found that my app was starting in less than 1 sec, which is quite impressive and I am very happy about 🙂, but the problem was in returning the data to the launching app (App A). The traces were like this:

Launch Services application launch failed – timeout waiting – trace logs

This told me that App B was not able to launch App A to return the data, which was quite surprising. I found that if you move that code to your pages/view controllers, things work fine. I thought that this was a bizarre situation, until I found this StackOverflow post, which explained the problem.

The Solution

Apparently, iOS runs into something like a race condition when trying to launch another app while the current app itself has not fully launched. The suggested solution was to add some delay or run the call on another thread. Running the launch of App A on another thread alone would not work as it needs to happen on the main UI thread, so here is the solution I came up with:


public class AppDelegate
{
	...
	
	public override bool OpenUrl(UIApplication application, NSUrl url, string sourceApplication, NSObject annotation)
	{
		// handle opening url and look up data

		...

		Task.Delay(500).ContinueWith(_ => 
				{									
					this.InvokeOnMainThread( () => 
					{			
						var uri = NSUrl.FromString(callbackUri);
						UIApplication.SharedApplication.OpenUrl(uri);
									
					});

				});

		return true;
		
	}
}


This works like a charm :). First, we got rid of the LaunchServices error (shown in the trace logs above). Second, App B now returns the results to App A in less than 3 sec. If App B is already running in the background, it returns the data in less than 1 sec; otherwise, it takes up to 3 sec. yayyy 🙂

A Journey of Hunting Memory leaks in Xamarin

My client was reporting performance issues with an existing app that was developed internally, and I had to find the problems. So, here is my journey of finding the issues and resolving them. The outcome was a reduction of the memory usage to a quarter of what it was, and usage stabilised at that level. I hope that this blog post can help you too in refining your app and pro-actively resolving any performance issues.

Capturing Telemetry Data

The first step in optimisation must be setting your benchmark, and in order to do that, we need to know where we stand. Thus, I set up an integration with Azure Application Insights to capture all memory warnings, errors, warnings, and battery levels. For details on integrating with Azure App Insights, you can read more in my previous post here. The relevant part for us in this post is capturing memory warnings. There are three ways to capture memory warnings in iOS, as I listed on StackOverflow; we will stick with the AppDelegate (the other two are sketched briefly after the example below), as this is applicable to both traditional Xamarin and Xamarin.Forms.


public partial class AppDelegate
{
	...
	public override void ReceiveMemoryWarning (UIApplication application)
	{
		// this (MemoryWarningsHandler) is a helper that I created 
		// to capture more info when a memory warning is raised. Things like (nav Stack, running time, etc)
		MemoryWarningsHandler.Record ();
	}
}
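
For completeness, the other two ways mentioned above are overriding DidReceiveMemoryWarning on a view controller, or observing the memory warning notification; a rough sketch of both:

	// Option 2: per view controller
	public override void DidReceiveMemoryWarning ()
	{
		base.DidReceiveMemoryWarning ();
		MemoryWarningsHandler.Record ();
	}

	// Option 3: anywhere, via the notification centre
	NSNotificationCenter.DefaultCenter.AddObserver (
		UIApplication.DidReceiveMemoryWarningNotification,
		notification => MemoryWarningsHandler.Record ());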


Always, Always, listen to these memory warning notifications from the OS, even if you are not actioning them now

In Android, we could also do the same using Application.OnLowMemory (See Android docos) as below:


public class MyApp : Application
{
	...
	public override void OnLowMemory ()
	{
		MemoryWarningsHandler.Record ();
	}
}
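
MemoryWarningsHandler itself is just a small helper of mine; a rough, hypothetical sketch of it could look like this (assuming a Xamarin.Forms app; Telemetry is a placeholder for whatever telemetry client you use, not a real SDK class):

public static class MemoryWarningsHandler
{
	public static void Record ()
	{
		// capture some context to make the warning actionable: current page stack, time, etc.
		var navStack = string.Join (" > ",
			Application.Current.MainPage.Navigation.NavigationStack.Select (p => p.GetType ().Name));

		var properties = new Dictionary<string, string>
		{
			{ "NavigationStack", navStack },
			{ "RaisedAtUtc", DateTime.UtcNow.ToString ("o") }
		};

		// forward to whatever telemetry client you use (Azure App Insights in our case)
		Telemetry.TrackEvent ("MemoryWarning", properties);
	}
}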

Once these memory warnings are captured in Azure App Insights, we know that we have a memory problem, and whoever is looking at the report will keep bugging you until you fix it, if the app is not already crashing due to low memory 🙂

Investigating Low Memory Issues

Once we identified that there is a problem with memory, we need to figure out where the problem is occurring. To do this, we can use the Xamarin Profiler. At the time of this writing (March 2016), Xamarin Profiler is still in preview and has many known bugs, but it still provides a good starting point.
We can monitor a number of performance indicators using Xamarin Profiler including:

  • Memory Allocation
  • Dependency Cycles
  • CPU Time
  • A few more aspects of app performance

For this post, we are interested in memory leaks, so we can start a profiler session by choosing Memory Allocation when the profiler starts. More info on starting a profiler session can be found on the Xamarin website.

In previous versions of Xamarin Profiler, I was able to view the call tree, which gave me a great view of exactly where the issue was. This was based on call stacks and it tells you exactly how much memory is used in every entity/method. Unfortunately, in this version, I could not get it to show me this detailed view, but I was able to capture a few memory snapshots and monitor the growth of used memory. My diagram looked like this:

Memory usage before optimisation

This made it very clear that we had a problem with our memory consumption; usage was racking up to 600 MB in some scenarios. The important part now was finding where the problem was.

Identify Problematic Code

In the absence of the call tree view, I started using the app and monitoring the memory usage. I established that it was a particular page (a Xamarin.Forms Page) that was causing the memory usage to grow rapidly. As you can see, at the start of the application things were quite alright. Then, I focused my attention on that page. Looking at the page, it seemed harmless; it’s only a screen-saver-like page for a kiosk app. I could see a couple of small problems, but these were minor and would not cause the memory to grow that quickly. The code of this screen saver page can be seen below:

    public class ScreenSaverPage : ContentPage
    {
        private readonly List<string> _imageSourceList;
        private readonly List<FileImageSource> _cacheImageSource = new List<FileImageSource>();
 
        private readonly Image _screenSaver;
        private int _currentImageIndex;
        private readonly CancellationTokenSource _cts = new CancellationTokenSource();

        public ScreenSaverPage()
        {
            _imageSourceList = new List<string> { "Screensaver1.png", "Screensaver2.png", "Screensaver3.png", "Screensaver4.png" };
            
            // Caching it to Reduce loading from File all the time
            foreach (string fileName in _imageSourceList)
                _cacheImageSource.Add(new FileImageSource {File = fileName});

            _screenSaver = new Image
            {
                HorizontalOptions = LayoutOptions.FillAndExpand,
                VerticalOptions = LayoutOptions.FillAndExpand,
                Aspect = Aspect.AspectFill,
                Source = _cacheImageSource.FirstOrDefault()
            };
            var tapGestureRecognizer = new TapGestureRecognizer();
            tapGestureRecognizer.Tapped += async (s, e) =>
            {
                _cts.Cancel();
                await Task.Run(async () => await App.ResetInactivity(typeof (BaseViewModel)));
            };
            
            Content = _screenSaver;
            _screenSaver.GestureRecognizers.Add(tapGestureRecognizer);
            // Configure the OnAppearing to kick off the ScreenSaver
            Appearing += async (sender, args) =>
            {
                try
                {
                    await Task.Run(async () =>
                           {
                               while (true)
                               {
                                   if (_cts.IsCancellationRequested)
                                   {
					App.Logger.LogInfo("CANCELLED - In the Loop");
                                       break;
                                   }

                                   await Task.Delay(5000, _cts.Token).ContinueWith(async t =>
                                   {
                                       try
                                       {
                                           if (_cts.IsCancellationRequested)
                                           {
						App.Logger.LogInfo("CANCELLED - In the Action");
                                           }
                                           else
                                           {
                                               // this is the unnecessary Task
                                               await Task.Run(() =>
                                               {
                                                   _currentImageIndex = _currentImageIndex < _imageSourceList.Count - 1 ? _currentImageIndex + 1 : 0;
                                                   Device.BeginInvokeOnMainThread(
                                                   () =>
                                                   {
                                                       _screenSaver.Source = _cacheImageSource[_currentImageIndex];
                                                    });
                                               });
                                           }
                                       }
                                       catch (Exception ex)
                                       {
					   App.Logger.Log(ex);
                                           throw;
                                       }
                                   }, _cts.Token);
                               }
                           });
                }
                catch (OperationCanceledException e)
                {
		    App.Logger.Log(e);
                    Device.BeginInvokeOnMainThread(async () =>
                    {
                        await Navigation.PopModalAsync();
                    });
                }
            };
        }
    }

Now, please do not ask me why it’s done this way, because this is just what I was given in the existing app. The three problems that stood out to me were:

1. Wiring OnAppearing event without unsubscribing.
2. GestureRecogniser is added but not removed.
3. A Task was being created every 5 sec unnecessarily.

However, these were all small compared to the main problem, and even removing all of them together did not help in reducing the memory usage. So I switched off the part that swaps the screensaver images:


_screenSaver.Source = _cacheImageSource[_currentImageIndex];

At first, this looked harmless to me: we were caching the FileImageSource objects in a list, so we were not loading the images every time; we were only loading them once and swapping the source on the background image. However, it appeared that this was the root cause. Commenting this line out made the memory usage stay stable below the 200 MB mark, which was great news for me :).

Make sure that you have your benchmark before any optimisations, otherwise you would not know the impact of your changes.

Developing a Solution

To avoid swapping the image source (which, by the way, I think is a Xamarin.Forms problem, but I will chase that separately), I started thinking of creating multiple static background images and only toggling their visibility. This meant that I would have 4 images loaded and all bound to fill the screen, but I would only show (make visible) one of them at a time. The page code changed to be like this:


public class ScreenSaverPage : ContentPage
{
        private readonly List<string> _imageSourceList = new List<string> { "Screensaver1.png", "Screensaver2.png", "Screensaver3.png", "Screensaver4.png" };
        private readonly List<Image> _backgroundImages = new List<Image>();
	private  RelativeLayout _relativeLayout;
        private int _currentImageIndex;
        private readonly CancellationTokenSource _cts = new CancellationTokenSource();

        public ScreenSaverPage()
        {
            _relativeLayout = new RelativeLayout { HorizontalOptions = LayoutOptions.FillAndExpand, VerticalOptions = LayoutOptions.FillAndExpand };

		LoadImages (_relativeLayout);
		_backgroundImages [0].IsVisible = true;

         	var tapGestureRecognizer = new TapGestureRecognizer();
         	tapGestureRecognizer.Tapped += async (s, e) =>
            	{
                	_cts.Cancel();
                	await Task.Run(async () => await App.ResetInactivity(typeof (BaseViewModel)));
            	};
            
		Content = _relativeLayout;
		_relativeLayout.GestureRecognizers.Add(tapGestureRecognizer);
        }

	protected async override void OnAppearing ()
	{
		base.OnAppearing ();
		try
		{
			await Task.Run(async () =>
			{
				while (true)
				{
					if (_cts.IsCancellationRequested)
					{
						App.Logger.LogInfo("CANCELLED - In the Loop");
						break;
					}
					await Task.Delay(5000, _cts.Token).ContinueWith(async t =>
					{
						try
						{
							if (_cts.IsCancellationRequested)
							{
								App.Logger.LogInfo("CANCELLED - In the Action");
							}
							else
							{
								_currentImageIndex = (_currentImageIndex < _imageSourceList.Count -1) ? _currentImageIndex +1 : 0;
								Device.BeginInvokeOnMainThread(
								() =>
								{
									SetBackgroundVisibility(_currentImageIndex);
								});
							}
						}
						catch (Exception ex)
						{
							App.Logger.Log(ex);
							throw;
						}
					}, _cts.Token);
				}
			});
		}
		catch (OperationCanceledException e)
		{
			App.Logger.Log(e);
			Device.BeginInvokeOnMainThread(async () =>
			{
				await Navigation.PopModalAsync();
			});
		}
	}

	private  void LoadImages (RelativeLayout layout)
	{
		foreach (string fileName in _imageSourceList) 
		{
			var image = CreateImageView (new FileImageSource { File = fileName });
			layout.Children.Add (image, Constraint.Constant (0), Constraint.Constant (0), Constraint.RelativeToParent (parent => parent.Width), Constraint.RelativeToParent (parent => parent.Height));
			_backgroundImages.Add (image);
		}
	}

	void SetBackgroundVisibility (int currentImageIndex)
	{
		for (int i = 0; i < _backgroundImages.Count; i++) 
		{
			_backgroundImages [i].IsVisible = i == currentImageIndex;
		}
	}

	private static Image CreateImageView (FileImageSource source)
	{
		return new Image
		{
			HorizontalOptions = LayoutOptions.FillAndExpand,
			VerticalOptions = LayoutOptions.FillAndExpand,
			Aspect = Aspect.AspectFill,
			Source = source, 
			IsVisible = false
		};
	}
}

You would agree that this is a big improvement on what we had originally, and it shows clearly in the Xamarin Profiler when we run the app with this new change. The memory plot in the Xamarin Profiler looked like this:

Memory usage after optimisation

This is a great reduction, less than one third of what the app was using before (~600 MB), but I was still thinking that it could be optimised further.

Can We Do Better?

The graph above showed me that the memory usage was still going up; not by much, but still growing. Also, when switching between screens/pages, I noticed that the screensaver page was taking lots of memory to start with (~50 MB), which is needed to create the images and FileImageSource objects. However, I noticed that when we move away from this page (the screen saver), these entities were not being cleared quickly enough by the GC. Thus, I added the following:


public partial class ScreenSaverPage : ContentPage
{
	...
	protected override void OnDisappearing ()
	{
		base.OnDisappearing ();

		PrepareForDispose ();
	}

	void PrepareForDispose ()
	{
		foreach (var image in _backgroundImages) 
		{
			image.Source = null;
		}

		_backgroundImages.Clear();
		_relativeLayout.GestureRecognizers.RemoveAt (0);
		_relativeLayout = null;
		_cts.Dispose ();
		Content = null;
	}
}

This helped dispose of the images and the gesture recogniser quickly enough, and helped me keep the memory usage at around 130–160 MB, which is a great result considering that we started at 600 MB. The other pleasing part is that the memory usage was very stable and no major peaks were found. It fluctuates slightly when you move between pages, which is perfectly normal, but it goes back to a steady level of around 130–160 MB.

I hope you find this useful and please do check your Xamarin apps before you release or whenever you get the time, as these things are hard to see but they could bite you when you go to prod 🙂

Why you should use Git over TFS

Git || TFS (Source: VisualStudio.com)

I have been an advocate of git for a long time now and I might be a little biased, but take a moment to read this and judge for yourself whether git is the way to go or not.

If you are starting a new greenfield project, then you should consider putting your code in a git repository instead of TFS. There are many reasons why git is better suited, but the main ones from my perspective are:

Cross-Platform Support

Git tools are available for all platforms and there are many great (and FREE) GUI tools like GitExtensions or SourceTree. In today’s development world, there are guaranteed to be multiple sets of technologies, languages, frameworks, and platforms in the same solution/project. Think of using NodeJS with a Windows Phone app, or building apps for Windows, Android, and iOS. These are very common solutions today, and developers should be given the freedom of choosing the development platform of their choice. Do not limit your developers to using Visual Studio on Windows. One might argue that TFS Everywhere (which is an add-on for Eclipse) is available for other platforms, but you should try it and see how buggy it is and how slow it is at finding pending changes. Having used TFS Everywhere, I would not really recommend it to anyone, unless it is your last resort.

Developers should be able to do their work any time and anywhere

Work Offline

Developers should be able to do their work any time and anywhere. Having TFS rely on an internet connection to commit, shelve, or pull is just not good enough. I have even had many instances where TFS was having availability problems, which meant that I was not able to push/pull changes for an hour. This is not good. Git is great when it comes to working offline, and this is due to the inherent advantage of being a fully distributed source control system. Git gives you the ability to: a) have the full history of the repo locally, which enables you to review historical changes, review commits, and merge with other branches, all locally; b) work and commit changes to your branch locally; c) stash changes locally; d) create local tags; e) change repo history locally before pushing; and many other benefits that come out of the box.

Having TFS rely on an internet connection to commit, shelve, or pull changes is just not good enough

Hosting Freedom

With TFS, you are pretty much stuck with Microsoft’s TFS offering. Git, however, is widely available, and many providers offer free hosting for git repositories, including VSTS, GitHub, and Bitbucket. With Visual Studio Online itself offering to host your code in git repositories, there is really no reason not to take full advantage of git.

With VSO itself offering to host your code in git repositories, there is really no reason not to take full advantage of git

I have introduced git to many development teams so far, and I must say that I did get resentment and reluctance from some people at the start, but once people start using it and enjoying the benefits, everything settles into place.

Some developers are afraid of command-line tools or the complexity of push/pull/fetch and want to keep it simple with TFS, but this does not fit today’s development environment. Last month, I was working at a client site where they were using Visual Studio to develop, debug, and deploy to iOS devices, and it was ridiculously slow. As a last resort, I opted to use Eclipse with the TFS Everywhere plugin alongside my Xamarin Studio, and it was a lot better. I still had to suffer the pain of Eclipse-TFS not seeing my changes every now and then, but compared to the time I was saving by choosing my own development IDE (Xamarin Studio), I was happy with that.

If you are starting a new project, or if you’re looking for a way to improve your team practices, then do yourself a favour and move to Git

So to summarise, if you are starting a new project, or if you are looking for a way to improve your team practices, then do yourself a favour and move to Git. You will be in good company, especially if you enjoy contributing to open source projects :). Teams that use VSTS as their ALM platform can still use VSTS, but host their code in git repositories on VSTS to take advantage of git and TFS together.

Finally, if you have any questions or thoughts, I would love to hear from you.

Universal Links: The Natural Evolution of Custom Scheme URI on the Mobile

What is Scheme URI?

In the context of mobile apps, scheme URIs are used for communicating between mobile apps. Mobile developers can register a unique URI for each app in the application manifest (Info.plist). The operating system then uses that to launch your application once the user clicks on a link that matches your URI. For instance, I could have my-cool-app:// as my scheme URI. I could then generate links (starting with my-cool-app://) to send in emails to my app’s users, and once clicked, my app will be launched. These have become standard, and they are widely supported on iOS, Windows, and Android.

What is Universal Links?

Universal Links is a new feature that was introduced by Apple in iOS 9. The idea is that instead of registering a scheme URI, you use your own website domain name (which is inherently unique) and associate this domain name with your mobile app. Once the user clicks on an http/https link that points to your domain, iOS launches your mobile application instead of opening the link in Safari. More details about Universal Links can be found in the WWDC session here and Apple’s docs here.

Why Do we need Universal Links?

If you have used scheme URIs before to open your mobile apps, you would have felt the pain of trying to keep your links valid to open your app (when installed) and fall back to web links when your app is not installed. This was not a simple task, and there was no guarantee that it would work exactly the way you want. Therefore, Apple has introduced Universal Links, which is great. Imagine how many times you get a URL link via email and you click it, only to end up in the browser asking you for a username and password. Most people do not remember their accounts (at least not for all accounts), or they want to see the content in the mobile app instead of Safari. This new feature (Universal Links) solves this problem.

How Can we Enable Universal Links?

To enable Universal Links, you need to follow these steps:

1. Enable “Associated Domains” on your Application Id (in Apple Developer portal).

Enable Associated Domains in your App Identifier settings – Apple Dev Portal

2. Create an apple-app-site-association file on your domain. This is just a text file that contains a JSON object describing your universal links. You could associate multiple link paths with multiple apps. Notice that the appID needs to contain your full app identifier (TeamId.BundleId). A sample of the JSON content is here:


    {
        "applinks": {
            "apps": [],
            "details": [
                {
                    "appID": "ABC-Team-Id.com.my.bundle.id",
                    "paths": [ "/", "*" ]
                }
            ]
        }
    }

The content of this file needs to be served over HTTPS, and notice that there is no extension. The name has to match exactly too (apple-app-site-association). Apple will try to retrieve this file from your domain before it launches your app for the first time; this is to verify the authenticity of your domain ownership before starting the app. If you are hosting this file on an Azure website, you might also need to add a static content mapping in your web.config. The bottom line is that you need to make sure that you can navigate to this file at https://your-domain.com/apple-app-site-association. There is also a small web app that helps you verify your Apple association file; you can find it here. Do not worry about the last 2 errors that it displays concerning signing your file: I have my Universal Links working without signing the association file, but this web page is still good for verifying other aspects of the association file.

3. Add the Universal Links entitlement to your app. This can be done very simply by navigating to your Entitlements.plist and adding the new applinks entry as follows:

Applinks in the Entitlements.plist file

4. Regenerate your provisioning profiles. This is very important since we changed our app id entitlements. Otherwise, you will get an error message when trying to build the app saying that you have entitlements that are not permitted by your provisioning profile.

5. OPTIONAL: Override this method (ContinueUserActivity) in your app’s delegate to capture the URL that the user clicked on. This is necessary when you want to take the user to the particular screen that they are after. For instance, if the user clicked on a link to view a product on your website, you could capture the link in this method and, once your app opens up, show the details of that product in your mobile app.

public override bool ContinueUserActivity (UIApplication application, NSUserActivity userActivity, UIApplicationRestorationHandler completionHandler)
{
	Console.WriteLine ("ContinueUserActivity method has been called............");
	// you can get the url that the user clicked on here.		
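	// For Universal Links the activity type is NSUserActivityType.BrowsingWeb, and the
	// link the user tapped is exposed as userActivity.WebPageUrl (an NSUrl), e.g.:
	var clickedUrl = userActivity.WebPageUrl;
	// ...parse clickedUrl here and navigate to the relevant screen in your app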
	return true;
}

Once all of the above is done, you can build and deploy to your device. I tested my implementation by creating a link to my associated domain and sending it via email to the device. When I click on the link, I get taken to my app directly (without going through Safari).

Applinks in the mobile app – screenshot

Notice that Apple gives the user the option to stay in my app, go directly to the browser (upper right link), or go back to the email that launched my app. I really like this design because it keeps the user in charge. Also, there are a few posts that suggest that once the user taps the link to go to the browser, Apple will bypass your app for this link and take the user directly to the website in the future. I have not verified this, but you can find the details here.

How about devices that are still using older versions of iOS?

What we have seen above is very cool stuff and I have been hoping for it for years. However, it only solves the problem on a subset of devices; remember there are still lots of devices out there that are not running iOS 9. If you are like me and required to support the iPhone 4, then you need a fallback mechanism for devices that do not support Universal Links. For this scenario, we could have some simple JavaScript that tries to redirect to our mobile app; if this redirection fails, then we assume that the app is not installed and we redirect the user to our web URL. A sample of this is below, and more details can be found here:


// this code needs to sit in your website page
setTimeout(function() {
  window.location = "http://my-domain-app"; // or your could put  link to your app on iTunes
}, 25);

// If "my-cool-app://" is installed the app will launch immediately and the above
// timer won't fire. If it's not installed, you'll get an ugly "Cannot Open Page"
// dialogue prior to the App Store application launching
window.location = "my-cool-app://";

Future-Proofing

It will be very interesting to see how Apple uses this to integrate apps more with the web. Apple has already done great work in terms of search and user activities, bridging the gap between web and native apps. Universal Links are another step in the right direction.

How about Android?

Although most of the content of this post is related to iOS, the Android implementation is quite similar. Google introduced this in Android M, and more details can be found in the documentation here.

PIN Number Password Fields for Windows Xaml

In the Windows XAML world, a bad design decision was made to keep the PasswordBox and the TextBox separate, meaning that the PasswordBox does not inherit common properties from the TextBox. As a consequence, you cannot do many of the things that you normally do with a TextBox, like customising the appearance of the password box (say you want to make the text centre-aligned, or you want to bring up the number key pad instead of the alpha keyboard). I had this exact requirement 2 weeks ago and I had to solve it, so this blog post talks about the approach I took to make this happen.

Basically, all I needed to do was switch from using password boxes to normal TextBoxes and then wire up the KeyUp event in the code-behind to hide the entered password. You need to be careful when handling these events though, because you could end up making your text fields unusable. In my scenario, I needed the TextBox to act like a PIN number (password) field, so I only accepted numbers, as you can see in the code below. Here is the XAML snippet:

<TextBox x:Name="PinNumberTextBox" HorizontalAlignment="Center" 
Margin="0,200,0,0" VerticalAlignment="Top" MinWidth="300" 
PlaceholderText="Enter 4-8 digits PIN no" 
InputScope="Number" KeyUp="PinNumberTextBox_KeyUp" 
TextAlignment="Center" />

As you can see above, I am using a TextBox in the XAML for the PIN number, using the normal TextAlignment to centre the text and InputScope to bring up the number key pad instead of the default alpha keyboard. Also, notice that I am wiring up the KeyUp event. Here is the code behind the KeyUp event:



    string _enteredPin = "";
    string _confirmPin = "";
    string _passwordChar = "*";

    private void PasswordTextBox_KeyUp(object sender, KeyRoutedEventArgs e, TextBox field, ref string pinCode)
    {
        // modify the passcode according to the entered key
        pinCode = GetNewPasscode(ref pinCode, e);

        // replace the displayed text by *
        field.Text = Regex.Replace(pinCode, @".", _passwordChar);

        // take the cursor to the end of the string
        field.SelectionStart = field.Text.Length;

        // stop the event from propagating further
        e.Handled = true;
    }

    private string GetNewPasscode(ref string oldPasscode, KeyRoutedEventArgs keyEventArgs)
    {
        string newPasscode = string.Empty;

        switch (keyEventArgs.Key)
        {
            case VirtualKey.Number0:
            case VirtualKey.Number1:
            case VirtualKey.Number2:
            case VirtualKey.Number3:
            case VirtualKey.Number4:
            case VirtualKey.Number5:
            case VirtualKey.Number6:
            case VirtualKey.Number7:
            case VirtualKey.Number8:
            case VirtualKey.Number9:
                // append the pressed digit
                var numKey = keyEventArgs.Key.ToString();
                var number = numKey.Substring(numKey.Length - 1, 1);
                newPasscode = oldPasscode + number;
                break;

            case VirtualKey.Back:
                // remove the last entered digit
                if (oldPasscode.Length > 0)
                    newPasscode = oldPasscode.Substring(0, oldPasscode.Length - 1);
                break;

            default:
                // any other key: keep the passcode unchanged
                newPasscode = oldPasscode;
                break;
        }

        return newPasscode;
    }

Because I had two TextBoxes (PinNumberTextBox and ConfirmPinNumberTextBox), I put the shared handling in the PasswordTextBox_KeyUp() method above, which takes the relevant text box and the PIN value (by reference) and updates them, with GetNewPasscode() working out the new value from the pressed key. The KeyUp handlers wired up in the XAML simply forward to it, along the lines of the sketch below.
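A minimal sketch of that forwarding (assuming the two TextBoxes are named PinNumberTextBox and ConfirmPinNumberTextBox, as described above):

    private void PinNumberTextBox_KeyUp(object sender, KeyRoutedEventArgs e)
    {
        PasswordTextBox_KeyUp(sender, e, PinNumberTextBox, ref _enteredPin);
    }

    private void ConfirmPinNumberTextBox_KeyUp(object sender, KeyRoutedEventArgs e)
    {
        PasswordTextBox_KeyUp(sender, e, ConfirmPinNumberTextBox, ref _confirmPin);
    }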

As you can see, it is fairly simple: we listen for keys when the user types something, accept only numbers (this could be changed based on your requirements), and update the entered PIN that we hold in an instance field. We then use a regex to mask the display of the PIN on the screen. Also, notice how we listen for the Back key press to clear the last character.

Hope this helps someone, and please do get in touch if you have any comments or questions. 


SQLite.Net.Cipher: Secure your data on all mobile platforms seamlessly and effortlessly

SQLite databases have become the first choice for storing data on mobile devices. SQLite databases are just files that are stored on the file system, so other apps or processes can read/write data in this database file. This is true for almost all platforms: you could root/jailbreak the device and get the database file to do with it whatever you like. That’s why it is very important that you start looking into securing your data as much as possible.

In a previous blog post, I talked broadly about how you could secure your data in mobile apps from an architectural point of view. In this post, I will show you how you can use SQLite.Net.Cipher to encrypt/decrypt data when it is stored in or read from your database. This library helps you secure the data and does all the work for you seamlessly. All you need to do is annotate the columns that you want to encrypt with one attribute. The library will do the rest for you.

The Model

	public class SampleUser : IModel
	{
		public string Id { get; set; }

		public string Name { get; set; }

		[Secure] 
		public string Password { get; set; }
	}

Notice above that we have decorated our Password property with the [Secure] attribute. This tells SQLite.Net.Cipher to encrypt the Password property whenever data is stored in the database, and to decrypt it when reading it back out of the database.

The model needs to implement IModel, which enforces the contract of having a property named Id as the primary key (roughly as sketched below). This is a common standard, and you could use other columns as the primary key if you want, and use backing properties to satisfy this requirement if you like.
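That contract is tiny; it boils down to something like this (a sketch, not necessarily the library’s exact definition):

	public interface IModel
	{
		string Id { get; set; }
	}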

The Connection

Your database connection entity needs to extend SecureDatabase, which is provided by SQLite.Net.Cipher, as below:


	public class MyDatabase : SecureDatabase
	{
		public MyDatabase(ISQLitePlatform platform, string dbfile) : base(platform, dbfile)
		{
		}

		protected override void CreateTables()
		{
			CreateTable<SampleUser>();
		}
	}

You can use the CreateTable() method to create whatever tables you need. There is also another constructor that allows you to pass your own implementation of the ICryptoService if you like. This is the entity that is responsible for all encryption and decryption tasks.

See it in Action

Now to see the library in action, you could establish a connection to the database, insert some data and retrieve it:


	var dbFilePath = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments), "mysequredb.db3");
	var platform = new SQLite.Net.Platform.XamarinIOS.SQLitePlatformIOS();
	ISecureDatabase database = new MyDatabase(platform, dbFilePath);
	var keySeed = "my very very secure key seed. You should use PCLCrypt strong random generator";

	var user = new SampleUser()
	{
		Name = "Has AlTaiar", 
		Password = "very secure password :)", 
		Id = Guid.NewGuid().ToString()
	};

	var inserted = database.SecureInsert<SampleUser>(user, keySeed);
		
	// you could use any desktop SQLite browser to inspect the database and you will find the Password column encrypted (and base64 encoded)

	var userFromDb = database.SecureGet<SampleUser>(user.Id, keySeed);

And that’s all 🙂 assuming that you have installed the NuGet package.

SQLite.Net.Cipher

Dependencies

Please note that this library relies on the following great projects:
SQLite.Net-PCL
PCLCrypto

Both of these projects are really great and they support all major platforms, including builds for PCL libraries, so I would highly encourage you to look into them if you have not seen them before.

You can find the library on NuGet here, and the source code is on GitHub here; feel free to fork, change, and do whatever you like 🙂 I hope you find the library useful and I would love to hear any comments, questions, or feedback.

Azure ApplicationsInsights for Xamarin iOS

Azure ApplicationInsights (AI) is a great instrumentation tool that can help you learn about how your application is doing at run-time. It is currently in preview, so bear that in mind when developing production-ready apps. It gives you the ability to log lots of different kinds of information, like traces, page views, custom events, metrics, and more.

Azure AI supports multiple platforms, but unfortunately they have not released a Xamarin package yet. There is one library, but it is for Xamarin.Forms since it uses the DependencyResolver. I have taken that and removed that dependency to make it work with plain Xamarin.iOS. You could do the same thing to use it with Xamarin.Android too if you like.

It’s very simple to use; all you need is your instrumentation key, which you can get from the Azure portal. Follow the steps from the MSDN tutorial here to create the Azure AI instance and get your key. Once done, you can download and reference the repository that I have created on GitHub, which you can find here.

To start Azure AI on your Xamarin iOS app, you could do:

    
	AzureAIManager.Setup();

	AzureAIManager.Configure("my-user-or-device-name");

	AzureAIManager.Start();

The implementation of the AzureAIManager is as follows:


	public static class AzureAIManager
	{
		public static void Setup(string appKey = "your-azure-AI-instrumentation-key")
		{
			AI.Xamarin.iOS.ApplicationInsights.Init();

			var ai = new AI.Xamarin.iOS.ApplicationInsights ();
			ApplicationInsights.Init (ai);

			TelemetryManager.Init(new AI.Xamarin.iOS.TelemetryManager());
			ApplicationInsights.Setup(appKey);

		}

		public static void Start()
		{
			ApplicationInsights.Start();
		}

		public static void Configure(string userId = "" )
		{
			ApplicationInsights.SetAutoPageViewTrackingDisabled(true);

			if (!string.IsNullOrEmpty(userId))
				ApplicationInsights.SetUserId(userId);
		}

		public static void RenewSession()
		{
			ApplicationInsights.StartNewSession();
		}
	}

I have not published this as a NuGet package because I am sure Microsoft will release one very soon, so until that happens, you can use these bindings to play around with Azure AI and you could even use them in your small projects.

Handling Complex objects Persistency and Messaging on Mobile

Introduction

Data persistency and messaging are very common tasks that you almost certainly need in almost all of your apps. Mobile platforms have come a long way in supporting data persistency, mostly through the SQLite engine, which has become the standard on all mobile platforms. However, it is still a very lightweight engine and does not give you all the capabilities of a full SQL server. Nor should it, on a mobile device, where persisting data is intended mostly for caching until the data reaches its ultimate destination on a server somewhere. With this in mind, how do we go about persisting complex objects on a mobile platform? How do we handle messaging of complex objects? This is exactly what we will discuss in this blog post. This is not an introduction to using an ORM to store some objects in the database; it is rather about how to handle the complex relationships between a data object and its children or siblings when storing or messaging that piece of data.

SQLite.NET PCL

I have been developing for mobile platforms for quite some time, and I have been really enjoying using SQLite.Net PCL. This is a very simple, lightweight ORM that was built on the initial SQLite.NET with added support for PCLs (Portable Class Libraries). This library makes it very easy to store data in a database on a mobile device and it gives you a unified API on all platforms for all data-persistency-related tasks. So if you have not used it before, I would highly encourage you to have a look at it; this blog post will assume the use of this framework for storing data locally.
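If you have not seen it before, basic usage looks roughly like this (a sketch; TodoItem is just a placeholder model):

	public class TodoItem
	{
		[PrimaryKey]
		public string Id { get; set; }
		public string Title { get; set; }
	}

	// iOS platform shown here; Android/Windows have equivalent platform classes
	var platform = new SQLite.Net.Platform.XamarinIOS.SQLitePlatformIOS();
	var dbPath = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments), "mydata.db3");
	var connection = new SQLiteConnection(platform, dbPath);

	connection.CreateTable<TodoItem>();                                   // creates the table from the class definition
	connection.Insert(new TodoItem { Id = "1", Title = "write blog" });   // simple objects map straight to rows
	var items = connection.Table<TodoItem>().ToList();                    // and query them back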

Versioning

If your mobile application is intended for building a to-do list, then it might not be a big deal to lose one entry here or there, or to resolve conflicting items by taking the latest one; I am saying this knowing that you might lose some users if you do that :). However, what if your mobile app is concerned with emergency management, or used by health practitioners? This makes it a requirement to pre-define the logic for how items are stored and how conflicts are resolved, and you are required to always keep all bits of information on all devices. I had a similar requirement where I needed to maintain the versioning of all items and define how a conflict would be resolved. Thus, I decided to use versioning for my data objects. This design makes the following decisions:

  1. Any change will be applied to the data objects as a change log.
  2. Change logs are globally and uniquely identifiable via unique IDs.
  3. Data objects will have a VERSION attribute, which is unique. This VERSION attribute will refer to the most recent Change Log Id.
  4. Each data object will have a list of Change Logs in an ordered list. The ordering of these logs represents the timeline of when these change logs were applied.
  5. For the sake of simplicity, we will assume that a data object has a list of properties/attributes that will be represented as table of key/value pairs.
  6. Other decisions/assumptions of the design will be ignored for the sake of this blog post. Such decisions could include storing the type of change log (change created by a user, result of a data merge, etc.), storing other security (authentication/authorisation) data on each data item, and storing other metadata on each data object, like who changed what and when.

With that in mind, our data object diagram should look something like the following:

UML diagram of our basic item (versioning) design

Enforcing Version-based Changes

Now that we have put together our basic design, we need to implement it in a way that is safe for a team of developers to work with. We cannot just ask the team members, “hey, can you please use this method when you try to apply some changes, because it is necessary”; do you think that would work? :). I know you must be thinking this is absurd, but I have seen it in some teams. Or, if they do not say it this way, they rely on some comments in the code or some other wiki/documentation. My approach is to make it fool-proof and to let the design document itself. I should not be required to explain this to people. Developers (my team members) should be able to use this without worrying about the internal implementation. To do that we need the following:

1. Read Only Properties

To ensure that we are not going to change any property of our data object without using change logs, properties need to be read-only. This makes sure that we cannot create a new version of the item without either using the applyChangeLog(log) method or using a constructor.

2. Fully defined constructor(s)

We cannot provide a constructor that would allow the consumer of our framework to create/instantiate a data object without specifying all its attributes. Therefore, our constructors should define all properties of an object at creation time.

3. Easy composition

Our framework (data object) needs to have an easy way to construct objects from a serialised version or from another version of the item. This is very necessary when storing these data objects to the database or when trying to message them over the network.

With all that out of the way, our basic object implementation could look something like this:

  // our change log first
  public class ChangeLog 
  {
     public string Id {get;set;}
     public int Order {get;set;}
     public string CreatedBy {get;set;}
     public ChangeLogType Type {get;set;}
     public DateTime CreatedOn {get;set;}
     public string ParentId { get; set; }
     public Dictionary<string, object> ChangingAttributes {get;set;}
   }

   public class RegisterItem : ModelBase
   {
     public RegisterItem (string id, string name) 
             : this(id, name, string.Empty, new Dictionary<string,object>(), new List<ChangeLog>())
     {
     }

     public RegisterItem (string id, string name, 
                          string version, Dictionary<string, object> attributes, List<ChangeLog> changeLogs)
     {
        Id = id;
        _name = name;
        _version = version;
        _attributes = attributes;
        _changeLogs = changeLogs;
     }

     // This is needed for internal use (serialisation/de-serialisation, and db storage).
     // Because this ctor is marked Obsolete, using it produces a warning, which can be escalated to an error.
     [Obsolete("This ctor is only for the deserialiser and the db layer. Use the other ctor with full params")]
     public RegisterItem ()
     {			
     }

     public string Version { get {return _version;} private set { _version = value;}  }
     private string _version {get; set;}

     public string Name { get{ return _name; } private set { _name = value;}  }
     private string _name { get; set; }

     [Ignore]
     public List<ChangeLog> ChangeLogs { get { return _changeLogs; } private set { _changeLogs = value;}  }
     private List<ChangeLog> _changeLogs { get; set; }

     [Ignore]
     public Dictionary<string, object> Attributes {get{return _attributes; } private set { _attributes = value;}}
     private Dictionary<string, object> _attributes { get; set;}
   }

So far so good; at this point we have implemented our data object with its basic versioning and its children objects. Now the question is how do we store this in the database and how do we serialise/deserialise the object to send it over the network? This is actually the second tricky part 🙂 because if you have worked with SQLite.Net before, you would know that it is designed to enable mobile developers to store simple objects with basic typed attributes in the database. For our scenario, we have complex objects with children objects and other complex attributes (Dictionary).

Storing in the SQLite database

Our database will store basic information about the data objects (name, id, version) along with a full copy of the object serialised to a basic type like string (or binary if you like). To make this smooth and simple for our consumers (developers who use this API), we added a property to the data object called AsJson. This property serialises and stores the full copy of the object when the object is stored to the database; when the object is constructed from its basic attributes, it populates the other properties (like children objects and other complex properties, i.e. the Dictionary). A simple implementation of this property could be something like this:


[JsonIgnore]
public string AsJson 
{
  get
  {
     var json = MySerialiser.ToJson(this);
     return json;
  } 
  set
  {
     var json = value;
     if (!string.IsNullOrEmpty(json))
     {
         var originalObject = MySerialiser.LoadFromJson<RegisterItem>(json);
         if (originalObject != null)
         {
            //We could use something like AutoMapper here
            _changeLogs = originalObject.ChangeLogs;
            _attributes = originalObject.Attributes;
         }
      }									
  }
}

As you can see, the property itself (AsJson) is ignored when serialising to JSON, because otherwise it would create a circular dependency and it would not work. Plus, our developers do not need to do anything special when storing to or pulling from the database; the AsJson property does the work and gets our items saved to and constructed from the database. Also, notice how we had the [Ignore] attribute on our complex children objects. This belongs to our ORM (SQLite.Net) and tells it that there is no need to store these objects in the database.

Messaging

A couple of months ago, I gave a talk at DDD Melbourne about messaging in peer-to-peer scenarios on mobile devices; you can find the slide deck here. For this exact project, I needed to be able to serialise/deserialise my data objects to send them over my P2P connections to the other party. This has been made much easier by the AsJson property that we discussed earlier. The only tricky part is that when serialising, you need to modify the default settings of your JSON serialiser, as it needs to be able to serialise/deserialise private properties of the data objects. Assuming that we use something like Newtonsoft.Json, our deserialisation will be something like this:

   var resolver = new PrivateSetterContractResolver();
   var settings = new JsonSerializerSettings{ ContractResolver = resolver };
   var obj = JsonConvert.DeserializeObject<T>(input, settings);
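
Note that PrivateSetterContractResolver is not built into Newtonsoft.Json; it is a small custom contract resolver (also available in community packages) that marks properties with private setters as writable. A minimal sketch of it could look like this:

   public class PrivateSetterContractResolver : DefaultContractResolver
   {
      protected override JsonProperty CreateProperty(MemberInfo member, MemberSerialization memberSerialization)
      {
         var property = base.CreateProperty(member, memberSerialization);
         if (!property.Writable)
         {
            // allow Json.NET to write to properties that only have a private setter
            var propertyInfo = member as PropertyInfo;
            if (propertyInfo != null)
               property.Writable = propertyInfo.GetSetMethod(true) != null;
         }
         return property;
      }
   }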

And that’s it. I hope you find this useful and have picked up a few ideas on how to handle storing and messaging of complex data objects on mobile. If you have questions, comments, or maybe a suggestion to do things in a different/better way, I would love to hear from you, so get in touch.

Sharing Sessions between HttpClient and WebViews on Windows Phones

Introduction

Before we dive into the details of the blog post, it would be helpful to give some context of what we are trying to achieve, agreed? 🙂 I have been working on a hybrid mobile application that requires displaying/containing a few web apps in a WebView control. In the background, some HTTP requests need to go through to collect data and do further processing. We need to maintain the same session across all of the web requests going through the mobile app. This means all web (HTTP) requests originated by the WebView, as well as our background (HttpClient) requests, need to share cookies, cache, etc. So how do we do that? This is what I will show you in this blog post.

System.Net.Http.HttpClient

HttpClient has become the go-to library for all things HTTP, especially with the support for HttpClient in PCLs (Portable Class Libraries); who can resist that? So my first thought when I considered this requirement was to use HttpClient with an HttpClientHandler, preserve the session cookies, and share them with the WebView. I started my initial googling and found that somebody had done exactly that; you can find it here. This gave me some more confidence that it is doable and worked for somebody, so this was the first approach I could take.

This first approach would mean using HttpClient (along with an HttpClientHandler) to hold cookies and share them with the WebView, roughly as in the sketch below. However, this would be error-prone because I would need to continuously monitor both cookie stores and update the other group of requests. Plus, sharing the data cache between the WebView and HttpClient would still be an issue that I was not sure how to address.
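For illustration, the System.Net.Http side of that first approach might look like this (a sketch; copying the captured cookies into the WebView would still have to be done by hand):

  var cookieContainer = new CookieContainer();
  var handler = new HttpClientHandler { CookieContainer = cookieContainer, UseCookies = true };

  using (var client = new System.Net.Http.HttpClient(handler))
  {
     var html = await client.GetStringAsync("your-url-address");

     // any session cookies set by the server now live in cookieContainer and
     // would need to be copied into the WebView's cookie store manually
  }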

Windows.Web.Http.HttpClient

Before going further, I thought I would look for an alternative, and I found Windows.Web.Http.HttpClient. This one seemed very similar to System.Net.Http.HttpClient, but the implementation is quite different, despite the near-exact match of the name :). I found this video (below) from the Microsoft //Build conference, and it talks in detail about this implementation of HttpClient, which is more geared towards Windows development, as the name indicates.

Apparently, the Windows implementation of HttpClient gives you the ability to customise all aspects of your HTTP requests. The video above lists the following five reasons why you should use Windows.Web.Http.HttpClient:

  1. Shared Cookies, Cache, and Credentials (I was thinking this is too good to be true 🙂 )
  2. Strongly typed headers => fewer bugs
  3. Access to Cookies and Shared Cookies
  4. Control over Cache and Shared Cache
  5. Inject your code modules into the processing pipeline => cleaner integration

When I read the first statement above, I really thought that this was too good to be true; exactly what I was looking for. So I decided to give it a go. As you can see, some of the features listed for this HttpClient (the Windows implementation) are similar to what we have in the System.Net world, but it gives us extra capabilities.

HttpClientHandlers vs HttpBaseProtocolFilter

It is worth mentioning that the Windows.Web library does not have the HttpClientHandlers that we are familiar with in System.Net; instead, it gives you the ability to do more with HttpBaseProtocolFilter, and this is the key point. HttpBaseProtocolFilter enables us developers to customise/manipulate HTTP requests (headers, cookies, cache, etc.) and the changes will be applied across the board in your application. This applies whether you are making an HTTP request programmatically using HttpClient or via the user interface (using a WebView, for instance).

Code Time

  // creating the filter
  var myFilter = new HttpBaseProtocolFilter();
  myFilter.AllowAutoRedirect = true;
  myFilter.CacheControl.ReadBehavior = HttpCacheReadBehavior.Default;
  myFilter.CacheControl.WriteBehavior = HttpCacheWriteBehavior.Default;

  // get a reference to the cookieManager (this applies to all requests)
  var cookieManager = myFilter.CookieManager;

  // make the httpRequest
  using (var client = new HttpClient(myFilter)) // pass the filter so it is applied to this client's requests
  {
     HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Get, new Uri("your-url-address")); 
     
     // add any request-specific headers here
     // more code been omitted

     var result = await client.SendRequestAsync(request);
     result.EnsureSuccessStatusCode();

     var content = await result.Content.ReadAsStringAsync();

     // now we can do whatever we need with the html content we got here 🙂
     // Debug.WriteLine(content);
  }

  // assuming that the previous request created a session (set cookies, cached some data, etc)
  // subsequent requests in the webview will share this data
  myWebView.Navigate(new Uri("your-url-address"));
  

Hopefully this short code snippet gives you a good idea of what you can do with the Windows implementation of HttpClient.

Other Apps?

One might ask how this will impact other apps, and the answer is that it will not. As you will see in the video (if you watch it :)), the Windows.Web library was designed to work across all requests within one app. Therefore, you do not need to be concerned about impacting other apps or leaking your data to other external requests.

Conclusions

Someone wise once said, “with great power comes great responsibility”. This should be remembered when using HttpBaseProtocolFilter in your HTTP requests, as it can impact all your subsequent requests. I hope you found this useful and would love to hear your comments and feedback.

References:

  • https://channel9.msdn.com/Events/Build/2013/4-092
  • https://social.msdn.microsoft.com/Forums/windowsapps/en-US/6aa75d2f-05bd-4e8d-a435-0aa3407b73e6/set-cookies-to-webview-control?forum=winappswithcsharp
  • http://blog.rajenki.com/2015/01/winrt-shared-cookies-between-webview-and-httpclient/
  • http://blogs.msdn.com/b/wsdevsol/archive/2012/10/18/nine-things-you-need-to-know-about-webview.aspx#AN7