
Earlier this week the Windows Azure platform was named Best Cloud Service at the Cloud Computing World Forum in London. Now in its third year, the Cloud Computing World Series Awards celebrate outstanding achievements in the IT market.  This year’s winners were selected by an independent panel of industry experts.

“It’s fantastic for us to see this type of recognition for the Windows Azure platform. We’re seeing companies creating business solutions in record times, reinforcing the new possibilities created by the cloud,” said Michael Newberry, Windows Azure lead, Microsoft UK.

Click here to read the press release about this award.

Source: Windows Azure Blog



The April update (v2.9) of the Windows Azure Platform Training Kit is available now, and you can get it here.

"The Windows Azure Platform Training Kit includes a comprehensive set of technical content including hands-on labs, presentations, and demos that are designed to help you learn how to use the Windows Azure platform, including: Windows Azure, SQL Azure and the Windows Azure AppFabric.   

The April 2011 update of the Windows Azure Platform Training Kit has been updated for Windows Azure SDK 1.4, Visual Studio 2010 SP1, includes three new HOLs, and updated HOLs and demos for the new Windows Azure AppFabric portal.   

Some of the specific changes in the April update of the training kit include:

  • [New] Authenticating Users in a Windows Phone 7 App via ACS, OData Services and Windows Azure lab
  • [New] Windows Azure Traffic Manager lab
  • [New] Introduction to SQL Azure Reporting Services lab
  • [Updated] Connecting Apps with Windows Azure Connect lab updated for Connect refresh
  • [Updated] Windows Azure CDN lab updated for CDN refresh
  • [Updated] Introduction to the AppFabric ACS 2.0 lab updated to the production release of ACS 2.0
  • [Updated] Use ACS to Federate with Multiple Business Identity Providers lab updated to the production release of ACS 2.0
  • [Updated] Introduction to Service Bus lab updated to latest AppFabric portal experience
  • [Updated] Eventing on the Service Bus lab updated to latest AppFabric portal experience
  • [Updated] Service Remoting lab updated to latest AppFabric portal experience
  • [Updated] Rafiki demo updated to latest AppFabric portal experience
  • [Updated] Service Bus demos updated to latest AppFabric portal "

Enjoy!

PK


Windows Azure SDK 1.4 was released yesterday with no breaking changes and a lot of stability fixes; you can get the bits from here. Besides an important fix (especially relevant if you use a source control system that leaves web.config read-only, since the Windows Azure Tools still need write access to update the machine key), there are new features for the CDN (quoting from the release announcement):

Windows Azure CDN for Hosted Services
Developers can use the Windows Azure Web and VM roles as “origin” for objects to be delivered at scale via the Windows Azure Content Delivery Network. Static content in your website can be automatically edge-cached at locations throughout the United States, Europe, Asia, Australia and South America to provide maximum bandwidth and lower latency delivery of website content to users.

Serve secure content from the Windows Azure CDN
A new checkbox option in the Windows Azure management portal enables delivery of secure content via HTTPS through any existing Windows Azure CDN account.

Get the bits and enjoy; I've already updated :)

PK.

 


I posted an article at CodeProject explaining how you can use Windows Azure AppFabric Cache (CTP) in your applications.

You can find the article here -> http://www.codeproject.com/KB/azure/WA-AppFabric-cache.aspx

Please let me know whether you liked it or not, and of course any comments are more than welcome!

Thank you,

PK.


A lot of interesting things have been going on lately on the Windows Azure MVP list, and I'll try to pick the best ones I can share and turn them into posts.

During an Azure bootcamp, a fellow Windows Azure MVP asked a very interesting question: "What happens if someone is updating a BLOB and a request comes in to serve that BLOB?"

The answer came from Steve Marx pretty quickly and I'm just quoting his email:

"The bottom line is that a client should never receive corrupt data due to changing content.  This is true both from blob storage directly and from the CDN.
 
The way this works is:
  • Changes to block blobs (put blob, put block list) are atomic, in that there’s never a blob that has only partial new content.
  • Reading a blob all at once is atomic, in that we don’t respond with data that’s a mix of new and old content.
  • When reading a blob with range requests, each request is atomic, but you could always end up with corrupt data if you request different ranges at different times and stitch them together.  Using ETags (or If-Unmodified-Since) should protect you from this.  (Requests after the content changed would fail with “condition not met,” and you’d know to start over.)
 
Only the last point is particularly relevant for the CDN, and it reads from blob storage and sends to clients in ways that obey the same HTTP semantics (so ETags and If-Unmodified-Since work).
 
For a client to end up with corrupt data, it would have to be behaving badly… i.e., requesting data in chunks but not using HTTP headers to guarantee it’s still reading the same blob.  I think this would be a rare situation.  (Browsers, media players, etc. should all do this properly.)
 
Of course, updates to a blob don’t mean the content is immediately changed in the CDN, so it’s certainly possible to get old data due to caching.  It should just never be corrupt data due to mixing old and new content."

So, as you can see from Steve's reply, and unlike some other vendors, there is no chance of getting corrupt data, only stale data.
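To illustrate the conditional range-read pattern Steve describes, here is a minimal sketch that reads a byte range of a blob and fails fast if the blob changed in the meantime. It assumes a publicly readable blob URL (no request signing) and that you captured the ETag from an earlier response via response.Headers[HttpResponseHeader.ETag]; the names are hypothetical.

using System.IO;
using System.Net;

class BlobRangeReadSample
{
    // Reads bytes [from, to] of a blob, but only if it still matches the given ETag.
    static byte[] ReadRange(string blobUrl, string etag, int from, int to)
    {
        var request = (HttpWebRequest)WebRequest.Create(blobUrl);
        request.AddRange(from, to);           // byte-range request
        request.Headers["If-Match"] = etag;   // only succeed if the blob is unchanged

        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
            using (var stream = response.GetResponseStream())
            using (var buffer = new MemoryStream())
            {
                var chunk = new byte[8192];
                int read;
                while ((read = stream.Read(chunk, 0, chunk.Length)) > 0)
                    buffer.Write(chunk, 0, read);
                return buffer.ToArray();
            }
        }
        catch (WebException ex)
        {
            var response = ex.Response as HttpWebResponse;
            if (response != null && response.StatusCode == HttpStatusCode.PreconditionFailed)
                return null;   // the blob changed between ranges: grab the new ETag and start over
            throw;
        }
    }
}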

PK.


Amazon announced the AWS Free Usage Tier (http://aws.amazon.com/free/) last week, which starts on November 1st. I know some people are excited about this announcement and so am I, because I believe that competition between cloud providers always brings better service for the customer. But in Amazon's case it's more of a marketing trick than a real benefit, and I'll explain why in this post. Let me remind you at this point that this is strictly a personal opinion. Let me also say that I have experience with AWS too.

Certainly, having something free to start with is always nice, but what exactly is free and how does it compare to the Windows Azure platform? First of all, Windows Azure has a similar free startup offer, called the Introductory Special, which gives you free compute hours, storage space and transactions, a SQL Azure Web database, AppFabric connections and Access Control (ACS) transactions, and free traffic (inbound and outbound), all up to some limit of course. Then there is the BizSpark program, which also gives you a very generous package of Windows Azure platform benefits to start developing with, and of course let's not forget the MSDN subscription Windows Azure offer, which is even more buffed up than the others.

Ok, I promised the Amazon part, so here it is. The AWS billing model is different from Windows Azure's: it's very detailed, and a lot of things are broken into smaller pieces, each one billed in its own way. Some facts:

  • Load balancing of EC2 instances is not free: not only do you pay for the load balancer by the hour, you're also charged for the traffic (GB) that goes through it. Windows Azure load balancing is just there and it just works, and of course you don't pay extra hours or traffic just for that.
  • On EBS you're charged for every read and write you do (I/O) and for the amount of space you use; snapshot size is counted separately, not in your total, and you're also charged per snapshot operation (Get or Put). On Windows Azure Storage you pay for two things: transactions and the amount of space you consume. Also, for snapshots only your delta (the differences) is counted against your total, not the whole snapshot.
  • SimpleDB is charged per machine hour* consumed and per GB of storage. With Windows Azure Tables you only pay for your storage and transactions. You might say I should compare this to S3, but I don't agree: S3 is not as close to Windows Azure Tables as SimpleDB is. What is even more disturbing about S3 is that its durability guarantee of 99.99% actually means you can lose (!!) 0.01% of your data.
  • There is no RDS instance (based on MySQL) included in the free tier. With the Introductory Special you get a SQL Azure Web database (1 GB) for 3 months, or for as long as you have a valid subscription if you're using the MSDN Windows Azure offer, where you actually get three databases.

For me, the biggest difference is the development experience. Windows Azure offers a precise local emulation of the cloud environment on your development machine, called the DevFabric, which ships with the Windows Azure Tools for VS2008/VS2010. All you have to do is press F5 on your cloud project and you get the local emulation on your machine to test, debug and prepare for deployment. Amazon doesn't offer this kind of development environment. There is integration with Eclipse and other IDEs, but every time you hit the Debug button you're actually hitting real web services with your credentials, consuming from your free tier, and as soon as you've consumed it you start paying to develop and debug. The free tier is more like a "development tier" to me. Windows Azure offers you both: the development experience you expect, without any cost, on your local machine with the DevFabric, and a development experience in the real cloud environment where you can deploy and test your application, also without any cost unless you consume your free allowance.

Some may say you can't compare AWS to Windows Azure because they are not the same: AWS is mostly IaaS (Infrastructure as a Service) and Windows Azure is PaaS (Platform as a Service), and I couldn't agree more. But what I'm comparing here are features that already exist on both services. I'm not comparing EC2 instance sizes to Windows Azure instance sizes; I'm comparing the load balancing, SimpleDB, etc.

* A machine hour is a different concept from a compute hour and is beyond the scope of this post.

Thank you for reading and please feel free to comment.


PK.



Recently, I've been looking for a way to persist the state of an idle workflow in WF4. There is a way to use SQL Azure to achieve this, after modifying the persistence scripts because they contain unsupported T-SQL commands, but it's total overkill to use it just to persist WF state if you're not using the RDBMS for anything else.

I decided to modify the FilePersistence.cs from the Custom Persistence Service sample in the WF4 Samples Library and make it work with Windows Azure Blob storage. I've created two new methods to serialize and deserialize information to/from Blob storage.

Here is some code:

private void SerializeToAzureStorage(byte[] workflowBytes, Guid id)
{
    var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
    var container = account.CreateCloudBlobClient().GetContainerReference("workflow-persistence");

    var blob = container.GetBlobReference(id.ToString());
    blob.Properties.ContentType = "application/octet-stream";

    // Wrap the serialized workflow bytes in a stream and upload them.
    using (var stream = new MemoryStream(workflowBytes))
    {
        blob.UploadFromStream(stream);
    }
}

private byte[] DeserializeFromAzureStorage(Guid id)
{
    var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
    var container = account.CreateCloudBlobClient().GetContainerReference("workflow-persistence");

    var blob = container.GetBlobReference(id.ToString());
    return blob.DownloadByteArray();
}

Just make sure you've created the "workflow-persistence" blob container before using these methods; note that container names may only contain lowercase letters, numbers and hyphens.
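One more note, in case you copy these methods into a fresh project: with this version of the StorageClient library, CloudStorageAccount.FromConfigurationSetting only works after a configuration setting publisher has been registered once at startup, typically like this:

CloudStorageAccount.SetConfigurationSettingPublisher((configName, publishConfigurationValue) =>
    publishConfigurationValue(RoleEnvironment.GetConfigurationSettingValue(configName)));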

PK.



Once again, Microsoft proved that it values its customers, whether big enterprises or small startups. We're a small privately held business and I personally have a major role in it, as I'm one of the founders. Recently, I've been giving some pretty nice presentations and a bunch of sessions for Microsoft Hellas about Windows Azure and cloud computing in general.

I was using the CTP account(s) I've had since PDC 08, and I had a lot of services running there from time to time, all for demo purposes. But with the second commercial launch wave Greece was included, so I had to upgrade my subscription and start paying for it. I was OK with that, because the MSDN Premium subscription includes 750 compute hours per month, SQL Azure databases and other stuff for free. I went through the upgrade process from CTP to paid, everything went smoothly, and there I was, waiting for my CTP account to switch to read-only mode and eventually “fade away”. During that process I made a small mistake: I miscalculated my running instances. I actually missed some. That turned out to be a mistake that would cost me some serious money for show-case/marketing/demo projects running on Windows Azure.

About two weeks ago, I had an epiphany during the day: “Oh, crap... did I turn that project off? How many instances do I have running?” I logged on to the billing portal and, sadly for me, I had been charged for something like 4,500 hours because of the forgotten instances and my miscalculation. You see, I had done a demo about switching between instance sizes and some of those instances were running as big VMs, which is four (4) times the price per hour.

It was clearly my mistake and I had to pay for it (literally!). But then I tweeted my bad luck to help others avoid the same mistake (the very thing I had been warning my clients about all this time), some people from Microsoft got interested in my situation, I explained what happened, and we ended up with a pretty good deal just 3 days after I tweeted. But that was an exception, so certainly DON'T count on it.

The bottom line is: be careful and plan correctly. Mistakes do happen, but the more careful we are, the rarer they will be.

* I want to publicly say thank you to anyone who was involved in this and helped me sort things out so quickly.

PK.



Paspartu (passe-partout) is French for “one size fits all”. Recently I've been coming across posts explaining and “promoting” the idea of spawning threads inside a worker role, each of them with its own unique piece of work to do. All of these posts share and describe the same idea.

The idea

You have some work to do and you want to do it in the most efficient way, without underutilized resources, which is one of the benefits of cloud computing anyway.

The implementation

You have a worker process (a Worker Role on Windows Azure) which processes some data. Certainly that's a working implementation, but it's not a best practice: most of the time your instance will be underutilized, unless you're doing CPU- and memory-intensive work and have a continuous flow of data to process.

In another implementation, we created a master-slave pattern: a master distributes work to slave worker roles, the slaves pick up their work, do their thing, return the result and start over again. Still, in some cases that's not the best idea either; the same cons as before apply: underutilized resources and a high risk of failure. If the master dies and the system isn't properly designed, your system dies and you can't process any data.

So another approach appeared: inside a worker role, spawn multiple threads, each running its own process or method, doing its work and returning a result. Underutilization is minimized, the thread pool does all the hard work for us, and as soon as .NET 4.0 is supported on Windows Azure, parallelization becomes easy and, allow me to say, mandatory (a minimal sketch of the pattern follows below). But what happens if the worker instance dies or restarts? Yes, your guess is correct: you lose all the threads, and all the processing done up to that moment is lost unless you persist it somehow. If you had used multiple instances of your worker role to imitate that behavior, that wouldn't happen; you would only lose the work of the instance that died.
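For illustration, here is a minimal sketch of that thread-per-task pattern, assuming a worker role whose Run() method fans work out to a few long-running threads; the thread count and the DoWork body are hypothetical.

using System.Collections.Generic;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        const int workerCount = 4;   // hypothetical: pick whatever fits your workload
        var threads = new List<Thread>();

        for (int i = 0; i < workerCount; i++)
        {
            var thread = new Thread(DoWork) { IsBackground = true };
            thread.Start();
            threads.Add(thread);
        }

        // Keep the role alive. If this instance is recycled, every thread
        // (and any in-flight work that was not persisted) goes with it.
        foreach (var thread in threads)
            thread.Join();
    }

    private void DoWork()
    {
        while (true)
        {
            // Pull a message, process it, and persist the result somewhere
            // durable (queue/table/blob) before considering it done.
            Thread.Sleep(1000);
        }
    }
}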

As Eugenio Pace says, “You have to be prepared to fail”, and he's right. At any moment your instance can die without any notice, and you have to be prepared to deal with it.

Oh, boy.

So really, there is no single solution or best practice; for me, it's best guidance. Depending on your scenario, one of the solutions above, or even a new one, may fit you better than it fits others. Every project is unique and has to be treated as such. Try to think outside the box and remember that this is deep water for everyone; it's just that some of us swim better.

PK.


 

Yes, it is.

Table storage has multiple replicas and guarantees uptime and availability, but for business continuity reasons you also have to be protected from failures in your own application. Business logic errors can harm the integrity of your data, and that damage will be replicated to all Windows Azure Storage replicas too. You have to be protected from those scenarios, so having a backup plan is necessary.

There is a really nice project available on Codeplex for this purpose: http://tablestoragebackup.codeplex.com/

PK.



Just three days ago, on Feb 13th, SQL Azure got an update. Long-requested features like upgrading and downgrading between the Web and Business editions are finally implemented; switching between editions takes a single command. Some DMVs were also introduced to match on-premises SQL Server, and the idle session timeout was increased.

In detail*:

Troubleshooting and Supportability DMVs

Dynamic Management Views (DMVs) return state information that can be used to monitor the health of a database, diagnose problems, and tune performance. These views are similar to the ones that already exist in the on-premises edition of SQL Server.

The DMVs we have added are as follows:

  • sys.dm_exec_connections – This view returns information about the connections established to your database.
  • sys.dm_exec_requests – This view returns information about each request that executes within your database.
  • sys.dm_exec_sessions – This view shows information about all active user connections and internal tasks.
  • sys.dm_tran_database_transactions – This view returns information about transactions at the database level.
  • sys.dm_tran_active_transactions – This view returns information about transactions for your current logical database.
  • sys.dm_db_partition_stats – This view returns page and row-count information for every partition in the current database.

Ability to move between editions

One of the most requested features was the ability to move up and down between a Web or Business edition database.   This provides you greater flexibility and if you approach the upper limits of the Web edition database, you can easily upgrade with a single command. You can also downgrade if your database is below the allowed size limit.

You can now do that using the following syntax:

ALTER DATABASE database_name
{
    MODIFY (MAXSIZE = {1 | 10} GB)
}

Idle session timeouts

We have increased the idle connection timeout from 5 to 30 minutes. This will improve your experience while using connection pooling and other interactive tools.

Long running transactions

Based on customer feedback, we have improved our algorithm for terminating long running transactions. These changes will substantially increase the quality of service and allow you to import and export much larger amounts of data without having to resort to breaking your data down into chunks.

* Source: MSDN Forums announcement
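As a quick illustration, here is a minimal C#/ADO.NET sketch that queries one of the new DMVs; the connection string is a placeholder for your own SQL Azure server and database.

using System;
using System.Data.SqlClient;

class DmvSample
{
    static void Main()
    {
        // Placeholder: replace server, database and credentials with your own.
        const string connectionString =
            "Server=tcp:yourserver.database.windows.net;Database=yourdb;" +
            "User ID=youruser@yourserver;Password=yourpassword;Encrypt=True;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT session_id, login_name, status FROM sys.dm_exec_sessions", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}\t{1}\t{2}",
                        reader["session_id"], reader["login_name"], reader["status"]);
                }
            }
        }
    }
}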

Provide your feedback on http://www.mygreatsqlazureidea.com. There are some great features requested like the ability to automatically store a backup on BLOB storage.

PK.



Windows Azure configuration files support an osVersion attribute where you can set which version of the Windows Azure OS should run your service.

This feature doesn't make much sense at the moment, as there is only one version (WA-GUEST-OS-1.0_200912-01), but in the future it's going to be very handy.

You can learn more about it here.

PK.



Recently at the MSDN Forums people were asking how they can detect whether their web application is running in the cloud or locally (on the development fabric/storage). Besides the obvious part, that code inside a Web Role or Worker Role Start() method only exists in a cloud template, what if you want to make that check somewhere else, for example inside a Page_Load method or inside a library (DLL)?

If you're trying to detect it at the “UI” level, say in Page_Load, you can simply check your headers: Request.Headers["Host"] will do the trick. If it's “localhost” (or whatever you configured it to be), you know you're running locally.

But how about a Library? Are there any alternatives?

Well, it's not the most bulletproof method, but it has served me well so far and I don't think it's going to stop working, as it relies on a fundamental architectural element of Windows Azure. There are specific Environment properties that raise a SecurityException because you're not allowed to read them; one of them is MachineName. So, if Environment.MachineName raises an exception, you're probably running in the cloud. As I said, it's not bulletproof, because if an IT administrator applies a CAS policy that restricts specific properties it can raise an exception locally too, but you get my point: a combination of tricks can give you the desired result.
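A minimal sketch of that trick, as a hypothetical helper (it assumes the default environment where reading MachineName is not permitted to the role's code):

using System;
using System.Security;

public static class EnvironmentDetector
{
    // Returns true if reading Environment.MachineName is blocked, which,
    // per the trick described above, suggests we are running in the cloud.
    public static bool IsProbablyRunningOnCloud()
    {
        try
        {
            var machineName = Environment.MachineName;
            return false;   // readable: most likely the local development fabric
        }
        catch (SecurityException)
        {
            return true;    // blocked: most likely the Windows Azure cloud
        }
    }
}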

PK.



You’re trying to create a queue on Windows Azure and you’re getting a “400 Bad Request” as an inner exception. Well, there are two possible scenarios:

1) The name of the queue is not valid. It has to be a valid DNS Name to be accepted by the service.

2) The service is down or something transient went wrong and you just have to retry, so implementing retry logic in your service initialization is not a bad idea; I might even say it's mandatory. A minimal sketch follows.
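Here is a minimal retry sketch for queue creation using the StorageClient library; the retry count and delay are arbitrary, and retrying obviously won't help if the name itself is invalid (scenario 1).

using System;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class QueueInitializer
{
    public static CloudQueue CreateQueueWithRetries(string queueName)
    {
        var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
        var queue = account.CreateCloudQueueClient().GetQueueReference(queueName);

        const int maxAttempts = 3;                     // arbitrary
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            try
            {
                queue.CreateIfNotExist();
                return queue;
            }
            catch (StorageClientException)
            {
                if (attempt == maxAttempts)
                    throw;                             // give up: probably not a transient error
                Thread.Sleep(TimeSpan.FromSeconds(2)); // transient failure: wait and try again
            }
        }
        return queue;                                  // not reached; keeps the compiler happy
    }
}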

The naming rules

  • A queue name must start with a letter or number, and may contain only letters, numbers, and the dash (-) character.
  • The first and last letters in the queue name must be alphanumeric. The dash (-) character may not be the first or last letter.
  • All letters in a queue name must be lowercase.
  • A queue name must be from 3 through 63 characters long.

    More on that here.

    Thank you,
    PK.



    The Windows Azure training kit is the best starting point if you want to get involved in Azure development. It helps you understand the basics of Windows Azure, its components and how the service fits together.

    December’s release includes some updates and samples from PDC 09 so don’t miss it.

    You can download the kit from here –> http://www.microsoft.com/downloads/details.aspx?FamilyID=413E88F8-5966-4A83-B309-53B7B77EDF78&displaylang=en

     

    PK.



    One of the latest features introduced in SQL Azure is the ability to apply firewall settings to your database and allow only specific IP ranges to connect to it. This can be done through the SQL Azure portal or through code using stored procedures.

    If you want to take a look at which rules are active on your SQL Azure database, you can use:

    select * from sys.firewall_rules

    That will give you a view of your firewall rules.

    If you want to add a new firewall rule, you can use the "sp_set_firewall_rule". The syntax is "sp_set_firewall_rule <firewall_rule_name> <ip range start> <ip range end>". For example:

    exec sp_set_firewall_rule N'My setting','192.168.0.15','192.168.0.30'


    If you want to delete that rule, you can use:

    exec sp_delete_firewall_rule N'My setting'
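    Since the intro mentioned doing this through code, here is a minimal C#/ADO.NET sketch that calls the same stored procedure; the connection string is a placeholder and must point to your server's master database.

    using System.Data;
    using System.Data.SqlClient;

    class FirewallRuleSample
    {
        static void AddRule(string masterConnectionString)
        {
            using (var connection = new SqlConnection(masterConnectionString))
            using (var command = new SqlCommand("sp_set_firewall_rule", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.Parameters.AddWithValue("@name", "My setting");
                command.Parameters.AddWithValue("@start_ip_address", "192.168.0.15");
                command.Parameters.AddWithValue("@end_ip_address", "192.168.0.30");

                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }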


    PK.



    When you have your service running on Windows Azure, the last thing you want is to check your monitoring data every now and then and decide manually whether specific actions are needed. You want the service to be, to some degree, self-managing and to decide on its own what actions should take place to satisfy a monitoring alert. In this post I'm not going to use the Service Management API to increase or decrease the number of instances; instead I'm just going to log a warning. In a future post I'm going to use the API in combination with this logging, so consider this the first post of a series.

    The most common scenario is dynamically increasing or decreasing the number of VM instances so you can process more messages as your queues fill up. You have to create your own “logic”, a decision mechanism if you like, which will execute some steps and bring the service to a state that satisfies your condition, because there is no out-of-the-box solution in Windows Azure. A number of companies have announced that their monitoring/health software is going to support Windows Azure; you can find more information by searching the web or visiting the Partners section of the Windows Azure portal.

    In the code below I’m monitoring the messages inside a Queue at every role cycle:

    CloudQueue cloudQueue = cloudQueueClient.GetQueueReference("calculatep");

    cloudQueue.CreateIfNotExist();
    cloudQueue.FetchAttributes();

    /* Call this method to calculate your workload */
    CalculateWorkLoad(cloudQueue.ApproximateMessageCount);

    and this is the code inside CalculateWorkLoad:

    public void CalculateWorkLoad(int? messages)
    {
        /* If there are messages, find the average number of messages
           available every X seconds, where X is the thread sleep time
           (in my case every 5 seconds). */
        if (messages != null)
            average = messages.Value / (threadsleep / 1000);

        DecideIncDecOfInstances(average);
    }

    Note that if you want accurate values for the queue's properties, you have to call FetchAttributes() first.

    There is nothing fancy in my code; I'm just calculating an average workload (the number of messages in my queue) every 5 seconds and passing that value to DecideIncDecOfInstances(). Here is the code:

    public void DecideIncDecOfInstances(int average)
    {
        int instances = 2;

        /* If my average is above 1000 */
        if (average > 1000)
        {
            OneForEveryThousand(average, ref instances);
            WarnWeNeedMoreVM(instances);
        }
    }

    OneForEveryThousand increases the default number of instances, which is two (2), by one (1) for every thousand (1000) messages in the queue's average count.

    This is the final part of my code, WarnWeNeedMoreVM, which logs our need for more (or fewer) VMs.

    public void WarnWeNeedMoreVM(int instances)
    {
        if (instances == 2) return;

        Trace.WriteLine(String.Format("WARNING: Instances Count should be {0} on this {1} Role!",
                        instances, RoleEnvironment.CurrentRoleInstance.Role.Name), "Information");
    }

    In my next post in this series, I'm going to use the newly released Service Management API to upload a new configuration file which increases or decreases the number of VM instances in my role(s) dynamically. Stay tuned!

    PK.



    In general, there are two kinds of updates you'll mainly perform on Windows Azure. One is changing your application's logic (the so-called business logic), e.g. the way you handle/read queues, how you process data, or even protocol updates; the other is schema updates/changes. I'm not referring to SQL Azure schema changes, which are a different scenario and approach, but to Table storage schema changes, and to be more precise only for specific entity types, because, as you already know, Table storage is schema-less. As with in-place upgrades, the same logic applies here too: introduce a hybrid version which handles both the new and the old version of your entities (including the newly introduced properties), and then proceed to your “final” version which handles only the new version of your entities (and properties). It's a very easy technique, and I'm explaining how to add new properties and, of course, how to remove them, although that's a less likely scenario.

    During my presentation at Microsoft DevDays “Make Web not War”, I created an example using a weather service and an entity called WeatherEntry, so let's use it. My class looks like this:

    [DataServiceKey("PartitionKey", "RowKey")]
    public class WeatherEntry : TableServiceEntity
    {
        public WeatherEntry()
        {
            PartitionKey = "athgr";
            RowKey = string.Format("{0:10}_{1}", DateTime.MaxValue.Ticks - DateTime.Now.Ticks, Guid.NewGuid());
        }
        public DateTime TimeOfCapture { get; set; }
        public string Temperature { get; set; }
    }

    There is nothing special about this class. It has two custom properties, TimeOfCapture and Temperature, and I'm going to make a small change and add “SchemaVersion”, which is needed to achieve the functionality I want. When I want to create a new entry, all I do is instantiate a WeatherEntry, set the values and use a helper method called AddEntry to persist my changes.

    public void AddEntry(string temperature, DateTime timeofc)
    {
        this.AddObject("WeatherData", new WeatherEntry { TimeOfCapture = timeofc, Temperature = temperature, SchemaVersion = "1.0" });
        this.SaveChanges();
    }

    I'm using TableServiceContext from the newly released StorageClient library; methods like UpdateObject, DeleteObject, AddObject etc. live on the data service context on which my AddEntry helper method relies. At the moment my table schema looks like this:

    [screenshot: table schema before the change]

    It's pretty obvious there is no special handling when saving my entities, but this is about to change in the hybrid version.

    The hybrid

    I made some changes to my class and added a new property. It holds the temperature sample area, in my case Spata, where Athens International Airport is located.

    My class looks like this now:

    [DataServiceKey("PartitionKey", "RowKey")]
    public class WeatherEntry : TableServiceEntity
    {
        public WeatherEntry()
        {
            PartitionKey = "athgr";
            RowKey = string.Format("{0:10}_{1}", DateTime.MaxValue.Ticks - DateTime.Now.Ticks, Guid.NewGuid());
        }
        public DateTime TimeOfCapture { get; set; }
        public string Temperature { get; set; }
        public string SampleArea { get; set; }
        public string SchemaVersion { get; set; }
    }

    So, this hybrid client has to handle both entities from version 1 and entities from version 2, because my schema is already on version 2. How do you do that? The main idea is that you retrieve an entity from Table storage and check whether SampleArea and SchemaVersion have a value; if they don't, you put in a default value and save the entity. In my case the schema version number has to be 1.5, as this is the default schema number for this hybrid stage. One key point: before you upgrade your client to this hybrid, you roll out an update enabling the “IgnoreMissingProperties” flag on your TableServiceContext. If IgnoreMissingProperties is true, a version 1 client that reads entities which are on version 2 and have the new properties WON'T raise an exception; it will just ignore them.

    var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
    var context = new WeatherServiceContext(account.TableEndpoint.ToString(), account.Credentials);

    /* Ignore missing properties on my entities */
    context.IgnoreMissingProperties = true;

    Remember, you have to roll out that update BEFORE you upgrade to this hybrid.

    Whenever I update an entity in Table storage, I check its schema version, and if it's not “1.5” I update it and put a default value in SampleArea:

    public void UpdateEntry(WeatherEntry wEntry)
    {
        if (wEntry.SchemaVersion.Equals("1.0"))
        {
            /* If schema version is 1.0, update it to 1.5
             * and set a default value on SampleArea */
            wEntry.SchemaVersion = "1.5";
            wEntry.SampleArea = "Spata";
        }
        /* Put some try/catch here to
         * catch concurrency exceptions */
        this.UpdateObject(wEntry);
        this.SaveChanges();
    }

    My schema now looks like this. Notice that both versions of my entities co-exist and are handled just fine by my application.

    [screenshot: table schema after the change]

    Upgrading to version 2.0

    Upgrading to version 2.0 is now easy. All you have to do is change the default schema number to 2.0 when you create a new entity and, of course, update your “UpdateEntry” helper method to check whether the version is 1.5 and bump it to 2.0.

    this.AddObject("WeatherData", new WeatherEntry { TimeOfCapture = timeofc, Temperature = temperature, SchemaVersion = "2.0" });

    and

    public void UpdateEntry(WeatherEntry wEntry)
    {
        if (wEntry.SchemaVersion.Equals("1.5"))
        {
            /* If the schema is version 1.5 it already has a default
             * value; all we have to do is update the schema version so
             * our system won't ignore that value */
            wEntry.SchemaVersion = "2.0";
        }
        /* Put some try/catch here to
         * catch concurrency exceptions */
        this.UpdateObject(wEntry);
        this.SaveChanges();
    }

    Whenever you retrieve an entity from Table storage, you have to check whether it's on version 2.0. If it is, you can safely use its SampleArea value, which is no longer the default; that's because the schema version only changes when you actually call “UpdateEntry”, which means you had the chance to change SampleArea to a non-default value. But if the entity is on version 1.5, you have to ignore the value or update it to a new, correct one.

    If you do want to use the default value anyway, you can create a temporary worker role which will scan the whole table and update all of your schema version numbers to 2.0.

    How about when you remove properties

    That's a really easy modification. If you remove a property, you can use a SaveChangesOptions value called ReplaceOnUpdate during SaveChanges(), which will overwrite your entity with the new schema. Don't forget to update your schema version number to something new, and put some checks into your application to avoid failures when trying to read properties that no longer exist in the newer schema version.

    this.SaveChanges(SaveChangesOptions.ReplaceOnUpdate);


    That's all for today! :)

    P.K



    In a previous post I described what a VIP swap is and how you can use it as an updating method to avoid service disruption. That particular method doesn't apply to every possible scenario; most of the time, if not always, during protocol updates or schema changes you'll need to upgrade your service while it's still running, chunk by chunk, without any downtime or disruption. By in-place, I mean upgrades during which both versions (old and new) are running side by side. To better understand the process below, you should read my “Upgrade domains” post, which contains a detailed description of what upgrade domains are, how they affect your application, how you can configure the number of domains, etc.

    To avoid service disruption and outages, Windows Azure upgrades your application domain by domain (upgrade domain, that is). That results in a state where your Upgrade Domain 0 (UD0) is running a newer version of your client/service/what-have-you while UD1, UD2, etc. are still running an older version. The best approach is a two-step upgrade.

    Let's call our old protocol version V1 and our new version V2. At this point you should consider introducing a new client version, call it 1.5, which is a hybrid: it understands the protocols used by both versions, always uses protocol V1 by default, and only responds with protocol V2 if the request is on V2. You can now start pushing your upgrades either through the Service Management API or through the Windows Azure Developer portal to completely automate the procedure. By the end of this process you'll have achieved a seamless upgrade of your service without any disruption, and all of your clients will have been upgraded to this hybrid. As soon as the first step is done and all of your domains are running version 1.5, you can proceed to step two (2).

    In the second step you repeat the same process, but this time your version 2 clients use protocol V2 by default. Remember, your 1.5 clients DO understand protocol V2 and respond to it properly when called with it. To put it simply, this time you're deploying version 2 of your client, which uses version 2 of your protocol only; the old legacy code for version 1 is removed completely. As your upgrade domains complete the second step, all your roles end up using version 2 of your protocol, again without any service disruption or downtime. A small sketch of the hybrid idea follows.
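    As a minimal sketch of the hybrid (1.5) behaviour described above, assuming a request type that carries the protocol version it speaks (the type and member names are hypothetical):

    public enum ProtocolVersion { V1, V2 }

    public class Request
    {
        // Hypothetical: the caller states which protocol version it speaks.
        public ProtocolVersion Version { get; set; }
        public string Payload { get; set; }
    }

    public class HybridHandler
    {
        public string Handle(Request request)
        {
            // The 1.5 hybrid understands both protocols but answers in V1 by default;
            // it only answers in V2 when the caller explicitly asked for V2.
            if (request.Version == ProtocolVersion.V2)
            {
                return HandleV2(request.Payload);
            }
            return HandleV1(request.Payload);
        }

        private string HandleV1(string payload) { /* legacy V1 logic */ return "v1:" + payload; }
        private string HandleV2(string payload) { /* new V2 logic */ return "v2:" + payload; }
    }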

    Schema changes follow a similar approach, but I'll write a separate post and actually put some code in it to demonstrate that behavior.



    Windows Azure automatically divides your role instances into “logical” domains called upgrade domains. During an upgrade, Azure updates these domains one by one; this is by-design behavior that avoids nasty situations. One of the latest feature additions and enhancements to the platform is the ability to notify your role instances of “environment” changes, adding or removing instances being the most common. In such a case, your roles get a notification of the change. Imagine if you had 50 or 60 role instances all getting notified at once and starting various actions to react to the change: it would be a complete disaster for your service.

    [figure: role instances divided into logical upgrade domains]
    Source: MSDN

    The way to address this problem is upgrade domains. As I said, during an upgrade Windows Azure updates them one by one, and only the role instances associated with a specific domain get notified of the changes taking place. Only a small number of your role instances get notified and react while the rest remain intact, providing a seamless upgrade experience with no service disruption or downtime.

    [figure: roles being upgraded one upgrade domain at a time]
    Source: MSDN
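    For completeness, here is a minimal sketch of how a role instance can react to (or veto) such an environment-change notification, using the RoleEnvironment.Changing event from the ServiceRuntime assembly; whether you cancel or handle the change in place is up to your scenario.

    using System.Linq;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WorkerRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // Raised when an environment change (configuration or topology) is about to be applied.
            RoleEnvironment.Changing += RoleEnvironmentChanging;
            return base.OnStart();
        }

        private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
        {
            // If a configuration setting changed, ask for the instance to be recycled
            // instead of applying the change on the fly.
            if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
            {
                e.Cancel = true;
            }
        }
    }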

    You have no control over how Windows Azure divides your instances and roles into upgrade domains; it's a completely automated procedure done in the background. There are two ways to perform an upgrade on a domain: using the Service Management API or the Windows Azure Developer portal. On the Developer portal there are two further options, automatic and manual. If you select automatic, Windows Azure upgrades your domains without you having to worry about what is going on; if you select manual, you have to upgrade your domains one by one.

    This is some of the magic the Windows Azure operating system and platform provide to give your service scalability, availability and high reliability.

    PK.



    Today, during my presentation at Microsoft DevDays “Make Web not War”, I got a pretty nice question about concurrency and I left it somewhat blurry, without a straight answer. Sorry, but we were changing subjects so fast that I missed it, and I only realized it on my way back.

    The answer is yes, there is concurrency control. If you examine a record in your table storage you'll see that there is a Timestamp field, surfaced as the so-called “ETag”. Windows Azure uses this field to apply optimistic concurrency to your data. When you retrieve an entity, change a value and then call “UpdateObject” and save, Windows Azure checks whether the ETag on your object still has the same value as the one in the table; if it does, the update goes through just fine. If it doesn't, it means someone else changed the entity in the meantime and you'll get an exception which you have to handle. One possible solution is to retrieve the entity again, re-apply your values and push it back (a sketch of that approach follows below). The final approach to concurrency is entirely up to the developer and varies between different types of applications.
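    Here is a minimal sketch of that retry approach, re-using the WeatherEntry/WeatherServiceContext types from the schema-versioning post above; the helper itself and the single-retry policy are hypothetical.

    using System.Data.Services.Client;
    using System.Linq;

    public static class WeatherUpdater
    {
        public static void UpdateTemperature(WeatherServiceContext context, string partitionKey, string rowKey, string newTemperature)
        {
            var entry = context.CreateQuery<WeatherEntry>("WeatherData")
                               .Where(e => e.PartitionKey == partitionKey && e.RowKey == rowKey)
                               .Take(1).ToList().FirstOrDefault();
            if (entry == null) return;

            entry.Temperature = newTemperature;
            context.UpdateObject(entry);

            try
            {
                context.SaveChanges();
            }
            catch (DataServiceRequestException)
            {
                // The ETag didn't match: someone else updated the entity in the meantime.
                // Re-read a fresh copy (overwriting the tracked one), re-apply, and try once more.
                context.MergeOption = MergeOption.OverwriteChanges;
                var fresh = context.CreateQuery<WeatherEntry>("WeatherData")
                                   .Where(e => e.PartitionKey == partitionKey && e.RowKey == rowKey)
                                   .Take(1).ToList().FirstOrDefault();
                if (fresh == null) return;

                fresh.Temperature = newTemperature;
                context.UpdateObject(fresh);
                context.SaveChanges();
            }
        }
    }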

    As I mentioned during my presentation, there are a lot of different approaches to handling concurrency on Windows Azure Table storage. There is a very nice video in the “How to” section on MSDN about Windows Azure Table storage concurrency which can certainly give you some ideas.

    PK.



    Performing an in-place upgrade on Windows Azure to change your service definition file is not possible unless you stop the service, upgrade and then start it again. The alternative is a VIP swap. VIP stands for Virtual IP, and VIP swaps can be done either from the Developer portal or through the Service Management API by calling the “Swap Deployment” operation.

    If you use a VIP swap, and as long as the endpoints in the old and new service definitions are identical, the upgrade process is seamless and pretty straightforward, without any service interruption. But if, for example, you introduce a new endpoint or delete an old one, then this process is not possible and you have to stop, upgrade and start.

    So, how can you perform this operation from the Developer portal? Simply log on to your account, go to the Summary page, open your target project, open the service and then upload the new service definition file to Staging. Now, if you click Run on Staging, both versions of your service will work just fine (one in Production, one in Staging). When you hit the Upgrade button (the one with the arrows in the middle), Azure will promote your service from Staging to Production, switching over the service definition and completing the process just fine, thus performing a VIP swap in the background.

    You can perform the same operation using Service Management API and “Swap Deployment” method as I mentioned before.

    For more information about VIP Swap using Service Management API, go here –> MSDN, Azure Service Management API, VIP Swap

    PK.



    It's been a while since Windows Azure caught the attention of a broader audience, raising all kinds of questions, from the simple “How do I access my table storage?” to the complex and generic “How do I optimize my code to pay less?”. Although there is a straight answer to the first one, that's not the case for the second. Designing an application has always been fairly complex when it comes to high-scale enterprise solutions, or even mid-sized businesses.

    There are a few things to consider when you’re trying to “migrate” your application to the Azure platform.

    Keep in mind that you literally pay for your mistakes. Unless you're on a developer account (which, by the way, you also “pay” for, just in a different way), you are on a pay-as-you-go model for various things: transactions, compute hours, bandwidth and of course storage; oh, and database size on SQL Azure. So for every mistake you make, say unoptimized code producing more messages/transactions than necessary, you pay.

    What's a compute hour? Well, it's the number of instances multiplied by the “consumed” (service) hours, so a deployment of, say, three instances left running for 24 hours consumes 72 compute hours whether it does any useful work or not. And what's a consumed (service) hour? It's a simple medium-size Azure instance with an average 70% CPU peak, on a 1.60 GHz CPU with 1.75 GB of RAM and 155 GB of non-persistent storage. That means it's not “uptime” as many of you think it is. So, when it comes to compute hours, you should measure how many hours your application consumes and optimize your code to consume less. Disable any roles, Worker or Web, when you don't need them; it will cost you less.

    SQL Azure as your database server: there are some catches here too. You should consider re-designing and re-writing the parts where your application stores BLOBs, especially big ones, so that they go to Windows Azure (blob) storage instead, for two reasons. First, there are only two editions of SQL Azure available as of now, with 1 GB and 10 GB database size limits; you can pretty easily hit the limit when you have large amounts of BLOBs (images, documents etc.). Second, you don't take advantage of the Azure CDN (Content Delivery Network), which means all your clients are served by the server(s) hosting your database, even if that is slower because of network latency. If you use the Azure CDN, your content is distributed to various key points all around the world (Europe, Asia, USA etc.) and your clients are served by the fastest available server near their location. And that's not all: Azure storage uses REST, which means your content can also be cached by various proxies, increasing performance even more.

    Any thoughts?

    PK.

    It's been an interesting week for Windows Azure. Since last Friday (13th of November):
    • The latest Windows Azure Tools release came out, and strangely the installer was marked as a 1.0 release, although I'm not sure it really is one.
    • Project Dallas was announced, a marketplace where vendors can publish their Azure services for others to consume and start making money from them.
    • SQL Azure Data Sync Beta 1 was released. You can use it to synchronize cached data on your offline client with SQL Azure on the cloud just like using any other Synchronization Adapter/Manager on Microsoft Sync Framework. In order to install it and mess around with samples you need Microsoft Sync Framework 2.0 SDK.
    • .NET Services was renamed Windows Azure AppFabric, to align with Windows Server AppFabric, which bundles the formerly code-named Velocity and Dublin projects into a single product. .NET Services consists of Access Control (claims-based authentication) and a Service Bus for communicating with other applications.
    • New APIs were released to provision and manage Windows Azure Services.
    • There have been various enhancements to the platform and the web UI, like how quickly you can switch from Staging to Production, a TCO calculator introduced on the main website, etc.

    More to come, stay tuned.
